## Loan EDA
```
import pandas as pd
import numpy as np
dtrain = pd.read_csv('data/train.csv')
test = pd.read_csv('data/test.csv')
```
## Data Cleaning
```
dtrain.head()
dtrain.shape
# Removing the commas from `Loan_Amount_Requested`
dtrain['Loan_Amount_Requested'] = dtrain.Loan_Amount_Requested.str.replace(',', '').astype(int)
test['Loan_Amount_Requested'] = test.Loan_Amount_Requested.str.replace(',', '').astype(int)
# Filling 0 for `Annual_Income` column
dtrain['Annual_Income'] = dtrain['Annual_Income'].fillna(0).astype(int)
test['Annual_Income'] = test['Annual_Income'].fillna(0).astype(int)
# Showing the different types of values for `Home_Owner`
dtrain['Home_Owner'] = dtrain['Home_Owner'].fillna('NA')
test['Home_Owner'] = test['Home_Owner'].fillna('NA')
print(dtrain.Home_Owner.value_counts())
```
We converted the `NaN` values in `Home_Owner` into the string `NA`. We are going to compute a hash value for each string, and it is not clear how a `NaN` should be hashed. There are almost **25349** rows that were `NaN`; dropping them would mean losing a lot of data, so we replaced them with the string "NA" and will later convert them into hash values along with the other categories.
```
# Filling 0 for missing `Months_Since_Deliquency`
dtrain['Months_Since_Deliquency'] = dtrain['Months_Since_Deliquency'].fillna(0)
test['Months_Since_Deliquency'] = test['Months_Since_Deliquency'].fillna(0)
dtrain.isnull().values.any()
dtrain['Length_Employed'] = dtrain['Length_Employed'].fillna('0 year')
test['Length_Employed'] = test['Length_Employed'].fillna('0 year')
def convert_length_employed(elem):
if elem[0] == '<':
return 0.5 # because mean of 0 to 1 is 0.5
elif str(elem[2]) == '+':
return 15.0 # because mean of 10 to 20 is 15
elif str(elem) == '0 year':
return 0.0
else:
return float(str(elem).split()[0])
dtrain['Length_Employed'] = dtrain['Length_Employed'].apply(convert_length_employed)
test['Length_Employed'] = test['Length_Employed'].apply(convert_length_employed)
dtrain['Loan_Grade'] = dtrain['Loan_Grade'].fillna('NA')
test['Loan_Grade'] = test['Loan_Grade'].fillna('NA')
dtrain.Loan_Grade.value_counts()
# dtrain[(dtrain.Annual_Income == 0) & (dtrain.Income_Verified == 'not verified')]
from sklearn.preprocessing import LabelEncoder
number = LabelEncoder()
dtrain['Loan_Grade'] = number.fit_transform(dtrain.Loan_Grade.astype('str'))
dtrain['Income_Verified'] = number.fit_transform(dtrain.Income_Verified.astype('str'))
dtrain['Area_Type'] = number.fit_transform(dtrain.Area_Type.astype('str'))
dtrain['Gender'] = number.fit_transform(dtrain.Gender.astype('str'))
test['Loan_Grade'] = number.fit_transform(test.Loan_Grade.astype('str'))
test['Income_Verified'] = number.fit_transform(test.Income_Verified.astype('str'))
test['Area_Type'] = number.fit_transform(test.Area_Type.astype('str'))
test['Gender'] = number.fit_transform(test.Gender.astype('str'))
dtrain.head()
# Converting `Purpose_Of_Loan` and `Home_Owner` into hash
import hashlib
def convert_to_hashes(elem):
return round(int(hashlib.md5(elem.encode('utf-8')).hexdigest(), 16) / 1e35, 5)
dtrain['Purpose_Of_Loan'] = dtrain['Purpose_Of_Loan'].apply(convert_to_hashes)
dtrain['Home_Owner'] = dtrain['Home_Owner'].apply(convert_to_hashes)
test['Purpose_Of_Loan'] = test['Purpose_Of_Loan'].apply(convert_to_hashes)
test['Home_Owner'] = test['Home_Owner'].apply(convert_to_hashes)
import xgboost as xgb
features = np.array(dtrain.iloc[:, 1:-1])
labels = np.array(dtrain.iloc[:, -1])
import operator
from xgboost import plot_importance
from matplotlib import pylab as plt
from collections import OrderedDict
def xgb_feature_importance(features, labels, num_rounds, fnames, plot=False):
param = {}
param['objective'] = 'multi:softmax'
param['eta'] = 0.1
param['max_depth'] = 6
param['silent'] = 1
param['num_class'] = 4
param['eval_metric'] = "merror"
param['min_child_weight'] = 1
param['subsample'] = 0.7
param['colsample_bytree'] = 0.7
param['seed'] = 42
nrounds = num_rounds
xgtrain = xgb.DMatrix(features, label=labels)
xgb_params = list(param.items())
gbdt = xgb.train(xgb_params, xgtrain, nrounds)
importance = sorted(gbdt.get_fscore().items(), key=operator.itemgetter(1), reverse=True)
if plot:
df = pd.DataFrame(importance, columns=['feature', 'fscore'])
df['fscore'] = df['fscore'] / df['fscore'].sum()
plt.figure()
df.plot(kind='bar', x='feature', y='fscore', legend=False, figsize=(8, 8))
plt.title('XGBoost Feature Importance')
plt.ylabel('relative importance')
plt.show()
else:
# fnames = dtrain.columns.values[1:-1].tolist()
imp_features = OrderedDict()
imps = dict(importance)
for each in list(imps.keys()):
index = int(each.split('f')[-1])
imp_features[fnames[index]] = imps[each]
return imp_features
xgb_feature_importance(features, labels, num_rounds=1000, fnames=dtrain.columns.values[1:-1].tolist(), plot=True)
```
## Feature Scores
```python
OrderedDict([('Debt_To_Income', 29377),
('Loan_Amount_Requested', 22157),
('Annual_Income', 21378),
('Total_Accounts', 16675),
('Months_Since_Deliquency', 13287),
('Number_Open_Accounts', 13016),
('Length_Employed', 9140),
('Loan_Grade', 7906),
('Purpose_Of_Loan', 7284),
('Inquiries_Last_6Mo', 5691),
('Home_Owner', 4946),
('Income_Verified', 4434),
('Area_Type', 3755),
('Gender', 2027)])
```
## Feature Creation
### 1. RatioOfLoanAndIncome
```
dtrain['RatioOfLoanAndIncome'] = dtrain.Loan_Amount_Requested / (dtrain.Annual_Income + 1)
test['RatioOfLoanAndIncome'] = test.Loan_Amount_Requested / (test.Annual_Income + 1)
```
### 2. RatioOfOpenAccToTotalAcc
```
dtrain['RatioOfOpenAccToTotalAcc'] = dtrain.Number_Open_Accounts / (dtrain.Total_Accounts + 0.001)
test['RatioOfOpenAccToTotalAcc'] = test.Number_Open_Accounts / (test.Total_Accounts + 0.001)
dtrain.drop(['Interest_Rate', 'Loan_Amount_Requested',
'Annual_Income', 'Number_Open_Accounts', 'Total_Accounts'], inplace=True, axis=1)
dtrain['Interest_Rate'] = labels
test.drop(['Loan_Amount_Requested',
'Annual_Income', 'Number_Open_Accounts', 'Total_Accounts'], inplace=True, axis=1)
features = np.array(dtrain.iloc[:, 1:-1])
testFeatures = np.array(test.iloc[:, 1:])
xgb_feature_importance(features, labels, num_rounds=1000, fnames=dtrain.columns.values[1:-1].tolist(), plot=True)
```
## Feature Score with new features
```python
OrderedDict([('Debt_To_Income', 23402),
('RatioOfLoanAndIncome', 20254),
('RatioOfOpenAccToTotalAcc', 17868),
('Loan_Amount_Requested', 16661),
('Annual_Income', 14228),
('Total_Accounts', 11892),
('Months_Since_Deliquency', 10293),
('Number_Open_Accounts', 8553),
('Length_Employed', 7614),
('Loan_Grade', 6938),
('Purpose_Of_Loan', 6013),
('Inquiries_Last_6Mo', 4284),
('Home_Owner', 3760),
('Income_Verified', 3451),
('Area_Type', 2892),
('Gender', 1708)])
```
```
from sklearn.model_selection import train_test_split # for splitting the training and testing set
from sklearn.decomposition import PCA # For possible dimensionality reduction
from sklearn.feature_selection import SelectKBest # For feature selection
from sklearn.model_selection import StratifiedShuffleSplit # For unbalanced class cross-validation
from sklearn.preprocessing import MaxAbsScaler, StandardScaler, MinMaxScaler # Different scalers
from sklearn.pipeline import Pipeline # For putting tasks in a Pipeline
from sklearn.model_selection import GridSearchCV # For fine tuning the classifiers
from sklearn.naive_bayes import BernoulliNB # For Naive Bayes
from sklearn.neighbors import NearestCentroid # For modified KNN
from sklearn.svm import SVC # For SVM Classifier
from sklearn.tree import DecisionTreeClassifier # For Decision Tree Classifier
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.4, random_state=42)
from sklearn.metrics import accuracy_score
```
### Naive Bayes
```
clf = BernoulliNB()
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
print(accuracy_score(y_test, predictions, normalize=True))
X_test.shape
```
### XGBoost Classification
```
def xgb_classification(X_train, X_test, y_train, y_test, num_rounds, fnames='*'):
if fnames == '*':
# All the features are being used
pass
else:
# Feature selection is being performed
fnames.append('Interest_Rate')
dataset = dtrain[fnames]
features = np.array(dataset.iloc[:, 0:-1])
labels = np.array(dataset.iloc[:, -1])
X_train, X_test, y_train, y_test = train_test_split(features, labels,
test_size=0.4,
random_state=42)
param = {}
param['objective'] = 'multi:softmax'
param['eta'] = 0.01
param['max_depth'] = 6
param['silent'] = 1
param['num_class'] = 4
param['nthread'] = -1
param['eval_metric'] = "merror"
param['min_child_weight'] = 1
param['subsample'] = 0.8
param['colsample_bytree'] = 0.5
param['seed'] = 42
xg_train = xgb.DMatrix(X_train, label=y_train)
xg_test = xgb.DMatrix(X_test, label=y_test)
watchlist = [(xg_train, 'train'), (xg_test, 'test')]
bst = xgb.train(param, xg_train, num_rounds, watchlist)
pred = bst.predict(xg_test)
return accuracy_score(y_test, pred, normalize=True)
# 0.70447629480859353 <-- Boosting Rounds 1000
# 0.70506968535086123 <--- Boosting Rounds 862
# 0.70520662162984604 <-- Boosting Rounds 846
xgb_classification(X_train, X_test, y_train, y_test, num_rounds=1000, fnames='*')
```
## Submission
```
param = {}
param['objective'] = 'multi:softmax'
param['eta'] = 0.01
param['max_depth'] = 10
param['silent'] = 1
param['num_class'] = 4
param['nthread'] = -1
param['eval_metric'] = "merror"
param['min_child_weight'] = 1
param['subsample'] = 0.8
param['colsample_bytree'] = 0.5
param['seed'] = 42
xg_train = xgb.DMatrix(features,label=labels)
xg_test = xgb.DMatrix(testFeatures)
watchlist = [(xg_train, 'train')]
gbm = xgb.train(param,xg_train, 846, watchlist)
test_pred = gbm.predict(xg_test)
test['Interest_Rate'] = test_pred
test['Interest_Rate'] = test.Interest_Rate.astype(int)
test[['Loan_ID', 'Interest_Rate']].to_csv('submission5.csv', index=False)
pd.read_csv('submission5.csv')
# test[['Loan_ID', 'Interest_Rate']]
testFeatures.shape
features.shape
```
# Dimensionality Reduction with the Shogun Machine Learning Toolbox
#### *By Sergey Lisitsyn ([lisitsyn](https://github.com/lisitsyn)) and Fernando J. Iglesias Garcia ([iglesias](https://github.com/iglesias)).*
This notebook illustrates <a href="http://en.wikipedia.org/wiki/Unsupervised_learning">unsupervised learning</a> using the suite of dimensionality reduction algorithms available in Shogun. Shogun provides access to all these algorithms using [Tapkee](http://tapkee.lisitsyn.me/), a C++ library specialized in <a href="http://en.wikipedia.org/wiki/Dimensionality_reduction">dimensionality reduction</a>.
## Hands-on introduction to dimension reduction
First of all, let us start right away by showing what the purpose of dimensionality reduction actually is. To this end, we will begin by creating a function that provides us with some data:
```
import numpy as np
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
def generate_data(curve_type, num_points=1000):
if curve_type=='swissroll':
tt = np.array((3*np.pi/2)*(1+2*np.random.rand(num_points)))
height = np.array((np.random.rand(num_points)-0.5))
X = np.array([tt*np.cos(tt), 10*height, tt*np.sin(tt)])
return X,tt
if curve_type=='scurve':
tt = np.array((3*np.pi*(np.random.rand(num_points)-0.5)))
height = np.array((np.random.rand(num_points)-0.5))
X = np.array([np.sin(tt), 10*height, np.sign(tt)*(np.cos(tt)-1)])
return X,tt
if curve_type=='helix':
tt = np.linspace(1, num_points, num_points).T / num_points
tt = tt*2*np.pi
X = np.r_[[(2+np.cos(8*tt))*np.cos(tt)],
[(2+np.cos(8*tt))*np.sin(tt)],
[np.sin(8*tt)]]
return X,tt
```
The function above can be used to generate three-dimensional datasets with the shape of a [Swiss roll](http://en.wikipedia.org/wiki/Swiss_roll), the letter S, or a helix. These are three examples of datasets which have been extensively used to compare different dimension reduction algorithms. As an illustrative exercise of what dimensionality reduction can do, we will use a few of the algorithms available in Shogun to embed this data into a two-dimensional space. This is essentially the dimension reduction process, as we reduce the number of features from 3 to 2. The question that arises is: what principle should we use to keep the important relations between datapoints? In fact, different algorithms imply different criteria to answer this question.
Just to start, let's pick an algorithm and one of the data sets; for example, let's see what embedding of the Swiss roll is produced by the Isomap algorithm. The Isomap algorithm is basically a slightly modified Multidimensional Scaling (MDS) algorithm which finds an embedding as the solution of the following optimization problem:
$$
\min_{x'_1, x'_2, \dots} \sum_i \sum_j \| d'(x'_i, x'_j) - d(x_i, x_j)\|^2,
$$
where $x_1, x_2, \dots \in X$ are given, the unknown variables are $x'_1, x'_2, \dots \in X'$ with $\text{dim}(X') < \text{dim}(X)$, and $d: X \times X \to \mathbb{R}$ and $d': X' \times X' \to \mathbb{R}$ are arbitrary distance functions (for example Euclidean).
In less mathematical terms, the MDS algorithm finds an embedding that preserves pairwise distances between points as much as possible. The Isomap algorithm changes one small but important detail: the distance. Instead of using the local pairwise distances directly, it takes a global factor into account through the shortest path on the neighborhood graph (the so-called geodesic distance). The neighborhood graph is defined as a graph with datapoints as nodes and weighted edges, with weight equal to the distance between points. The edge between points $x_i$ and $x_j$ exists if and only if $x_j$ is among the $k$ nearest neighbors of $x_i$. Later we will see that this 'global factor' changes the game for the Swiss roll dataset.
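To make the notion of geodesic distance concrete, here is a short sketch that builds the weighted neighborhood graph for the Swiss roll and computes shortest-path distances on it; Isomap essentially runs MDS on these geodesic distances. It uses scikit-learn and SciPy rather than Shogun, so the library choice and the value of `k` are illustrative assumptions only.
```
# Rough sketch (not part of Shogun): geodesic distances from a k-nearest-neighbor graph.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

X, tt = generate_data('swissroll')   # defined above; X has shape (3, num_points)
points = X.T                         # one row per point for scikit-learn

# Weighted kNN graph: an edge (i, j) exists iff x_j is among the k nearest neighbors
# of x_i, with weight equal to the Euclidean distance between the two points.
k = 20
knn_graph = kneighbors_graph(points, n_neighbors=k, mode='distance')

# Geodesic distance = length of the shortest path along the graph edges.
geodesic = shortest_path(knn_graph, method='D', directed=False)
print(geodesic.shape)                # (num_points, num_points)
```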
However, first we prepare a small function to plot any of the original data sets together with its embedding.
```
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
def plot(data, embedded_data, colors='m'):
fig = plt.figure()
fig.set_facecolor('white')
ax = fig.add_subplot(121,projection='3d')
ax.scatter(data[0],data[1],data[2],c=colors,cmap=plt.cm.Spectral)
plt.axis('tight'); plt.axis('off')
ax = fig.add_subplot(122)
ax.scatter(embedded_data[0],embedded_data[1],c=colors,cmap=plt.cm.Spectral)
plt.axis('tight'); plt.axis('off')
plt.show()
import shogun as sg
# wrap data into Shogun features
data, colors = generate_data('swissroll')
feats = sg.create_features(data)
# create instance of Isomap converter and set number of neighbours used in kNN search to 20
isomap = sg.create_transformer('Isomap', target_dim=2, k=20)
# create instance of Multidimensional Scaling converter and configure it
mds = sg.create_transformer('MultidimensionalScaling', target_dim=2)
# embed Swiss roll data
embedded_data_mds = mds.transform(feats).get('feature_matrix')
embedded_data_isomap = isomap.transform(feats).get('feature_matrix')
plot(data, embedded_data_mds, colors)
plot(data, embedded_data_isomap, colors)
```
As can be seen from the figure above, Isomap has been able to "unroll" the data, reducing its dimension from three to two. At the same time, points with similar colours in the input space are close to points with similar colours in the output space. That is, a new representation of the data has been obtained; this new representation maintains the properties of the original data, while it reduces the amount of information required to represent it. Note that the fact that the embedding of the Swiss roll looks good in two dimensions stems from the *intrinsic* dimension of the input data. Although the original data lives in a three-dimensional space, its intrinsic dimension is lower, since the only degrees of freedom are the position along the roll (the polar angle, which also fixes the distance from the centre) and the height.
Finally, we use yet another method, Stochastic Proximity Embedding (SPE), to embed the helix:
```
# wrap data into Shogun features
data, colors = generate_data('helix')
features = sg.create_features(data)
# create SPE (Stochastic Proximity Embedding) instance
converter = sg.create_transformer('StochasticProximityEmbedding', target_dim=2)
# embed helix data
embedded_features = converter.transform(features)
embedded_data = embedded_features.get('feature_matrix')
plot(data, embedded_data, colors)
```
## References
- Lisitsyn, S., Widmer, C., Iglesias Garcia, F. J. Tapkee: An Efficient Dimension Reduction Library. ([Link to paper in JMLR](http://jmlr.org/papers/v14/lisitsyn13a.html#!).)
- Tenenbaum, J. B., de Silva, V. and Langford, J. B. A Global Geometric Framework for Nonlinear Dimensionality Reduction. ([Link to Isomap's website](http://isomap.stanford.edu/).)
# Feature List View
## Usage
```
import sys, json, math
from mlvis import FeatureListView
from random import uniform, gauss
from IPython.display import display
if sys.version_info[0] < 3:
import urllib2 as url
else:
import urllib.request as url
def generate_random_steps(k):
randoms = [uniform(0, 1) / 2 for i in range(0, k)]
steps = [0] * (k - 1)
t = 0
for i in range(0, k - 1):
steps[i] = t + (1 - t) * randoms[i]
t = steps[i]
return steps + [1]
def generate_categorical_feature(states):
size = len(states)
distro_a = [uniform(0, 1) for i in range(0, size)]
distro_b = [uniform(0, 1) for i in range(0, size)]
return {
'name': 'dummy-categorical-feature',
'type': 'categorical',
'domain': list(states.values()),
'distributions': [distro_a, distro_b],
'distributionNormalized': [distro_a, distro_b],
'colors': ['#47B274', '#6F5AA7'],
'divergence': uniform(0, 1)
}
def generate_numerical_feature():
domain_size = 100
distro_a = [uniform(0, 1) for i in range(0, domain_size)]
distro_b = [uniform(0, 1) for i in range(0, domain_size)]
return {
'name': 'dummy-numerical-feature',
'type': 'numerical',
'domain': generate_random_steps(domain_size),
'distributions': [distro_a, distro_b],
'distributionNormalized': [distro_a, distro_b],
'colors': ['#47B274', '#6F5AA7'],
'divergence': uniform(0, 1)
}
def generate_random_categorical_values(states):
k = 10000
values = [None] * k
domain = list(states.values())
size = len(states)
for i in range(0, k):
d = int(math.floor(uniform(0, 1) * size))
values[i] = domain[d]
return values
def generate_raw_categorical_feature(states):
return {
'name': 'dummy-raw-categorical-feature',
'type': 'categorical',
'values': [generate_random_categorical_values(states),
generate_random_categorical_values(states)]
}
def generate_raw_numerical_feature():
return {
'name': 'dummy-raw-numerical-feature',
'type': 'numerical',
'values': [
[gauss(2, 0.5) for i in range(0, 2500)],
[gauss(0, 1) for i in range(0, 7500)]
]
}
# load the US states data
PREFIX = 'https://d1a3f4spazzrp4.cloudfront.net/mlvis/'
response = url.urlopen(PREFIX + 'jupyter/states.json')
states = json.loads(response.read().decode())
# Randomly generate the data for the feature list view
categorical_feature = generate_categorical_feature(states)
raw_categorical_feature = generate_raw_categorical_feature(states)
numerical_feature = generate_numerical_feature()
raw_numerical_feature = generate_raw_numerical_feature()
data = [categorical_feature, raw_categorical_feature, numerical_feature, raw_numerical_feature]
feature_list_view = FeatureListView(props={"data": data, "width": 1000})
display(feature_list_view)
```
# Investigating ocean models skill for sea surface height with IOOS catalog and Python
The IOOS [catalog](https://ioos.noaa.gov/data/catalog) offers access to hundreds of datasets and data access services provided by the 11 regional associations.
In the past we demonstrated how to tap into those datasets to obtain sea [surface temperature data from observations](http://ioos.github.io/notebooks_demos/notebooks/2016-12-19-exploring_csw),
[coastal velocity from high frequency radar data](http://ioos.github.io/notebooks_demos/notebooks/2017-12-15-finding_HFRadar_currents),
and a simple model vs observation visualization of temperatures for the [Boston Light Swim competition](http://ioos.github.io/notebooks_demos/notebooks/2016-12-22-boston_light_swim).
In this notebook we'll demonstrate a step-by-step workflow on how to ask the catalog for a specific variable, extract only the model data, and match the nearest model grid point to an observation. The goal is to create an automated skill score for quick assessment of ocean numerical models.
The first cell is only to reduce iris' noisy output;
the notebook starts on cell [2] with the definition of the parameters:
- start and end dates for the search;
- experiment name;
- a bounding box for the region of interest;
- SOS variable name for the observations;
- Climate and Forecast standard names;
- the units we want to conform the variables to;
- catalogs we want to search.
```
import warnings
# Suppressing warnings for a "pretty output."
warnings.simplefilter("ignore")
%%writefile config.yaml
date:
start: 2018-2-28 00:00:00
stop: 2018-3-5 00:00:00
run_name: 'latest'
region:
bbox: [-71.20, 41.40, -69.20, 43.74]
crs: 'urn:ogc:def:crs:OGC:1.3:CRS84'
sos_name: 'water_surface_height_above_reference_datum'
cf_names:
- sea_surface_height
- sea_surface_elevation
- sea_surface_height_above_geoid
- sea_surface_height_above_sea_level
- water_surface_height_above_reference_datum
- sea_surface_height_above_reference_ellipsoid
units: 'm'
catalogs:
- https://data.ioos.us/csw
```
To keep track of the information we'll set up a `config` variable and print it to the screen for bookkeeping.
```
import os
import shutil
from datetime import datetime
from ioos_tools.ioos import parse_config
config = parse_config("config.yaml")
# Saves downloaded data into a temporary directory.
save_dir = os.path.abspath(config["run_name"])
if os.path.exists(save_dir):
shutil.rmtree(save_dir)
os.makedirs(save_dir)
fmt = "{:*^64}".format
print(fmt("Saving data inside directory {}".format(save_dir)))
print(fmt(" Run information "))
print("Run date: {:%Y-%m-%d %H:%M:%S}".format(datetime.utcnow()))
print("Start: {:%Y-%m-%d %H:%M:%S}".format(config["date"]["start"]))
print("Stop: {:%Y-%m-%d %H:%M:%S}".format(config["date"]["stop"]))
print(
"Bounding box: {0:3.2f}, {1:3.2f},"
"{2:3.2f}, {3:3.2f}".format(*config["region"]["bbox"])
)
```
To interface with the IOOS catalog we will use the [Catalogue Service for the Web (CSW)](https://live.osgeo.org/en/standards/csw_overview.html) endpoint and [python's OWSLib library](https://geopython.github.io/OWSLib).
The cell below creates the [Filter Encoding Specification (FES)](http://www.opengeospatial.org/standards/filter) with the configuration we specified in cell [2]. The filter is composed of:
- `or` to catch any of the standard names;
- `not` some names we do not want to show up in the results;
- `date range` and `bounding box` for the time-space domain of the search.
```
def make_filter(config):
from owslib import fes
from ioos_tools.ioos import fes_date_filter
kw = dict(
wildCard="*", escapeChar="\\", singleChar="?", propertyname="apiso:Subject"
)
or_filt = fes.Or(
[fes.PropertyIsLike(literal=("*%s*" % val), **kw) for val in config["cf_names"]]
)
not_filt = fes.Not([fes.PropertyIsLike(literal="GRIB-2", **kw)])
begin, end = fes_date_filter(config["date"]["start"], config["date"]["stop"])
bbox_crs = fes.BBox(config["region"]["bbox"], crs=config["region"]["crs"])
filter_list = [fes.And([bbox_crs, begin, end, or_filt, not_filt])]
return filter_list
filter_list = make_filter(config)
```
We need to wrap the `OWSLib.csw.CatalogueServiceWeb` object with a custom function,
`get_csw_records`, to be able to paginate over the results.
In the cell below we loop over all the catalogs, collect the records returned, and extract the OPeNDAP endpoints.
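The actual implementation lives in `ioos_tools.ioos.get_csw_records`; a minimal sketch of what such pagination can look like with OWSLib is shown below (the page size, record cap, and loop details are illustrative assumptions, not the library code).
```
# Minimal sketch of CSW pagination with OWSLib (illustrative only; see
# ioos_tools.ioos.get_csw_records for the real implementation).
def get_csw_records_sketch(csw, filter_list, pagesize=10, maxrecords=1000):
    csw_records = {}
    startposition = 0
    while True:
        # Fetch one "page" of records starting at `startposition`.
        csw.getrecords2(
            constraints=filter_list,
            startposition=startposition,
            maxrecords=pagesize,
            esn="full",
        )
        csw_records.update(csw.records)
        if csw.results["nextrecord"] == 0 or startposition >= maxrecords:
            break
        startposition += pagesize
    csw.records.update(csw_records)
    return csw
```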
```
from ioos_tools.ioos import get_csw_records, service_urls
from owslib.csw import CatalogueServiceWeb
dap_urls = []
print(fmt(" Catalog information "))
for endpoint in config["catalogs"]:
print("URL: {}".format(endpoint))
try:
csw = CatalogueServiceWeb(endpoint, timeout=120)
except Exception as e:
print("{}".format(e))
continue
csw = get_csw_records(csw, filter_list, esn="full")
OPeNDAP = service_urls(csw.records, identifier="OPeNDAP:OPeNDAP")
odp = service_urls(
csw.records, identifier="urn:x-esri:specification:ServiceType:odp:url"
)
dap = OPeNDAP + odp
dap_urls.extend(dap)
print("Number of datasets available: {}".format(len(csw.records.keys())))
for rec, item in csw.records.items():
print("{}".format(item.title))
if dap:
print(fmt(" DAP "))
for url in dap:
print("{}.html".format(url))
print("\n")
# Get only unique endpoints.
dap_urls = list(set(dap_urls))
```
We found 10 dataset endpoints but only 9 of them have the proper metadata for us to identify the OPeNDAP endpoint, i.e., those that contain either the `OPeNDAP:OPeNDAP` or the `urn:x-esri:specification:ServiceType:odp:url` scheme.
Unfortunately we lost the `COAWST` model in the process.
The next step is to ensure there are no observations in the list of endpoints.
We want only the models for now.
```
from ioos_tools.ioos import is_station
from timeout_decorator import TimeoutError
# Filter out some station endpoints.
non_stations = []
for url in dap_urls:
try:
if not is_station(url):
non_stations.append(url)
except (IOError, OSError, RuntimeError, TimeoutError) as e:
print("Could not access URL {}.html\n{!r}".format(url, e))
dap_urls = non_stations
print(fmt(" Filtered DAP "))
for url in dap_urls:
print("{}.html".format(url))
```
Now we have a nice list of all the models available in the catalog for the domain we specified.
We still need to find the observations for the same domain.
To accomplish that we will use the `pyoos` library and search the [SOS CO-OPS](https://opendap.co-ops.nos.noaa.gov/ioos-dif-sos/) services using virtually the same configuration options from the catalog search.
```
from pyoos.collectors.coops.coops_sos import CoopsSos
collector_coops = CoopsSos()
collector_coops.set_bbox(config["region"]["bbox"])
collector_coops.end_time = config["date"]["stop"]
collector_coops.start_time = config["date"]["start"]
collector_coops.variables = [config["sos_name"]]
ofrs = collector_coops.server.offerings
title = collector_coops.server.identification.title
print(fmt(" Collector offerings "))
print("{}: {} offerings".format(title, len(ofrs)))
```
To make it easier to work with the data we extract the time-series as pandas tables and interpolate them to a common 1-hour interval index.
```
import pandas as pd
from ioos_tools.ioos import collector2table
data = collector2table(
collector=collector_coops,
config=config,
col="water_surface_height_above_reference_datum (m)",
)
df = dict(
station_name=[s._metadata.get("station_name") for s in data],
station_code=[s._metadata.get("station_code") for s in data],
sensor=[s._metadata.get("sensor") for s in data],
lon=[s._metadata.get("lon") for s in data],
lat=[s._metadata.get("lat") for s in data],
depth=[s._metadata.get("depth") for s in data],
)
pd.DataFrame(df).set_index("station_code")
index = pd.date_range(
start=config["date"]["start"].replace(tzinfo=None),
end=config["date"]["stop"].replace(tzinfo=None),
freq="1H",
)
# Preserve metadata with `reindex`.
observations = []
for series in data:
_metadata = series._metadata
series.index = series.index.tz_localize(None)
obs = series.reindex(index=index, limit=1, method="nearest")
obs._metadata = _metadata
observations.append(obs)
```
The next cell saves those time-series as CF-compliant netCDF files on disk,
to make it easier to access them later.
```
import iris
from ioos_tools.tardis import series2cube
attr = dict(
featureType="timeSeries",
Conventions="CF-1.6",
standard_name_vocabulary="CF-1.6",
cdm_data_type="Station",
comment="Data from http://opendap.co-ops.nos.noaa.gov",
)
cubes = iris.cube.CubeList([series2cube(obs, attr=attr) for obs in observations])
outfile = os.path.join(save_dir, "OBS_DATA.nc")
iris.save(cubes, outfile)
```
We still need to read the model data from the list of endpoints we found.
The next cell takes care of that.
We use `iris` and a set of custom functions from the `ioos_tools` library
to download only the data in the domain we requested.
```
from ioos_tools.ioos import get_model_name
from ioos_tools.tardis import is_model, proc_cube, quick_load_cubes
from iris.exceptions import ConstraintMismatchError, CoordinateNotFoundError, MergeError
print(fmt(" Models "))
cubes = dict()
for k, url in enumerate(dap_urls):
print("\n[Reading url {}/{}]: {}".format(k + 1, len(dap_urls), url))
try:
cube = quick_load_cubes(url, config["cf_names"], callback=None, strict=True)
if is_model(cube):
cube = proc_cube(
cube,
bbox=config["region"]["bbox"],
time=(config["date"]["start"], config["date"]["stop"]),
units=config["units"],
)
else:
print("[Not model data]: {}".format(url))
continue
mod_name = get_model_name(url)
cubes.update({mod_name: cube})
except (
RuntimeError,
ValueError,
ConstraintMismatchError,
CoordinateNotFoundError,
IndexError,
) as e:
print("Cannot get cube for: {}\n{}".format(url, e))
```
Now we can match each observation time-series with its closest grid point (0.08 of a degree) on each model.
This is a complex and laborious task! If you are running this interactively grab a coffee and sit comfortably :-)
Note that we are also saving the model time-series to files that align with the observations we saved before.
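`make_tree` and `get_nearest_water` come from `ioos_tools`; conceptually, the matching step boils down to a KD-tree lookup like the sketch below. The function name is hypothetical, while the `k=10` candidates and the 0.08 degree cutoff mirror the call in the next cell; the code itself is only an illustration.
```
# Conceptual sketch of the nearest-grid-point search (the real code lives in
# ioos_tools.tardis.make_tree and get_nearest_water).
import numpy as np
from scipy.spatial import cKDTree

def nearest_model_points(lon_grid, lat_grid, obs_lon, obs_lat, k=10, max_dist=0.08):
    # Flatten the model coordinates and build a KD-tree over (lon, lat) pairs.
    points = np.column_stack([np.ravel(lon_grid), np.ravel(lat_grid)])
    tree = cKDTree(points)
    # Query the k closest model points to the observation location.
    dist, idx = tree.query([obs_lon, obs_lat], k=k)
    # Keep only the candidates within the allowed distance (in degrees).
    good = dist <= max_dist
    return dist[good], idx[good]
```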
```
import iris
from ioos_tools.tardis import (
add_station,
ensure_timeseries,
get_nearest_water,
make_tree,
)
from iris.pandas import as_series
for mod_name, cube in cubes.items():
fname = "{}.nc".format(mod_name)
fname = os.path.join(save_dir, fname)
print(fmt(" Downloading to file {} ".format(fname)))
try:
tree, lon, lat = make_tree(cube)
except CoordinateNotFoundError:
print("Cannot make KDTree for: {}".format(mod_name))
continue
# Get model series at observed locations.
raw_series = dict()
for obs in observations:
obs = obs._metadata
station = obs["station_code"]
try:
kw = dict(k=10, max_dist=0.08, min_var=0.01)
args = cube, tree, obs["lon"], obs["lat"]
try:
series, dist, idx = get_nearest_water(*args, **kw)
except RuntimeError as e:
print("Cannot download {!r}.\n{}".format(cube, e))
series = None
except ValueError:
status = "No Data"
print("[{}] {}".format(status, obs["station_name"]))
continue
if not series:
status = "Land "
else:
raw_series.update({station: series})
series = as_series(series)
status = "Water "
print("[{}] {}".format(status, obs["station_name"]))
if raw_series: # Save cube.
for station, cube in raw_series.items():
cube = add_station(cube, station)
try:
cube = iris.cube.CubeList(raw_series.values()).merge_cube()
except MergeError as e:
print(e)
ensure_timeseries(cube)
try:
iris.save(cube, fname)
except AttributeError:
# FIXME: we should patch the bad attribute instead of removing everything.
cube.attributes = {}
iris.save(cube, fname)
del cube
print("Finished processing [{}]".format(mod_name))
```
With the matched set of models and observations time-series it is relatively easy to compute skill score metrics on them. In cells [13] to [16] we apply both mean bias and root mean square errors to the time-series.
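The `mean_bias` and `rmse` functions come from `ioos_tools.skill_score`; for reference, they boil down to the familiar formulas below, written here as simplified stand-ins operating on two aligned pandas Series (not the library code itself).
```
# Simplified stand-ins for the skill metrics, for reference only.
import numpy as np

def mean_bias_sketch(obs, model):
    # Average signed difference between model and observations.
    return (model - obs).mean()

def rmse_sketch(obs, model):
    # Root-mean-square of the model-minus-observation differences.
    return np.sqrt(((model - obs) ** 2).mean())
```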
```
from ioos_tools.ioos import stations_keys
def rename_cols(df, config):
cols = stations_keys(config, key="station_name")
return df.rename(columns=cols)
from ioos_tools.ioos import load_ncs
from ioos_tools.skill_score import apply_skill, mean_bias
dfs = load_ncs(config)
df = apply_skill(dfs, mean_bias, remove_mean=False, filter_tides=False)
skill_score = dict(mean_bias=df.to_dict())
# Filter out stations with no valid comparison.
df.dropna(how="all", axis=1, inplace=True)
df = df.applymap("{:.2f}".format).replace("nan", "--")
from ioos_tools.skill_score import rmse
dfs = load_ncs(config)
df = apply_skill(dfs, rmse, remove_mean=True, filter_tides=False)
skill_score["rmse"] = df.to_dict()
# Filter out stations with no valid comparison.
df.dropna(how="all", axis=1, inplace=True)
df = df.applymap("{:.2f}".format).replace("nan", "--")
import pandas as pd
# Stringify keys.
for key in skill_score.keys():
skill_score[key] = {str(k): v for k, v in skill_score[key].items()}
mean_bias = pd.DataFrame.from_dict(skill_score["mean_bias"])
mean_bias = mean_bias.applymap("{:.2f}".format).replace("nan", "--")
skill_score = pd.DataFrame.from_dict(skill_score["rmse"])
skill_score = skill_score.applymap("{:.2f}".format).replace("nan", "--")
```
Last but not least we can assemble a GIS map, cells [17-23],
with the time-series plot for the observations and models,
and the corresponding skill scores.
```
import folium
from ioos_tools.ioos import get_coordinates
def make_map(bbox, **kw):
line = kw.pop("line", True)
zoom_start = kw.pop("zoom_start", 5)
lon = (bbox[0] + bbox[2]) / 2
lat = (bbox[1] + bbox[3]) / 2
m = folium.Map(
width="100%", height="100%", location=[lat, lon], zoom_start=zoom_start
)
if line:
p = folium.PolyLine(
get_coordinates(bbox), color="#FF0000", weight=2, opacity=0.9,
)
p.add_to(m)
return m
bbox = config["region"]["bbox"]
m = make_map(bbox, zoom_start=8, line=True, layers=True)
all_obs = stations_keys(config)
from glob import glob
from operator import itemgetter
import iris
from folium.plugins import MarkerCluster
iris.FUTURE.netcdf_promote = True
big_list = []
for fname in glob(os.path.join(save_dir, "*.nc")):
if "OBS_DATA" in fname:
continue
cube = iris.load_cube(fname)
model = os.path.split(fname)[1].split("-")[-1].split(".")[0]
lons = cube.coord(axis="X").points
lats = cube.coord(axis="Y").points
stations = cube.coord("station_code").points
models = [model] * lons.size
lista = zip(models, lons.tolist(), lats.tolist(), stations.tolist())
big_list.extend(lista)
big_list.sort(key=itemgetter(3))
df = pd.DataFrame(big_list, columns=["name", "lon", "lat", "station"])
df.set_index("station", drop=True, inplace=True)
groups = df.groupby(df.index)
locations, popups = [], []
for station, info in groups:
sta_name = all_obs[station]
for lat, lon, name in zip(info.lat, info.lon, info.name):
locations.append([lat, lon])
popups.append("[{}]: {}".format(name, sta_name))
MarkerCluster(locations=locations, popups=popups, name="Cluster").add_to(m)
titles = {
"coawst_4_use_best": "COAWST_4",
"pacioos_hycom-global": "HYCOM",
"NECOFS_GOM3_FORECAST": "NECOFS_GOM3",
"NECOFS_FVCOM_OCEAN_MASSBAY_FORECAST": "NECOFS_MassBay",
"NECOFS_FVCOM_OCEAN_BOSTON_FORECAST": "NECOFS_Boston",
"SECOORA_NCSU_CNAPS": "SECOORA/CNAPS",
"roms_2013_da_avg-ESPRESSO_Real-Time_v2_Averages_Best": "ESPRESSO Avg",
"roms_2013_da-ESPRESSO_Real-Time_v2_History_Best": "ESPRESSO Hist",
"OBS_DATA": "Observations",
}
from itertools import cycle
from bokeh.embed import file_html
from bokeh.models import HoverTool, Legend
from bokeh.palettes import Category20
from bokeh.plotting import figure
from bokeh.resources import CDN
from folium import IFrame
# Plot defaults.
colors = Category20[20]
colorcycler = cycle(colors)
tools = "pan,box_zoom,reset"
width, height = 750, 250
def make_plot(df, station):
p = figure(
toolbar_location="above",
x_axis_type="datetime",
width=width,
height=height,
tools=tools,
title=str(station),
)
leg = []
for column, series in df.iteritems():
series.dropna(inplace=True)
if not series.empty:
if "OBS_DATA" not in column:
bias = mean_bias[str(station)][column]
skill = skill_score[str(station)][column]
line_color = next(colorcycler)
kw = dict(alpha=0.65, line_color=line_color)
else:
skill = bias = "NA"
kw = dict(alpha=1, color="crimson")
line = p.line(
x=series.index,
y=series.values,
line_width=5,
line_cap="round",
line_join="round",
**kw
)
leg.append(("{}".format(titles.get(column, column)), [line]))
p.add_tools(
HoverTool(
tooltips=[
("Name", "{}".format(titles.get(column, column))),
("Bias", bias),
("Skill", skill),
],
renderers=[line],
)
)
legend = Legend(items=leg, location=(0, 60))
legend.click_policy = "mute"
p.add_layout(legend, "right")
p.yaxis[0].axis_label = "Water Height (m)"
p.xaxis[0].axis_label = "Date/time"
return p
def make_marker(p, station):
lons = stations_keys(config, key="lon")
lats = stations_keys(config, key="lat")
lon, lat = lons[station], lats[station]
html = file_html(p, CDN, station)
iframe = IFrame(html, width=width + 40, height=height + 80)
popup = folium.Popup(iframe, max_width=2650)
icon = folium.Icon(color="green", icon="stats")
marker = folium.Marker(location=[lat, lon], popup=popup, icon=icon)
return marker
dfs = load_ncs(config)
for station in dfs:
sta_name = all_obs[station]
df = dfs[station]
if df.empty:
continue
p = make_plot(df, station)
marker = make_marker(p, station)
marker.add_to(m)
folium.LayerControl().add_to(m)
def embed_map(m):
from IPython.display import HTML
m.save("index.html")
with open("index.html") as f:
html = f.read()
iframe = '<iframe srcdoc="{srcdoc}" style="width: 100%; height: 750px; border: none"></iframe>'
srcdoc = html.replace('"', '&quot;')
return HTML(iframe.format(srcdoc=srcdoc))
embed_map(m)
```
### Made by Kartikey Sharma (IIT Goa)
### GOAL
Predicting the costs of used cars given the data collected from various sources and distributed across various locations in India.
#### FEATURES:
<b>Name</b>: The brand and model of the car.<br>
<b>Location</b>: The location in which the car is being sold or is available for purchase.<br>
<b>Year</b>: The year or edition of the model.<br>
<b>Kilometers_Driven</b>: The total kilometres driven in the car by the previous owner(s) in KM.<br>
<b>Fuel_Type</b>: The type of fuel used by the car.<br>
<b>Transmission</b>: The type of transmission used by the car.<br>
<b>Owner_Type</b>: Whether the ownership is Firsthand, Second hand or other.<br>
<b>Mileage</b>: The standard mileage offered by the car company in kmpl or km/kg.<br>
<b>Engine</b>: The displacement volume of the engine in cc.<br>
<b>Power</b>: The maximum power of the engine in bhp.<br>
<b>Seats</b>: The number of seats in the car.<br>
<b>Price</b>: The price of the used car in INR Lakhs.<br>
### Process
Clean the data (missing values and categorical variables).
<br>Build the model and check the MAE.
<br>Try to improve the model.
<br>Brand matters too! I could select the brand name of the car and treat them as categorical data.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
import seaborn as sns
sns.set_style('darkgrid')
warnings.filterwarnings('ignore')
#Importing datasets
df_train = pd.read_excel("Data_Train.xlsx")
df_test = pd.read_excel("Data_Test.xlsx")
df_train.head()
df_train.shape
df_train.info()
#No of duplicated values in the train set
df_train.duplicated().sum()
#Seeing the number of duplicated values
df_test.duplicated().sum()
#Number of null values
df_train.isnull().sum()
df_train.nunique()
df_train['Name'] = df_train.Name.str.split().str.get(0)
df_test['Name'] = df_test.Name.str.split().str.get(0)
df_train.head()
# all rows have been modified
df_train['Name'].value_counts().sum()
```
### Missing Values
```
# Get names of columns with missing values
cols_with_missing = [col for col in df_train.columns
if df_train[col].isnull().any()]
print("Columns with missing values:")
print(cols_with_missing)
df_train['Seats'].fillna(df_train['Seats'].mean(),inplace=True)
df_test['Seats'].fillna(df_test['Seats'].mean(),inplace=True)
# for more accurate predictions
data = pd.concat([df_train,df_test], sort=False)
plt.figure(figsize=(20,5))
data['Mileage'].value_counts().head(100).plot.bar()
plt.show()
df_train['Mileage'] = df_train['Mileage'].fillna('17.0 kmpl')
df_test['Mileage'] = df_test['Mileage'].fillna('17.0 kmpl')
# 0 (zero) and null are clearly both missing values
df_train['Mileage'] = df_train['Mileage'].replace("0.0 kmpl", "17.0 kmpl")
df_test['Mileage'] = df_test['Mileage'].replace("0.0 kmpl", "17.0 kmpl")
plt.figure(figsize=(20,5))
data['Engine'].value_counts().head(100).plot.bar()
plt.show()
df_train['Engine'] = df_train['Engine'].fillna('1000 CC')
df_test['Engine'] = df_test['Engine'].fillna('1000 CC')
plt.figure(figsize=(20,5))
data['Power'].value_counts().head(100).plot.bar()
plt.show()
df_train['Power'] = df_train['Power'].fillna('74 bhp')
df_test['Power'] = df_test['Power'].fillna('74 bhp')
#null bhp created a problem during LabelEncoding
df_train['Power'] = df_train['Power'].replace("null bhp", "74 bhp")
df_test['Power'] = df_test['Power'].replace("null bhp", "74 bhp")
# Method to extract 'float' from 'object'
import re
def get_number(name):
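# Matches the leading run of digits/decimal points followed by a non-word character,
# e.g. '17.0 ' from '17.0 kmpl' or '1000 ' from '1000 CC'.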
title_search = re.search('([\d+\.+\d]+\W)', name)
if title_search:
return title_search.group(1)
return ""
df_train.isnull().sum()
df_train.info()
#Acquiring float values and isolating them
df_train['Mileage'] = df_train['Mileage'].apply(get_number).astype('float')
df_train['Engine'] = df_train['Engine'].apply(get_number).astype('int')
df_train['Power'] = df_train['Power'].apply(get_number).astype('float')
df_test['Mileage'] = df_test['Mileage'].apply(get_number).astype('float')
df_test['Engine'] = df_test['Engine'].apply(get_number).astype('int')
df_test['Power'] = df_test['Power'].apply(get_number).astype('float')
df_train.info()
df_test.info()
df_train.head()
```
### Categorical Variables
```
from sklearn.model_selection import train_test_split
y = np.log1p(df_train.Price) # Made a HUGE difference. MAE went down significantly
X = df_train.drop(['Price'],axis=1)
X_train, X_valid, y_train, y_valid = train_test_split(X,y,train_size=0.82,test_size=0.18,random_state=0)
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
X_train['Name'] = label_encoder.fit_transform(X_train['Name'])
X_valid['Name'] = label_encoder.transform(X_valid['Name'])
df_test['Name'] = label_encoder.fit_transform(df_test['Name'])
X_train['Location'] = label_encoder.fit_transform(X_train['Location'])
X_valid['Location'] = label_encoder.transform(X_valid['Location'])
df_test['Location'] = label_encoder.fit_transform(df_test['Location'])
X_train['Fuel_Type'] = label_encoder.fit_transform(X_train['Fuel_Type'])
X_valid['Fuel_Type'] = label_encoder.transform(X_valid['Fuel_Type'])
df_test['Fuel_Type'] = label_encoder.fit_transform(df_test['Fuel_Type'])
X_train['Transmission'] = label_encoder.fit_transform(X_train['Transmission'])
X_valid['Transmission'] = label_encoder.transform(X_valid['Transmission'])
df_test['Transmission'] = label_encoder.fit_transform(df_test['Transmission'])
X_train['Owner_Type'] = label_encoder.fit_transform(X_train['Owner_Type'])
X_valid['Owner_Type'] = label_encoder.transform(X_valid['Owner_Type'])
df_test['Owner_Type'] = label_encoder.fit_transform(df_test['Owner_Type'])
X_train.head()
X_train.info()
```
## Model
```
from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error,mean_squared_error,mean_squared_log_error
from math import sqrt
my_model = XGBRegressor(n_estimators=1000, learning_rate=0.05)
my_model.fit(X_train, y_train,
early_stopping_rounds=5,
eval_set=[(X_valid, y_valid)],
verbose=False)
predictions = my_model.predict(X_valid)
print("MAE: " + str(mean_absolute_error(predictions, y_valid)))
print("MSE: " + str(mean_squared_error(predictions, y_valid)))
print("MSLE: " + str(mean_squared_log_error(predictions, y_valid)))
print("RMSE: "+ str(sqrt(mean_squared_error(predictions, y_valid))))
```
## Predicting on Test
```
preds_test = my_model.predict(df_test)
preds_test = np.exp(preds_test)-1 #converting target to original state
print(preds_test)
# The Price is in the format xx.xx So let's round off and submit.
preds_test = preds_test.round(5)
print(preds_test)
output = pd.DataFrame({'Price': preds_test})
output.to_excel('Output.xlsx', index=False)
```
#### NOTE
Treating 'Mileage' and the others as categorical variables was a mistake, e.g. Mileage went up from 23.6 to around 338! Converting them to numbers fixed it.
LabelEncoder won't work if there are missing values.
ValueError: y contains previously unseen label 'Bentley'. Fixed it by increasing train_size in train_test_split.
Scaling all the columns made the model worse (as expected).
==============================================End of Project======================================================
```
# Code by Kartikey Sharma
# Veni.Vidi.Vici.
```
# `GiRaFFE_NRPy`: Solving the Induction Equation
## Author: Patrick Nelson
This notebook documents the function from the original `GiRaFFE` that calculates the flux for $A_i$ according to the method of Harten, Lax, van Leer, and Einfeldt (HLLE), assuming that we have calculated the values of the velocity and magnetic field on the cell faces according to the piecewise-parabolic method (PPM) of [Colella and Woodward (1984)](https://crd.lbl.gov/assets/pubs_presos/AMCS/ANAG/A141984.pdf), modified for the case of GRFFE.
**Notebook Status:** <font color=green><b> Validated </b></font>
**Validation Notes:** This code has been validated by showing that it converges to the exact answer at the expected order
### NRPy+ Source Code for this module:
* [GiRaFFE_NRPy/Afield_flux.py](../../edit/in_progress/GiRaFFE_NRPy/Afield_flux.py)
Our goal in this module is to write the code necessary to solve the induction equation
$$
\partial_t A_i = \underbrace{\epsilon_{ijk} v^j B^k}_{\rm Flux\ terms} - \underbrace{\partial_i \left(\alpha \Phi - \beta^j A_j \right)}_{\rm Gauge\ terms}.
$$
To properly handle the flux terms and avoid problems with shocks, we cannot simply take a cross product of the velocity and magnetic field at the cell centers. Instead, we must solve the Riemann problem at the cell faces using the reconstructed values of the velocity and magnetic field on either side of the cell faces. The reconstruction is done using PPM (see [here](Tutorial-GiRaFFE_NRPy-PPM.ipynb)); in this module, we will assume that that step has already been done. Metric quantities are assumed to have been interpolated to cell faces, as is done in [this](Tutorial-GiRaFFE_NRPy-Metric_Face_Values.ipynb) tutorial.
Tóth's [paper](https://www.sciencedirect.com/science/article/pii/S0021999100965197?via%3Dihub), Eqs. 30 and 31, are one of the first implementations of such a scheme. The original GiRaFFE used a 2D version of the algorithm from [Del Zanna, et al. (2002)](https://arxiv.org/abs/astro-ph/0210618); but since we are not using staggered grids, we can greatly simplify this algorithm with respect to the version used in the original `GiRaFFE`. Instead, we will adapt the implementations of the algorithm used in [Mewes, et al. (2020)](https://arxiv.org/abs/2002.06225) and [Giacomazzo, et al. (2011)](https://arxiv.org/abs/1009.2468), Eqs. 3-11.
We first write the flux contribution to the induction equation RHS as
$$
\partial_t A_i = -E_i,
$$
where the electric field $E_i$ is given in ideal MHD (of which FFE is a subset) as
$$
-E_i = \epsilon_{ijk} v^j B^k,
$$
where $v^i$ is the drift velocity, $B^i$ is the magnetic field, and $\epsilon_{ijk} = \sqrt{\gamma} [ijk]$ is the Levi-Civita tensor.
In Cartesian coordinates,
\begin{align}
-E_x &= [F^y(B^z)]_x = -[F^z(B^y)]_x \\
-E_y &= [F^z(B^x)]_y = -[F^x(B^z)]_y \\
-E_z &= [F^x(B^y)]_z = -[F^y(B^x)]_z, \\
\end{align}
where
$$
[F^i(B^j)]_k = \sqrt{\gamma} (v^i B^j - v^j B^i).
$$
To compute the actual contribution to the RHS in some direction $i$, we average the above listed field as calculated on the $+j$, $-j$, $+k$, and $-k$ faces. That is, at some point $(i,j,k)$ on the grid,
\begin{align}
-E_x(x_i,y_j,z_k) &= \frac{1}{4} \left( [F_{\rm HLL}^y(B^z)]_{x(i,j+1/2,k)}+[F_{\rm HLL}^y(B^z)]_{x(i,j-1/2,k)}-[F_{\rm HLL}^z(B^y)]_{x(i,j,k+1/2)}-[F_{\rm HLL}^z(B^y)]_{x(i,j,k-1/2)} \right) \\
-E_y(x_i,y_j,z_k) &= \frac{1}{4} \left( [F_{\rm HLL}^z(B^x)]_{y(i,j,k+1/2)}+[F_{\rm HLL}^z(B^x)]_{y(i,j,k-1/2)}-[F_{\rm HLL}^x(B^z)]_{y(i+1/2,j,k)}-[F_{\rm HLL}^x(B^z)]_{y(i-1/2,j,k)} \right) \\
-E_z(x_i,y_j,z_k) &= \frac{1}{4} \left( [F_{\rm HLL}^x(B^y)]_{z(i+1/2,j,k)}+[F_{\rm HLL}^x(B^y)]_{z(i-1/2,j,k)}-[F_{\rm HLL}^y(B^x)]_{z(i,j+1/2,k)}-[F_{\rm HLL}^y(B^x)]_{z(i,j-1/2,k)} \right). \\
\end{align}
Note the use of $F_{\rm HLL}$ here. This change signifies that the quantity output here is from the HLLE Riemann solver. Note also the indices on the fluxes. Values of $\pm 1/2$ indicate that these are computed on cell faces using the reconstructed values of $v^i$ and $B^i$ and the interpolated values of the metric gridfunctions. So,
$$
F_{\rm HLL}^i(B^j) = \frac{c_{\rm min} F_{\rm R}^i(B^j) + c_{\rm max} F_{\rm L}^i(B^j) - c_{\rm min} c_{\rm max} (B_{\rm R}^j-B_{\rm L}^j)}{c_{\rm min} + c_{\rm max}}.
$$
The speeds $c_\min$ and $c_\max$ are characteristic speeds at which waves can travel through the plasma. In GRFFE, the expressions defining them reduce to a function of only the metric quantities. $c_\min$ is the negative of the minimum of the speeds $c_-$ and $0$, and $c_\max$ is the maximum of the speeds $c_+$ and $0$. The speeds $c_\pm = \left. \left(-b \pm \sqrt{b^2-4ac}\right)\middle/ \left(2a\right) \right.$ must be calculated on both the left and right faces, where
$$a = 1/\alpha^2,$$
$$b = 2 \beta^i / \alpha^2,$$
and $$c = \left(\beta^i\right)^2/\alpha^2 - \gamma^{ii}.$$
An outline of a general finite-volume method is as follows, with the current step in bold:
1. The Reconstruction Step - Piecewise Parabolic Method
1. Within each cell, fit to a function that conserves the volume in that cell using information from the neighboring cells
* For PPM, we will naturally use parabolas
1. Use that fit to define the state at the left and right interface of each cell
1. Apply a slope limiter to mitigate Gibbs phenomenon
1. Interpolate the value of the metric gridfunctions on the cell faces
1. **Solving the Riemann Problem - Harten, Lax, van Leer, and Einfeldt (HLLE) (This notebook, $E_i$ only)**
1. **Use the left and right reconstructed states to calculate the unique state at boundary**
We will assume in this notebook that the reconstructed velocities and magnetic fields are available on cell faces as input. We will also assume that the metric gridfunctions have been interpolated to the cell faces.
Solving the Riemann problem, then, consists of two substeps: First, we compute the flux through each face of the cell. Then, we add the average of these fluxes to the right-hand side of the evolution equation for the vector potential.
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#prelim): Preliminaries
1. [Step 2](#a_i_flux): Computing the Magnetic Flux
1. [Step 2.a](#hydro_speed): GRFFE characteristic wave speeds
1. [Step 2.b](#fluxes): Compute the HLLE fluxes
1. [Step 3](#code_validation): Code Validation against `GiRaFFE_NRPy.Afield_flux` NRPy+ Module
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='prelim'></a>
# Step 1: Preliminaries \[Back to [top](#toc)\]
$$\label{prelim}$$
We begin by importing the NRPy+ core functionality. We also import the Levi-Civita symbol, the GRHD module, and the GRFFE module.
```
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os, sys # Standard Python modules for multiplatform OS-level functions
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
from outputC import outCfunction, outputC # NRPy+: Core C code output module
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
thismodule = "GiRaFFE_NRPy-Afield_flux"
import GRHD.equations as GRHD
# import GRFFE.equations as GRFFE
```
<a id='a_i_flux'></a>
# Step 2: Computing the Magnetic Flux \[Back to [top](#toc)\]
$$\label{a_i_flux}$$
<a id='hydro_speed'></a>
## Step 2.a: GRFFE characteristic wave speeds \[Back to [top](#toc)\]
$$\label{hydro_speed}$$
Next, we will find the speeds at which the hydrodynamics waves propagate. We start from the speed of light (since FFE deals with very diffuse plasmas), which is $c=1.0$ in our chosen units. We then find the speeds $c_+$ and $c_-$ on each face with the function `find_cp_cm`; then, we find minimum and maximum speeds possible from among those.
Below is the source code for `find_cp_cm`, edited to work with the NRPy+ version of GiRaFFE. One edit we need to make in particular is to the term `psim4*gupii` in the definition of `c`; that was written assuming the use of the conformal metric $\tilde{g}^{ii}$. Since we are not using that here, and are instead using the ADM metric, we should not multiply by $\psi^{-4}$.
```c
static inline void find_cp_cm(REAL &cplus,REAL &cminus,const REAL v02,const REAL u0,
const REAL vi,const REAL lapse,const REAL shifti,
const REAL gammadet,const REAL gupii) {
const REAL u0_SQUARED=u0*u0;
const REAL ONE_OVER_LAPSE_SQUARED = 1.0/(lapse*lapse);
// sqrtgamma = psi6 -> psim4 = gammadet^(-1.0/3.0)
const REAL psim4 = pow(gammadet,-1.0/3.0);
//Find cplus, cminus:
const REAL a = u0_SQUARED * (1.0-v02) + v02*ONE_OVER_LAPSE_SQUARED;
const REAL b = 2.0* ( shifti*ONE_OVER_LAPSE_SQUARED * v02 - u0_SQUARED * vi * (1.0-v02) );
const REAL c = u0_SQUARED*vi*vi * (1.0-v02) - v02 * ( gupii -
shifti*shifti*ONE_OVER_LAPSE_SQUARED);
REAL detm = b*b - 4.0*a*c;
//ORIGINAL LINE OF CODE:
//if(detm < 0.0) detm = 0.0;
//New line of code (without the if() statement) has the same effect:
detm = sqrt(0.5*(detm + fabs(detm))); /* Based on very nice suggestion from Roland Haas */
cplus = 0.5*(detm-b)/a;
cminus = -0.5*(detm+b)/a;
if (cplus < cminus) {
const REAL cp = cminus;
cminus = cplus;
cplus = cp;
}
}
```
Comments documenting this have been excised for brevity, but are reproduced in $\LaTeX$ [below](#derive_speed).
We could use this code directly, but there's substantial improvement we can make by changing the code into a NRPyfied form. Note the `if` statement; NRPy+ does not know how to handle these, so we must eliminate it if we want to leverage NRPy+'s full power. (Calls to `fabs()` are also cheaper than `if` statements.) This can be done if we rewrite this, taking inspiration from the other eliminated `if` statement documented in the above code block:
```c
cp = 0.5*(detm-b)/a;
cm = -0.5*(detm+b)/a;
cplus = 0.5*(cp+cm+fabs(cp-cm));
cminus = 0.5*(cp+cm-fabs(cp-cm));
```
This can be simplified further, by substituting `cp` and `cm` into the below equations and eliminating terms as appropriate. First note that `cp+cm = -b/a` and that `cp-cm = detm/a`. Thus,
```c
cplus = 0.5*(-b/a + fabs(detm/a));
cminus = 0.5*(-b/a - fabs(detm/a));
```
This fulfills the original purpose of the `if` statement in the original code because we have guaranteed that $c_+ \geq c_-$.
This leaves us with an expression that can be much more easily NRPyfied. So, we will rewrite the following in NRPy+, making only minimal changes to be proper Python. However, it turns out that we can make this even simpler. In GRFFE, $v_0^2$ is guaranteed to be exactly one. In GRMHD, this speed was calculated as $$v_{0}^{2} = v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right),$$ where the Alfvén speed is given by $$v_{\rm A}^{2} = \frac{b^{2}}{\rho_{b}h + b^{2}}.$$ So, we can see that when the density $\rho_b$ goes to zero, $v_{0}^{2} = v_{\rm A}^{2} = 1$. Then
\begin{align}
a &= (u^0)^2 (1-v_0^2) + v_0^2/\alpha^2 \\
&= 1/\alpha^2 \\
b &= 2 \left(\beta^i v_0^2 / \alpha^2 - (u^0)^2 v^i (1-v_0^2)\right) \\
&= 2 \beta^i / \alpha^2 \\
c &= (u^0)^2 (v^i)^2 (1-v_0^2) - v_0^2 \left(\gamma^{ii} - (\beta^i)^2/\alpha^2\right) \\
&= -\gamma^{ii} + (\beta^i)^2/\alpha^2,
\end{align}
are simplifications that should save us some time; we can see that $a \geq 0$ is guaranteed. Note that we also force `detm` to be positive. Thus, `detm/a` is guaranteed to be positive itself, rendering the calls to `nrpyAbs()` superfluous. Furthermore, we eliminate any dependence on the Valencia 3-velocity and the time component of the four-velocity, $u^0$. This leaves us free to solve the quadratic in the familiar way: $$c_\pm = \frac{-b \pm \sqrt{b^2-4ac}}{2a}.$$
```
# We'll write this as a function so that we can calculate the expressions on-demand for any choice of i
def find_cp_cm(lapse,shifti,gammaUUii):
# Inputs: u0,vi,lapse,shift,gammadet,gupii
# Outputs: cplus,cminus
# a = 1/(alpha^2)
a = sp.sympify(1)/(lapse*lapse)
# b = 2 beta^i / alpha^2
b = sp.sympify(2) * shifti /(lapse*lapse)
# c = -g^{ii} + (beta^i)^2 / alpha^2
c = - gammaUUii + shifti*shifti/(lapse*lapse)
# Now, we are free to solve the quadratic equation as usual. We take care to avoid passing a
# negative value to the sqrt function.
detm = b*b - sp.sympify(4)*a*c
import Min_Max_and_Piecewise_Expressions as noif
detm = sp.sqrt(noif.max_noif(sp.sympify(0),detm))
global cplus,cminus
cplus = sp.Rational(1,2)*(-b/a + detm/a)
cminus = sp.Rational(1,2)*(-b/a - detm/a)
```
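As a quick illustrative check (assuming NRPy+ and its `Min_Max_and_Piecewise_Expressions` module are importable, as elsewhere in this notebook), substituting the flat-spacetime values $\alpha=1$, $\beta^i=0$, $\gamma^{ii}=1$ into `find_cp_cm()` gives $a=1$, $b=0$, $c=-1$, and hence $c_\pm=\pm1$:
```python
# Illustrative check only: flat-spacetime limit of the characteristic speeds.
flat_lapse   = sp.sympify(1)  # alpha = 1
flat_shift   = sp.sympify(0)  # beta^i = 0
flat_gammaUU = sp.sympify(1)  # gamma^{ii} = 1
find_cp_cm(flat_lapse, flat_shift, flat_gammaUU)
print(sp.simplify(cplus), sp.simplify(cminus))  # expect: 1 -1
```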
In flat spacetime, where $\alpha=1$, $\beta^i=0$, and $\gamma^{ij} = \delta^{ij}$, $c_+ > 0$ and $c_- < 0$. For the HLLE solver, we will need both `cmax` and `cmin` to be positive; we also want to choose the speed that is larger in magnitude because overestimating the characteristic speeds will help damp unwanted oscillations. (However, in GRFFE, we only get one $c_+$ and one $c_-$, so we only need to fix the signs here.) Hence, the following function.
We will now write a function in NRPy+ similar to the one used in the old `GiRaFFE`, allowing us to generate the expressions with less need to copy-and-paste code; the key difference is that this one will be in Python, and generate optimized C code integrated into the rest of the operations. Notice that since we eliminated the dependence on velocities, none of the input quantities are different on either side of the face. So, this function won't really do much besides guarantee that `cmax` and `cmin` are positive, but we'll leave the machinery here since it is likely to be a useful guide to somebody who wants to do something similar. The only modifications we'll make are those necessary to eliminate calls to `fabs(0)` in the C code. We use the same technique as above to replace the `if` statements inherent to the `MAX()` and `MIN()` functions.
```
# We'll write this as a function, and call it within HLLE_solver, below.
def find_cmax_cmin(field_comp,gamma_faceDD,beta_faceU,alpha_face):
# Inputs: flux direction field_comp, Inverse metric gamma_faceUU, shift beta_faceU,
# lapse alpha_face, metric determinant gammadet_face
# Outputs: maximum and minimum characteristic speeds cmax and cmin
# First, we need to find the characteristic speeds on each face
gamma_faceUU,unusedgammaDET = ixp.generic_matrix_inverter3x3(gamma_faceDD)
# Original needed for GRMHD
# find_cp_cm(alpha_face,beta_faceU[field_comp],gamma_faceUU[field_comp][field_comp])
# cpr = cplus
# cmr = cminus
# find_cp_cm(alpha_face,beta_faceU[field_comp],gamma_faceUU[field_comp][field_comp])
# cpl = cplus
# cml = cminus
find_cp_cm(alpha_face,beta_faceU[field_comp],gamma_faceUU[field_comp][field_comp])
cp = cplus
cm = cminus
# The following algorithms have been verified with random floats:
global cmax,cmin
# Now, we need to set cmax to the larger of cpr,cpl, and 0
import Min_Max_and_Piecewise_Expressions as noif
cmax = noif.max_noif(cp,sp.sympify(0))
# And then, set cmin to the smaller of cmr,cml, and 0
cmin = -noif.min_noif(cm,sp.sympify(0))
```
<a id='fluxes'></a>
## Step 2.b: Compute the HLLE fluxes \[Back to [top](#toc)\]
$$\label{fluxes}$$
Here, we calculate the flux and state vectors for the electric field. The flux vector is here given as
$$
[F^i(B^j)]_k = \sqrt{\gamma} (v^i B^j - v^j B^i).
$$
Here, $v^i$ is the drift velocity and $B^i$ is the magnetic field.
This can be easily handled for an input flux direction $i$ with
$$
[F^j(B^k)]_i = \epsilon_{ijk} v^j B^k,
$$
where $\epsilon_{ijk} = \sqrt{\gamma} [ijk]$ and $[ijk]$ is the Levi-Civita symbol.
The state vector is simply the magnetic field $B^j$.
```
def calculate_flux_and_state_for_Induction(field_comp,flux_dirn, gammaDD,betaU,alpha,ValenciavU,BU):
# Define Levi-Civita symbol
def define_LeviCivitaSymbol_rank3(DIM=-1):
if DIM == -1:
DIM = par.parval_from_str("DIM")
LeviCivitaSymbol = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
# From https://codegolf.stackexchange.com/questions/160359/levi-civita-symbol :
LeviCivitaSymbol[i][j][k] = (i - j) * (j - k) * (k - i) * sp.Rational(1,2)
return LeviCivitaSymbol
GRHD.compute_sqrtgammaDET(gammaDD)
# Here, we import the Levi-Civita tensor and compute the tensor with lower indices
LeviCivitaDDD = define_LeviCivitaSymbol_rank3()
for i in range(3):
for j in range(3):
for k in range(3):
LeviCivitaDDD[i][j][k] *= GRHD.sqrtgammaDET
global U,F
# Flux F = \epsilon_{ijk} v^j B^k
F = sp.sympify(0)
for j in range(3):
for k in range(3):
F += LeviCivitaDDD[field_comp][j][k] * (alpha*ValenciavU[j]-betaU[j]) * BU[k]
# U = B^i
U = BU[flux_dirn]
```
Now, we write a standard HLLE solver based on eq. 3.15 in [the HLLE paper](https://epubs.siam.org/doi/pdf/10.1137/1025002),
$$
F^{\rm HLL} = \frac{c_{\rm min} F_{\rm R} + c_{\rm max} F_{\rm L} - c_{\rm min} c_{\rm max} (U_{\rm R}-U_{\rm L})}{c_{\rm min} + c_{\rm max}}
$$
```
def HLLE_solver(cmax, cmin, Fr, Fl, Ur, Ul):
# This solves the Riemann problem for the flux of E_i in one direction
# F^HLL = (c_\min f_R + c_\max f_L - c_\min c_\max ( st_j_r - st_j_l )) / (c_\min + c_\max)
return (cmin*Fr + cmax*Fl - cmin*cmax*(Ur-Ul) )/(cmax + cmin)
```
Here, we will use the function we just wrote to calculate the flux through a face. We will pass the reconstructed Valencia 3-velocity and magnetic field on either side of an interface to this function (designated as the "left" and "right" sides) along with the value of the 3-metric, shift vector, and lapse function on the interface. The parameter `flux_dirn` specifies which face through which we are calculating the flux. However, unlike when we used this method to calculate the flux term, the RHS of each component of $A_i$ does not depend on all three of the flux directions. Instead, the flux of one component of the $E_i$ field depends on flux through the faces in the other two directions. This will be handled when we generate the C function, as demonstrated in the example code after this next function.
Note that we allow the user to declare their own gridfunctions if they wish, and default to declaring basic symbols if they are not provided. The default names are chosen to imply interpolation of the metric gridfunctions and reconstruction of the primitives.
```
def calculate_E_i_flux(flux_dirn,alpha_face=None,gamma_faceDD=None,beta_faceU=None,\
Valenciav_rU=None,B_rU=None,Valenciav_lU=None,B_lU=None):
global E_fluxD
E_fluxD = ixp.zerorank1()
for field_comp in range(3):
find_cmax_cmin(field_comp,gamma_faceDD,beta_faceU,alpha_face)
calculate_flux_and_state_for_Induction(field_comp,flux_dirn, gamma_faceDD,beta_faceU,alpha_face,\
Valenciav_rU,B_rU)
Fr = F
Ur = U
calculate_flux_and_state_for_Induction(field_comp,flux_dirn, gamma_faceDD,beta_faceU,alpha_face,\
Valenciav_lU,B_lU)
Fl = F
Ul = U
E_fluxD[field_comp] += HLLE_solver(cmax, cmin, Fr, Fl, Ur, Ul)
```
Below, we will write some example code to use the above functions to generate C code for `GiRaFFE_NRPy`. We need to write our own memory reads and writes because we need to add contributions from *both* faces in a given direction, which is expressed in the code as adding contributions from adjacent gridpoints to the RHS, which is not something `FD_outputC` can handle. The `.replace()` function calls adapt these reads and writes to the different directions. Note that, for reconstructions in a given direction, the fluxes are only added to the other two components, as can be seen in the equations we are implementing.
\begin{align}
-E_x(x_i,y_j,z_k) &= \frac{1}{4} \left( [F_{\rm HLL}^y(B^z)]_{x(i,j+1/2,k)}+[F_{\rm HLL}^y(B^z)]_{x(i,j-1/2,k)}-[F_{\rm HLL}^z(B^y)]_{x(i,j,k+1/2)}-[F_{\rm HLL}^z(B^y)]_{x(i,j,k-1/2)} \right) \\
-E_y(x_i,y_j,z_k) &= \frac{1}{4} \left( [F_{\rm HLL}^z(B^x)]_{y(i,j,k+1/2)}+[F_{\rm HLL}^z(B^x)]_{y(i,j,k-1/2)}-[F_{\rm HLL}^x(B^z)]_{y(i+1/2,j,k)}-[F_{\rm HLL}^x(B^z)]_{y(i-1/2,j,k)} \right) \\
-E_z(x_i,y_j,z_k) &= \frac{1}{4} \left( [F_{\rm HLL}^x(B^y)]_{z(i+1/2,j,k)}+[F_{\rm HLL}^x(B^y)]_{z(i-1/2,j,k)}-[F_{\rm HLL}^y(B^x)]_{z(i,j+1/2,k)}-[F_{\rm HLL}^y(B^x)]_{z(i,j-1/2,k)} \right). \\
\end{align}
From this, we can see that when, for instance, we reconstruct and interpolate in the $x$-direction, we must add only to the $y$- and $z$-components of the electric field.
Recall that when we reconstructed the velocity and magnetic field, we reconstructed onto the $i-1/2$ face, so the data at $i+1/2$ is stored at index $i+1$.
```
def generate_Afield_flux_function_files(out_dir,subdir,alpha_face,gamma_faceDD,beta_faceU,\
Valenciav_rU,B_rU,Valenciav_lU,B_lU,inputs_provided=True):
if not inputs_provided:
# declare all variables
alpha_face = sp.symbols(alpha_face)
beta_faceU = ixp.declarerank1("beta_faceU")
gamma_faceDD = ixp.declarerank2("gamma_faceDD","sym01")
Valenciav_rU = ixp.declarerank1("Valenciav_rU")
B_rU = ixp.declarerank1("B_rU")
Valenciav_lU = ixp.declarerank1("Valenciav_lU")
B_lU = ixp.declarerank1("B_lU")
Memory_Read = """const double alpha_face = auxevol_gfs[IDX4S(ALPHA_FACEGF, i0,i1,i2)];
const double gamma_faceDD00 = auxevol_gfs[IDX4S(GAMMA_FACEDD00GF, i0,i1,i2)];
const double gamma_faceDD01 = auxevol_gfs[IDX4S(GAMMA_FACEDD01GF, i0,i1,i2)];
const double gamma_faceDD02 = auxevol_gfs[IDX4S(GAMMA_FACEDD02GF, i0,i1,i2)];
const double gamma_faceDD11 = auxevol_gfs[IDX4S(GAMMA_FACEDD11GF, i0,i1,i2)];
const double gamma_faceDD12 = auxevol_gfs[IDX4S(GAMMA_FACEDD12GF, i0,i1,i2)];
const double gamma_faceDD22 = auxevol_gfs[IDX4S(GAMMA_FACEDD22GF, i0,i1,i2)];
const double beta_faceU0 = auxevol_gfs[IDX4S(BETA_FACEU0GF, i0,i1,i2)];
const double beta_faceU1 = auxevol_gfs[IDX4S(BETA_FACEU1GF, i0,i1,i2)];
const double beta_faceU2 = auxevol_gfs[IDX4S(BETA_FACEU2GF, i0,i1,i2)];
const double Valenciav_rU0 = auxevol_gfs[IDX4S(VALENCIAV_RU0GF, i0,i1,i2)];
const double Valenciav_rU1 = auxevol_gfs[IDX4S(VALENCIAV_RU1GF, i0,i1,i2)];
const double Valenciav_rU2 = auxevol_gfs[IDX4S(VALENCIAV_RU2GF, i0,i1,i2)];
const double B_rU0 = auxevol_gfs[IDX4S(B_RU0GF, i0,i1,i2)];
const double B_rU1 = auxevol_gfs[IDX4S(B_RU1GF, i0,i1,i2)];
const double B_rU2 = auxevol_gfs[IDX4S(B_RU2GF, i0,i1,i2)];
const double Valenciav_lU0 = auxevol_gfs[IDX4S(VALENCIAV_LU0GF, i0,i1,i2)];
const double Valenciav_lU1 = auxevol_gfs[IDX4S(VALENCIAV_LU1GF, i0,i1,i2)];
const double Valenciav_lU2 = auxevol_gfs[IDX4S(VALENCIAV_LU2GF, i0,i1,i2)];
const double B_lU0 = auxevol_gfs[IDX4S(B_LU0GF, i0,i1,i2)];
const double B_lU1 = auxevol_gfs[IDX4S(B_LU1GF, i0,i1,i2)];
const double B_lU2 = auxevol_gfs[IDX4S(B_LU2GF, i0,i1,i2)];
REAL A_rhsD0 = 0; REAL A_rhsD1 = 0; REAL A_rhsD2 = 0;
"""
Memory_Write = """rhs_gfs[IDX4S(AD0GF,i0,i1,i2)] += A_rhsD0;
rhs_gfs[IDX4S(AD1GF,i0,i1,i2)] += A_rhsD1;
rhs_gfs[IDX4S(AD2GF,i0,i1,i2)] += A_rhsD2;
"""
indices = ["i0","i1","i2"]
indicesp1 = ["i0+1","i1+1","i2+1"]
for flux_dirn in range(3):
calculate_E_i_flux(flux_dirn,alpha_face,gamma_faceDD,beta_faceU,\
Valenciav_rU,B_rU,Valenciav_lU,B_lU)
E_field_to_print = [\
sp.Rational(1,4)*E_fluxD[(flux_dirn+1)%3],
sp.Rational(1,4)*E_fluxD[(flux_dirn+2)%3],
]
E_field_names = [\
"A_rhsD"+str((flux_dirn+1)%3),
"A_rhsD"+str((flux_dirn+2)%3),
]
desc = "Calculate the electric flux on the left face in direction " + str(flux_dirn) + "."
name = "calculate_E_field_D" + str(flux_dirn) + "_right"
outCfunction(
outfile = os.path.join(out_dir,subdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *params,const REAL *auxevol_gfs,REAL *rhs_gfs",
body = Memory_Read \
+outputC(E_field_to_print,E_field_names,"returnstring",params="outCverbose=False").replace("IDX4","IDX4S")\
+Memory_Write,
loopopts ="InteriorPoints",
rel_path_for_Cparams=os.path.join("../"))
desc = "Calculate the electric flux on the left face in direction " + str(flux_dirn) + "."
name = "calculate_E_field_D" + str(flux_dirn) + "_left"
outCfunction(
outfile = os.path.join(out_dir,subdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *params,const REAL *auxevol_gfs,REAL *rhs_gfs",
body = Memory_Read.replace(indices[flux_dirn],indicesp1[flux_dirn]) \
+outputC(E_field_to_print,E_field_names,"returnstring",params="outCverbose=False").replace("IDX4","IDX4S")\
+Memory_Write,
loopopts ="InteriorPoints",
rel_path_for_Cparams=os.path.join("../"))
```
<a id='code_validation'></a>
# Step 3: Code Validation against `GiRaFFE_NRPy.Induction_Equation` NRPy+ Module \[Back to [top](#toc)\]
$$\label{code_validation}$$
Here, as a code validation check, we verify agreement in the SymPy expressions for the $\texttt{GiRaFFE}$ evolution equations and auxiliary quantities we intend to use between
1. this tutorial and
2. the NRPy+ [GiRaFFE_NRPy.Induction_Equation](../../edit/in_progress/GiRaFFE_NRPy/Induction_Equation.py) module.
Below are the gridfunction registrations we will need for testing. We will pass these to the above functions to self-validate the module that corresponds with this tutorial.
```
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="C2P_P2C."):
if str(expr1-expr2)!="0":
print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2 is None:
return basename+"["+str(idx1)+"]"
if idx3 is None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
# These are the standard gridfunctions we've used before.
#ValenciavU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","ValenciavU",DIM=3)
#gammaDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","gammaDD","sym01")
#betaU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","betaU")
#alpha = gri.register_gridfunctions("AUXEVOL",["alpha"])
#AD = ixp.register_gridfunctions_for_single_rank1("EVOL","AD",DIM=3)
#BU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","BU",DIM=3)
# We will pass values of the gridfunction on the cell faces into the function. This requires us
# to declare them as C parameters in NRPy+. We will denote this with the _face infix/suffix.
alpha_face = gri.register_gridfunctions("AUXEVOL","alpha_face")
gamma_faceDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","gamma_faceDD","sym01")
beta_faceU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","beta_faceU")
# We'll need some more gridfunctions, now, to represent the reconstructions of BU and ValenciavU
# on the right and left faces
Valenciav_rU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","Valenciav_rU",DIM=3)
B_rU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","B_rU",DIM=3)
Valenciav_lU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","Valenciav_lU",DIM=3)
B_lU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","B_lU",DIM=3)
import GiRaFFE_NRPy.Afield_flux as Af
expr_list = []
exprcheck_list = []
namecheck_list = []
for flux_dirn in range(3):
calculate_E_i_flux(flux_dirn,alpha_face,gamma_faceDD,beta_faceU,\
Valenciav_rU,B_rU,Valenciav_lU,B_lU)
Af.calculate_E_i_flux(flux_dirn,alpha_face,gamma_faceDD,beta_faceU,\
Valenciav_rU,B_rU,Valenciav_lU,B_lU)
namecheck_list.extend([gfnm("E_fluxD",flux_dirn)])
exprcheck_list.extend([Af.E_fluxD[flux_dirn]])
expr_list.extend([E_fluxD[flux_dirn]])
for mom_comp in range(len(expr_list)):
comp_func(expr_list[mom_comp],exprcheck_list[mom_comp],namecheck_list[mom_comp])
import sys
if all_passed:
print("ALL TESTS PASSED!")
else:
print("ERROR: AT LEAST ONE TEST DID NOT PASS")
sys.exit(1)
```
We will also check the output C code to make sure it matches what is produced by the python module.
```
import difflib
import sys
subdir = os.path.join("RHSs")
out_dir = os.path.join("GiRaFFE_standalone_Ccodes")
cmd.mkdir(out_dir)
cmd.mkdir(os.path.join(out_dir,subdir))
valdir = os.path.join("GiRaFFE_Ccodes_validation")
cmd.mkdir(valdir)
cmd.mkdir(os.path.join(valdir,subdir))
generate_Afield_flux_function_files(out_dir,subdir,alpha_face,gamma_faceDD,beta_faceU,\
Valenciav_rU,B_rU,Valenciav_lU,B_lU,inputs_provided=True)
Af.generate_Afield_flux_function_files(valdir,subdir,alpha_face,gamma_faceDD,beta_faceU,\
Valenciav_rU,B_rU,Valenciav_lU,B_lU,inputs_provided=True)
print("Printing difference between original C code and this code...")
# Open the files to compare
files = ["RHSs/calculate_E_field_D0_right.h",
"RHSs/calculate_E_field_D0_left.h",
"RHSs/calculate_E_field_D1_right.h",
"RHSs/calculate_E_field_D1_left.h",
"RHSs/calculate_E_field_D2_right.h",
"RHSs/calculate_E_field_D2_left.h"]
for file in files:
print("Checking file " + file)
with open(os.path.join(valdir,file)) as file1, open(os.path.join(out_dir,file)) as file2:
# Read the lines of each file
file1_lines = file1.readlines()
file2_lines = file2.readlines()
num_diffs = 0
for line in difflib.unified_diff(file1_lines, file2_lines, fromfile=os.path.join(valdir,file), tofile=os.path.join(out_dir,file)):
sys.stdout.writelines(line)
num_diffs = num_diffs + 1
if num_diffs == 0:
print("No difference. TEST PASSED!")
else:
print("ERROR: Disagreement found with .py file. See differences above.")
```
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-GiRaFFE_NRPy-Induction_Equation.pdf](Tutorial-GiRaFFE_NRPy-Induction_Equation.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GiRaFFE_NRPy-Afield_flux")
```
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
```
### Import Data Set & Normalize
---
We import the famous MNIST dataset, a collection of 28x28 gray-scale handwritten digits. We load the data, split it into training and test sets, and then normalize it. The original images have pixel values between 0 and 255; after normalization the values lie between 0 and 1.
```
import keras
from keras.datasets import mnist # 28x28 image data written digits 0-9
from keras.utils import normalize
#print(keras.__version__)
#split train and test dataset
(x_train, y_train), (x_test,y_test) = mnist.load_data()
#normalize data
x_train = normalize(x_train, axis=1)
x_test = normalize(x_test, axis=1)
import matplotlib.pyplot as plt
plt.imshow(x_train[0], cmap=plt.cm.binary)
plt.show()
#print(x_train[0])
```
## Specify Architecture:
---
We specify our model architecture using commonly used densely-connected (fully-connected) layers. For the output layer we use the **softmax** activation, which turns the raw outputs into a probability distribution over the ten classes.
```
from keras.models import Sequential
from keras.layers import Flatten
from keras.layers import Dense
# created model
model = Sequential()
# flatten layer so it is operable by this layer
model.add(Flatten())
# regular densely-connected NN layer.
#layer 1, 128 node
model.add(Dense(128, activation='relu'))
#layer 2, 128 node
model.add(Dense(128, activation='relu'))
#output layer, since it is probability distribution we will use 'softmax'
model.add(Dense(10, activation='softmax'))
```
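For intuition (this snippet is illustrative and not part of the model definition), softmax maps a vector of raw scores to non-negative values that sum to 1:
```python
# Illustrative only: softmax turns raw scores into a probability distribution.
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs, probs.sum())  # probabilities are non-negative and sum to 1.0
```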
### Compile
---
We compile the model and define an early-stopping callback: when the validation loss stops improving (here with a patience of 2 epochs), training is stopped early.
```
from keras.callbacks import EarlyStopping
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics = ['accuracy'])
#stop when see model not improving
early_stopping_monitor = EarlyStopping(monitor='val_loss', patience=2)
```
### Fit
---
Fit the model on the training data for up to 10 epochs.
```
model.fit(x_train, y_train, epochs=10, callbacks=[early_stopping_monitor], validation_data=(x_test, y_test))
```
### Evaluate
---
Evaluate the accuracy of the model.
```
val_loss, val_acc = model.evaluate(x_test,y_test)
print(val_loss, val_acc)
```
### Save
---
Save the model and show summary.
```
model.save('mnist_digit.h5')
model.summary()
```
### Load
----
Load the model.
```
from keras.models import load_model
new_model = load_model('mnist_digit.h5')
```
### Predict
----
Our model predicts a probability distribution over the classes, which we convert to a class label by taking the argmax.
```
predict = new_model.predict([x_test])
#return the probability
print(predict)
print(predict[1].argmax(axis=-1))
plt.imshow(x_test[1])
plt.show()
```
(docs-contribute)=
# Contributing to the Ray Documentation
There are many ways to contribute to the Ray documentation, and we're always looking for new contributors.
Even if you just want to fix a typo or expand on a section, please feel free to do so!
This document walks you through everything you need to do to get started.
## Building the Ray documentation
If you want to contribute to the Ray documentation, you'll need a way to build it.
You don't have to build Ray itself, which is a bit more involved.
Just clone the Ray repository and change into the `ray/doc` directory.
```shell
git clone git@github.com:ray-project/ray.git
cd ray/doc
```
To install the documentation dependencies, run the following command:
```shell
pip install -r requirements-doc.txt
```
Additionally, it's best if you install the dependencies for our linters with
```shell
pip install -r ../python/requirements_linters.txt
```
so that you can make sure your changes comply with our style guide.
Building the documentation is done by running the following command:
```shell
make html
```
which will build the documentation into the `_build` directory.
After the build finishes, you can simply open the `_build/html/index.html` file in your browser.
It's considered good practice to check the output of your build to make sure everything is working as expected.
Before committing any changes, make sure you run the
[linter](https://docs.ray.io/en/latest/ray-contribute/getting-involved.html#lint-and-formatting)
with `../scripts/format.sh` from the `doc` folder,
to make sure your changes are formatted correctly.
## The basics of our build system
The Ray documentation is built using the [`sphinx`](https://www.sphinx-doc.org/) build system.
We're using the [Sphinx Book Theme](https://github.com/executablebooks/sphinx-book-theme) from the
[executable books project](https://github.com/executablebooks).
That means that you can write Ray documentation in either Sphinx's native
[reStructuredText (rST)](https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html) or in
[Markedly Structured Text (MyST)](https://myst-parser.readthedocs.io/en/latest/).
The two formats can be converted to each other, so the choice is up to you.
Having said that, it's important to know that MyST is
[common markdown compliant](https://myst-parser.readthedocs.io/en/latest/syntax/reference.html#commonmark-block-tokens).
If you intend to add a new document, we recommend starting from an `.md` file.
The Ray documentation also fully supports executable formats like [Jupyter Notebooks](https://jupyter.org/).
Many of our examples are notebooks with [MyST markdown cells](https://myst-nb.readthedocs.io/en/latest/index.html).
In fact, this very document you're reading _is_ a notebook.
You can check this for yourself by either downloading the `.ipynb` file,
or directly launching this notebook into either Binder or Google Colab in the top navigation bar.
## What to contribute?
If you take Ray Tune as an example, you can see that our documentation is made up of several types of documentation,
all of which you can contribute to:
- [a project landing page](https://docs.ray.io/en/latest/tune/index.html),
- [a getting started guide](https://docs.ray.io/en/latest/tune/getting-started.html),
- [a key concepts page](https://docs.ray.io/en/latest/tune/key-concepts.html),
- [user guides for key features](https://docs.ray.io/en/latest/tune/tutorials/overview.html),
- [practical examples](https://docs.ray.io/en/latest/tune/examples/index.html),
- [a detailed FAQ](https://docs.ray.io/en/latest/tune/faq.html),
- [and API references](https://docs.ray.io/en/latest/tune/api_docs/overview.html).
This structure is reflected in the
[Ray documentation source code](https://github.com/ray-project/ray/tree/master/doc/source/tune) as well, so you
should have no problem finding what you're looking for.
All other Ray projects share a similar structure, but depending on the project there might be minor differences.
Each type of documentation listed above has its own purpose, but at the end our documentation
comes down to _two types_ of documents:
- Markup documents, written in MyST or rST. If you don't have a lot of (executable) code to contribute or
use more complex features such as
[tabbed content blocks](https://docs.ray.io/en/latest/ray-core/walkthrough.html#starting-ray), this is the right
choice. Most of the documents in Ray Tune are written in this way, for instance the
[key concepts](https://github.com/ray-project/ray/blob/master/doc/source/tune/key-concepts.rst) or
[API documentation](https://github.com/ray-project/ray/blob/master/doc/source/tune/api_docs/overview.rst).
- Notebooks, written in `.ipynb` format. All Tune examples are written as notebooks. These notebooks render in
the browser like `.md` or `.rst` files, but have the added benefit of adding launch buttons to the top of the
document, so that users can run the code themselves in either Binder or Google Colab. A good first example to look
at is [this Tune example](https://github.com/ray-project/ray/blob/master/doc/source/tune/examples/tune-serve-integration-mnist.ipynb).
## Fixing typos and improving explanations
If you spot a typo in any document, or think that an explanation is not clear enough, please consider
opening a pull request.
In this scenario, just run the linter as described above and submit your pull request.
## Adding API references
We use [Sphinx's autodoc extension](https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html) to generate
our API documentation from our source code.
In case we're missing a reference to a function or class, please consider adding it to the respective document in question.
For example, here's how you can add a function or class reference using `autofunction` and `autoclass`:
```markdown
.. autofunction:: ray.tune.integration.docker.DockerSyncer
.. autoclass:: ray.tune.integration.keras.TuneReportCallback
```
The above snippet was taken from the
[Tune API documentation](https://github.com/ray-project/ray/blob/master/doc/source/tune/api_docs/integration.rst),
which you can look at for reference.
If you want to change the content of the API documentation, you will have to edit the respective function or class
signatures directly in the source code.
For example, in the above `autofunction` call, to change the API reference for `ray.tune.integration.docker.DockerSyncer`,
you would have to [change the following source file](https://github.com/ray-project/ray/blob/7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065/python/ray/tune/integration/docker.py#L15-L38).
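For reference, `autodoc` renders whatever is in the Python docstring, so improving an API reference usually means improving a docstring such as the following one. Everything in this snippet (the function name, arguments, and behaviour) is invented purely for illustration and is not a Ray API:
```python
# Hypothetical example: the module, function name and behaviour below are invented
# purely to illustrate the docstring format that autodoc renders; they are not Ray APIs.
def sync_to_container(local_dir: str, remote_dir: str) -> bool:
    """Sync a local directory into a remote container.

    Args:
        local_dir: Path of the directory on the driver node.
        remote_dir: Target path inside the container.

    Returns:
        True if the sync succeeded, False otherwise.
    """
    ...
```
An `.. autofunction::` entry pointing at such a function renders the docstring above as its API reference, so the docstring is the place to make the change.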
## Adding code to an `.rST` or `.md` file
Modifying text in an existing documentation file is easy, but you need to be careful when it comes to adding code.
The reason is that we want to ensure every code snippet on our documentation is tested.
This requires us to have a process for including and testing code snippets in documents.
In an `.rST` or `.md` file, you can add code snippets using `literalinclude` from the Sphinx system.
For instance, here's an example from the Tune's "Key Concepts" documentation:
```markdown
.. literalinclude:: doc_code/key_concepts.py
:language: python
:start-after: __function_api_start__
:end-before: __function_api_end__
```
Note that in the whole file there's not a single literal code block; code _has to be_ imported using the `literalinclude` directive.
The code that gets added to the document by `literalinclude`, including `start-after` and `end-before` tags,
reads as follows:
```
# __function_api_start__
from ray import tune
def objective(x, a, b): # Define an objective function.
return a * (x ** 0.5) + b
def trainable(config): # Pass a "config" dictionary into your trainable.
for x in range(20): # "Train" for 20 iterations and compute intermediate scores.
score = objective(x, config["a"], config["b"])
tune.report(score=score) # Send the score to Tune.
# __function_api_end__
```
This code is imported by `literalinclude` from a file called `doc_code/key_concepts.py`.
Every Python file in the `doc_code` directory will automatically get tested by our CI system,
but make sure to run scripts that you change (or new scripts) locally first.
You do not need to run the testing framework locally.
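For orientation, here is a sketch of what such a snippet file could look like; the file name and the tag names below are hypothetical, so follow the conventions of the existing files in `doc_code`:
```python
# doc_code/my_example.py  (hypothetical file name and tags, shown only as a sketch)

# __my_snippet_start__
import ray

ray.init()

@ray.remote
def square(x: int) -> int:
    return x * x

print(ray.get([square.remote(i) for i in range(4)]))  # [0, 1, 4, 9]
# __my_snippet_end__
```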
In rare situations, when you're adding _obvious_ pseudo-code to demonstrate a concept, it is ok to add it
literally into your `.rST` or `.md` file, e.g. using a `.. code-cell:: python` directive.
But if your code is supposed to run, it needs to be tested.
## Creating a new document from scratch
Sometimes you might want to add a completely new document to the Ray documentation, like adding a new
user guide or a new example.
For this to work, you need to make sure to add the new document explicitly to the
[`_toc.yml` file](https://github.com/ray-project/ray/blob/master/doc/source/_toc.yml) that determines
the structure of the Ray documentation.
Depending on the type of document you're adding, you might also have to make changes to an existing overview
page that curates the list of documents in question.
For instance, for Ray Tune each user guide is added to the
[user guide overview page](https://docs.ray.io/en/latest/tune/tutorials/overview.html) as a panel, and the same
goes for [all Tune examples](https://docs.ray.io/en/latest/tune/examples/index.html).
Always check the structure of the Ray sub-project whose documentation you're working on to see how to integrate
it within the existing structure.
In some cases you may be required to choose an image for the panel. Images are located in
`doc/source/images`.
## Creating a notebook example
To add a new executable example to the Ray documentation, you can start from our
[MyST notebook template](https://github.com/ray-project/ray/tree/master/doc/source/_templates/template.md) or
[Jupyter notebook template](https://github.com/ray-project/ray/tree/master/doc/source/_templates/template.ipynb).
You could also simply download the document you're reading right now (click on the respective download button at the
top of this page to get the `.ipynb` file) and start modifying it.
All the example notebooks in Ray Tune get automatically tested by our CI system, provided you place them in the
[`examples` folder](https://github.com/ray-project/ray/tree/master/doc/source/tune/examples).
If you have questions about how to test your notebook when contributing to other Ray sub-projects, please make
sure to ask a question in [the Ray community Slack](https://forms.gle/9TSdDYUgxYs8SA9e8) or directly on GitHub,
when opening your pull request.
To work off of an existing example, you could also have a look at the
[Ray Tune Hyperopt example (`.ipynb`)](https://github.com/ray-project/ray/blob/master/doc/source/tune/examples/hyperopt_example.ipynb)
or the [Ray Serve guide for RLlib (`.md`)](https://github.com/ray-project/ray/blob/master/doc/source/serve/tutorials/rllib.md).
We recommend that you start with an `.md` file and convert your file to an `.ipynb` notebook at the end of the process.
We'll walk you through this process below.
What makes these notebooks different from other documents is that they combine code and text in one document,
and can be launched in the browser.
We also make sure they are tested by our CI system, before we add them to our documentation.
To make this work, notebooks need to define a _kernel specification_ to tell a notebook server how to interpret
and run the code.
For instance, here's the kernel specification of a Python notebook:
```markdown
---
jupytext:
text_representation:
extension: .md
format_name: myst
kernelspec:
display_name: Python 3
language: python
name: python3
---
```
If you write a notebook in `.md` format, you need this YAML front matter at the top of the file.
To add code to your notebook, you can use the `code-cell` directive.
Here's an example:
````markdown
```{code-cell} python3
:tags: [hide-cell]
import ray
import ray.rllib.agents.ppo as ppo
from ray import serve
def train_ppo_model():
trainer = ppo.PPOTrainer(
config={"framework": "torch", "num_workers": 0},
env="CartPole-v0",
)
# Train for one iteration
trainer.train()
trainer.save("/tmp/rllib_checkpoint")
return "/tmp/rllib_checkpoint/checkpoint_000001/checkpoint-1"
checkpoint_path = train_ppo_model()
```
````
Putting this markdown block into your document will render as follows in the browser:
```
import ray
import ray.rllib.agents.ppo as ppo
from ray import serve
def train_ppo_model():
trainer = ppo.PPOTrainer(
config={"framework": "torch", "num_workers": 0},
env="CartPole-v0",
)
# Train for one iteration
trainer.train()
trainer.save("/tmp/rllib_checkpoint")
return "/tmp/rllib_checkpoint/checkpoint_000001/checkpoint-1"
checkpoint_path = train_ppo_model()
```
As you can see, the code block is hidden, but you can expand it by clicking on the "+" button.
### Tags for your notebook
What makes this work is the `:tags: [hide-cell]` directive in the `code-cell`.
The reason we suggest starting with `.md` files is that it's much easier to add tags to them, as you've just seen.
You can also add tags to `.ipynb` files, but you'll need to start a notebook server for that first, which you may not want to do just to contribute a piece of documentation.
Apart from `hide-cell`, you also have `hide-input` and `hide-output` tags that hide the input and output of a cell.
Also, if you need code that gets executed in the notebook, but you don't want to show it in the documentation,
you can use the `remove-cell`, `remove-input`, and `remove-output` tags in the same way.
### Testing notebooks
Removing cells can be particularly interesting for compute-intensive notebooks.
We want you to contribute notebooks that use _realistic_ values, not just toy examples.
At the same time we want our notebooks to be tested by our CI system, and running them should not take too long.
What you can do to address this is to have notebook cells with the parameters you want the users to see first:
````markdown
```{code-cell} python3
num_workers = 8
num_gpus = 2
```
````
which will render as follows in the browser:
```
num_workers = 8
num_gpus = 2
```
But then in your notebook you follow that up with a _removed_ cell that won't get rendered, but has much smaller values and makes the notebook run faster:
````markdown
```{code-cell} python3
:tags: [remove-cell]
num_workers = 0
num_gpus = 0
```
````
### Converting markdown notebooks to ipynb
Once you're finished writing your example, you can convert it to an `.ipynb` notebook using `jupytext`:
```shell
jupytext your-example.md --to ipynb
```
In the same way, you can convert `.ipynb` notebooks to `.md` notebooks with `--to myst`.
And if you want to convert your notebook to a Python file, e.g. to test if your whole script runs without errors,
you can use `--to py` instead.
## Where to go from here?
There are many other ways to contribute to Ray other than documentation.
See {ref}`our contributor guide <getting-involved>` for more information.
# SVM
```
import numpy as np
import sympy as sym
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(1)
```
## Simple Example Application
For a simple data set (that is, one that is linearly separable and contains no noise points):
**Algorithm:**
Input: a linearly separable training set $T={(x_1,y_1),(x_2,y_2),...,(x_N,y_N)}$, where $x_i \in \textit{X}=\textit{R},y_i \in \textit{Y}={+1,-1},i=1,2...,N$
Output: the separating hyperplane and the classification decision function
(1) Construct and solve the constrained optimization problem
$\underset{\alpha}{min}$ $\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i \alpha_j y_i y_j <x_i \cdot x_j>-\sum_{i=1}^{N}\alpha_i$
s.t $\sum_{i=1}^{N}\alpha_i y_i=0$
$\alpha_i \geq 0,i=1,2,...,N$
to obtain the optimal solution $\alpha^{*}=(\alpha_1^{*},\alpha_2^{*},...,\alpha_n^{*})$.
The positive components $\alpha_j^{*}>0$ correspond to the support vectors.
(2) Compute
$w^{*} = \sum_{i=1}^{N}\alpha_i^{*}y_ix_i$
Choose a positive component $\alpha_j^{*}>0$ of $\alpha^{*}$ and compute
$b^{*}=y_j-\sum_{i=1}^{N}\alpha_i^{*}y_i<x_i \cdot x_j>$
(3) The separating hyperplane is
$w^{*}\cdot x + b^{*}=0$
and the classification decision function is
$f(x)=sign(w^{*}\cdot x + b^{*})$
Here sign maps values greater than 0 to 1 and values less than 0 to -1.
```
def loadSimpleDataSet():
"""
    Load the toy dataset.
    Returns:
        the data matrix and the label vector
"""
train_x = np.array([[3,3],[4,3],[1,1]]).T
train_y = np.array([[1,1,-1]]).T
return train_x,train_y
train_x,train_y = loadSimpleDataSet()
print("train_x shape is : ",train_x.shape)
print("train_y shape is : ",train_y.shape)
plt.scatter(train_x[0,:],train_x[1,:],c=np.squeeze(train_y))
```
To simplify evaluating $\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i \alpha_j y_i y_j <x_i \cdot x_j>$,
we first compute the inner products of train_x, train_y, and alphas, then multiply them element-wise and sum.
Compute the inner product of train_x:
```
Inner_train_x = np.dot(train_x.T,train_x)
print("Train_x is:\n",train_x)
print("Inner train x is:\n",Inner_train_x)
```
Compute the inner product of train_y:
```
Inner_train_y = np.dot(train_y,train_y.T)
print("Train y is:\n",train_y)
print("Inner train y is:\n",Inner_train_y)
```
Compute the inner product of alphas (the Lagrange multipliers). Note that we hold all multipliers fixed except two: according to the theory, we must fix all alphas other than a chosen pair and then iteratively update that pair. This example is extremely simple and has only 3 sample points (in fact $\alpha_1,\alpha_3$ are the support vectors).
Substituting the constraint:
$\sum_{i=1}^3\alpha_i y_i=\alpha_1y_1+\alpha_2y_2+\alpha_3y_3 =0 \Rightarrow $
--
$\alpha_3 = -(\alpha_1y_1+\alpha_2y_2)/y_3 $
--
```
alphas_sym = sym.symbols('alpha1:4')
alphas = np.array([alphas_sym]).T
alphas[-1]= -np.sum(alphas[:-1,:]*train_y[:-1,:]) / train_y[-1,:]
Inner_alphas = np.dot(alphas,alphas.T)
print("alphas is: \n",alphas)
print("Inner alphas is:\n",Inner_alphas)
```
Now find the optimal $\alpha^{*}=(\alpha_1^{*},\alpha_2^{*},...,\alpha_n^{*})$ by minimizing
$\underset{\alpha}{min}$ $\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i \alpha_j y_i y_j <x_i \cdot x_j>-\sum_{i=1}^{N}\alpha_i$
**Note:**
This uses the sympy symbolic-math library; for details see [柚子皮-Sympy符号计算库](https://blog.csdn.net/pipisorry/article/details/39123247)
or [Sympy](https://www.sympy.org/en/index.html).
```
def compute_dual_function(alphas,Inner_alphas,Inner_train_x,Inner_train_y):
"""
Parameters:
alphas: initialization lagrange multiplier,shape is (n,1).
n:number of example.
Inner_alphas: Inner product of alphas.
Inner_train_x: Inner product of train x set.
Inner_train_y: Inner product of train y set.
simplify : simplify compute result of dual function.
return:
s_alpha: result of dual function
"""
s_alpha = sym.simplify(1/2*np.sum(Inner_alphas * Inner_train_x*Inner_train_y) - (np.sum(alphas)))
return s_alpha
s_alpha = compute_dual_function(alphas,Inner_alphas,Inner_train_x,Inner_train_y)
print('s_alpha is:\n ',s_alpha)
```
Now take the partial derivative with respect to each alpha and set it equal to 0.
```
def Derivative_alphas(alphas,s_alpha):
"""
Parameters:
alphas: lagrange multiplier.
s_alpha: dual function
return:
bool value.
True: Meet all constraints,means,all lagrange multiplier >0
False:Does not satisfy all constraints,means some lagrange multiplier <0.
"""
cache_derivative_alpha = []
for alpha in alphas.squeeze()[:-1]: # remove the last element.
derivative = s_alpha.diff(alpha) # diff: derivative
cache_derivative_alpha.append(derivative)
derivative_alpha = sym.solve(cache_derivative_alpha,set=True) # calculate alphas.
print('derivative_alpha is: ',derivative_alpha)
# check alpha > 0
check_alpha_np = np.array(list(derivative_alpha[1])) > 0
return check_alpha_np.all()
check_alpha = Derivative_alphas(alphas,s_alpha)
print("Constraint lagrange multiplier is: ",check_alpha)
```
We can see that the stationary point gives $\alpha_2<0$, which does not satisfy $\alpha_2 \geqslant 0$, so we cannot use this extremum.
-------------
Since setting the partial derivatives to zero violates the constraints on the Lagrange multipliers, we instead fix one $\alpha_i$ at a time, set the remaining alphas to 0, solve the zero-derivative condition for that $\alpha_i$, and substitute it back into the dual function. We then compare all the resulting values, pick the $\alpha_i$ that gives the smallest value, keep it only if $\alpha_i>0$, and use it to recover the alphas we fixed at the start.
**Algorithm:**
Input: the array of Lagrange multipliers, excluding the alphas that were fixed at the start
Output: the optimal Lagrange multipliers, i.e. the support vectors
(1) Append a row/column of zeros to the input array of multipliers
- alphas_zeros = np.zeros((alphas.shape[0],1))[:-1]
- alphas_add_zeros = np.c_[alphas[:-1],alphas_zeros]
(2) Apply a "mask" to the extended array, so that one $\alpha$ is kept while all the other $\alpha$'s are set to 0.
- mask_alpha = np.ma.array(alphas_add_zeros, mask=False) # create mask array.
- mask_alpha.mask[i] = True # masked alpha
- Using a masked array with sympy prints a warning (the masked value is treated as None); this is harmless and should not change the alpha objects in the dual function.
(3) Substitute the masked array into the dual function, take the derivative with respect to $\alpha_i$, and set it to 0 to solve for $\alpha_i$
(4) Substitute the solved $\alpha_i$, with all other alphas equal to 0, into the dual function to get its value
(5) Compare all the dual-function values, pick the alpha set corresponding to the smallest value, and compute the alphas that were fixed at the start.
```
def choose_best_alphas(alphas,s_alpha):
"""
Parameters:
alphas: Lagrange multiplier.
s_alpha: dual function
return:
best_vector: best support vector machine.
"""
# add col in alphas,and initialize value equal 0. about 2 lines.
alphas_zeros = np.zeros((alphas.shape[0],1))[:-1]
alphas_add_zeros = np.c_[alphas[:-1],alphas_zeros]
# cache some parameters.
cache_alphas_add = np.zeros((alphas.shape[0],1))[:-1] # cache derivative alphas.
cache_alphas_compute_result = np.zeros((alphas.shape[0],1))[:-1] # cache value in dual function result
cache_alphas_to_compute = alphas_add_zeros.copy() # get minmux dual function value,cache this values.
for i in range(alphas_add_zeros.shape[0]):
mask_alpha = np.ma.array(alphas_add_zeros, mask=False) # create mask array.
mask_alpha.mask[i] = True # masked alpha
value = sym.solve(s_alpha.subs(mask_alpha).diff())[0] # calculate alpha_i
cache_alphas_add[i] = value
cache_alphas_to_compute[i][1] = value
cache_alphas_compute_result[i][0] = s_alpha.subs(cache_alphas_to_compute) # calculate finally dual function result.
cache_alphas_to_compute[i][1] = 0 # make sure other alphas equal 0.
min_alpha_value_index = cache_alphas_compute_result.argmin()
best_vector =np.array([cache_alphas_add[min_alpha_value_index]] + [- cache_alphas_add[min_alpha_value_index] / train_y[-1]])
return [min_alpha_value_index]+[2],best_vector
min_alpha_value_index,best_vector = choose_best_alphas(alphas,s_alpha)
print(min_alpha_value_index)
print('support vector machine is:',alphas[min_alpha_value_index])
```
$w^{*} = \sum_{i=1}^{N}\alpha_i^{*}y_ix_i$
```
w = np.sum(np.multiply(best_vector , train_y[min_alpha_value_index].T) * train_x[:,min_alpha_value_index],axis=1)
print("W is: ",w)
```
Choose a positive component $\alpha_j^{*}>0$ of $\alpha^{*}$ and compute
$b^{*}=y_j-\sum_{i=1}^{N}\alpha_i^{*}y_i<x_i \cdot x_j>$
Here I choose alpha1.
```
b = train_y[0]-np.sum(best_vector.T * np.dot(train_x[:,min_alpha_value_index].T,train_x[:,min_alpha_value_index])[0]
* train_y[min_alpha_value_index].T)
print("b is: ",b)
```
So the separating hyperplane gives the decision function:
$f(x)=sign[wx+b]$
# SMO
Here we implement a simplified version of the SMO algorithm. "Simplified" means it is slower than SVC and its automatic variable selection is not as sophisticated as SVC's, but with some parameter tuning it can reach results comparable to SVC.
### Algorithm:
#### 1. SMO chooses as the first variable $\alpha_1$ the sample point that violates the KKT conditions most severely, i.e. that violates:
$\alpha_i=0\Leftrightarrow y_ig(x_i)\geqslant1$
$0<\alpha_i<C\Leftrightarrow y_ig(x_i)=1$
$\alpha_i=C \Leftrightarrow y_ig(x_i)\leqslant1$
where:
$g(x_i)=\sum_{j=1}^{N}\alpha_iy_iK(x_i,x_j)+b$
**Notes:**
- Initially every $\alpha_i$ is set to 0, and there is one multiplier per sample.
- The check is performed only to within a tolerance $\varepsilon$.
- During the check we first scan all sample points satisfying $0<\alpha_i<C$, i.e. the support vectors on the margin boundary, looking for the most severe KKT violation.
- If no point satisfies $0<\alpha_i<C$, we scan all sample points for the most severe KKT violation.
- As the *most severe violator* we can simply take the point with the smallest $y_ig(x_i)$ as $\alpha_1$.
#### 2. SMO chooses the second variable so that $\alpha_2$ changes as much as possible
Because $\alpha_2^{new}$ depends on $|E_1-E_2|$, we want $|E_1-E_2|$ to be as large as possible. To speed up the computation, a simple heuristic is:
if $E_1$ is positive, choose the smallest $E_i$ as $E_2$; if $E_1$ is negative, choose the largest $E_i$ as $E_2$. To save computation time, the $E_i$ values are cached in a list.
**Notes:**
- If the $\alpha_2$ found this way does not decrease the objective function enough, a further heuristic is used: iterate over the support vectors on the margin boundary, trying each in turn as $\alpha_2$ until the objective decreases sufficiently; if that still fails, discard this $\alpha_1$ and choose a new one.
- This simplified SMO implementation does not handle that special case.
#### 3. Compute $\alpha_1^{new},\alpha_2^{new}$
Computing $\alpha_1^{new},\alpha_2^{new}$ prepares for the computation of $b_i$ and $E_i$.
3.1 Compute the bounds on $\alpha_2$:
- if $y_1 \neq y_2$: $L=max(0,\alpha_2^{old}-\alpha_1^{old})$, $H=min(C,C+\alpha_2^{old}-\alpha_1^{old})$
- if $y_1 = y_2$: $L=max(0,\alpha_2^{old}+\alpha_1^{old}-C)$, $H=min(C,\alpha_2^{old}+\alpha_1^{old})$
3.2 Compute $\alpha_2^{new,unc} = \alpha_2^{old}+\frac{y_2(E_1-E_2)}{\eta}$
where
$\eta = K_{11}+K_{22}-2K_{12}$, with $K$ the kernel function (Gaussian, polynomial, etc.).
3.3 Clip $\alpha_2$
$\alpha_2^{new}=\left\{\begin{matrix}
H, &\alpha_2^{new,unc}>H \\
\alpha_2^{new,unc},& L\leqslant \alpha_2^{new,unc}\leqslant H \\
L,& \alpha_2^{new,unc}<L
\end{matrix}\right.$
3.4 Compute $\alpha_1^{new}$
$\alpha_1^{new}=\alpha_1^{old}+y_1y_2(\alpha_2^{old}-\alpha_2^{new})$
#### 4. Compute the threshold b and the errors $E_i$
$b_1^{new}=-E_1-y_1K_{11}(\alpha_1^{new}-\alpha_1^{old})-y_2K_{21}(\alpha_2^{new}-\alpha_2^{old})+b^{old}$
$b_2^{new}=-E_2-y_1K_{12}(\alpha_1^{new}-\alpha_1^{old})-y_2K_{22}(\alpha_2^{new}-\alpha_2^{old})+b^{old}$
If $\alpha_1^{new},\alpha_2^{new}$ both satisfy $0<\alpha_i^{new}<C,i=1,2$,
then $b_1^{new}=b_2^{new}=b^{new}$.
If $\alpha_1^{new},\alpha_2^{new}$ are 0 or C, then any value between $b_1^{new}$ and $b_2^{new}$
satisfies the KKT threshold condition, and we take the midpoint as $b^{new}$.
$E_i^{new}=(\sum_sy_j\alpha_jK(x_i,x_j))+b^{new}-y_i$
where s is the set of all support vectors $x_j$.
#### 5. Update the parameters
Update $\alpha_i,E_i,b_i$
#### Note:
After training, the vast majority of the components of $\alpha$ are 0; only very few components are nonzero, and those nonzero components are the support vectors.
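As an illustrative sketch (it assumes a fitted model that stores the multipliers in an `alpha` attribute, like the `SVM` class defined below), the support vectors can be read off as the training points with numerically nonzero $\alpha_i$:
```python
# Illustrative sketch: the support vectors are the points whose multiplier alpha_i
# is (numerically) nonzero after training.
import numpy as np

def support_vectors(model, X_train, tol=1e-6):
    idx = np.where(np.abs(model.alpha) > tol)[0]  # indices of the nonzero multipliers
    return idx, X_train[idx]
```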
### A simple SMO example
Load the data from scikit-learn's iris dataset; note that the random train/test split changes on every run.
```
# data
def create_data():
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['label'] = iris.target
df.columns = ['sepal length', 'sepal width', 'petal length', 'petal width', 'label']
data = np.array(df.iloc[:100, [0, 1, -1]])
for i in range(len(data)):
if data[i,-1] == 0:
data[i,-1] = -1
return data[:,:2], data[:,-1]
X, y = create_data()
# split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
plt.scatter(X[:,0],X[:,1],c=y)
```
### Building the SMO algorithm
```
class SVM:
def __init__(self,max_iter = 100,kernel = 'linear',C=1.,is_print=False,sigma=1):
"""
Parameters:
            max_iter: maximum number of iterations
            kernel: kernel function; only "linear" and "Gaussion" (Gaussian) are implemented here
            sigma: parameter of the Gaussian kernel
            C: penalty parameter (slack variable)
            is_print: whether to print progress
"""
self.max_iter = max_iter
self.kernel = kernel
        self.C = C # slack penalty C
self.is_print = is_print
self.sigma = sigma
def init_args(self,features,labels):
"""
        self.m: number of samples
        self.n: number of features
"""
self.m,self.n = features.shape
self.X = features
self.Y = labels
self.b = 0.
        # store the E_i values in a list
self.alpha = np.zeros(self.m) + 0.0001
self.E = [self._E(i) for i in range(self.m)]
def _g(self,i):
"""
        Prediction g(x_i)
"""
g_x = np.sum(self.alpha*self.Y*self._kernel(self.X[i],self.X)) + self.b
return g_x
def _E(self,i):
"""
        E(x) is the difference between the prediction g(x) for input x and the label y
"""
g_x = self._g(i) - self.Y[i]
return g_x
def _kernel(self,x1,x2):
"""
        Compute the kernel
"""
if self.kernel == "linear":
return np.sum(np.multiply(x1,x2),axis=1)
if self.kernel == "Gaussion":
return np.sum(np.exp(-((x1-x2)**2)/(2*self.sigma)),axis=1)
def _KKT(self,i):
"""
        Check the KKT conditions
        """
        y_g = np.round(np.float64(np.multiply(self._g(i),self.Y[i]))) # the check only holds to within a precision epsilon, so round here
if self.alpha[i] == 0:
return y_g >= 1,y_g
elif 0<self.alpha[i]<self.C:
return y_g == 1,y_g
elif self.alpha[i] == self.C:
return y_g <=1,y_g
else:
return ValueError
def _init_alpha(self):
"""
        The outer loop first iterates over all sample points with 0<a<C and checks the KKT conditions.
        Points with 0<a<C are the support vectors on the margin boundary.
        """
        index_array = np.where(np.logical_and(self.alpha>0,self.alpha<self.C))[0] # because of how np.where is used here, alpha must have shape (m,)
if len(index_array) !=0:
cache_list = []
for i in index_array:
bool_,y_g = self._KKT(i)
if not bool_:
cache_list.append((y_g,i))
        # if there are none, iterate over the whole training set
else:
cache_list = []
for i in range(self.m):
bool_,y_g = self._KKT(i)
if not bool_:
cache_list.append((y_g,i))
        # take the sample violating KKT most severely, i.e. the one with the smallest g(x_i)*y_i
        min_i = sorted(cache_list,key=lambda x:x[0])[0][1]
        # choose the second variable alpha2
E1 = self.E[min_i]
if E1 > 0:
j = np.argmin(self.E)
else:
j = np.argmax(self.E)
return min_i,j
def _prune(self,alpha,L,H):
"""
        Clip alpha
"""
if alpha > H:
return H
elif L<=alpha<=H:
return alpha
elif alpha < L:
return L
else:
return ValueError
def fit(self,features, labels):
self.init_args(features, labels)
for t in range(self.max_iter):
            # find alpha1 and alpha2
            i1,i2 = self._init_alpha()
            # compute the feasible bounds L and H
            if self.Y[i1] == self.Y[i2]: # same sign
L = max(0,self.alpha[i2]+self.alpha[i1]-self.C)
H = min(self.C,self.alpha[i2]+self.alpha[i1])
else:
L = max(0,self.alpha[i2]-self.alpha[i1])
H = min(self.C,self.C+self.alpha[i2]-self.alpha[i1])
            # compute the threshold b_i and the errors E_i
E1 = self.E[i1]
E2 = self.E[i2]
eta = self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i1]) + \
self._kernel(self.X[np.newaxis,i2],self.X[np.newaxis,i2]) - \
2 * self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i2])
if eta <=0:
continue
alpha2_new_nuc = self.alpha[i2] + (self.Y[i2] * (E1-E2) /eta)
            # clip alpha2_new_nuc
alpha2_new = self._prune(alpha2_new_nuc,L,H)
alpha1_new = self.alpha[i1] + self.Y[i1] * self.Y[i2] * (self.alpha[i2]-alpha2_new)
            # compute b_i
b1_new = -E1-self.Y[i1]*self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i1])*(alpha1_new - self.alpha[i1])\
- self.Y[i2] * self._kernel(self.X[np.newaxis,i2],self.X[np.newaxis,i1])*(alpha2_new - self.alpha[i2]) + self.b
b2_new = -E2-self.Y[i1]*self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i2])*(alpha1_new - self.alpha[i1])\
- self.Y[i2] * self._kernel(self.X[np.newaxis,i2],self.X[np.newaxis,i2])*(alpha2_new - self.alpha[i2]) + self.b
if 0 < alpha1_new < self.C:
b_new = b1_new
elif 0 < alpha2_new < self.C:
b_new = b2_new
else:
                # take the midpoint
b_new = (b1_new + b2_new) / 2
            # update the parameters
self.alpha[i1] = alpha1_new
self.alpha[i2] = alpha2_new
self.b = b_new
self.E[i1] = self._E(i1)
self.E[i2] = self._E(i2)
if self.is_print:
print("Train Done!")
def predict(self,data):
predict_y = np.sum(self.alpha*self.Y*self._kernel(data,self.X)) + self.b
return np.sign(predict_y)[0]
def score(self,test_X,test_Y):
m,n = test_X.shape
count = 0
for i in range(m):
predict_i = self.predict(test_X[i])
if predict_i == np.float(test_Y[i]):
count +=1
return count / m
```
Since the iris data is re-split randomly on every run, we take the mean accuracy here and compare it with SVC.
```
count = 0
failed2 = []
for i in range(20):
X, y = create_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
svm = SVM(max_iter=200,C=2,kernel='linear')
svm.fit(X_train,y_train)
test_accourate = svm.score(X_test,y_test)
train_accourate = svm.score(X_train,y_train)
if test_accourate < 0.8:
failed2.append((X_train, X_test, y_train, y_test)) # 储存正确率过低的样本集
print("Test accourate:",test_accourate)
print("Train accourate:",train_accourate)
print('--------------------------')
count += test_accourate
print("Test average accourate is: ",count/20)
```
We can see that some splits give high accuracy while others give very low accuracy. We saved the low-accuracy sample sets above; now we take one out to experiment with.
```
failed2X_train, failed2X_test, failed2y_train, failed2y_test= failed2[2]
```
We can see that after changing C the accuracy is again respectable, which shows that the simplified SMO algorithm works. When we measured the average accuracy, the value of C was kept fixed, so that C may simply not be suitable for some of the sample sets.
```
svm = SVM(max_iter=200,C=5,kernel='linear')
svm.fit(failed2X_train,failed2y_train)
accourate = svm.score(failed2X_test,failed2y_test)
accourate
```
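One simple way to probe this is a small grid search over C on the saved low-accuracy split. The following is an illustrative sketch using the `SVM` class and the variables defined above; the candidate C values are arbitrary:
```python
# Illustrative sketch: try a few values of C on the saved low-accuracy split and keep the best.
best_C, best_acc = None, 0.0
for C in [0.5, 1, 2, 5, 10]:
    candidate = SVM(max_iter=200, C=C, kernel='linear')
    candidate.fit(failed2X_train, failed2y_train)
    acc = candidate.score(failed2X_test, failed2y_test)
    if acc > best_acc:
        best_C, best_acc = C, acc
print("best C:", best_C, "test accuracy:", best_acc)
```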
Test against scikit-learn's SVC.
### Scikit-SVC
Based on scikit-learn's [SVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC.decision_function)
Example 1:
```
from sklearn.svm import SVC
count = 0
for i in range(10):
X, y = create_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
clf = SVC(kernel="linear",C=2)
clf.fit(X_train, y_train)
accourate = clf.score(X_test, y_test)
print("accourate",accourate)
count += accourate
print("average accourate is: ",count/10)
```
Of course, since this is a simplified SMO implementation, the average accuracy is certainly not as high as SVC's, but we can tune C and the kernel to improve it.
## Multilabel classification
Multi-label: one instance can carry several labels; for example, a movie can be both an action film and a romance.
Multi-class classification: there are several classes, but each sample belongs to exactly one class.
Multi-label classification: each sample can have several labels.
For multi-class classification, the last layer uses a softmax activation for prediction and training uses categorical_crossentropy as the loss function.
For multi-label classification, the last layer uses a sigmoid activation for prediction and training uses binary_crossentropy as the loss function.
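As a minimal Keras sketch of that difference (illustrative only; the layer sizes and feature counts are arbitrary assumptions, not taken from this notebook):
```python
# Illustrative sketch: the two setups differ only in the output activation and the loss.
from keras.models import Sequential
from keras.layers import Dense

n_features, n_classes = 20, 5

# Multi-class: exactly one label per sample -> softmax + categorical_crossentropy
multiclass_model = Sequential([
    Dense(32, activation='relu', input_shape=(n_features,)),
    Dense(n_classes, activation='softmax'),
])
multiclass_model.compile(optimizer='adam', loss='categorical_crossentropy')

# Multi-label: several labels per sample -> sigmoid + binary_crossentropy
multilabel_model = Sequential([
    Dense(32, activation='relu', input_shape=(n_features,)),
    Dense(n_classes, activation='sigmoid'),
])
multilabel_model.compile(optimizer='adam', loss='binary_crossentropy')
```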
This example simulates a multi-label document classification problem. The dataset is generated randomly based on the following process:
- pick the number of labels: n ~ Poisson(n_labels)
- n times, choose a class c: c ~ Multinomial(theta)
- pick the document length: k ~ Poisson(length)
- k times, choose a word: w ~ Multinomial(theta_c)
In the above process, rejection sampling is used to make sure that n is more than 2, and that the document length is never zero. Likewise, we reject classes which have already been chosen. The documents that are assigned to both classes are plotted surrounded by two colored circles.
The classification is performed by projecting to the first two principal components found by [PCA](http://www.cnblogs.com/jerrylead/archive/2011/04/18/2020209.html) and [CCA](https://files-cdn.cnblogs.com/files/jerrylead/%E5%85%B8%E5%9E%8B%E5%85%B3%E8%81%94%E5%88%86%E6%9E%90.pdf) for visualisation purposes, followed by using the [sklearn.multiclass.OneVsRestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.multiclass.OneVsRestClassifier.html#sklearn.multiclass.OneVsRestClassifier) metaclassifier using two SVCs with linear kernels to learn a discriminative model for each class. Note that PCA is used to perform an unsupervised dimensionality reduction, while CCA is used to perform a supervised one.
Note: in the plot, “unlabeled samples” does not mean that we don’t know the labels (as in semi-supervised learning) but that the samples simply do not have a label.
```
from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA
def plot_hyperplance(clf,min_x,max_x,linestyle,label):
    # get the separating hyperplane
# 0 = w0*x0 + w1*x1 +b
w = clf.coef_[0]
a = -w[0] /w[1]
xx = np.linspace(min_x -5,max_x + 5)
yy = a * xx -(clf.intercept_[0]) / w[1] # clf.intercept_[0] get parameter b,
plt.plot(xx,yy,linestyle,label=label)
def plot_subfigure(X,Y,subplot,title,transform):
    if transform == "pca": # PCA performs an unsupervised projection (it ignores the labels)
        X = PCA(n_components=2).fit_transform(X)
        print("PCA",X.shape)
    elif transform == "cca": # CCA performs a supervised projection (it uses the labels and the correlations between them)
X = CCA(n_components=2).fit(X, Y).transform(X)
print("CCA",X.shape)
else:
raise ValueError
min_x = np.min(X[:, 0])
max_x = np.max(X[:, 0])
min_y = np.min(X[:, 1])
max_y = np.max(X[:, 1])
    classif = OneVsRestClassifier(SVC(kernel='linear')) # train one SVM per class in a one-vs-rest scheme
classif.fit(X, Y)
plt.subplot(2, 2, subplot)
plt.title(title)
    zero_class = np.where(Y[:, 0]) # indices of the samples carrying the first label
    one_class = np.where(Y[:, 1]) # indices of the samples carrying the second label
plt.scatter(X[:, 0], X[:, 1], s=40, c='gray', edgecolors=(0, 0, 0))
plt.scatter(X[zero_class, 0], X[zero_class, 1], s=160, edgecolors='b',
facecolors='none', linewidths=2, label='Class 1')
plt.scatter(X[one_class, 0], X[one_class, 1], s=80, edgecolors='orange',
facecolors='none', linewidths=2, label='Class 2')
    # classif.estimators_[0]: the first estimator, which gives the first decision boundary
plot_hyperplance(classif.estimators_[0], min_x, max_x, 'k--',
'Boundary\nfor class 1')
    # classif.estimators_[1]: the second estimator, which gives the second decision boundary
plot_hyperplance(classif.estimators_[1], min_x, max_x, 'k-.',
'Boundary\nfor class 2')
plt.xticks(())
plt.yticks(())
plt.xlim(min_x - .5 * max_x, max_x + .5 * max_x)
plt.ylim(min_y - .5 * max_y, max_y + .5 * max_y)
if subplot == 2:
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
plt.legend(loc="upper left")
```
**make_multilabel_classification:**
make_multilabel_classification(n_samples=100, n_features=20, n_classes=5, n_labels=2, length=50, allow_unlabeled=True, sparse=False, return_indicator='dense', return_distributions=False, random_state=None)
```
plt.figure(figsize=(8, 6))
# If ``True``, some instances might not belong to any class, i.e. some instances carry no label at all ([0,0] in the indicator format).
X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
allow_unlabeled=True,
random_state=1)
print("Original:",X.shape)
plot_subfigure(X, Y, 1, "With unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 2, "With unlabeled samples + PCA", "pca")
X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
allow_unlabeled=False,
random_state=1)
print("Original:",X.shape)
plot_subfigure(X, Y, 3, "Without unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 4, "Without unlabeled samples + PCA", "pca")
plt.subplots_adjust(.04, .02, .97, .94, .09, .2)
plt.show()
```
Since this is a multi-label problem (an instance may carry several labels: label 1, label 2, or no label at all, the "unlabeled samples"), the plots suggest that CCA should outperform PCA (with or without the unlabeled samples), because CCA takes the correlation between the labels into account.
Because each instance has 2 possible labels, we can draw 2 decision boundaries (obtained via classif.estimators_[index]) and plot each of them using $x_1 = -\frac{w_0}{w_1}x_0-\frac{b}{w_1}$.
> Code to accompany **Chapter 10: Defending Against Adversarial Inputs**
# Fashion-MNIST - Generating Adversarial Examples on a Drop-out Network
This notebook demonstrates how to generate adversarial examples using a network that incorporates randomised drop-out.
```
import tensorflow as tf
from tensorflow import keras
import numpy as np
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images = train_images/255.0
test_images = test_images/255.0
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
## Create a Simple Network with drop-out for Image Classification
We need to use the Keras __functional API__ (rather than the sequential API) to access the
dropout capability with `training = True` at test time.
The cell below has drop-out enabled at training time only. You can experiment by moving the drop-out layer
or adding drop-out to test time by replacing the `Dropout` line as indicated in the comments.
```
from tensorflow.keras.layers import Input, Dense, Flatten, Dropout
from tensorflow.keras.models import Model
inputs = Input(shape=(28,28))
x = Flatten()(inputs)
x = Dense(56, activation='relu')(x)
x = Dropout(0.2)(x) # Use this line for drop-out at training time only
# x = Dropout(0.2)(x, training=True) # Use this line instead for drop-out at test and training time
x = Dense(56, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)
model = Model(inputs=inputs, outputs=predictions)
print(model)
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.summary()
```
Train the model and evaluate it.
If drop-out is included at test time, the model will be unpredictable.
```
model.fit(train_images, train_labels, epochs=6)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Model accuracy based on test data:', test_acc)
```
## Create Some Adversarial Examples Using the Model
```
# Import helper function
import sys
sys.path.append('..')
from strengtheningdnns.adversarial_utils import generate_adversarial_data
import foolbox
fmodel = foolbox.models.TensorFlowModel.from_keras(model, bounds=(0, 255))
num_images = 1000
x_images = train_images[0:num_images, :]
attack_criterion = foolbox.criteria.Misclassification()
attack_fn = foolbox.attacks.GradientSignAttack(fmodel, criterion=attack_criterion)
x_adv_images, x_adv_perturbs, x_labels = generate_adversarial_data(original_images = x_images,
predictions = model.predict(x_images),
attack_fn = attack_fn)
```
## Take a Peek at some Results
The adversarial examples plotted should all be misclassified. However, if the model is running with drop-out at test
time also (see model creation above), they may be classified correctly due to uncertainty of the model's behaviour.
```
images_to_plot = x_adv_images
import matplotlib.pyplot as plt
adversarial_predictions = model.predict(images_to_plot)
plt.figure(figsize=(15, 30))
for i in range(30):
plt.subplot(10,5,i+1)
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(images_to_plot[i], cmap=plt.cm.binary)
predicted_label = np.argmax(adversarial_predictions[i])
original_label = x_labels[i]
if predicted_label == original_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} ({})".format(class_names[predicted_label],
class_names[original_label]),
color=color)
```
Save the images if you wish so you can load them later.
```
np.save('../resources/test_images_GSAttack_dropout', x_adv_images)
```
## _*Using Qiskit Aqua for exact cover problems*_
In mathematics, given a collection $S$ of subsets of a set $X$, an exact cover is a subcollection $S_{ec} \subseteq S$ such that each element in $X$ is contained in exactly one subset in $S_{ec}$.
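As a small illustration (a made-up instance, separate from the `sample.exactcover` file loaded below): take $X = \{1,2,3,4,5\}$ and $S = \{\{1,2\}, \{3,4,5\}, \{2,3\}, \{4,5\}\}$. Then $\{\{1,2\}, \{3,4,5\}\}$ is an exact cover, while $\{\{1,2\}, \{2,3\}, \{4,5\}\}$ is not, because element 2 is covered twice. A quick check in plain Python:
```
def is_exact_cover(chosen_subsets, X):
    # every element of X must appear exactly once across the chosen subsets
    covered = [x for s in chosen_subsets for x in s]
    return sorted(covered) == sorted(X)

print(is_exact_cover([[1, 2], [3, 4, 5]], [1, 2, 3, 4, 5]))      # True
print(is_exact_cover([[1, 2], [2, 3], [4, 5]], [1, 2, 3, 4, 5])) # False
```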
We will go through three examples to show (1) how to run the optimization in the non-programming way, (2) how to run the optimization in the programming way, (3) how to run the optimization with the VQE.
We will omit the details for the support of CPLEX, which are explained in other notebooks such as maxcut.
### The problem and the brute-force method.
First, let us take a look at the list of subsets.
```
import numpy as np
import json
from qiskit import Aer
from qiskit_aqua import run_algorithm
from qiskit_aqua.input import EnergyInput
from qiskit_aqua.translators.ising import exactcover
from qiskit_aqua.algorithms import ExactEigensolver
input_file = 'sample.exactcover'
with open(input_file) as f:
list_of_subsets = json.load(f)
print(list_of_subsets)
qubitOp, offset = exactcover.get_exactcover_qubitops(list_of_subsets)
algo_input = EnergyInput(qubitOp)
```
Then we apply the brute-force method. Basically, we exhaustively try all the binary assignments. In each binary assignment, the entry of a subset is either 0 (meaning the subset is not in the cover) or 1 (meaning the subset is in the cover). We print the binary assignment that satisfies the definition of the exact cover.
```
def brute_force():
# brute-force way: try every possible assignment!
has_sol = False
def bitfield(n, L):
result = np.binary_repr(n, L)
        return [int(digit) for digit in result]  # convert each binary digit to an int
L = len(list_of_subsets)
max = 2**L
for i in range(max):
cur = bitfield(i, L)
cur_v = exactcover.check_solution_satisfiability(cur, list_of_subsets)
if cur_v:
has_sol = True
break
return has_sol, cur
has_sol, cur = brute_force()
if has_sol:
print("solution is", cur)
else:
print("no solution is found")
```
### Part I: run the optimization in the non-programming way
```
params = {
'problem': {'name': 'ising'},
'algorithm': {'name': 'ExactEigensolver'}
}
result = run_algorithm(params, algo_input)
x = exactcover.sample_most_likely(len(list_of_subsets), result['eigvecs'][0])
ising_sol = exactcover.get_solution(x)
np.testing.assert_array_equal(ising_sol, [0, 1, 1, 0])
if exactcover.check_solution_satisfiability(ising_sol, list_of_subsets):
print("solution is", ising_sol)
else:
print("no solution is found")
```
### Part II: run the optimization in the programming way
```
algo = ExactEigensolver(algo_input.qubit_op, k=1, aux_operators=[])
result = algo.run()
x = exactcover.sample_most_likely(len(list_of_subsets), result['eigvecs'][0])
ising_sol = exactcover.get_solution(x)
np.testing.assert_array_equal(ising_sol, [0, 1, 1, 0])
if exactcover.check_solution_satisfiability(ising_sol, list_of_subsets):
print("solution is", ising_sol)
else:
print("no solution is found")
```
### Part III: run the optimization with VQE
```
algorithm_cfg = {
'name': 'VQE',
'operator_mode': 'matrix'
}
optimizer_cfg = {
'name': 'COBYLA'
}
var_form_cfg = {
'name': 'RYRZ',
'depth': 5
}
params = {
'problem': {'name': 'ising', 'random_seed': 10598},
'algorithm': algorithm_cfg,
'optimizer': optimizer_cfg,
'variational_form': var_form_cfg
}
backend = Aer.get_backend('statevector_simulator')
result = run_algorithm(params, algo_input, backend=backend)
x = exactcover.sample_most_likely(len(list_of_subsets), result['eigvecs'][0])
ising_sol = exactcover.get_solution(x)
if exactcover.check_solution_satisfiability(ising_sol, list_of_subsets):
print("solution is", ising_sol)
else:
print("no solution is found")
```
# Model Checking
After running an MCMC simulation, `sample` returns a `MultiTrace` object containing the samples for all the stochastic and deterministic random variables. The final step in Bayesian computation is model checking, in order to ensure that inferences derived from your sample are valid. There are two components to model checking:
1. Convergence diagnostics
2. Goodness of fit
Convergence diagnostics are intended to detect lack of convergence in the Markov chain Monte Carlo sample; they are used to ensure that you have not halted your sampling too early. However, a converged model is not guaranteed to be a good model. The second component of model checking, goodness of fit, is used to check the internal validity of the model, by comparing predictions from the model to the data used to fit the model.
## Convergence Diagnostics
Valid inferences from sequences of MCMC samples are based on the
assumption that the samples are derived from the true posterior
distribution of interest. Theory guarantees this condition as the number
of iterations approaches infinity. It is important, therefore, to
determine the **minimum number of samples** required to ensure a reasonable
approximation to the target posterior density. Unfortunately, no
universal threshold exists across all problems, so convergence must be
assessed independently each time MCMC estimation is performed. The
procedures for verifying convergence are collectively known as
*convergence diagnostics*.
One approach to analyzing convergence is **analytical**, whereby the
variance of the sample at different sections of the chain is compared
to that of the limiting distribution. These methods use distance metrics
to analyze convergence, or place theoretical bounds on the sample
variance, and though they are promising, they are generally difficult to
use and are not prominent in the MCMC literature. More common is a
**statistical** approach to assessing convergence. With this approach,
rather than considering the properties of the theoretical target
distribution, only the statistical properties of the observed chain are
analyzed. Reliance on the sample alone restricts such convergence
criteria to **heuristics**. As a result, convergence cannot be guaranteed.
Although evidence for lack of convergence using statistical convergence
diagnostics will correctly imply lack of convergence in the chain, the
absence of such evidence will not *guarantee* convergence in the chain.
Nevertheless, negative results for one or more criteria may provide some
measure of assurance to users that their sample will provide valid
inferences.
For most simple models, convergence will occur quickly, sometimes within
the first several hundred iterations, after which all remaining
samples of the chain may be used to calculate posterior quantities. For
more complex models, convergence requires a significantly longer burn-in
period; sometimes orders of magnitude more samples are needed.
Frequently, lack of convergence will be caused by **poor mixing**.
Recall that *mixing* refers to the degree to which the Markov
chain explores the support of the posterior distribution. Poor mixing
may stem from inappropriate proposals (if one is using the
Metropolis-Hastings sampler) or from attempting to estimate models with
highly correlated variables.
```
%matplotlib inline
import numpy as np
import seaborn as sns; sns.set_context('notebook')
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
from pymc3 import Normal, Binomial, sample, Model
from pymc3.math import invlogit
# Samples for each dose level
n = 5 * np.ones(4, dtype=int)
# Log-dose
dose = np.array([-.86, -.3, -.05, .73])
deaths = np.array([0, 1, 3, 5])
with Model() as bioassay_model:
# Logit-linear model parameters
alpha = Normal('alpha', 0, sd=100)
beta = Normal('beta', 0, sd=100)
# Calculate probabilities of death
theta = invlogit(alpha + beta * dose)
# Data likelihood
obs_deaths = Binomial('obs_deaths', n=n, p=theta, observed=deaths)
with bioassay_model:
bioassay_trace = sample(1000)
from pymc3 import traceplot
traceplot(bioassay_trace, varnames=['alpha'])
```
### Informal Methods
The most straightforward approach for assessing convergence is based on
simply **plotting and inspecting traces and histograms** of the observed
MCMC sample. If the trace of values for each of the stochastics exhibits
asymptotic behavior over the last $m$ iterations, this may be
satisfactory evidence for convergence.
```
traceplot(bioassay_trace, varnames=['beta'])
```
A similar approach involves
plotting a histogram for every set of $k$ iterations (perhaps 50-100)
beyond some burn in threshold $n$; if the histograms are not visibly
different among the sample intervals, this may be considered some evidence for
convergence. Note that such diagnostics should be carried out for each
stochastic estimated by the MCMC algorithm, because convergent behavior
by one variable does not imply evidence for convergence for other
variables in the analysis.
```
import matplotlib.pyplot as plt
beta_trace = bioassay_trace['beta']
fig, axes = plt.subplots(2, 5, figsize=(14,6))
axes = axes.ravel()
for i in range(10):
axes[i].hist(beta_trace[100*i:100*(i+1)])
plt.tight_layout()
```
An extension of this approach can be taken
when multiple parallel chains are run, rather than just a single, long
chain. In this case, the final values of $c$ chains run for $n$
iterations are plotted in a histogram; just as above, this is repeated
every $k$ iterations thereafter, and the histograms of the endpoints are
plotted again and compared to the previous histogram. This is repeated
until consecutive histograms are indistinguishable.
Another *ad hoc* method for detecting lack of convergence is to examine
the traces of several MCMC chains initialized with different starting
values. Overlaying these traces on the same set of axes should (if
convergence has occurred) show each chain tending toward the same
equilibrium value, with approximately the same variance. Recall that the
tendency for some Markov chains to converge to the true (unknown) value
from diverse initial values is called *ergodicity*. This property is
guaranteed by the reversible chains constructed using MCMC, and should
be observable using this technique. Again, however, this approach is
only a heuristic method, and cannot always detect lack of convergence,
even though chains may appear ergodic.
```
with bioassay_model:
bioassay_trace = sample(1000, chains=2, start=[{'alpha':0.5}, {'alpha':5}])
bioassay_trace.get_values('alpha', chains=0)[0]
plt.plot(bioassay_trace.get_values('alpha', chains=0)[:200], 'r--')
plt.plot(bioassay_trace.get_values('alpha', chains=1)[:200], 'k--')
```
A principal reason that evidence from informal techniques cannot
guarantee convergence is a phenomenon called ***metastability***. Chains may
appear to have converged to the true equilibrium value, displaying
excellent qualities by any of the methods described above. However,
after some period of stability around this value, the chain may suddenly
move to another region of the parameter space. This period
of metastability can sometimes be very long, and therefore escape
detection by these convergence diagnostics. Unfortunately, there is no
statistical technique available for detecting metastability.
### Formal Methods
Along with the *ad hoc* techniques described above, a number of more
formal methods exist which are prevalent in the literature. These are
considered more formal because they are based on existing statistical
methods, such as time series analysis.
PyMC currently includes three formal convergence diagnostic methods. The
first, proposed by [Geweke (1992)](http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177011446), is a time-series approach that
compares the mean and variance of segments from the beginning and end of
a single chain.
$$z = \frac{\bar{\theta}_a - \bar{\theta}_b}{\sqrt{S_a(0) + S_b(0)}}$$
where $a$ is the early interval and $b$ the late interval, and $S_i(0)$ is the spectral density estimate at zero frequency for chain segment $i$. If the
z-scores (theoretically distributed as standard normal variates) of
these two segments are similar, it can provide evidence for convergence.
PyMC calculates z-scores of the difference between various initial
segments along the chain, and the last 50% of the remaining chain. If
the chain has converged, the majority of points should fall within 2
standard deviations of zero.
In PyMC, diagnostic z-scores can be obtained by calling the `geweke` function. It
accepts either (1) a single trace, (2) a Node or Stochastic object, or
(3) an entire Model object:
```
from pymc3 import geweke
with bioassay_model:
tr = sample(2000, tune=1000)
z = geweke(tr, intervals=15)
plt.scatter(*z[0]['alpha'].T)
plt.hlines([-1,1], 0, 1000, linestyles='dotted')
plt.xlim(0, 1000)
```
The arguments expected are the following:
- `x` : The trace of a variable.
- `first` : The fraction of series at the beginning of the trace.
- `last` : The fraction of series at the end to be compared with the section at the beginning.
- `intervals` : The number of segments.
Plotting the output displays the scores in series, making it easy to
see departures from the standard normal assumption.
A second convergence diagnostic provided by PyMC is the Gelman-Rubin
statistic [Gelman and Rubin (1992)](http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177011136). This diagnostic uses multiple chains to
check for lack of convergence, and is based on the notion that if
multiple chains have converged, by definition they should appear very
similar to one another; if not, one or more of the chains has failed to
converge.
The Gelman-Rubin diagnostic uses an analysis of variance approach to
assessing convergence. That is, it calculates both the between-chain
variance (B) and within-chain variance (W), and assesses whether they
are different enough to worry about convergence. Assuming $m$ chains,
each of length $n$, quantities are calculated by:
$$\begin{align}B &= \frac{n}{m-1} \sum_{j=1}^m (\bar{\theta}_{.j} - \bar{\theta}_{..})^2 \\
W &= \frac{1}{m} \sum_{j=1}^m \left[ \frac{1}{n-1} \sum_{i=1}^n (\theta_{ij} - \bar{\theta}_{.j})^2 \right]
\end{align}$$
for each scalar estimand $\theta$. Using these values, an estimate of
the marginal posterior variance of $\theta$ can be calculated:
$$\hat{\text{Var}}(\theta | y) = \frac{n-1}{n} W + \frac{1}{n} B$$
Assuming $\theta$ was initialized to arbitrary starting points in each
chain, this quantity will overestimate the true marginal posterior
variance. At the same time, $W$ will tend to underestimate the
within-chain variance early in the sampling run. However, in the limit
as $n \rightarrow
\infty$, both quantities will converge to the true variance of $\theta$.
In light of this, the Gelman-Rubin statistic monitors convergence using
the ratio:
$$\hat{R} = \sqrt{\frac{\hat{\text{Var}}(\theta | y)}{W}}$$
This is called the potential scale reduction, since it is an estimate of
the potential reduction in the scale of $\theta$ as the number of
simulations tends to infinity. In practice, we look for values of
$\hat{R}$ close to one (say, less than 1.1) to be confident that a
particular estimand has converged. In PyMC, the function
`gelman_rubin` will calculate $\hat{R}$ for each stochastic node in
the passed model:
```
from pymc3 import gelman_rubin
gelman_rubin(bioassay_trace)
```
For the best results, each chain should be initialized to highly
dispersed starting values for each stochastic node.
By default, when calling the `forestplot` function using nodes with
multiple chains, the $\hat{R}$ values will be plotted alongside the
posterior intervals.
```
from pymc3 import forestplot
forestplot(bioassay_trace)
```
## Autocorrelation
In general, samples drawn from MCMC algorithms will be autocorrelated. This is not a big deal, other than the fact that autocorrelated chains may require longer sampling in order to adequately characterize posterior quantities of interest. The calculation of autocorrelation is performed for each lag $i=1,2,\ldots,k$ (the correlation at lag 0 is, of course, 1) by:
$$\hat{\rho}_i = 1 - \frac{V_i}{2\hat{\text{Var}}(\theta | y)}$$
where $\hat{\text{Var}}(\theta | y)$ is the same estimated variance as calculated for the Gelman-Rubin statistic, and $V_i$ is the variogram at lag $i$ for $\theta$:
$$\text{V}_i = \frac{1}{m(n-i)}\sum_{j=1}^m \sum_{k=i+1}^n (\theta_{jk} - \theta_{j(k-i)})^2$$
This autocorrelation can be visualized using the `autocorrplot` function in PyMC3:
```
from pymc3 import autocorrplot
autocorrplot(tr);
```
### Effective sample size
The effective sample size is estimated using the partial sum:
$$\hat{n}_{eff} = \frac{mn}{1 + 2\sum_{i=1}^T \hat{\rho}_i}$$
where $T$ is the first odd integer such that $\hat{\rho}_{T+1} + \hat{\rho}_{T+2}$ is negative.
The issue here is related to the fact that we are **estimating** the effective sample size from the fit output. Values of $n_{eff} / n_{iter} < 0.001$ indicate a biased estimator, resulting in an overestimate of the true effective sample size.
```
from pymc3 import effective_n
effective_n(bioassay_trace)
```
Both low $n_{eff}$ and high $\hat{R}$ indicate **poor mixing**.
It is tempting to want to **thin** the chain to eliminate the autocorrelation (*e.g.* taking every 20th sample from the traces above), but this is a waste of time. Since thinning deliberately throws out the majority of the samples, no efficiency is gained; you ultimately require more samples to achieve a particular desired sample size.
## Diagnostics for Gradient-based Samplers
Hamiltonian Monte Carlo is a powerful and efficient MCMC sampler when set up appropriately. However, this typically requires careful tuning of the sampler parameters, such as tree depth, leapfrog step size and target acceptance rate. Fortunately, the NUTS algorithm takes care of some of this for us. Nevertheless, tuning must be carefully monitored for failures that frequently arise. This is particularly the case when fitting challenging models, such as those with high curvature or heavy tails.
Fortunately, however, gradient-based sampling provides the ability to diagnose these pathologies. PyMC makes several diagnostic statistics available as attributes of the `MultiTrace` object returned by the `sample` function.
```
bioassay_trace.stat_names
```
- `mean_tree_accept`: The mean acceptance probability for the tree that generated this sample. The mean of these values across all samples but the burn-in should be approximately `target_accept` (the default for this is 0.8).
- `diverging`: Whether the trajectory for this sample diverged. If there are many diverging samples, this usually indicates that a region of the posterior has high curvature. Reparametrization can often help, but you can also try to increase `target_accept` to something like 0.9 or 0.95.
- `energy`: The energy at the point in phase-space where the sample was accepted. This can be used to identify posteriors with problematically long tails. See below for an example.
- `energy_error`: The difference in energy between the start and the end of the trajectory. For a perfect integrator this would always be zero.
- `max_energy_error`: The maximum difference in energy along the whole trajectory.
- `depth`: The depth of the tree that was used to generate this sample
- `tree_size`: The number of leaves of the sampling tree, when the sample was accepted. This is usually a bit less than $2 ^ \text{depth}$. If the tree size is large, the sampler is using a lot of leapfrog steps to find the next sample. This can for example happen if there are strong correlations in the posterior, if the posterior has long tails, if there are regions of high curvature ("funnels"), or if the variance estimates in the mass matrix are inaccurate. Reparametrisation of the model or estimating the posterior variances from past samples might help.
- `tune`: This is `True`, if step size adaptation was turned on when this sample was generated.
- `step_size`: The step size used for this sample.
- `step_size_bar`: The current best known step-size. After the tuning samples, the step size is set to this value. This should converge during tuning.
If the name of the statistic does not clash with the name of one of the variables, we can use indexing to get the values. The values for the chains will be concatenated.
We can see that the step sizes converged after the 2000 tuning samples for both chains to about the same value. The first 3000 values are from chain 1, the second from chain 2.
```
with bioassay_model:
trace = sample(1000, tune=2000, init=None, chains=2, discard_tuned_samples=False)
plt.plot(trace['step_size_bar'])
```
The `get_sampler_stats` method provides more control over which values should be returned, and it also works if the name of the statistic is the same as the name of one of the variables. We can use the `chains` option to control which chain's values are returned, or we can set `combine=False` to get the values for the individual chains:
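For example, a quick sketch of restricting a statistic to a single chain (using the two-chain trace sampled above):
```
# Tree depths from the first chain only (a sketch; 'depth' and the chain index are taken from the trace above)
depth_chain0 = trace.get_sampler_stats('depth', chains=[0])
depth_chain0.shape
```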
The `NUTS` step method has a maximum tree depth parameter so that infinite loops (which can occur for non-identified models) are avoided. When the maximum tree depth is reached (the default value is 10), the trajectory is stopped. However, complex (but identifiable) models can saturate this threshold, which reduces sampling efficiency.
The `MultiTrace` stores the tree depth for each iteration, so inspecting these traces can reveal saturation if it is occurring.
```
sizes1, sizes2 = trace.get_sampler_stats('depth', combine=False)
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, sharey=True)
ax1.plot(sizes1)
ax2.plot(sizes2)
```
We can also check the acceptance for the trees that generated this sample. The mean of these values across all samples (except the tuning stage) is expected to be the same as `target_accept`, which is 0.8 by default.
```
accept = trace.get_sampler_stats('mean_tree_accept', burn=1000)
sns.distplot(accept, kde=False)
```
### Divergent transitions
Recall that simulating Hamiltonian dynamics via a symplectic integrator uses a discrete approximation of a continuous function. This is only a reasonable approximation when the step sizes of the integrator are suitably small. A divergent transition may indicate that the approximation is poor.
If there are too many divergent transitions, then samples are not being drawn from the full posterior, and inferences based on the resulting sample will be biased.
If there are diverging transitions, PyMC3 will issue warnings indicating how many were discovered. We can obtain their indices from the trace.
```
trace['diverging'].nonzero()
```
### Bayesian Fraction of Missing Information
The Bayesian fraction of missing information (BFMI) is a measure of how hard it is to
sample level sets of the posterior at each iteration. Specifically, it quantifies how well momentum resampling matches the marginal energy distribution. A small value indicates that the adaptation phase of the sampler was unsuccessful, and invoking the central limit theorem may not be valid. It indicates whether the sampler is able to adequately explore the posterior distribution.
Though there is not an established rule of thumb for an adequate threshold, values close to one are optimal. Reparameterizing the model is sometimes helpful for improving this statistic.
```
from pymc3 import bfmi
bfmi(trace)
```
Another way of diagnosing this phenomenon is by comparing the overall distribution of
energy levels with the *change* of energy between successive samples. Ideally, they should be very similar.
If the distribution of energy transitions is narrow relative to the marginal energy distribution, this is a sign of inefficient sampling, as many transitions are required to completely explore the posterior. On the other hand, if the energy transition distribution is similar to that of the marginal energy, this is evidence of efficient sampling, resulting in near-independent samples from the posterior.
```
energy = trace['energy']
energy_diff = np.diff(energy)
sns.distplot(energy - energy.mean(), label='energy')
sns.distplot(energy_diff, label='energy diff')
plt.legend()
```
If the overall distribution of energy levels has longer tails, the efficiency of the sampler will deteriorate quickly.
## Goodness of Fit
Checking for model convergence is only the first step in the evaluation
of MCMC model outputs. It is possible for an entirely unsuitable model
to converge, so additional steps are needed to ensure that the estimated
model adequately fits the data. One intuitive way of evaluating model
fit is to compare model predictions with the observations used to fit
the model. In other words, the fitted model can be used to simulate
data, and the distribution of the simulated data should resemble the
distribution of the actual data.
Fortunately, simulating data from the model is a natural component of
the Bayesian modelling framework. Recall, from the discussion on
imputation of missing data, the posterior predictive distribution:
$$p(\tilde{y}|y) = \int p(\tilde{y}|\theta) f(\theta|y) d\theta$$
Here, $\tilde{y}$ represents some hypothetical new data that would be
expected, taking into account the posterior uncertainty in the model
parameters.
Sampling from the posterior predictive distribution is easy
in PyMC. The `sample_ppc` function draws posterior predictive checks from all of the data likelihoods. Consider the `gelman_bioassay` example,
where deaths are modeled as a binomial random variable for which
the probability of death is a logit-linear function of the dose of a
particular drug.
The posterior predictive distribution of deaths uses the same functional
form as the data likelihood, in this case a binomial stochastic. Here is
the corresponding sample from the posterior predictive distribution (we typically need very few samples relative to the MCMC sample):
```
from pymc3 import sample_ppc
with bioassay_model:
deaths_sim = sample_ppc(bioassay_trace, samples=500)
```
The degree to which simulated data correspond to observations can be evaluated in at least two ways. First, these quantities can simply be compared visually. This allows for a qualitative comparison of model-based replicates and observations. If there is poor fit, the true value of the data may appear in the tails of the histogram of replicated data, while a good fit will tend to show the true data in high-probability regions of the posterior predictive distribution. The Matplot package in PyMC provides an easy way of producing such plots, via the `gof_plot` function.
```
fig, axes = plt.subplots(1, 4, figsize=(14, 4))
for obs, sim, ax in zip(deaths, deaths_sim['obs_deaths'].T, axes):
ax.hist(sim, bins=range(7))
ax.plot(obs+0.5, 1, 'ro')
```
## Exercise: Meta-analysis of beta blocker effectiveness
Carlin (1992) considers a Bayesian approach to meta-analysis, and includes the following examples of 22 trials of beta-blockers to prevent mortality after myocardial infarction.
In a random effects meta-analysis we assume the true effect (on a log-odds scale) $d_i$ in a trial $i$
is drawn from some population distribution. Let $r^C_i$ denote number of events in the control group in trial $i$,
and $r^T_i$ denote events under active treatment in trial $i$. Our model is:
$$\begin{aligned}
r^C_i &\sim \text{Binomial}\left(p^C_i, n^C_i\right) \\
r^T_i &\sim \text{Binomial}\left(p^T_i, n^T_i\right) \\
\text{logit}\left(p^C_i\right) &= \mu_i \\
\text{logit}\left(p^T_i\right) &= \mu_i + \delta_i \\
\delta_i &\sim \text{Normal}(d, t) \\
\mu_i &\sim \text{Normal}(m, s)
\end{aligned}$$
We want to make inferences about the population effect $d$, and the predictive distribution for the effect $\delta_{\text{new}}$ in a new trial. Build a model to estimate these quantities in PyMC, and (1) use convergence diagnostics to check for convergence and (2) use posterior predictive checks to assess goodness-of-fit.
Here are the data:
```
r_t_obs = [3, 7, 5, 102, 28, 4, 98, 60, 25, 138, 64, 45, 9, 57,
25, 33, 28, 8, 6, 32, 27, 22]
n_t_obs = [38, 114, 69, 1533, 355, 59, 945, 632, 278,1916, 873, 263,
291, 858, 154, 207, 251, 151, 174, 209, 391, 680]
r_c_obs = [3, 14, 11, 127, 27, 6, 152, 48, 37, 188, 52, 47, 16, 45,
31, 38, 12, 6, 3, 40, 43, 39]
n_c_obs = [39, 116, 93, 1520, 365, 52, 939, 471, 282, 1921, 583, 266,
293, 883, 147, 213, 122, 154, 134, 218, 364, 674]
N = len(n_c_obs)
# Write your answer here
```
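One possible starting point is sketched below, in place of the "Write your answer here" stub. The Half-Cauchy priors on the scale parameters and the prior widths are illustrative assumptions, not part of the problem statement; `invlogit`, `Model`, `Normal`, `Binomial`, and `sample` are the same imports used earlier in this notebook.
```
from pymc3 import HalfCauchy

with Model() as meta_analysis:
    # Population-level parameters (priors are illustrative assumptions)
    d = Normal('d', 0, sd=10)
    tau = HalfCauchy('tau', 5)
    m = Normal('m', 0, sd=10)
    s = HalfCauchy('s', 5)

    # Trial-level effects
    mu = Normal('mu', m, sd=s, shape=N)
    delta = Normal('delta', d, sd=tau, shape=N)

    # Binomial likelihoods for the control and treatment arms
    p_c = invlogit(mu)
    p_t = invlogit(mu + delta)
    obs_c = Binomial('obs_c', n=n_c_obs, p=p_c, observed=r_c_obs)
    obs_t = Binomial('obs_t', n=n_t_obs, p=p_t, observed=r_t_obs)

    # Predictive effect for a new trial
    delta_new = Normal('delta_new', d, sd=tau)

    meta_trace = sample(1000, tune=1000)
```
Convergence can then be checked with `gelman_rubin(meta_trace)` and a traceplot, and goodness of fit with `sample_ppc`, exactly as in the sections above.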
## References
Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science. A Review Journal of the Institute of Mathematical Statistics, 457–472.
Geweke, J., Berger, J. O., & Dawid, A. P. (1992). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments. In Bayesian Statistics 4.
Brooks, S. P., Catchpole, E. A., & Morgan, B. J. T. (2000). Bayesian Animal Survival Estimation. Statistical Science. A Review Journal of the Institute of Mathematical Statistics, 15(4), 357–376. doi:10.1214/ss/1177010123
Gelman, A., Meng, X., & Stern, H. (1996). Posterior predictive assessment of model fitness via realized discrepancies with discussion. Statistica Sinica, 6, 733–807.
Betancourt, M. (2017). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv.org.
# Chapter 16. Logistic Regression Assignment
```
import matplotlib.pyplot as plt
import os
from typing import List, Tuple
import csv
from scratch.linear_algebra import Vector, get_column
```
## 1. Dataset
### 1.1 Downloading the dataset
```
import requests
data = requests.get("https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data")
dataset_path = os.path.join('data', 'wdbc.data')
with open(dataset_path, "w") as f:
f.write(data.text)
```
### 1.2 Parsing the data
```
def parse_cancer_row(row: List[str]) -> Tuple[Vector, int]:
measurements = [float(value) for value in row[2:]]
label = row[1]
label = 1 if label == 'M' else 0
return measurements, label
```
### 1.3 Reading the data
Wisconsin Breast Cancer Diagnostic dataset
https://www.kaggle.com/uciml/breast-cancer-wisconsin-data
```
X_cancer : List[Vector] = []
y_cancer : List[int] = []
with open(dataset_path) as f:
reader = csv.reader(f)
for row in reader:
x, y = parse_cancer_row(row)
X_cancer.append(x)
y_cancer.append(y)
print(X_cancer[0])
print(y_cancer[0])
```
#### 1.4 Column names
```
columns = [
"radius_mean", "texture_mean", "perimeter_mean", "area_mean", "smoothness_mean",
"compactness_mean", "concavity_mean", "points_mean", "symmetry_mean", "dimension_mean",
"radius_se", "texture_se", "perimeter_se", "area_se", "smoothness_se",
"compactness_se", "concavity_se", "points_se", "symmetry_se", "dimension_se",
"radius_worst", "texture_worst", "perimeter_worst", "area_worst", "smoothness_worst",
"compactness_worst", "concavity_worst", "points_worst", "symmetry_worst", "dimension_worst",
]
```
## 2. Data Exploration
### 2.1 Checking the class balance
```
from collections import defaultdict
label_type = defaultdict(int)
for y in y_cancer:
label = 'M' if y == 1 else 'B'
label_type[label] += 1
plt.figure(figsize=(8,4))
plt.subplot(1, 2, 1)
plt.bar(label_type.keys(),
label_type.values(),
0.5,
facecolor="#2E495E",
edgecolor=(0, 0, 0)) # Black edges for each bar
plt.xlabel("Diagnosis")
plt.ylabel("# of diagnosis")
plt.title("Cancer diagnosis")
plt.subplot(1, 2, 2)
pies = plt.pie(label_type.values(),
labels=label_type.keys(),
startangle=90)
plt.legend()
plt.show()
```
### 2.2 Histogram of each feature
```
def histogram(ax, col : int):
n, bins, patches = ax.hist(get_column(X_cancer, col),
8,
facecolor="#2E495E",
edgecolor=(0, 0, 0))
ax.set_title(columns[col], fontsize=8)
from matplotlib import pyplot as plt
num_rows = 6
num_cols = 5
fig, ax = plt.subplots(num_rows, num_cols, figsize=(num_cols*4, num_rows*4))
for row in range(num_rows):
for col in range(num_cols):
histogram(ax[row][col], num_cols * row + col)
plt.show()
```
### 2.3 Scatter plots of feature pairs
```
from typing import Dict
points_by_diagnosis: Dict[str, List[Vector]] = defaultdict(list)
for i, x in enumerate(X_cancer):
y = y_cancer[i]
label = 'M' if y == 1 else 'B'
points_by_diagnosis[label].append(x)
start = 0
end = start + 10
pairs = [(i, j) for i in range(start, end) for j in range(i+1, end) if i < j]
print(pairs)
marks = ['+', '.']
from matplotlib import pyplot as plt
num_rows = 9
num_cols = 5
fig, ax = plt.subplots(num_rows, num_cols, figsize=(num_cols*3, num_rows*3))
for row in range(num_rows):
for col in range(num_cols):
i, j = pairs[num_cols * row + col]
ax[row][col].set_title(f"{columns[i]} vs {columns[j]}", fontsize=8)
ax[row][col].set_xticks([])
ax[row][col].set_yticks([])
for mark, (diagnosis, points) in zip(marks, points_by_diagnosis.items()):
xs = [point[i] for point in points]
ys = [point[j] for point in points]
ax[row][col].scatter(xs, ys, marker=mark, label=diagnosis)
ax[-1][-1].legend(loc='lower right', prop={'size': 6})
plt.show()
```
## 3. Data Preprocessing
### 3.1 Splitting the dataset
#### Add a constant input of 1 to the input data for the intercept term
```
X_cancer = [[1.0] + row for row in X_cancer]
import random
from scratch.machine_learning import train_test_split
random.seed(12)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer, 0.25)
print('train dataset :', len(X_train))
print('test dataset :', len(X_test))
```
### 3.2 Data Standardization (Q1)
Write a normalization() function so that the test data is standardized using the mean and standard deviation of the training data.
```
from scratch.working_with_data import scale, rescale
def normalization(data: List[Vector],
means : Vector = None,
stdevs : Vector = None) -> List[Vector]:
# your code
dim = len(data[0])
if (means == None or stdevs == None):
means, stdevs = scale(data)
# Make a copy of each vector
rescaled = [v[:] for v in data]
for v in rescaled:
for i in range(dim):
if stdevs[i] > 0:
v[i] = (v[i] - means[i]) / stdevs[i]
return rescaled, means, stdevs
X_train_normed, X_train_means, X_train_stdevs = normalization(X_train)
X_test_normed, _, _ = normalization(X_test, X_train_means, X_train_stdevs)
```
## 4. Logistic Regression
### 4.1 Logistic Function (Q2)
Implement the logistic function and its derivative.
```
import math
# your code
# Logistic function
def logistic(x: float) -> float:
return 1.0/(1 + math.exp(-x))
# Derivative of the logistic function
def logistic_prime(x: float) -> float:
y = logistic(x)
return y * (1 - y)
```
### 4.2 Loss Function (Q3)
Implement the loss function defined as the negative log-likelihood (NLL) of the Bernoulli distribution.
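For reference, the quantity to implement is the negative log-likelihood of the Bernoulli model, where $\sigma$ denotes the logistic function defined above:
$$-\log L(\beta) = -\sum_{i} \Big[ y_i \log \sigma(x_i \cdot \beta) + (1 - y_i) \log\big(1 - \sigma(x_i \cdot \beta)\big) \Big]$$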
```
from scratch.linear_algebra import Vector, dot
from typing import List
# your code
# Negative log-likelihood of a single example
def _negative_log_likelihood(x: Vector, y: float, beta: Vector) -> float:
if y == 1:
return -math.log(logistic(dot(x, beta)))
else:
return -math.log(1 - logistic(dot(x, beta)))
# Sum of the NLL over the whole dataset
def negative_log_likelihood(xs: List[Vector],
ys: List[float],
beta: Vector) -> float:
return sum(_negative_log_likelihood(x, y, beta)
for x, y in zip(xs, ys))
```
### 4.3 Gradient of the Loss Function (Q4)
Implement the gradient of the NLL.
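For reference, the partial derivative of the NLL with respect to each coefficient $\beta_j$ is
$$\frac{\partial}{\partial \beta_j}\big(-\log L(\beta)\big) = -\sum_{i} \big(y_i - \sigma(x_i \cdot \beta)\big)\, x_{ij}$$
which is exactly what the per-example helper below computes.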
```
from scratch.linear_algebra import vector_sum
# your code
def _negative_log_partial_j(x: Vector, y: float, beta: Vector, j: int) -> float:
return -(y - logistic(dot(x, beta))) * x[j]
def _negative_log_gradient(x: Vector, y: float, beta: Vector) -> Vector:
return [_negative_log_partial_j(x, y, beta, j) for j in range(len(beta))]
def negative_log_gradient(xs: List[Vector],
ys: List[float],
beta: Vector) -> Vector:
return vector_sum([_negative_log_gradient(x, y, beta)
for x, y in zip(xs, ys)])
```
### 4.4 Model Training (Q5)
Implement logistic regression model training using gradient descent.
```
import random
import tqdm
import IPython.display as display
from scratch.linear_algebra import vector_mean
from scratch.gradient_descent import gradient_step
def minibatches(xs: List[Vector], ys: List[float], batch_size=20):
    # Yield successive (inputs, targets) mini-batches
    for start in range(0, len(xs), batch_size):
        yield xs[start:start+batch_size], ys[start:start+batch_size]
def logistic_regression(xs: List[Vector],
ys: List[float],
learning_rate: float = 0.001,
num_steps: int = 1000,
batch_size: int = 1) -> Vector:
# your code
    # Initialize beta randomly, one coefficient per input dimension (intercept included)
    beta = [random.random() for _ in range(len(xs[0]))]
    history = []
    with tqdm.trange(num_steps) as t:
        for epoch in t:
            for batch_xs, batch_ys in minibatches(xs, ys, batch_size=batch_size):
                gradient = negative_log_gradient(batch_xs, batch_ys, beta)
                beta = gradient_step(beta, gradient, -learning_rate)
loss = negative_log_likelihood(xs, ys, beta)
t.set_description(f"loss: {loss:.3f} beta: {beta}")
history.append(loss)
if epoch and epoch % 100 == 0:
display.clear_output(wait=True)
plt.plot(history)
plt.show()
return beta
beta = logistic_regression(X_train_normed, y_train)
```
#### Inspecting 𝜷
```
plt.plot(beta)
plt.show()
```
### 4.5 Model Testing (Q6)
Using the test data, make predictions with the model and compute TP, FP, FN, and TN.
```
# your code
true_positives = false_positives = true_negatives = false_negatives = 0
for x_i, y_i in zip(X_test_normed, y_test):
prediction = logistic(dot(beta, x_i))
    if y_i == 1 and prediction >= 0.5:  # TP: malignant and we predict malignant
        true_positives += 1
    elif y_i == 1:                      # FN: malignant but we predict benign
        false_negatives += 1
    elif prediction >= 0.5:             # FP: benign but we predict malignant
        false_positives += 1
    else:                               # TN: benign and we predict benign
        true_negatives += 1
TP = true_positives
FN = false_negatives
FP = false_positives
TN = true_negatives
confusion_matrix = [[TP, FP], [FN, TN]]
```
### 4.6 Model Performance
```
from scratch.machine_learning import accuracy, precision, recall, f1_score
print(confusion_matrix)
print("accuracy :", accuracy(TP, FP, FN, TN))
print("precision :", precision(TP, FP, FN, TN))
print("recall :", recall(TP, FP, FN, TN))
print("f1_score :", f1_score(TP, FP, FN, TN))
predictions = [logistic(dot(beta, x)) for x in X_test_normed]
plt.scatter(predictions, y_test, marker='+')
plt.xlabel("predicted probability")
plt.ylabel("actual outcome")
plt.title("Logistic Regression Predicted vs. Actual")
plt.show()
```
# Nearest Centroid Classification with MInMaxScaler & PowerTransformer
This code template is for a classification task using a simple NearestCentroid classifier, with the feature rescaling technique MinMaxScaler and the feature transformation technique PowerTransformer combined in a pipeline.
### Required Packages
```
!pip install imblearn
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from imblearn.over_sampling import RandomOverSampler
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder,MinMaxScaler, PowerTransformer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features = []
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values, if any exist, and convert string class data in the dataset by encoding it as integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
def EncodeY(df):
if len(df.unique())<=2:
return df
else:
un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
df=LabelEncoder().fit_transform(df)
EncodedT=[xi for xi in range(len(un_EncodedT))]
print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
return df
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
#### Handling Target Imbalance
The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn have poor performance on, the minority class, although typically it is performance on the minority class that is most important.
One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class. We will perform oversampling using the imblearn library.
```
x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
```
### Data Scaling
Used sklearn.preprocessing.MinMaxScaler
This estimator scales and translates each feature individually such that it is in the given range on the training set, e.g. between zero and one.
Read more at [scikit-learn.org](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html)
### Feature Transformation
Used sklearn.preprocessing.PowerTransformer
Apply a power transform featurewise to make data more Gaussian-like.
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
Read more at [scikit-learn.org](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html)
### Model
The NearestCentroid classifier is a simple algorithm that represents each class by the centroid of its members. In effect, this makes it similar to the label updating phase of the KMeans algorithm. It also has no parameters to choose, making it a good baseline classifier. It does, however, suffer on non-convex classes, as well as when classes have drastically different variances, as equal variance in all dimensions is assumed.
#### Tuning Parameter
> **metric** : The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by metrics.pairwise.pairwise_distances for its metric parameter. The centroids for the samples corresponding to each class is the point from which the sum of the distances of all samples that belong to that particular class are minimized. If the “manhattan” metric is provided, this centroid is the median and for all other metrics, the centroid is now set to be the mean.
> **shrink_threshold** :Threshold for shrinking centroids to remove features.
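The build cell below uses the default settings. To set these two parameters explicitly inside the same pipeline, a sketch (the particular values here are illustrative assumptions, not tuned choices):
```
# Illustrative only: Manhattan distance with centroid shrinkage
tuned_model = make_pipeline(MinMaxScaler(),
                            PowerTransformer(),
                            NearestCentroid(metric='manhattan', shrink_threshold=0.1))
```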
```
# Build Model here
model = make_pipeline(MinMaxScaler(),PowerTransformer(),NearestCentroid())
model.fit(x_train, y_train)
```
#### Model Accuracy
The score() method returns the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions are true and how many are false.
* **where**:
  - Precision:- accuracy of the positive predictions.
  - Recall:- fraction of positives that were correctly identified.
  - f1-score:- harmonic mean of precision and recall.
  - support:- the number of actual occurrences of the class in the specified dataset.
```
print(classification_report(y_test,model.predict(x_test)))
```
#### Creator: Snehaan Bhawal , Github: [Profile](https://github.com/Sbhawal)
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms, models
from torch.autograd import Variable
data_dir = 'Cat_Dog_data'
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
test_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)
test_data
train_data
train_loader = torch.utils.data.DataLoader(train_data,batch_size=128,shuffle=True)
test_loader = torch.utils.data.DataLoader(test_data,batch_size=128)
model = models.densenet121(pretrained=True)
print(model)
for param in model.parameters():
param.requires_grad = False
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([('fc1',nn.Linear(1024,500)),
('relu',nn.ReLU()),
('fc2',nn.Linear(500,2)),
('output',nn.LogSoftmax(dim=1))]))
model.classifier = classifier
torch.cuda.is_available()
import time
#for cuda in [True, False]:
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(),lr=0.003)
# if cuda:
# model.cuda()
# else:
# model.cpu()
for ii, (inputs, labels) in enumerate(train_loader):
    inputs, labels = Variable(inputs), Variable(labels)
    # if cuda:
    #     inputs, labels = inputs.cuda(), labels.cuda()
    # else:
    #     inputs, labels = inputs.cpu(), labels.cpu()
    start = time.time()
    optimizer.zero_grad()  # clear gradients accumulated from the previous batch
    outputs = model.forward(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    if ii == 1:
        break
print(f"Time per batch: {time.time()-start:.3f} seconds")
```
### Full Model
```
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.densenet121(pretrained=True)
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
model.classifier = nn.Sequential(nn.Linear(1024, 256),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(256, 2),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.003)
model.to(device);
epochs = 1
steps = 0
running_loss = 0
print_every = 5
for epoch in range(epochs):
    for inputs, labels in train_loader:
steps += 1
# Move input and label tensors to the default device
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
logps = model.forward(inputs)
loss = criterion(logps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if steps % print_every == 0:
test_loss = 0
accuracy = 0
model.eval()
with torch.no_grad():
                for inputs, labels in test_loader:
inputs, labels = inputs.to(device), labels.to(device)
logps = model.forward(inputs)
batch_loss = criterion(logps, labels)
test_loss += batch_loss.item()
# Calculate accuracy
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
print(f"Epoch {epoch+1}/{epochs}.. "
f"Train loss: {running_loss/print_every:.3f}.. "
f"Test loss: {test_loss/len(testloader):.3f}.. "
f"Test accuracy: {accuracy/len(testloader):.3f}")
running_loss = 0
model.train()
```
# Modeling and Simulation in Python
Chapter 13
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
```
### Code from previous chapters
`make_system`, `plot_results`, and `calc_total_infected` are unchanged.
```
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= np.sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
```
Here's an updated version of `run_simulation` that uses `unpack`.
```
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
unpack(system)
frame = TimeFrame(columns=init.index)
frame.row[t0] = init
for t in linrange(t0, t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
```
**Exercise:** Write a version of `update_func` that uses `unpack`.
```
# Original
def update_func(state, t, system):
"""Update the SIR model.
state: State (s, i, r)
t: time
system: System object
returns: State (sir)
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def update_func(state, t, system):
"""Update the SIR model.
state: State (s, i, r)
t: time
system: System object
returns: State (sir)
"""
unpack(system)
s, i, r = state
infected = beta * i * s
recovered = gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
```
Test the updated code with this example.
```
system = make_system(0.333, 0.25)
results = run_simulation(system, update_func)
results.head()
plot_results(results.S, results.I, results.R)
```
### Sweeping beta
Make a range of values for `beta`, with constant `gamma`.
```
beta_array = linspace(0.1, 1.1, 11)
gamma = 0.25
```
Run the simulation once for each value of `beta` and print total infections.
```
for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(system.beta, calc_total_infected(results))
```
Wrap that loop in a function and return a `SweepSeries` object.
```
def sweep_beta(beta_array, gamma):
"""Sweep a range of values for beta.
beta_array: array of beta values
gamma: recovery rate
returns: SweepSeries that maps from beta to total infected
"""
sweep = SweepSeries()
for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
sweep[system.beta] = calc_total_infected(results)
return sweep
```
Sweep `beta` and plot the results.
```
infected_sweep = sweep_beta(beta_array, gamma)
label = 'gamma = ' + str(gamma)
plot(infected_sweep, label=label)
decorate(xlabel='Contacts per day (beta)',
ylabel='Fraction infected')
savefig('figs/chap06-fig01.pdf')
```
### Sweeping gamma
Using the same array of values for `beta`
```
beta_array
```
And now an array of values for `gamma`
```
gamma_array = [0.2, 0.4, 0.6, 0.8]
```
For each value of `gamma`, sweep `beta` and plot the results.
```
for gamma in gamma_array:
infected_sweep = sweep_beta(beta_array, gamma)
label = 'γ = ' + str(gamma)
plot(infected_sweep, label=label)
decorate(xlabel='Contacts per day (beta)',
ylabel='Fraction infected',
loc='upper left')
savefig('figs/chap06-fig02.pdf')
```
**Exercise:** Suppose the infectious period for the Freshman Plague is known to be 2 days on average, and suppose during one particularly bad year, 40% of the class is infected at some point. Estimate the time between contacts.
```
beta_array = linspace(0.4, 0.5, 100)
gamma = 0.5
infected_sweep = sweep_beta(beta_array, gamma)
# Solution goes here
```
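One way to finish the exercise, sketched under the assumption that `infected_sweep` behaves like a `Series` indexed by `beta` (as in the sweeps above): an infectious period of 2 days gives `gamma = 0.5` (set in the cell above), so we look for the `beta` whose total infected fraction is closest to 40% and report `1 / beta` as the average time between contacts.
```
# Sketch of a solution (not the book's official answer)
best_beta = min(infected_sweep.index, key=lambda b: abs(infected_sweep[b] - 0.4))
print('beta =', best_beta)
print('time between contacts =', 1 / best_beta, 'days')
```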
# Writing Your Own Graph Algorithms
The analytical engine in GraphScope derives from [GRAPE](https://dl.acm.org/doi/10.1145/3282488), a graph processing system proposed at SIGMOD 2017. GRAPE differs from prior systems in its ability to parallelize sequential graph algorithms as a whole. In GRAPE, sequential algorithms can be easily **plugged into** the system with only minor changes and get parallelized to handle large graphs efficiently.
In this tutorial, we will show how to define and run your own algorithm in PIE and Pregel models.
Sounds like fun? Excellent, here we go!
## Writing algorithm in PIE model
GraphScope enables users to write algorithms in the [PIE](https://dl.acm.org/doi/10.1145/3282488) programming model in a pure Python mode, first of all, you should import **graphscope** package and the **pie** decorator.
```
import graphscope
from graphscope.framework.app import AppAssets
from graphscope.analytical.udf.decorators import pie
```
We use the single source shortest path ([SSSP](https://en.wikipedia.org/wiki/Shortest_path_problem)) algorithm as an example. To implement the PIE model, you just need to **fulfill this class**
```
@pie(vd_type="double", md_type="double")
class SSSP_PIE(AppAssets):
@staticmethod
def Init(frag, context):
pass
@staticmethod
def PEval(frag, context):
pass
@staticmethod
def IncEval(frag, context):
pass
```
The **pie** decorator contains two params named `vd_type` and `md_type`, which represent the vertex data type and message type respectively.
You may specify types for your own algorithms; available values are `int`, `double`, and `string`.
In our **SSSP** case, we compute the shortest distance to the source for all nodes, so we use `double` for both `vd_type` and `md_type`.
`Init`, `PEval`, and `IncEval` each take **frag** and **context** as parameters. You can use these two parameters to access the fragment data and intermediate results. For detailed usage, please refer to the [Cython SDK API](https://graphscope.io/docs/reference/cython_sdk.html).
### Fulfill Init Function
```
@pie(vd_type="double", md_type="double")
class SSSP_PIE(AppAssets):
@staticmethod
def Init(frag, context):
v_label_num = frag.vertex_label_num()
for v_label_id in range(v_label_num):
nodes = frag.nodes(v_label_id)
context.init_value(
nodes, v_label_id, 1000000000.0, PIEAggregateType.kMinAggregate
)
context.register_sync_buffer(v_label_id, MessageStrategy.kSyncOnOuterVertex)
@staticmethod
def PEval(frag, context):
pass
@staticmethod
def IncEval(frag, context):
pass
```
The `Init` function is responsible for 1) setting the initial value for each node; 2) defining the strategy of message passing; and 3) specifying the aggregator for handling received messages in each round.
Note that the algorithm you define will run on a property graph, so we first get the number of vertex labels with `v_label_num = frag.vertex_label_num()`. We can then traverse all nodes with the same label
and set their initial value with `nodes = frag.nodes(v_label_id)` and `context.init_value(nodes, v_label_id, 1000000000.0, PIEAggregateType.kMinAggregate)`.
Since we are computing the shortest path between the source node and all other nodes, we use `PIEAggregateType.kMinAggregate` as the aggregator for message aggregation, which means it will
perform a `min` operation over all received messages. Other available aggregators are `kMaxAggregate`, `kSumAggregate`, `kProductAggregate`, and `kOverwriteAggregate`.
At the end of the `Init` function, we register the sync buffer for each node with `MessageStrategy.kSyncOnOuterVertex`, which tells the engine how to pass messages.
### Fulfill PEval Function
```
@pie(vd_type="double", md_type="double")
class SSSP_PIE(AppAssets):
@staticmethod
def Init(frag, context):
v_label_num = frag.vertex_label_num()
for v_label_id in range(v_label_num):
nodes = frag.nodes(v_label_id)
context.init_value(
nodes, v_label_id, 1000000000.0, PIEAggregateType.kMinAggregate
)
context.register_sync_buffer(v_label_id, MessageStrategy.kSyncOnOuterVertex)
@staticmethod
def PEval(frag, context):
src = int(context.get_config(b"src"))
graphscope.declare(graphscope.Vertex, source)
native_source = False
v_label_num = frag.vertex_label_num()
for v_label_id in range(v_label_num):
if frag.get_inner_node(v_label_id, src, source):
native_source = True
break
if native_source:
context.set_node_value(source, 0)
else:
return
e_label_num = frag.edge_label_num()
for e_label_id in range(e_label_num):
edges = frag.get_outgoing_edges(source, e_label_id)
for e in edges:
dst = e.neighbor()
distv = e.get_int(2)
if context.get_node_value(dst) > distv:
context.set_node_value(dst, distv)
@staticmethod
def IncEval(frag, context):
pass
```
In `PEval` of **SSSP**, the queried source node is obtained with `context.get_config(b"src")`.
`PEval` checks whether each fragment contains the source node via `frag.get_inner_node(v_label_id, src, source)`. Note that the `get_inner_node` method needs a `source` parameter of type `Vertex`, which you can declare with `graphscope.declare(graphscope.Vertex, source)`.
If a fragment contains the source node, it traverses the outgoing edges of the source with `frag.get_outgoing_edges(source, e_label_id)`. For each neighbor, it computes the distance from the source and updates the value if it is less than the current value.
### Fulfill IncEval Function
```
@pie(vd_type="double", md_type="double")
class SSSP_PIE(AppAssets):
@staticmethod
def Init(frag, context):
v_label_num = frag.vertex_label_num()
for v_label_id in range(v_label_num):
nodes = frag.nodes(v_label_id)
context.init_value(
nodes, v_label_id, 1000000000.0, PIEAggregateType.kMinAggregate
)
context.register_sync_buffer(v_label_id, MessageStrategy.kSyncOnOuterVertex)
@staticmethod
def PEval(frag, context):
src = int(context.get_config(b"src"))
graphscope.declare(graphscope.Vertex, source)
native_source = False
v_label_num = frag.vertex_label_num()
for v_label_id in range(v_label_num):
if frag.get_inner_node(v_label_id, src, source):
native_source = True
break
if native_source:
context.set_node_value(source, 0)
else:
return
e_label_num = frag.edge_label_num()
for e_label_id in range(e_label_num):
edges = frag.get_outgoing_edges(source, e_label_id)
for e in edges:
dst = e.neighbor()
distv = e.get_int(2)
if context.get_node_value(dst) > distv:
context.set_node_value(dst, distv)
@staticmethod
def IncEval(frag, context):
v_label_num = frag.vertex_label_num()
e_label_num = frag.edge_label_num()
for v_label_id in range(v_label_num):
iv = frag.inner_nodes(v_label_id)
for v in iv:
v_dist = context.get_node_value(v)
for e_label_id in range(e_label_num):
es = frag.get_outgoing_edges(v, e_label_id)
for e in es:
u = e.neighbor()
u_dist = v_dist + e.get_int(2)
if context.get_node_value(u) > u_dist:
context.set_node_value(u, u_dist)
```
The only difference between `IncEval` and `PEval` of the **SSSP** algorithm is that `IncEval` is invoked
on every fragment, rather than only on the fragment containing the source node. A fragment repeats `IncEval` until no more messages are received. When all the fragments have finished computation, the algorithm terminates.
### Run Your Algorithm on A Graph.
First, let's establish a session and load a graph for testing.
```
from graphscope.framework.loader import Loader
# the location of the property graph for testing
property_dir = '/home/jovyan/datasets/property'
graphscope.set_option(show_log=True)
k8s_volumes = {
"data": {
"type": "hostPath",
"field": {
"path": "/testingdata",
"type": "Directory"
},
"mounts": {
"mountPath": "/home/jovyan/datasets",
"readOnly": True
}
}
}
sess = graphscope.session(k8s_volumes=k8s_volumes)
graph = sess.g(directed=False)
graph = graph.add_vertices("/home/jovyan/datasets/property/p2p-31_property_v_0", label="person")
graph = graph.add_edges("/home/jovyan/datasets/property/p2p-31_property_e_0", label="knows")
```
Then initialize your algorithm and query the shortest path from vertex `6` over the graph.
```
sssp = SSSP_PIE()
ctx = sssp(graph, src=6)
```
After running this cell, your algorithm should evaluate successfully. The results are stored in vineyard on the distributed machines. Let's fetch and check the results.
```
r1 = (
ctx.to_dataframe({"node": "v:person.id", "r": "r:person"})
.sort_values(by=["node"])
.to_numpy(dtype=float)
)
r1
```
### Dump and Reload Your Algorithm
You can dump and save your defined algorithm for future use.
```
import os
# specify the path you want to dump
dump_path = os.path.expanduser("~/Workspace/sssp_pie.gar")
# dump
SSSP_PIE.to_gar(dump_path)
```
Now, you can find a package named `sssp_pie.gar` in your `~/Workspace`. Reload this algorithm with the following code.
```
from graphscope.framework.app import load_app
# specify the path of the dumped algorithm to load
dump_path = os.path.expanduser("~/Workspace/sssp_pie.gar")
sssp2 = load_app("SSSP_PIE", dump_path)
```
### Write Algorithm in Pregel Model
In addition to the subgraph-based PIE model, GraphScope supports the vertex-centric Pregel model. To define a Pregel algorithm, you should import the **pregel** decorator and fulfill the functions defined on each vertex.
```
import graphscope
from graphscope.framework.app import AppAssets
from graphscope.analytical.udf.decorators import pregel
@pregel(vd_type="double", md_type="double")
class SSSP_Pregel(AppAssets):
@staticmethod
def Init(v, context):
pass
@staticmethod
def Compute(messages, v, context):
pass
```
The **pregel** decorator has two parameters named `vd_type` and `md_type`, which represent the vertex data type and message type respectively.
You can specify the types for your algorithm; options are `int`, `double`, and `string`. For **SSSP**, we set both to `double`.
Since the Pregel model is defined on vertices, the `Init` and `Compute` functions have a parameter `v` to access the vertex data. See more details in the [Cython SDK API](https://graphscope.io/docs/reference/cython_sdk.html).
### Fulfill Init Function
```
@pregel(vd_type="double", md_type="double")
class SSSP_Pregel(AppAssets):
@staticmethod
def Init(v, context):
v.set_value(1000000000.0)
@staticmethod
def Compute(messages, v, context):
pass
```
The `Init` function sets the initial value for each node by `v.set_value(1000000000.0)`
### Fulfill Compute Function
```
@pregel(vd_type="double", md_type="double")
class SSSP_Pregel(AppAssets):
@staticmethod
def Init(v, context):
v.set_value(1000000000.0)
@staticmethod
def Compute(messages, v, context):
src_id = context.get_config(b"src")
cur_dist = v.value()
new_dist = 1000000000.0
if v.id() == src_id:
new_dist = 0
for message in messages:
new_dist = min(message, new_dist)
if new_dist < cur_dist:
v.set_value(new_dist)
for e_label_id in range(context.edge_label_num()):
edges = v.outgoing_edges(e_label_id)
for e in edges:
v.send(e.vertex(), new_dist + e.get_int(2))
v.vote_to_halt()
```
The `Compute` function for **SSSP** computes the new distance for each node with the following steps:
1) Initialize the new value to 1000000000.
2) If the vertex is the source node, set its distance to 0.
3) Compute the `min` of the received messages, and update the vertex value if it is less than the current value.
These steps repeat until no new messages (shorter distances) are generated.
### Optional Combiner
Optionally, we can define a combiner to reduce the message communication overhead.
```
@pregel(vd_type="double", md_type="double")
class SSSP_Pregel(AppAssets):
@staticmethod
def Init(v, context):
v.set_value(1000000000.0)
@staticmethod
def Compute(messages, v, context):
src_id = context.get_config(b"src")
cur_dist = v.value()
new_dist = 1000000000.0
if v.id() == src_id:
new_dist = 0
for message in messages:
new_dist = min(message, new_dist)
if new_dist < cur_dist:
v.set_value(new_dist)
for e_label_id in range(context.edge_label_num()):
edges = v.outgoing_edges(e_label_id)
for e in edges:
v.send(e.vertex(), new_dist + e.get_int(2))
v.vote_to_halt()
@staticmethod
def Combine(messages):
ret = 1000000000.0
for m in messages:
ret = min(ret, m)
return ret
```
### Run Your Pregel Algorithm on Graph.
Next, let's run your Pregel algorithm on the graph, and check the results.
```
sssp_pregel = SSSP_Pregel()
ctx = sssp_pregel(graph, src=6)
r2 = (
ctx.to_dataframe({"node": "v:person.id", "r": "r:person"})
.sort_values(by=["node"])
.to_numpy(dtype=float)
)
r2
```
It is important to release resources when they are no longer used.
```
sess.close()
```
### Aggregator in Pregel
Pregel aggregators are a mechanism for global communication, monitoring, and counting. Each vertex can provide a value to an aggregator in superstep `S`; the system combines these
values using a reducing operator, and the resulting value is made available to all vertices in superstep `S+1`. GraphScope provides a number of predefined aggregators for Pregel algorithms, such as `min`, `max`, or `sum` operations on various data types.
Here is an example of using a built-in aggregator; more details can be found in the [Cython SDK API](https://graphscope.io/docs/reference/cython_sdk.html).
```
@pregel(vd_type="double", md_type="double")
class Aggregators_Pregel_Test(AppAssets):
@staticmethod
def Init(v, context):
# int
context.register_aggregator(
b"int_sum_aggregator", PregelAggregatorType.kInt64SumAggregator
)
context.register_aggregator(
b"int_max_aggregator", PregelAggregatorType.kInt64MaxAggregator
)
context.register_aggregator(
b"int_min_aggregator", PregelAggregatorType.kInt64MinAggregator
)
# double
context.register_aggregator(
b"double_product_aggregator", PregelAggregatorType.kDoubleProductAggregator
)
context.register_aggregator(
b"double_overwrite_aggregator",
PregelAggregatorType.kDoubleOverwriteAggregator,
)
# bool
context.register_aggregator(
b"bool_and_aggregator", PregelAggregatorType.kBoolAndAggregator
)
context.register_aggregator(
b"bool_or_aggregator", PregelAggregatorType.kBoolOrAggregator
)
context.register_aggregator(
b"bool_overwrite_aggregator", PregelAggregatorType.kBoolOverwriteAggregator
)
# text
context.register_aggregator(
b"text_append_aggregator", PregelAggregatorType.kTextAppendAggregator
)
@staticmethod
def Compute(messages, v, context):
if context.superstep() == 0:
context.aggregate(b"int_sum_aggregator", 1)
context.aggregate(b"int_max_aggregator", int(v.id()))
context.aggregate(b"int_min_aggregator", int(v.id()))
context.aggregate(b"double_product_aggregator", 1.0)
context.aggregate(b"double_overwrite_aggregator", 1.0)
context.aggregate(b"bool_and_aggregator", True)
context.aggregate(b"bool_or_aggregator", False)
context.aggregate(b"bool_overwrite_aggregator", True)
context.aggregate(b"text_append_aggregator", v.id() + b",")
else:
if v.id() == b"1":
assert context.get_aggregated_value(b"int_sum_aggregator") == 62586
assert context.get_aggregated_value(b"int_max_aggregator") == 62586
assert context.get_aggregated_value(b"int_min_aggregator") == 1
assert context.get_aggregated_value(b"double_product_aggregator") == 1.0
assert (
context.get_aggregated_value(b"double_overwrite_aggregator") == 1.0
)
assert context.get_aggregated_value(b"bool_and_aggregator") == True
assert context.get_aggregated_value(b"bool_or_aggregator") == False
assert (
context.get_aggregated_value(b"bool_overwrite_aggregator") == True
)
context.get_aggregated_value(b"text_append_aggregator")
v.vote_to_halt()
```
```
%reload_ext autoreload
%autoreload 2
from fastai.gen_doc.gen_notebooks import *
from pathlib import Path
```
### To update this notebook
Run `tools/sgen_notebooks.py`
Or run the cell below:
Make sure to refresh right after.
```
import glob
for f in Path().glob('*.ipynb'):
generate_missing_metadata(f)
```
# Metadata generated below
```
update_nb_metadata('tutorial.itemlist.ipynb',
summary='Advanced tutorial, explains how to create your custom `ItemBase` or `ItemList`',
title='Custom ItemList')
update_nb_metadata('tutorial.inference.ipynb',
summary='Intermediate tutorial, explains how to create a Learner for inference',
title='Inference Learner')
update_nb_metadata('tutorial.data.ipynb',
summary="Beginner's tutorial, explains how to quickly look at your data or model predictions",
title='Look at data')
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that saves the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Training in mixed precision implementation',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access of language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotations names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions to building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracking')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
update_nb_metadata('tutorial.data.ipynb')
update_nb_metadata('tutorial.itemlist.ipynb')
update_nb_metadata('tutorial.inference.ipynb')
```
# maysics.calculus Module Usage Guide
The calculus module contains seven functions:
|Name|Purpose|
|---|---|
|lim|limit|
|ha|Hamiltonian operator|
|grad|gradient|
|nebla_dot|dot product with the nabla operator|
|nebla_cross|cross product with the nabla operator|
|laplace|Laplace operator|
|inte|integration|
<br></br>
## Limits: lim
lim(f, x0, acc=0.01, method='both')
<br>Evaluates the function ```f``` as $x\rightarrow x_{0}$ within error ```acc```.
<br>```method``` can be 'both', '+', or '-', meaning the two-sided limit, the right-hand limit, and the left-hand limit respectively.
### DEMO 1-1: Evaluate $y=\frac{\sin(x)}{x}$ as $x\rightarrow0$
```
from maysics.calculus import lim
import numpy as np
def f(x):
return np.sin(x) / x
lim(f, 0)
```
<br></br>
## Hamiltonian operator: ha
The Hamiltonian operator: $\hat{H}=-\frac{\hbar^{2}\nabla^{2}}{2m}+U$
<br>ha(f, m, U, acc=0.1)
<br>Returns the new function obtained by applying the Hamiltonian operator to ```f```, with particle mass ```m```, potential energy ```U```, and error ```acc```.
<br>```f``` must take an array as input (not a scalar).
<br>```U``` is a constant or a function.
### DEMO 2-1: Apply the Hamiltonian operator to $y=x$
```
from maysics.calculus import ha
def f(x):
return x
# m=1, U=2
f_new = ha(f, 1, 2)
# evaluate the new function at x=(1, 2, 3)
f_new([1, 2, 3])
```
<br></br>
## Gradient: grad
grad(f, x, acc=0.1)
<br>Computes the gradient of function f at point x within error acc.
### DEMO 3-1: Gradient of $f(x,y)=x^{2}+y^{2}$ at the point $(3, 3)$
```
from maysics.calculus import grad
def f(x):
return x[0]**2 + x[1]**2
grad(f, [3, 3])
```
<br></br>
## Nabla operator: nebla_dot and nebla_cross
nebla_dot computes the dot product with the nabla operator (divergence): $\nabla\cdot\vec{f}$
<br>nebla_dot(f, x, acc=0.1)
<br>nebla_cross computes the cross product with the nabla operator (curl): $\nabla\times\vec{f}$ (the output of f must be three-dimensional)
<br>nebla_cross(f, x, acc=0.1)
<br>Usage is similar to the grad function.
### DEMO 4-1: Value of $\nabla\cdot\vec{f}$ at the point $(1,1,1)$ for $\vec{f}=x^{2}\vec{i}+y^{2}\vec{j}+z^{2}\vec{k}$
```
from maysics.calculus import nebla_dot
def f(x):
return x[0]**2, x[1]**2, x[2]**2
nebla_dot(f, [1, 1, 1])
```
### DEMO 4-2: Value of $\nabla\times\vec{f}$ at the point $(1,1,1)$ for $\vec{f}=x^{2}\vec{i}+y^{2}\vec{j}+z^{2}\vec{k}$
```
from maysics.calculus import nebla_cross
def f(x):
return x[0]**2, x[1]**2, x[2]**2
nebla_cross(f, [1, 1, 1])
```
<br></br>
## Laplace operator: laplace
$\Delta=\nabla^{2}$
<br>laplace(f, x, acc=0.1)
<br>Function ```f``` must take a one-dimensional array as input; batch input is not supported.
### DEMO 5-1: Function without batch input: $\Delta f$ of $f(x,y,z)=x^{2}+y^{2}+z^{2}$ at the point $(1,1,1)$
```
from maysics.calculus import laplace
def f(x):
return sum(x**2)
laplace(f, [1,1,1])
```
### DEMO 5-2: Function with batch input: $\Delta f$ of $f(x,y,z)=x^{2}+y^{2}+z^{2}$ at the points $\{(1,1,1),(2,2,2)\}$
```
from maysics.calculus import laplace
def f(x):
return (x**2).sum(axis=1)
laplace(f, [[1,1,1],[2,2,2]])
```
<br></br>
## Definite integral: inte
inte(func, area, method='rect', dim=1, args={}, condition=None, param={}, acc=0.1, loop=10000, height=1, random_state=None)
<br>```func``` is the integrand.
<br>```area``` is a two-dimensional array giving the integration range of each dimension.
<br>```method``` can be 'rect' or 'mc', meaning the rectangle method or the Monte Carlo method respectively; the ```acc``` parameter only affects the rectangle method, while ```loop```, ```height```, and ```random_state``` only affect the Monte Carlo method.
<br>```dim``` is the dimension of the input function, one-dimensional by default.
<br>```args``` holds the parameters of func other than the independent variable.
<br>```condition``` is a condition function; when ```condition``` is not None, only points satisfying ```condition``` (i.e., where it returns True) are included in the integration region.
<br>```param``` holds the parameters of ```condition``` other than the independent variable.
<br>```acc``` can be either a number or a one-dimensional array; the former uses the same precision in every dimension, while the latter allows a different precision per dimension.
### Definite integration with the rectangle method
The size of each hyper-rectangle is $f(x)\times acc^{dim}$.
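To illustrate the idea only (this is not the maysics implementation), a minimal NumPy sketch of the one-dimensional rectangle rule, where each rectangle has width `acc` and height `f(x)`:
```
import numpy as np

def rect_integrate(f, a, b, acc=0.1):
    """Approximate the integral of f on [a, b] with rectangles of width acc."""
    x = np.arange(a, b, acc)     # left endpoints of the rectangles
    return np.sum(f(x) * acc)    # add up f(x) * acc over all rectangles

# the integral of sin(x) on [0, pi] is exactly 2
print(rect_integrate(np.sin, 0, np.pi, acc=0.001))
```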
### DEMO 6-1: Integrate $f(x)=\sin(x)$ from 0 to π
```
from maysics.calculus import inte
import numpy as np
inte(np.sin, [[0, np.pi]])
```
### DEMO 6-2: Integrate $f(x)=A\sin(x)$ from 0 to π
```
from maysics.calculus import inte
import numpy as np
def f(x, A):
return A * np.sin(x)
# take A=2
inte(f, [[0, np.pi]], args={'A':2})
```
### DEMO 6-3: Integrate $f(x)=2\sin(x)$ from 0 to π over the region where the function value is at most 1
```
from maysics.calculus import inte
import numpy as np
def c(x):
if 2 * np.sin(x) <= 1:
return True
else:
return False
# take A=2
inte(np.sin, [[0, np.pi]], condition=c)
```
### DEMO 6-4: Integrate $f(x,y)=x^{2}+y^{2}$ over $x\in[-2,2]$, $y\in[-1,1]$
```
from maysics.calculus import inte
def f(x):
return x[0]**2 + x[1]**2
inte(f, [[-2, 2], [-1, 1]])
```
### Definite integration with the Monte Carlo method
Randomly generate loop points inside the hyper-rectangle $area\times height$ (note that $height\geq \max f(x)$ must hold everywhere in area).
<br>Let n be the number of points with $y\leq f(x)$; then the integral $\approx\frac{n}{loop}\times area \times height$.
<br>random_state is the random seed.
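Again purely as an illustration (not the maysics code), a minimal hit-or-miss Monte Carlo sketch of the formula above for a non-negative one-dimensional function:
```
import numpy as np

def mc_integrate(f, a, b, height, loop=10000, random_state=None):
    """Hit-or-miss Monte Carlo estimate of the integral of a non-negative f on [a, b]."""
    rng = np.random.default_rng(random_state)
    x = rng.uniform(a, b, loop)        # random x inside the integration range
    y = rng.uniform(0, height, loop)   # random y in [0, height]
    n = np.sum(y <= f(x))              # points that fall under the curve
    return n / loop * (b - a) * height

# the integral of 2*sin(x) on [0, pi] is exactly 4
print(mc_integrate(lambda x: 2 * np.sin(x), 0, np.pi, height=2))
```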
### DEMO 6-5: Integrate $f(x)=2\sin(x)$ from 0 to π
```
from maysics.calculus import inte
import numpy as np
def f(x):
return 2 * np.sin(x)
inte(f, [[0, np.pi]], method='mc', height=2)
```
### DEMO 6-6: Integrate $f(x,y)=x^{2}+y^{2}$ over $x\in[-2,2]$, $y\in[-1,1]$
```
from maysics.calculus import inte
def f(x):
return x[0]**2 + x[1]**2
inte(f, [[-2, 2], [-1, 1]], method='mc', height=5)
```
# Dimensionality reduction using `scikit-learn`
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing, model_selection as ms, \
manifold, decomposition as dec, cross_decomposition as cross_dec
from sklearn.pipeline import Pipeline
%matplotlib inline
BOROUGHS_URL = 'https://files.datapress.com/london/dataset/london-borough-profiles/2017-01-26T18:50:00/london-borough-profiles.csv'
```
Read in the London Borough Profiles datasets.
```
boroughs = pd.read_csv(BOROUGHS_URL, encoding='iso-8859-1')
```
Filter the DataFrame so that only boroughs are included.
```
boroughs = boroughs[boroughs.Code.str.startswith('E09', na=False)]
```
Replace underscores with spaces in column names.
```
boroughs.columns = boroughs.columns.str.replace('_', ' ')
```
Select columns of interest.
```
boroughs = boroughs[[
'Area name',
'Population density (per hectare) 2017',
'Proportion of population aged 0-15, 2015',
'Proportion of population of working-age, 2015',
'Proportion of population aged 65 and over, 2015',
'% of resident population born abroad (2015)',
'Unemployment rate (2015)',
'Gross Annual Pay, (2016)',
'Modelled Household median income estimates 2012/13',
'Number of active businesses, 2015',
'Two-year business survival rates (started in 2013)',
'Crime rates per thousand population 2014/15',
'Fires per thousand population (2014)',
'Ambulance incidents per hundred population (2014)',
'Median House Price, 2015',
'% of area that is Greenspace, 2005',
'Total carbon emissions (2014)',
'Household Waste Recycling Rate, 2014/15',
'Number of cars, (2011 Census)',
'Number of cars per household, (2011 Census)',
'% of adults who cycle at least once per month, 2014/15',
'Average Public Transport Accessibility score, 2014',
'Male life expectancy, (2012-14)',
'Female life expectancy, (2012-14)',
'Teenage conception rate (2014)',
'Life satisfaction score 2011-14 (out of 10)',
'Worthwhileness score 2011-14 (out of 10)',
'Happiness score 2011-14 (out of 10)',
'Anxiety score 2011-14 (out of 10)',
'Childhood Obesity Prevalance (%) 2015/16',
'People aged 17+ with diabetes (%)',
'Mortality rate from causes considered preventable 2012/14'
]]
```
Set index.
```
boroughs.set_index('Area name', inplace=True)
```
Fix a couple of issues with data types.
```
boroughs[boroughs['Gross Annual Pay, (2016)'] == '.'] = None
boroughs['Modelled Household median income estimates 2012/13'] = \
boroughs['Modelled Household median income estimates 2012/13'].str.replace("[^0-9]", "")
boroughs = boroughs.apply(pd.to_numeric)
```
Remove boroughs with missing values.
```
boroughs.dropna(inplace=True)
```
Extract information on 'feelings'.
```
col_idx = [
'Life satisfaction score 2011-14 (out of 10)',
'Worthwhileness score 2011-14 (out of 10)',
'Happiness score 2011-14 (out of 10)',
'Anxiety score 2011-14 (out of 10)'
]
feelings = boroughs[col_idx]
boroughs.drop(col_idx, axis=1, inplace=True)
```
## Multidimensional scaling (MDS)
Create a pipeline that scales the data and performs MDS.
```
smds = Pipeline([
('scale', preprocessing.StandardScaler()),
('mds', manifold.MDS())
])
```
Two-dimensional projection ('embedding') of 'boroughs':
```
boroughs_mds = smds.fit_transform(boroughs)
fig, ax = plt.subplots()
ax.scatter(boroughs_mds[:,0], boroughs_mds[:,1])
for i, name in enumerate(boroughs.index):
ax.annotate(name, boroughs_mds[i,:])
```
## Principal component analysis (PCA)
Create a pipeline that scales the data and performs PCA.
```
spca = Pipeline([
('scale', preprocessing.StandardScaler()),
('pca', dec.PCA())
])
```
Scores (projection of 'boroughs' on the PCs):
```
scores = spca.fit_transform(boroughs)
```
Scores plot:
```
fig, ax = plt.subplots()
ax.scatter(scores[:,0], scores[:,1])
for i, name in enumerate(boroughs.index):
ax.annotate(name, scores[i,0:2])
```
Loadings (coefficients defining the PCs):
```
spca.named_steps['pca'].components_
```
Explained variance:
```
spca.named_steps['pca'].explained_variance_
np.cumsum(spca.named_steps['pca'].explained_variance_)
```
Explained variance ratio:
```
spca.named_steps['pca'].explained_variance_ratio_
np.cumsum(spca.named_steps['pca'].explained_variance_ratio_)
```
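As a side note, the cumulative ratio can be used to choose how many components are needed to reach a target share of variance. A small sketch, reusing the fitted `spca` pipeline from above (the 90% target is just an example threshold):
```
# smallest number of components whose cumulative explained variance ratio reaches 90%
cum_ratio = np.cumsum(spca.named_steps['pca'].explained_variance_ratio_)
n_components_90 = np.argmax(cum_ratio >= 0.9) + 1
n_components_90
```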
Scree plot:
```
plt.bar(np.arange(1, spca.named_steps['pca'].n_components_ + 1) - 0.4,
spca.named_steps['pca'].explained_variance_ratio_)
cum_evr = np.cumsum(spca.named_steps['pca'].explained_variance_ratio_)
plt.plot(np.arange(1, spca.named_steps['pca'].n_components_ + 1), cum_evr, color='black')
```
## Partial least squares (PLS) regression
Create a pipeline that scales the data and performs PLS regression.
```
spls = Pipeline([
('scale', preprocessing.StandardScaler()),
('pls', cross_dec.PLSRegression(scale=False))
])
```
Train a PLS regression model with three components.
```
spls.set_params(
pls__n_components=3
)
spls.fit(boroughs, feelings)
```
Define folds for cross-validation.
```
three_fold_cv = ms.KFold(n_splits=3, shuffle=True)
```
Compute average MSE across folds.
```
mses = ms.cross_val_score(spls, boroughs, feelings, scoring='neg_mean_squared_error', cv=three_fold_cv)
np.mean(-mses)
```
Determine 'optimal' number of components.
```
gs = ms.GridSearchCV(
estimator=spls,
param_grid={
'pls__n_components': np.arange(1, 10)
},
scoring='neg_mean_squared_error',
cv=three_fold_cv
)
gs.fit(boroughs, feelings)
-gs.best_score_
gs.best_estimator_
```
Plot number of components against MSE.
```
plt.plot(np.arange(1, 10), -gs.cv_results_['mean_test_score'])
```
# Federated learning: pretrained model
In this notebook, we provide a simple example of how to perform an experiment in a federated environment with the help of the Sherpa.ai Federated Learning framework. We are going to use a popular dataset and a pretrained model.
## The data
The framework provides some functions for loading the [Emnist](https://www.nist.gov/itl/products-and-services/emnist-dataset) digits dataset.
```
import shfl
database = shfl.data_base.Emnist()
train_data, train_labels, test_data, test_labels = database.load_data()
```
Let's inspect some properties of the loaded data.
```
print(len(train_data))
print(len(test_data))
print(type(train_data[0]))
train_data[0].shape
```
So, as we have seen, our dataset is composed of a set of matrices that are 28 by 28. Before starting with the federated scenario, we can take a look at a sample in the training data.
```
import matplotlib.pyplot as plt
plt.imshow(train_data[0])
```
We are going to simulate a federated learning scenario with a set of client nodes containing private data, and a central server that will be responsible for coordinating the different clients. But, first of all, we have to simulate the data contained in every client. In order to do that, we are going to use the previously loaded dataset. The assumption in this example is that the data is distributed as a set of independent and identically distributed random variables, with every node holding approximately the same amount of data.
There are several possibilities for distributing the data, and the distribution of the data is one of the factors that can have the most impact on a federated algorithm. Therefore, the framework has some of the most common distributions implemented, which allows you to easily experiment with different situations. In [Federated Sampling](./federated_learning_sampling.ipynb), you can dig into the options that the framework currently provides.
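To make the idea of an IID partition concrete, here is a conceptual sketch in plain NumPy (this only illustrates the concept, not what the framework does internally): distributing data IID among nodes amounts to shuffling the samples and splitting them into roughly equal chunks.
```
import numpy as np

def iid_partition(n_samples, num_nodes, seed=0):
    """Shuffle sample indices and split them into num_nodes roughly equal parts."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)
    return np.array_split(indices, num_nodes)

parts = iid_partition(n_samples=1000, num_nodes=20)
print(len(parts), [len(p) for p in parts[:3]])
```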
```
iid_distribution = shfl.data_distribution.IidDataDistribution(database)
federated_data, test_data, test_label = iid_distribution.get_federated_data(num_nodes=20, percent=10)
```
That's it! We have created federated data from the Emnist dataset using 20 nodes and 10 percent of the available data. This data is distributed to a set of data nodes in the form of private data. Let's learn a little more about the federated data.
```
print(type(federated_data))
print(federated_data.num_nodes())
federated_data[0].private_data
```
As we can see, private data in a node is not directly accessible but the framework provides mechanisms to use this data in a machine learning model.
## The model
A federated learning algorithm is defined by a machine learning model, locally deployed in each node, that learns from the respective node's private data, together with an aggregating mechanism that combines the model parameters uploaded by the client nodes at a central node. In this example, we will use a deep learning model built with Keras. The framework provides classes for using Tensorflow (see notebook [Federated learning Tensorflow Model](./federated_learning_basic_concepts_tensorflow.ipynb)) and Keras (see notebook [Federated Learning basic concepts](./federated_learning_basic_concepts.ipynb)) models in a federated learning scenario; your only job is to create a function acting as a model builder. Moreover, the framework provides classes that allow using pretrained Tensorflow and Keras models. In this example, we will use a pretrained Keras model.
```
import tensorflow as tf
# If you want to execute on a GPU, uncomment these two lines.
# physical_devices = tf.config.experimental.list_physical_devices('GPU')
# tf.config.experimental.set_memory_growth(physical_devices[0], True)
train_data = train_data.reshape(-1,28,28,1)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', strides=1, input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='valid'))
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', strides=1))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='valid'))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.1))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
model.compile(optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x=train_data, y=train_labels, batch_size=128, epochs=3, validation_split=0.2,
verbose=1, shuffle=False)
def model_builder():
pretrained_model = model
criterion = tf.keras.losses.CategoricalCrossentropy()
optimizer = tf.keras.optimizers.RMSprop()
metrics = [tf.keras.metrics.categorical_accuracy]
return shfl.model.DeepLearningModel(model=pretrained_model, criterion=criterion, optimizer=optimizer, metrics=metrics)
```
Now, the only piece missing is the aggregation operator. Nevertheless, the framework provides some aggregation operators that we can use. In the following piece of code, we define the federated aggregation mechanism. Moreover, we define the federated government based on the Keras learning model, the federated data, and the aggregation mechanism.
```
aggregator = shfl.federated_aggregator.FedAvgAggregator()
federated_government = shfl.federated_government.FederatedGovernment(model_builder, federated_data, aggregator)
```
If you want to see all the aggregation operators, you can check out the [Aggregation Operators](./federated_learning_basic_concepts_aggregation_operators.ipynb) notebook. Before running the algorithm, we want to apply a transformation to the data. A good practice is to define a federated operation that will ensure that the transformation is applied to the federated data in all the client nodes. We want to reshape the data, so we define the following FederatedTransformation.
```
import numpy as np
class Reshape(shfl.private.FederatedTransformation):
def apply(self, labeled_data):
labeled_data.data = np.reshape(labeled_data.data, (labeled_data.data.shape[0], labeled_data.data.shape[1], labeled_data.data.shape[2],1))
class CastFloat(shfl.private.FederatedTransformation):
def apply(self, labeled_data):
labeled_data.data = labeled_data.data.astype(np.float32)
shfl.private.federated_operation.apply_federated_transformation(federated_data, Reshape())
shfl.private.federated_operation.apply_federated_transformation(federated_data, CastFloat())
```
## Run the federated learning experiment
We are now ready to execute our federated learning algorithm.
```
test_data = np.reshape(test_data, (test_data.shape[0], test_data.shape[1], test_data.shape[2],1))
test_data = test_data.astype(np.float32)
federated_government.run_rounds(2, test_data, test_label)
```
## Using Isolation Forest to Detect Criminally-Linked Properties
The goal of this notebook is to apply the Isolation Forest anomaly detection algorithm to the property data. The algorithm is particularly good at detecting anomalous data points in cases of extreme class imbalance. After normalizing the data and splitting into a training set and test set, I trained the first model.
Next, I manually selected a few features that, based on my experience investigating money-laundering and asset tracing, I thought would be most important and trained a model on just those.
```
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn import preprocessing
from sklearn.metrics import classification_report, confusion_matrix, recall_score, roc_auc_score
from sklearn.metrics import make_scorer, precision_score, accuracy_score
from sklearn.ensemble import IsolationForest
from sklearn.decomposition import PCA
import seaborn as sns
import itertools
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_style('dark')
```
#### Load Data and Remove Columns
```
# Read in the data
df = pd.read_hdf('../data/processed/bexar_true_labels.h5')
print("Number of properties:", len(df))
# Get criminal property rate
crim_prop_rate = 1 - (len(df[df['crim_prop']==0]) / len(df))
print("Rate is: {:.5%}".format(crim_prop_rate))
# Re-label the normal properties with 1 and the criminal ones with -1
df['binary_y'] = [1 if x==0 else -1 for x in df.crim_prop]
print(df.binary_y.value_counts())
# Normalize the data
X = df.iloc[:,1:-2]
X_norm = preprocessing.normalize(X)
y = df.binary_y
# Split the data into training and test
X_train_norm, X_test_norm, y_train_norm, y_test_norm = train_test_split(
X_norm, y, test_size=0.33, random_state=42
)
```
#### UDFs
```
# Define function to plot resulting confusion matrix
def plot_confusion_matrix(conf_matrix, title, classes=['criminally-linked', 'normal'],
cmap=plt.cm.Oranges):
"""Plot confusion matrix with heatmap and classification statistics."""
conf_matrix = conf_matrix.astype('float') / conf_matrix.sum(axis=1)[:, np.newaxis]
plt.figure(figsize=(8,8))
plt.imshow(conf_matrix, interpolation='nearest', cmap=cmap)
plt.title(title,fontsize=18)
plt.colorbar(pad=.12)
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45,fontsize=11)
plt.yticks(tick_marks, classes, rotation=45, fontsize=11)
fmt = '.4%'
thresh = conf_matrix.max() / 2.
for i, j in itertools.product(range(conf_matrix.shape[0]), range(conf_matrix.shape[1])):
plt.text(j, i, format(conf_matrix[i, j], fmt),
horizontalalignment="center",
verticalalignment="top",
fontsize=16,
color="white" if conf_matrix[i, j] > thresh else "black")
plt.ylabel('True label',fontsize=14, rotation=0)
plt.xlabel('Predicted label',fontsize=14)
# Function for returning the model metrics
def metrics_iforest(y_true,y_pred):
"""Return model metrics."""
print('Model recall is',recall_score(
y_true,
y_pred,
zero_division=0,
pos_label=-1
))
print('Model precision is',precision_score(
y_true,
y_pred,
zero_division=0,
pos_label=-1
))
print("Model AUC is", roc_auc_score(y_true, y_pred))
# Function for histograms of anomaly scores
def anomaly_plot(anomaly_scores,anomaly_scores_list,title):
"""Plot histograms of anomaly scores."""
plt.figure(figsize=[15,9])
plt.subplot(211)
plt.hist(anomaly_scores,bins=100,log=False,color='royalblue')
for xc in anomaly_scores_list:
plt.axvline(x=xc,color='red',linestyle='--',linewidth=0.5,label='criminally-linked property')
plt.title(title,fontsize=16)
handles, labels = plt.gca().get_legend_handles_labels()
by_label = dict(zip(labels, handles))
plt.legend(by_label.values(), by_label.keys(),fontsize=14)
plt.ylabel('Number of properties',fontsize=13)
plt.subplot(212)
plt.hist(anomaly_scores,bins=100,log=True,color='royalblue')
for xc in anomaly_scores_list:
plt.axvline(x=xc,color='red',linestyle='--',linewidth=0.5,label='criminally-linked property')
plt.xlabel('Anomaly score',fontsize=13)
plt.ylabel('Number of properties',fontsize=13)
plt.title('{} (Log Scale)'.format(title),fontsize=16)
plt.show()
```
#### Gridsearch
Isolation Forest is fairly robust to parameter changes, but changes in the contamination rate affect performance. I will gridsearch based on a range of contamination from 0.01 to 0.25 in leaps of 0.05.
```
# Set what metrics to evaluate predictions
scoring = {
'AUC': 'roc_auc',
'Recall': make_scorer(recall_score,pos_label=-1),
'Precision': make_scorer(precision_score,pos_label=-1)
}
gs = GridSearchCV(
IsolationForest(max_samples=0.25, random_state=42,n_estimators=100),
param_grid={'contamination': np.arange(0.01, 0.25, 0.05)},
scoring=scoring,
refit='Recall',
verbose=0,
cv=3
)
# Fit to training data
gs.fit(X_train_norm,y_train_norm)
print(gs.best_params_)
```
##### Model Performance on Training Data
```
y_pred_train_gs = gs.predict(X_train_norm)
metrics_iforest(y_train_norm,y_pred_train_gs)
conf_matrix = confusion_matrix(y_train_norm, y_pred_train_gs)
print(conf_matrix)
plot_confusion_matrix(conf_matrix, title='Isolation Forest Confusion Matrix on Training Data')
```
Model recall is decent, but the precision is quite poor; the model is labeling >20% of innocent properties as criminal.
##### Model Performance on Test Data
```
y_pred_test_gs = gs.predict(X_test_norm)
metrics_iforest(y_test_norm,y_pred_test_gs)
conf_matrix = confusion_matrix(y_test_norm, y_pred_test_gs)
print(conf_matrix)
plot_confusion_matrix(conf_matrix, title='Isolation Forest Confusion Matrix on Test Data')
```
Similar to its performance on the training data, the model produces a tremendous number of false positives. While false positives are better than false negatives, were this model implemented to screen properties, it would waste a lot of manual labor on checking falsely-labeled properties.
Given the context of detecting money-laundering and ill-gotten funds, more false positives are acceptable to reduce false negatives, but the model produces far too many.
#### Visualize Distribution of Anomaly Scores
Sklearn's Isolation Forest provides an anomaly score for each property, where the lower the score, the more anomalous the data point is.
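For reference, the scores and the hard labels are linked: in scikit-learn's Isolation Forest, `predict` returns -1 exactly where `decision_function` is negative, so a stricter custom cut-off can be applied directly to the scores. A small sketch using the fitted grid search from above (the -0.05 threshold is just an arbitrary example):
```
# scores below 0 are what predict() labels as -1 (anomalous);
# a lower custom threshold flags only the most anomalous properties
scores = gs.decision_function(X_train_norm)
custom_labels = np.where(scores < -0.05, -1, 1)
print((custom_labels == -1).sum(), "properties flagged at the stricter threshold")
```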
##### Training Data
```
# Grab anomaly scores for criminally-linked properties
train_df = pd.DataFrame(X_train_norm)
y_train_series = y_train_norm.reset_index()
train_df['y_value'] = y_train_series.binary_y
train_df['anomaly_scores'] = gs.decision_function(X_train_norm)
anomaly_scores_list = train_df[train_df.y_value==-1]['anomaly_scores']
print("Mean score for outlier properties:",np.mean(anomaly_scores_list))
print("Mean score for normal properties:",np.mean(train_df[train_df.y_value==1]['anomaly_scores']))
anomaly_plot(train_df['anomaly_scores'],
anomaly_scores_list,
title='Distribution of Anomaly Scores across Training Data')
```
##### Test Data
```
test_df = pd.DataFrame(X_test_norm)
y_test_series = y_test_norm.reset_index()
test_df['y_value'] = y_test_series.binary_y
test_df['anomaly_scores'] = gs.decision_function(X_test_norm)
anomaly_scores_list_test = test_df[test_df.y_value==-1]['anomaly_scores']
print("Mean score for outlier properties:",np.mean(anomaly_scores_list_test))
print("Mean score for normal properties:",np.mean(test_df[test_df.y_value==1]['anomaly_scores']))
anomaly_plot(test_df['anomaly_scores'],
anomaly_scores_list_test,
title='Distribution of Anomaly Scores across Test Data'
)
```
The top plots give a sense of how skewed the distribution is and how relatively lower the anomaly scores for the criminally-linked properties are when compared to the greater population. The log scale histogram highlights just how many properties do have quite low anomaly scores, which are returned as false positives.
#### Model with Select Features
Since `feature_importances_` does not exist for Isolation Forest, I wanted to see if I could use my background in investigating money laundering to select a few features that would be the best indicators of "abnormal" properties.
```
# Grab specific columns
X_trim = X[['partial_owner','just_established_owner',
'foreign_based_owner','out_of_state_owner',
'owner_legal_person','owner_likely_company',
'owner_owns_multiple','two_gto_reqs']]
# Normalize
X_trim_norm = preprocessing.normalize(X_trim)
# Split the data into train and test
X_train_trim, X_test_trim, y_train_trim, y_test_trim = train_test_split(
X_trim_norm, y, test_size=0.33, random_state=42
)
scoring = {
'AUC': 'roc_auc',
'Recall': make_scorer(recall_score, pos_label=-1),
'Precision': make_scorer(precision_score, pos_label=-1)
}
gs_trim = GridSearchCV(
IsolationForest(max_samples=0.25, random_state=42,n_estimators=100),
param_grid={'contamination': np.arange(0.01, 0.25, 0.05)},
scoring=scoring,
refit='Recall',
verbose=0,
cv=3
)
# Fit to training data
gs_trim.fit(X_train_trim,y_train_trim)
print(gs_trim.best_params_)
```
##### Training Data
```
y_pred_train_gs_trim = gs_trim.predict(X_train_trim)
metrics_iforest(y_train_trim,y_pred_train_gs_trim)
conf_matrix = confusion_matrix(y_train_trim, y_pred_train_gs_trim)
print(conf_matrix)
plot_confusion_matrix(conf_matrix, title='Conf Matrix on Training Data with Select Features')
```
Reducing the data to select features worsens the model's true positives by two properties, but massively improves the false positive rate (753 down to 269). Overall, model precision is still poor.
##### Test Data
```
y_pred_test_trim = gs_trim.predict(X_test_trim)
metrics_iforest(y_test_trim,y_pred_test_trim)
conf_matrix = confusion_matrix(y_test_trim, y_pred_test_trim)
print(conf_matrix)
plot_confusion_matrix(conf_matrix, title='Conf Matrix on Test Data with Select Features')
```
The model trained on select features performs better than the first on the test data both in terms of correct labels and reducing false positives.
#### Final Notes
- For both models, recall is strong, indicating the model is able to detect something anomalous about the criminal properties. However, model precision is awful, meaning it does so at the expense of many false positives.
- Selecting features based on my experience in the field improves model precision.
- There are many properties that the models find more "anomalous" than the true positives. This could indicate the criminals have done a good job of making their properties appear relatively "innocent" in the broad spectrum of residential property ownership in Bexar County.
## A quick Gender Recognition model
Grabbed from [nlpforhackers](https://nlpforhackers.io/introduction-machine-learning/) webpage.
1. First, convert the dataset into a NumPy array keeping only the name and gender columns
2. Define a feature function that extracts several character-based features from each name
3. Vectorize the feature function so it can be applied to arrays of names
4. Shuffle the data, create the train/test split, and sanity-check it by printing the sizes of the splits
5. Transform lists of feature-value mappings to vectors. (When feature values are strings, this transformer will do a binary one-hot (aka one-of-K) coding: one boolean-valued feature is constructed for each of the possible string values that the feature can take on)
6. Train a decision tree classifier on this and save the model as a pickle file
```
import pandas as pd
import numpy as np
from sklearn.utils import shuffle
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier
names = pd.read_csv('names_dataset.csv')
print(names.head(10))
print("%d names in dataset" % len(names))
# Get the data out of the dataframe into a numpy matrix and keep only the name and gender columns
names = names.to_numpy()[:, 1:]  # as_matrix() has been removed from recent pandas versions
print(names)
# We're using 90% of the data for training
TRAIN_SPLIT = 0.90
def features(name):
name = name.lower()
return {
'first-letter': name[0], # First letter
'first2-letters': name[0:2], # First 2 letters
'first3-letters': name[0:3], # First 3 letters
'last-letter': name[-1], # Last letter
'last2-letters': name[-2:], # Last 2 letters
'last3-letters': name[-3:], # Last 3 letters
}
# Feature Extraction
print(features("Alex"))
# Vectorize the features function
features = np.vectorize(features)
print(features(["Anna", "Hannah", "Paul"]))
# [ array({'first2-letters': 'an', 'last-letter': 'a', 'first-letter': 'a', 'last2-letters': 'na', 'last3-letters': 'nna', 'first3-letters': 'ann'}, dtype=object)
# array({'first2-letters': 'ha', 'last-letter': 'h', 'first-letter': 'h', 'last2-letters': 'ah', 'last3-letters': 'nah', 'first3-letters': 'han'}, dtype=object)
# array({'first2-letters': 'pa', 'last-letter': 'l', 'first-letter': 'p', 'last2-letters': 'ul', 'last3-letters': 'aul', 'first3-letters': 'pau'}, dtype=object)]
# Extract the features for the whole dataset
X = features(names[:, 0]) # X contains the features
# Get the gender column
y = names[:, 1] # y contains the targets
# Test if we built the dataset correctly
print("\n\nName: %s, features=%s, gender=%s" % (names[0][0], X[0], y[0]))
X, y = shuffle(X, y)
X_train, X_test = X[:int(TRAIN_SPLIT * len(X))], X[int(TRAIN_SPLIT * len(X)):]
y_train, y_test = y[:int(TRAIN_SPLIT * len(y))], y[int(TRAIN_SPLIT * len(y)):]
# Check to see if the datasets add up
print(len(X_train), len(X_test), len(y_train), len(y_test))
# Transforms lists of feature-value mappings to vectors.
vectorizer = DictVectorizer()
vectorizer.fit(X_train)
transformed = vectorizer.transform(features(["Mary", "John"]))
print(transformed)
print(type(transformed)) # <class 'scipy.sparse.csr.csr_matrix'>
print(transformed.toarray()[0][12]) # 1.0
print(vectorizer.feature_names_[12]) # first-letter=m
clf = DecisionTreeClassifier(criterion = 'gini')
clf.fit(vectorizer.transform(X_train), y_train)
# Accuracy on training set
print(clf.score(vectorizer.transform(X_train), y_train))
# Accuracy on test set
print(clf.score(vectorizer.transform(X_test), y_test))
# Therefore, we are getting a decent result from the names
print(clf.predict(vectorizer.transform(features(["SMYSLOV", "CHASTITY", "MISS PERKY", "SHARON", "ALONSO", "SECONDARY OFFICER"]))))
# Save the model using pickle
import pickle
pickle_out = open("gender_recog.pickle", "wb")
pickle.dump(clf, pickle_out)
pickle_out.close()
```
# Logistic regression example
### Dr. Tirthajyoti Sarkar, Fremont, CA 94536
---
This notebook demonstrates solving a logistic regression problem of predicting Hypothyrodism with **Scikit-learn** and **Statsmodels** libraries.
The dataset is taken from UCI ML repository.
<br>Here is the link: https://archive.ics.uci.edu/ml/datasets/Thyroid+Disease
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
### Read the dataset
```
names = 'response age sex on_thyroxine query_on_thyroxine antithyroid_medication thyroid_surgery query_hypothyroid query_hyperthyroid pregnant \
sick tumor lithium goitre TSH_measured TSH T3_measured \
T3 TT4_measured TT4 T4U_measured T4U FTI_measured FTI TBG_measured TBG'
names = names.split(' ')
#!wget https://raw.githubusercontent.com/tirthajyoti/Machine-Learning-with-Python/master/Datasets/hypothyroid.csv
#!mkdir Data
#!mv hypothyroid.csv Data/
df = pd.read_csv('Data/hypothyroid.csv',index_col=False,names=names,na_values=['?'])
df.head()
to_drop=[]
for c in df.columns:
if 'measured' in c or 'query' in c:
to_drop.append(c)
to_drop
to_drop.append('TBG')
df.drop(to_drop,axis=1,inplace=True)
df.head()
```
### Let us see the basic statistics on the dataset
```
df.describe().T
```
### Are any data points missing? We can check using the `df.isna()` method
The `df.isna()` method returns a DataFrame of Boolean values - True where data is missing, False where it is present. We can call `sum()` on that DataFrame to count the number of missing values per column.
```
df.isna().sum()
```
### We can use `df.dropna()` method to drop those missing rows
```
df.dropna(inplace=True)
df.shape
```
### Creating a transformation function to convert `+` or `-` responses to 1 and 0
```
def class_convert(response):
if response=='hypothyroid':
return 1
else:
return 0
df['response']=df['response'].apply(class_convert)
df.head()
df.columns
```
### Exploratory data analysis
```
for var in ['age','TSH','T3','TT4','T4U','FTI']:
sns.boxplot(x='response',y=var,data=df)
plt.show()
sns.pairplot(data=df[df.columns[1:]],diag_kws={'edgecolor':'k','bins':25},plot_kws={'edgecolor':'k'})
plt.show()
```
### Create dummy variables for the categorical variables
```
df_dummies = pd.get_dummies(data=df)
df_dummies.shape
df_dummies.sample(10)
```
### Test/train split
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df_dummies.drop('response',axis=1),
df_dummies['response'], test_size=0.30,
random_state=42)
print("Training set shape",X_train.shape)
print("Test set shape",X_test.shape)
```
### Using `LogisticRegression` estimator from Scikit-learn
We are using the L2 regularization by default
```
from sklearn.linear_model import LogisticRegression
clf1 = LogisticRegression(penalty='l2',solver='newton-cg')
clf1.fit(X_train,y_train)
```
### Intercept, coefficients, and score
```
clf1.intercept_
clf1.coef_
clf1.score(X_test,y_test)
```
### For `LogisticRegression` estimator, there is a special `predict_proba` method which computes the raw probability values
```
prob_threshold = 0.5
prob_df=pd.DataFrame(clf1.predict_proba(X_test[:10]),columns=['Prob of NO','Prob of YES'])
prob_df['Decision']=(prob_df['Prob of YES']>prob_threshold).apply(int)
prob_df
y_test[:10]
```
### Classification report, and confusion matrix
```
from sklearn.metrics import classification_report, confusion_matrix
print(classification_report(y_test, clf1.predict(X_test)))
pd.DataFrame(confusion_matrix(y_test, clf1.predict(X_test)),columns=['Predict-YES','Predict-NO'],index=['YES','NO'])
```
### Using `statsmodels` library
```
import statsmodels.formula.api as smf
import statsmodels.api as sm
df_dummies.columns
```
### Create a 'formula' in the same style as in R language
```
formula = 'response ~ ' + '+'.join(df_dummies.columns[1:])
formula
```
### Fit a GLM (Generalized Linear model) with this formula and choosing `Binomial` as the family of function
```
model = smf.glm(formula = formula, data=df_dummies, family=sm.families.Binomial())
result=model.fit()
```
### `summary` method shows a R-style table with all kind of statistical information
```
print(result.summary())
```
### The `predict` method computes probability for the test dataset
```
result.predict(X_test[:10])
```
### To create binary predictions, you have to apply a threshold probability and convert the booleans into integers
```
y_pred=(result.predict(X_test)>prob_threshold).apply(int)
print(classification_report(y_test,y_pred))
pd.DataFrame(confusion_matrix(y_test, y_pred),columns=['Predict-YES','Predict-NO'],index=['YES','NO'])
```
### A smaller model with only the first few variables
We saw that the majority of variables in the logistic regression model have very high p-values and are therefore not statistically significant. We create another, smaller model by removing those variables.
```
formula = 'response ~ ' + '+'.join(df_dummies.columns[1:7])
formula
model = smf.glm(formula = formula, data=df_dummies, family=sm.families.Binomial())
result=model.fit()
print(result.summary())
y_pred=(result.predict(X_test)>prob_threshold).apply(int)
print(classification_report(y_test,y_pred))
pd.DataFrame(confusion_matrix(y_test, y_pred),columns=['Predict-YES','Predict-NO'],index=['YES','NO'])
```
### How do the probabilities compare between `Scikit-learn` and `Statsmodels` predictions?
```
sklearn_prob = clf1.predict_proba(X_test)[...,1][:10]
statsmodels_prob = result.predict(X_test[:10])
prob_comp_df=pd.DataFrame(data={'Scikit-learn Prob':list(sklearn_prob),'Statsmodels Prob':list(statsmodels_prob)})
prob_comp_df
```
### Coefficient interpretation
What is the interpretation of the coefficient values for `age` and `FTI`? (A quick numerical check follows below.)
- With every additional year of age, the log odds of hypothyroidism **increase** by 0.0248, i.e. the odds of hypothyroidism increase by a factor of exp(0.0248) = 1.025, or almost 2.5%.
- With every one-unit increase in FTI, the log odds of hypothyroidism **decrease** by 0.1307, i.e. the odds of hypothyroidism decrease by a factor of exp(-0.1307) = 0.8775, or by almost 12.25%.
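As a quick numerical check of the statements above, the odds ratios can be computed by exponentiating the fitted coefficients. This sketch assumes the most recently fitted `statsmodels` GLM is still available in `result`:
```
import numpy as np

# Odds ratios: exponentiate the fitted GLM coefficients.
# Values above 1 mean the odds increase with the variable,
# values below 1 mean the odds decrease.
odds_ratios = np.exp(result.params)
print(odds_ratios)

# For the two coefficients discussed above:
print(np.exp(0.0248))   # ~1.025 -> roughly 2.5% higher odds per year of age
print(np.exp(-0.1307))  # ~0.878 -> roughly 12.25% lower odds per unit of FTI
```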
# Keras Functional API
```
# sudo pip3 install --ignore-installed --upgrade tensorflow
import keras
import tensorflow as tf
print(keras.__version__)
print(tf.__version__)
# To ignore keep_dims warning
tf.logging.set_verbosity(tf.logging.ERROR)
```
Let’s start with a minimal example that shows side by side a simple Sequential model and its equivalent in the functional API:
```
from keras.models import Sequential, Model
from keras import layers
from keras import Input
seq_model = Sequential()
seq_model.add(layers.Dense(32, activation='relu', input_shape=(64,)))
seq_model.add(layers.Dense(32, activation='relu'))
seq_model.add(layers.Dense(10, activation='softmax'))
input_tensor = Input(shape=(64,))
x = layers.Dense(32, activation='relu')(input_tensor)
x = layers.Dense(32, activation='relu')(x)
output_tensor = layers.Dense(10, activation='softmax')(x)
model = Model(input_tensor, output_tensor)
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
The only part that may seem a bit magical at this point is instantiating a Model object using only an input tensor and an output tensor. Behind the scenes, Keras retrieves every layer involved in going from input_tensor to output_tensor, bringing them together into a graph-like data structure—a Model. Of course, the reason it works is that output_tensor was obtained by repeatedly transforming input_tensor.
If you tried to build a model from **inputs and outputs that weren’t related**, you’d get a RuntimeError:
```
unrelated_input = Input(shape=(32,))
bad_model = Model(unrelated_input, output_tensor)
```
This error tells you, in essence, that Keras couldn’t reach input_2 from the provided output tensor.
When it comes to compiling, training, or evaluating such an instance of Model, the API is *the same as that of Sequential*:
```
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
import numpy as np
x_train = np.random.random((1000, 64))
y_train = np.random.random((1000, 10))
model.fit(x_train, y_train, epochs=10, batch_size=128)
score = model.evaluate(x_train, y_train)
```
## Multi-input models
#### A question-answering model example
Following is an example of how you can build such a model with the functional API. You set up two independent branches, encoding the text input and the question input as representation vectors; then, concatenate these vectors; and finally, add a softmax classifier on top of the concatenated representations.
```
from keras.models import Model
from keras import layers
from keras import Input
text_vocabulary_size = 10000
question_vocabulary_size = 10000
answer_vocabulary_size = 500
# The text input is a variable-length sequence of integers.
# Note that you can optionally name the inputs.
text_input = Input(shape=(None,), dtype='int32', name='text')
# Embeds the inputs into a sequence of vectors of size 64
# embedded_text = layers.Embedding(64, text_vocabulary_size)(text_input)
# embedded_text = layers.Embedding(output_dim=64, input_dim=text_vocabulary_size)(text_input)
embedded_text = layers.Embedding(text_vocabulary_size,64)(text_input)
# Encodes the vectors in a single vector via an LSTM
encoded_text = layers.LSTM(32)(embedded_text)
# Same process (with different layer instances) for the question
question_input = Input(shape=(None,),dtype='int32',name='question')
# embedded_question = layers.Embedding(32, question_vocabulary_size)(question_input)
# embedded_question = layers.Embedding(output_dim=32, input_dim=question_vocabulary_size)(question_input)
embedded_question = layers.Embedding(question_vocabulary_size,32)(question_input)
encoded_question = layers.LSTM(16)(embedded_question)
# Concatenates the encoded question and encoded text
concatenated = layers.concatenate([encoded_text, encoded_question],axis=-1)
# Adds a softmax classifier on top
answer = layers.Dense(answer_vocabulary_size, activation='softmax')(concatenated)
# At model instantiation, you specify the two inputs and the output.
model = Model([text_input, question_input], answer)
model.compile(optimizer='rmsprop',loss='categorical_crossentropy',metrics=['acc'])
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
Now, how do you **train** this two-input model?
There are two possible APIs:
* you can feed the model a list of Numpy arrays as inputs
* you can feed it a dictionary that maps input names to Numpy arrays.
Naturally, the latter option is available only if you give names to your inputs.
#### Training the multi-input model
```
import numpy as np
num_samples = 1000
max_length = 100
# Generates dummy Numpy data
text = np.random.randint(1, text_vocabulary_size,size=(num_samples, max_length))
question = np.random.randint(1, question_vocabulary_size,size=(num_samples, max_length))
# Answers are one-hot encoded, not integers
# answers = np.random.randint(0, 1,size=(num_samples, answer_vocabulary_size))
answers = np.random.randint(answer_vocabulary_size, size=(num_samples))
answers = keras.utils.to_categorical(answers, answer_vocabulary_size)
# Fitting using a list of inputs
print('-'*10,"First training run with list of NumPy arrays",'-'*60)
model.fit([text, question], answers, epochs=10, batch_size=128)
print()
# Fitting using a dictionary of inputs (only if inputs are named)
print('-'*10,"Second training run with dictionary and named inputs",'-'*60)
model.fit({'text': text, 'question': question}, answers,epochs=10, batch_size=128)
```
## Multi-output models
You can also use the functional API to build models with multiple outputs (or multiple *heads*).
#### Example - prediction of Age, Gender and Income from social media posts
A simple example is a network that attempts to simultaneously predict different properties of the data, such as a network that takes as input a series of social media posts from a single anonymous person and tries to predict attributes of that person, such as age, gender, and income level.
```
from keras import layers
from keras import Input
from keras.models import Model
vocabulary_size = 50000
num_income_groups = 10
posts_input = Input(shape=(None,), dtype='int32', name='posts')
#embedded_posts = layers.Embedding(256, vocabulary_size)(posts_input)
embedded_posts = layers.Embedding(vocabulary_size,256)(posts_input)
x = layers.Conv1D(128, 5, activation='relu', padding='same')(embedded_posts)
x = layers.MaxPooling1D(5)(x)
x = layers.Conv1D(256, 5, activation='relu', padding='same')(x)
x = layers.Conv1D(256, 5, activation='relu', padding='same')(x)
x = layers.MaxPooling1D(5)(x)
x = layers.Conv1D(256, 5, activation='relu', padding='same')(x)
x = layers.Conv1D(256, 5, activation='relu', padding='same')(x)
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dense(128, activation='relu')(x)
# Note that the output layers are given names.
age_prediction = layers.Dense(1, name='age')(x)
income_prediction = layers.Dense(num_income_groups, activation='softmax',name='income')(x)
gender_prediction = layers.Dense(1, activation='sigmoid', name='gender')(x)
model = Model(posts_input,[age_prediction, income_prediction, gender_prediction])
print("Model is ready!")
```
#### Compilation options of a multi-output model: multiple losses
```
model.compile(optimizer='rmsprop', loss=['mse', 'categorical_crossentropy', 'binary_crossentropy'])
# Equivalent (possible only if you give names to the output layers)
model.compile(optimizer='rmsprop',loss={'age': 'mse',
'income': 'categorical_crossentropy',
'gender': 'binary_crossentropy'})
model.compile(optimizer='rmsprop',
loss=['mse', 'categorical_crossentropy', 'binary_crossentropy'],
loss_weights=[0.25, 1., 10.])
# Equivalent (possible only if you give names to the output layers)
model.compile(optimizer='rmsprop',
loss={'age': 'mse','income': 'categorical_crossentropy','gender': 'binary_crossentropy'},
loss_weights={'age': 0.25,
'income': 1.,
'gender': 10.})
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
#### Feeding data to a multi-output model
Much as in the case of multi-input models, you can pass Numpy data to the model for training either via a list of arrays or via a dictionary of arrays.
#### Training a multi-output model
```
import numpy as np
TRACE = False
num_samples = 1000
max_length = 100
posts = np.random.randint(1, vocabulary_size, size=(num_samples, max_length))
if TRACE:
print("*** POSTS ***")
print(posts.shape)
print(posts[:10])
print()
age_targets = np.random.randint(0, 100, size=(num_samples,1))
if TRACE:
print("*** AGE ***")
print(age_targets.shape)
print(age_targets[:10])
print()
income_targets = np.random.randint(1, num_income_groups, size=(num_samples,1))
income_targets = keras.utils.to_categorical(income_targets,num_income_groups)
if TRACE:
print("*** INCOME ***")
print(income_targets.shape)
print(income_targets[:10])
print()
gender_targets = np.random.randint(0, 2, size=(num_samples,1))
if TRACE:
print("*** GENDER ***")
print(gender_targets.shape)
print(gender_targets[:10])
print()
print('-'*10, "First training run with NumPy arrays", '-'*60)
# age_targets, income_targets, and gender_targets are assumed to be Numpy arrays.
model.fit(posts, [age_targets, income_targets, gender_targets], epochs=10, batch_size=64)
print('-'*10,"Second training run with dictionary and named outputs",'-'*60)
# Equivalent (possible only if you give names to the output layers)
model.fit(posts, {'age': age_targets,
'income': income_targets,
'gender': gender_targets},
epochs=10, batch_size=64)
```
### 7.1.4 Directed acyclic graphs of layers
With the functional API, not only can you build models with multiple inputs and multiple outputs, but you can also implement networks with a complex internal topology.
Neural networks in Keras are allowed to be arbitrary directed acyclic graphs of layers (the only processing loops that are allowed are those internal to recurrent layers).
Several common neural-network components are implemented as graphs. Two notable ones are <i>Inception modules</i> and <i>residual connections</i>. To better understand how the functional API can be used to build graphs of layers, let’s take a look at how you can implement both of them in Keras.
#### Inception modules
Inception [3] is a popular type of network architecture for convolutional neural networks. It consists of a stack of modules that themselves look like small independent networks, split into several parallel branches.
##### The purpose of 1 × 1 convolutions
1 × 1 convolutions (also called pointwise convolutions) are featured in Inception modules, where they contribute to factoring out channel-wise feature learning and space-wise feature learning.
```
from keras import layers
from keras.layers import Input
# This example assumes the existence of a 4D input tensor x:
# This returns a typical image tensor like those of MNIST dataset
x = Input(shape=(28, 28, 1), dtype='float32', name='images')
print("x.shape:",x.shape)
# Every branch has the same stride value (2), which is necessary to
# keep all branch outputs the same size so you can concatenate them
branch_a = layers.Conv2D(128, 1, padding='same', activation='relu', strides=2)(x)
# In this branch, the striding occurs in the spatial convolution layer.
branch_b = layers.Conv2D(128, 1, padding='same', activation='relu')(x)
branch_b = layers.Conv2D(128, 3, padding='same', activation='relu', strides=2)(branch_b)
# In this branch, the striding occurs in the average pooling layer.
branch_c = layers.AveragePooling2D(3, padding='same', strides=2)(x)
branch_c = layers.Conv2D(128, 3, padding='same', activation='relu')(branch_c)
branch_d = layers.Conv2D(128, 1, padding='same', activation='relu')(x)
branch_d = layers.Conv2D(128, 3, padding='same', activation='relu')(branch_d)
branch_d = layers.Conv2D(128, 3, padding='same', activation='relu', strides=2)(branch_d)
# Concatenates the branch outputs to obtain the module output
output = layers.concatenate([branch_a, branch_b, branch_c, branch_d], axis=-1)
# Adding a classifier on top of the convnet
output = layers.Flatten()(output)
output = layers.Dense(512, activation='relu')(output)
predictions = layers.Dense(10, activation='softmax')(output)
model = keras.models.Model(inputs=x, outputs=predictions)
```
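To make the role of the pointwise (1 × 1) convolutions used in the branches above more concrete, here is a small, self-contained shape check; the tensor names are purely illustrative and not part of the Inception model itself:
```
from keras import layers
from keras.layers import Input

# A dummy feature map: a 32 x 32 spatial grid with 64 channels
feature_map = Input(shape=(32, 32, 64))

# A 1 x 1 convolution mixes information across channels at each spatial
# position (here compressing 64 channels down to 16) without looking at
# any spatial neighbourhood, so the spatial size is unchanged.
pointwise = layers.Conv2D(16, 1, activation='relu')(feature_map)

print(pointwise.shape)  # (?, 32, 32, 16)
```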
#### Train the Inception model using the Dataset API and the MNIST data
Inspired by: https://github.com/keras-team/keras/blob/master/examples/mnist_dataset_api.py
```
import numpy as np
import os
import tempfile
import keras
from keras import backend as K
from keras import layers
from keras.datasets import mnist
import tensorflow as tf
if K.backend() != 'tensorflow':
raise RuntimeError('This example can only run with the TensorFlow backend,'
' because it requires the Dataset API, which is not'
' supported on other platforms.')
batch_size = 128
buffer_size = 10000
steps_per_epoch = int(np.ceil(60000 / float(batch_size))) # = 469
epochs = 5
num_classes = 10
def cnn_layers(x):
# This example assumes the existence of a 4D input tensor x:
# This returns a typical image tensor like those of MNIST dataset
print("x.shape:",x.shape)
# Every branch has the same stride value (2), which is necessary to
# keep all branch outputs the same size so you can concatenate them
branch_a = layers.Conv2D(128, 1, padding='same', activation='relu', strides=2)(x)
# In this branch, the striding occurs in the spatial convolution layer.
branch_b = layers.Conv2D(128, 1, padding='same', activation='relu')(x)
branch_b = layers.Conv2D(128, 3, padding='same', activation='relu', strides=2)(branch_b)
# In this branch, the striding occurs in the average pooling layer.
branch_c = layers.AveragePooling2D(3, padding='same', strides=2)(x)
branch_c = layers.Conv2D(128, 3, padding='same', activation='relu')(branch_c)
branch_d = layers.Conv2D(128, 1, padding='same', activation='relu')(x)
branch_d = layers.Conv2D(128, 3, padding='same', activation='relu')(branch_d)
branch_d = layers.Conv2D(128, 3, padding='same', activation='relu', strides=2)(branch_d)
# Concatenates the branch outputs to obtain the module output
output = layers.concatenate([branch_a, branch_b, branch_c, branch_d], axis=-1)
# Adding a classifier on top of the convnet
output = layers.Flatten()(output)
output = layers.Dense(512, activation='relu')(output)
predictions = layers.Dense(num_classes, activation='softmax')(output)
return predictions
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype(np.float32) / 255
x_train = np.expand_dims(x_train, -1)
y_train = tf.one_hot(y_train, num_classes)
# Create the dataset and its associated one-shot iterator.
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.repeat()
dataset = dataset.shuffle(buffer_size)
dataset = dataset.batch(batch_size)
iterator = dataset.make_one_shot_iterator()
# Model creation using tensors from the get_next() graph node.
inputs, targets = iterator.get_next()
print("inputs.shape:",inputs.shape)
print("targets.shape:",targets.shape)
model_input = layers.Input(tensor=inputs)
model_output = cnn_layers(model_input)
model = keras.models.Model(inputs=model_input, outputs=model_output)
model.compile(optimizer=keras.optimizers.RMSprop(lr=2e-3, decay=1e-5),
loss='categorical_crossentropy',
metrics=['accuracy'],
target_tensors=[targets])
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
#### Train Inception model
```
model.fit(epochs=epochs,
steps_per_epoch=steps_per_epoch)
# Save the model weights.
weight_path = os.path.join(tempfile.gettempdir(), 'saved_Inception_wt.h5')
model.save_weights(weight_path)
```
#### Test the Inception model
Second session to test loading trained model without tensors.
```
# Clean up the TF session.
K.clear_session()
# Second session to test loading trained model without tensors.
x_test = x_test.astype(np.float32)
x_test = np.expand_dims(x_test, -1)
x_test_inp = layers.Input(shape=x_test.shape[1:])
test_out = cnn_layers(x_test_inp)
test_model = keras.models.Model(inputs=x_test_inp, outputs=test_out)
weight_path = os.path.join(tempfile.gettempdir(), 'saved_Inception_wt.h5')
test_model.load_weights(weight_path)
test_model.compile(optimizer='rmsprop',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
test_model.summary()
SVG(model_to_dot(test_model).create(prog='dot', format='svg'))
loss, acc = test_model.evaluate(x_test, y_test, num_classes)
print('\nTest accuracy: {0}'.format(acc))
```
#### Residual connections - ResNET
Residual connections (as popularized by ResNet) are a common graph-like network component found in many post-2015 network architectures, including Xception. They were introduced by He et al. at Microsoft and address two common problems with large-scale deep-learning models: vanishing gradients and representational bottlenecks.
A residual connection consists of making the output of an earlier layer available as input to a later layer, effectively creating a shortcut in a sequential network. Rather than being concatenated to the later activation, the earlier output is summed with the later activation, which assumes that both activations are the same size. If they’re different sizes, you can use a linear transformation to reshape the earlier activation into the target shape (for example, a Dense layer without an activation or, for convolutional feature maps, a 1 × 1 convolution without an activation).
###### ResNET implementation when the feature-map sizes are the same
Here’s how to implement a residual connection in Keras when the feature-map sizes are the same, using identity residual connections. This example assumes the existence of a 4D input tensor x:
```
from keras import layers
from keras.layers import Input
# This example assumes the existence of a 4D input tensor x:
# This returns a typical image tensor like those of MNIST dataset
x = Input(shape=(28, 28, 1), dtype='float32', name='images')
print("x.shape:",x.shape)
# Applies a transformation to x
y = layers.Conv2D(128, 3, activation='relu', padding='same')(x)
y = layers.Conv2D(128, 3, activation='relu', padding='same')(y)
y = layers.Conv2D(128, 3, activation='relu', padding='same')(y)
# Adds the original x back to the output features
output = layers.add([y, x])
# Adding a classifier on top of the convnet
output = layers.Flatten()(output)
output = layers.Dense(512, activation='relu')(output)
predictions = layers.Dense(10, activation='softmax')(output)
model = keras.models.Model(inputs=x, outputs=predictions)
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
###### ResNET implementation when the feature-map sizes differ
And the following implements a residual connection when the feature-map sizes differ, using a linear residual connection (again, assuming the existence of a 4D input tensor x):
```
from keras import layers
from keras.layers import Input
# This example assumes the existence of a 4D input tensor x:
# This returns a typical image tensor like those of MNIST dataset
x = Input(shape=(28, 28, 1), dtype='float32', name='images')
print("x.shape:",x.shape)
# Applies a transformation to x
y = layers.Conv2D(128, 3, activation='relu', padding='same')(x)
y = layers.Conv2D(128, 3, activation='relu', padding='same')(y)
y = layers.MaxPooling2D(2, strides=2)(y)
# Uses a 1 × 1 convolution to linearly downsample the original x tensor to the same shape as y
residual = layers.Conv2D(128, 1, strides=2, padding='same')(x)
# Adds the residual tensor back to the output features
output = layers.add([y, residual])
# Adding a classifier on top of the convnet
output = layers.Flatten()(output)
output = layers.Dense(512, activation='relu')(output)
predictions = layers.Dense(10, activation='softmax')(output)
model = keras.models.Model(inputs=x, outputs=predictions)
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
#### Train the ResNET model using the Dataset API and the MNIST data
(when the feature-map sizes are the same)
```
import numpy as np
import os
import tempfile
import keras
from keras import backend as K
from keras import layers
from keras.datasets import mnist
import tensorflow as tf
if K.backend() != 'tensorflow':
raise RuntimeError('This example can only run with the TensorFlow backend,'
' because it requires the Dataset API, which is not'
' supported on other platforms.')
batch_size = 128
buffer_size = 10000
steps_per_epoch = int(np.ceil(60000 / float(batch_size))) # = 469
epochs = 5
num_classes = 10
def cnn_layers(x):
# This example assumes the existence of a 4D input tensor x:
# This returns a typical image tensor like those of MNIST dataset
print("x.shape:",x.shape)
# Applies a transformation to x
y = layers.Conv2D(128, 3, activation='relu', padding='same')(x)
y = layers.Conv2D(128, 3, activation='relu', padding='same')(y)
y = layers.Conv2D(128, 3, activation='relu', padding='same')(y)
# Adds the original x back to the output features
output = layers.add([y, x])
# Adding a classifier on top of the convnet
output = layers.Flatten()(output)
output = layers.Dense(512, activation='relu')(output)
predictions = layers.Dense(10, activation='softmax')(output)
return predictions
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype(np.float32) / 255
x_train = np.expand_dims(x_train, -1)
y_train = tf.one_hot(y_train, num_classes)
# Create the dataset and its associated one-shot iterator.
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.repeat()
dataset = dataset.shuffle(buffer_size)
dataset = dataset.batch(batch_size)
iterator = dataset.make_one_shot_iterator()
# Model creation using tensors from the get_next() graph node.
inputs, targets = iterator.get_next()
print("inputs.shape:",inputs.shape)
print("targets.shape:",targets.shape)
model_input = layers.Input(tensor=inputs)
model_output = cnn_layers(model_input)
model = keras.models.Model(inputs=model_input, outputs=model_output)
model.compile(optimizer=keras.optimizers.RMSprop(lr=2e-3, decay=1e-5),
loss='categorical_crossentropy',
metrics=['accuracy'],
target_tensors=[targets])
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
#### Train and Save the ResNet model
```
model.fit(epochs=epochs,
steps_per_epoch=steps_per_epoch)
# Save the model weights.
weight_path = os.path.join(tempfile.gettempdir(), 'saved_ResNet_wt.h5')
model.save_weights(weight_path)
```
#### Second session to test loading trained model without tensors.
```
# Clean up the TF session.
K.clear_session()
# Second session to test loading trained model without tensors.
x_test = x_test.astype(np.float32)
x_test = np.expand_dims(x_test, -1)
x_test_inp = layers.Input(shape=x_test.shape[1:])
test_out = cnn_layers(x_test_inp)
test_model = keras.models.Model(inputs=x_test_inp, outputs=test_out)
weight_path = os.path.join(tempfile.gettempdir(), 'saved_ResNet_wt.h5')
test_model.load_weights(weight_path)
test_model.compile(optimizer='rmsprop',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
test_model.summary()
loss, acc = test_model.evaluate(x_test, y_test, num_classes)
print('\nTest accuracy: {0}'.format(acc))
```
Not very good... probably expected, since residual connections help with very deep networks, but here we have only 2 hidden layers.
### 7.1.5. Layer weights sharing
One more important feature of the functional API is the ability to reuse a layer instance several times: instead of instantiating a new layer for each call, you reuse the same weights with every call. This allows you to build models that have shared branches - several branches that all share the same knowledge and perform the same operations.
#### Example - semantic similarity between two sentences
For example, consider a model that attempts to assess the semantic similarity between two sentences. The model has two inputs (the two sentences to compare) and outputs a score between 0 and 1, where 0 means unrelated sentences and 1 means sentences that are either identical or reformulations of each other. Such a model could be useful in many applications, including deduplicating natural-language queries in a dialog system.
In this setup, the two input sentences are interchangeable, because semantic similarity is a symmetrical relationship: the similarity of A to B is identical to the similarity of B to A. For this reason, it wouldn’t make sense to learn two independent models for processing each input sentence. Rather, you want to process both with a single LSTM layer. The representations of this LSTM layer (its weights) are learned based on both inputs simultaneously. This is what we call a Siamese LSTM model or a shared LSTM.
Note: a Siamese network is a special type of neural network architecture. Instead of learning to classify its inputs, a Siamese network learns to differentiate between two inputs - it learns their similarity.
Here’s how to implement such a model using layer sharing (layer reuse) in the Keras functional API:
```
from keras import layers
from keras import Input
from keras.models import Model
# Instantiates a single LSTM layer, once
lstm = layers.LSTM(32)
# Building the left branch of the model:
# inputs are variable-length sequences of vectors of size 128.
left_input = Input(shape=(None, 128))
left_output = lstm(left_input)
# Building the right branch of the model:
# when you call an existing layer instance, you reuse its weights.
right_input = Input(shape=(None, 128))
right_output = lstm(right_input)
# Builds the classifier on top
merged = layers.concatenate([left_output, right_output], axis=-1)
predictions = layers.Dense(1, activation='sigmoid')(merged)
# Instantiating the model
model = Model([left_input, right_input], predictions)
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
import numpy as np
num_samples = 100
num_symbols = 2
TRACE = False
left_data = np.random.randint(0,num_symbols, size=(num_samples,1,128))
if TRACE:
print(type(left_data))
print(left_data.shape)
print(left_data)
print('-'*50)
right_data = np.random.randint(0,num_symbols, size=(num_samples,1,128))
if TRACE:
print(type(right_data))
print(right_data.shape)
print(right_data)
print('-'*50)
matching_list = [np.random.randint(0,num_symbols) for _ in range(num_samples)]
targets = np.array(matching_list)
if TRACE:
print(type(targets))
print(targets.shape)
print(targets)
print('-'*50)
# We must compile a model before training/testing.
model.compile(optimizer='rmsprop',loss='binary_crossentropy',metrics=['acc'])
# Training the model: when you train such a model,
# the weights of the LSTM layer are updated based on both inputs.
model.fit([left_data, right_data],targets)
```
### 7.1.6. Models as layers
Importantly, in the functional API, models can be used as you’d use layers—effectively, you can think of a model as a “bigger layer.” This is true of both the Sequential and Model classes. This means you can call a model on an input tensor and retrieve an output tensor:
`y = model(x)`
If the model has multiple input tensors and multiple output tensors, it should be called with a list of tensors:
`y1, y2 = model([x1, x2])`
When you call a model instance, you’re reusing the weights of the model—exactly like what happens when you call a layer instance. Calling an instance, whether it’s a layer instance or a model instance, will always reuse the existing learned representations of the instance—which is intuitive.
```
from keras import layers
from keras import applications
from keras import Input
nbr_classes = 10
# The base image-processing model is the Xception network (convolutional base only).
xception_base = applications.Xception(weights=None,include_top=False)
# The inputs are 250 × 250 RGB images.
left_input = Input(shape=(250, 250, 3))
right_input = Input(shape=(250, 250, 3))
left_features = xception_base(left_input)
# right_input = xception_base(right_input)
right_features = xception_base(right_input)
merged_features = layers.concatenate([left_features, right_features], axis=-1)
predictions = layers.Dense(nbr_classes, activation='softmax')(merged_features)
# Instantiating the model
model = Model([left_input, right_input], predictions)
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
```
# Libraries for R^2 visualization
from ipywidgets import interactive, IntSlider, FloatSlider
from math import floor, ceil
from sklearn.base import BaseEstimator, RegressorMixin
# Libraries for model building
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Library for working locally or Colab
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
```
# I. Wrangle Data
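Note that this notebook assumes a `wrangle` helper defined in an earlier cell that is not shown in this excerpt. A minimal, purely hypothetical sketch of such a helper might look like this:
```
def wrangle(filepath):
    """Hypothetical loader: read the elections CSV into a DataFrame."""
    df = pd.read_csv(filepath)
    # Any lesson-specific cleaning or renaming would go here.
    return df
```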
```
df = wrangle(DATA_PATH + 'elections/bread_peace_voting.csv')
```
# II. Split Data
**First** we need to split our **target vector** from our **feature matrix**.
```
```
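A minimal sketch of this step, assuming the target column is `'incumbent_vote_share'` (the column used in the plotting code further below):
```
target = 'incumbent_vote_share'
y = df[target]                  # target vector
X = df.drop(columns=target)     # feature matrix
```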
**Second** we need to split our dataset into **training** and **test** sets.
Two strategies:
- Random train-test split using [`train_test_split`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html). Generally we use 80% of the data for training, and 20% of the data for testing (a sketch follows below).
- If you have **timeseries**, then you need to do a "cutoff" split.
```
```
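A sketch of the random-split strategy, assuming the `X` and `y` from the previous step (a time-series cutoff split would instead slice the rows chronologically):
```
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```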
# III. Establish Baseline
```
```
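A sketch of a mean baseline, assuming the split above: predict the mean of the training target for every observation and measure the resulting error.
```
# Baseline: always predict the training-set mean
y_pred_baseline = [y_train.mean()] * len(y_train)
print('Baseline MAE:', mean_absolute_error(y_train, y_pred_baseline))
```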
# IV. Build Model
```
```
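A sketch of the model-building step with the `LinearRegression` estimator imported at the top of the notebook:
```
model = LinearRegression()
model.fit(X_train, y_train)
```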
# V. Check Metrics
## Mean Absolute Error
The unit of measurement is the same as the unit of measurement for your target (in this case, vote share [%]).
```
```
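A sketch of the check, assuming the fitted `model` from section IV:
```
print('Training MAE:', mean_absolute_error(y_train, model.predict(X_train)))
print('Test MAE:', mean_absolute_error(y_test, model.predict(X_test)))
```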
## Root Mean Squared Error
The unit of measurement is the same as the unit of measurement for your target (in this case, vote share [%]).
```
```
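A sketch using `mean_squared_error` and taking the square root to get back to the target's units:
```
rmse_train = mean_squared_error(y_train, model.predict(X_train)) ** 0.5
rmse_test = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print('Training RMSE:', rmse_train)
print('Test RMSE:', rmse_test)
```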
## $R^2$ Score
TL;DR: Usually ranges between 0 (bad) and 1 (good).
```
class BruteForceRegressor(BaseEstimator, RegressorMixin):
def __init__(self, m=0, b=0):
self.m = m
self.b = b
self.mean = 0
def fit(self, X, y):
self.mean = np.mean(y)
return self
def predict(self, X, return_mean=True):
if return_mean:
return [self.mean] * len(X)
else:
return X * self.m + self.b
def plot(slope, intercept):
# Assign data to variables
x = df['income']
y = df['incumbent_vote_share']
# Create figure
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,6))
# Set ax limits
mar = 0.2
x_lim = floor(x.min() - x.min()*mar), ceil(x.max() + x.min()*mar)
y_lim = floor(y.min() - y.min()*mar), ceil(y.max() + y.min()*mar)
# Instantiate and train model
bfr = BruteForceRegressor(slope, intercept)
bfr.fit(x, y)
# ax1
## Plot data
ax1.set_xlim(x_lim)
ax1.set_ylim(y_lim)
ax1.scatter(x, y)
## Plot base model
ax1.axhline(bfr.mean, color='orange', label='baseline model')
## Plot residual lines
y_base_pred = bfr.predict(x)
ss_base = mean_squared_error(y, y_base_pred) * len(y)
for x_i, y_i, yp_i in zip(x, y, y_base_pred):
ax1.plot([x_i, x_i], [y_i, yp_i],
color='gray', linestyle='--', alpha=0.75)
## Formatting
ax1.legend()
ax1.set_title(f'Sum of Squares: {np.round(ss_base, 2)}')
ax1.set_xlabel('Growth in Personal Incomes')
ax1.set_ylabel('Incumbent Party Vote Share [%]')
# ax2
ax2.set_xlim(x_lim)
ax2.set_ylim(y_lim)
## Plot data
ax2.scatter(x, y)
## Plot model
x_model = np.linspace(*ax2.get_xlim(), 10)
y_model = bfr.predict(x_model, return_mean=False)
ax2.plot(x_model, y_model, color='green', label='our model')
for x_coord, y_coord in zip(x, y):
ax2.plot([x_coord, x_coord], [y_coord, x_coord * slope + intercept],
color='gray', linestyle='--', alpha=0.75)
ss_ours = mean_squared_error(y, bfr.predict(x, return_mean=False)) * len(y)
## Formatting
ax2.legend()
ax2.set_title(f'Sum of Squares: {np.round(ss_ours, 2)}')
ax2.set_xlabel('Growth in Personal Incomes')
ax2.set_ylabel('Incumbent Party Vote Share [%]')
y = df['incumbent_vote_share']
slope_slider = FloatSlider(min=-5, max=5, step=0.5, value=0)
intercept_slider = FloatSlider(min=int(y.min()), max=y.max(), step=2, value=y.mean())
interactive(plot, slope=slope_slider, intercept=intercept_slider)
```
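Beyond the interactive visualization above, the score itself can be computed with the `r2_score` helper imported at the top, or with the estimator's own `score` method (both assume the fitted `model` from section IV):
```
print('Test R^2 (r2_score):', r2_score(y_test, model.predict(X_test)))
print('Test R^2 (model.score):', model.score(X_test, y_test))
```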
# VI. Communicate Results
**Challenge:** How can we find the coefficients and intercept for our `model`?
```
```
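One possible answer to the challenge, assuming the fitted `LinearRegression` object from section IV:
```
# Learned slope for each feature, paired with the feature names
coefficients = pd.Series(model.coef_, index=X_train.columns)
print(coefficients)

# Learned intercept: predicted vote share when all features are zero
print('Intercept:', model.intercept_)
```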
<div>
<img src="https://drive.google.com/uc?export=view&id=1vK33e_EqaHgBHcbRV_m38hx6IkG0blK_" width="350"/>
</div>
#**Artificial Intelligence - MSc**
##ET5003 - MACHINE LEARNING APPLICATIONS
###Instructor: Enrique Naredo
###ET5003_NLP_SpamClasiffier-2
### Spam Classification
[Spamming](https://en.wikipedia.org/wiki/Spamming) is the use of messaging systems to send multiple unsolicited messages (spam) to large numbers of recipients for the purpose of commercial advertising, for the purpose of non-commercial proselytizing, for any prohibited purpose (especially the fraudulent purpose of phishing), or simply sending the same message over and over to the same user.
Spam Classification: Deciding whether an email is spam or not.
## Imports
```
# standard libraries
import pandas as pd
import numpy as np
# Scikit-learn is an open source machine learning library
# that supports supervised and unsupervised learning
# https://scikit-learn.org/stable/
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, confusion_matrix
# Regular expression operations
#https://docs.python.org/3/library/re.html
import re
# Natural Language Toolkit
# https://www.nltk.org/install.html
import nltk
# Stemming maps different forms of the same word to a common “stem”
# https://pypi.org/project/snowballstemmer/
from nltk.stem import SnowballStemmer
# https://www.nltk.org/book/ch02.html
from nltk.corpus import stopwords
```
## Step 1: Load dataset
```
# Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
# path to your (local/cloud) drive
path = '/content/drive/MyDrive/Colab Notebooks/Enrique/Data/spam/'
# load dataset
df = pd.read_csv(path+'spam.csv', encoding='latin-1')
df.rename(columns = {'v1':'class_label', 'v2':'message'}, inplace = True)
df.drop(['Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4'], axis = 1, inplace = True)
# original dataset
df.head()
```
The dataset has 4825 ham messages and 747 spam messages.
```
# histogram
import seaborn as sns
sns.countplot(df['class_label'])
# explore dataset
vc = df['class_label'].value_counts()
print(vc)
```
This is an imbalanced dataset
* The number of ham messages is much higher than those of spam.
* This can potentially cause our model to be biased.
* To fix this, we could resample our data to get an equal number of spam/ham messages.
```
# convert class label to numeric
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(df.class_label)
df2 = df
df2['class_label'] = le.transform(df.class_label)
df2.head()
# another histogram
df2.hist()
```
## Step 2: Pre-processing
Next, we'll convert the messages in our DataFrame to a list and build a new DataFrame from it. Then we'll clean each message: keep only alphabetic characters, drop very short words, lowercase the text, tokenize, remove stop words, and finally de-tokenize back into clean strings.
```
spam_list = df['message'].tolist()
spam_list
new_df = pd.DataFrame({'message':spam_list})
# removing everything except alphabets
new_df['clean_message'] = new_df['message'].str.replace("[^a-zA-Z#]", " ")
# removing short words
short_word = 4
new_df['clean_message'] = new_df['clean_message'].apply(lambda x: ' '.join([w for w in x.split() if len(w)>short_word]))
# make all text lowercase
new_df['clean_message'] = new_df['clean_message'].apply(lambda x: x.lower())
import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')
swords = stopwords.words('english')
# tokenization
tokenized_doc = new_df['clean_message'].apply(lambda x: x.split())
# remove stop-words
tokenized_doc = tokenized_doc.apply(lambda x: [item for item in x if item not in swords])
# de-tokenization
detokenized_doc = []
for i in range(len(new_df)):
t = ' '.join(tokenized_doc[i])
detokenized_doc.append(t)
new_df['clean_message'] = detokenized_doc
new_df.head()
```
## Step 3: TfidfVectorizer
**[TfidfVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html)**
Convert a collection of raw documents to a matrix of TF-IDF features.
```
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(stop_words='english', max_features= 300, max_df=0.5, smooth_idf=True)
print(vectorizer)
X = vectorizer.fit_transform(new_df['clean_message'])
X.shape
y = df['class_label']
y.shape
```
Handle the imbalanced data with SMOTE (here via `SMOTETomek`)
```
from imblearn.combine import SMOTETomek
smk= SMOTETomek()
X_bal, y_bal = smk.fit_resample(X, y)  # note: older imblearn versions use fit_sample
# histogram
import seaborn as sns
sns.countplot(y_bal)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_bal, y_bal, test_size = 0.20, random_state = 0)
X_train.todense()
```
## Step 4: Learning
Training the classifier and making predictions on the test set
```
# create a model
MNB = MultinomialNB()
# fit to data
MNB.fit(X_train, y_train)
# testing the model
prediction_train = MNB.predict(X_train)
print('training prediction\t', prediction_train)
prediction_test = MNB.predict(X_test)
print('test prediction\t\t', prediction_test)
np.set_printoptions(suppress=True)
# Ham and Spam probabilities in test
class_prob = MNB.predict_proba(X_test)
print(class_prob)
# show indices of test emails classified as 'spam'
threshold = 0.5
spam_ind = np.where(class_prob[:,1]>threshold)[0]
print(spam_ind)
```
## Step 5: Accuracy
```
# accuracy in training set
y_pred_train = prediction_train
print("Train Accuracy: "+str(accuracy_score(y_train, y_pred_train)))
# accuracy in test set (unseen data)
y_true = y_test
y_pred_test = prediction_test
print("Test Accuracy: "+str(accuracy_score(y_true, y_pred_test)))
# confusion matrix
conf_mat = confusion_matrix(y_true, y_pred_test)
print("Confusion Matrix\n", conf_mat)
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay
labels = ['Ham','Spam']
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(conf_mat)
plt.title('Confusion matrix of the classifier\n')
fig.colorbar(cax)
ax.set_xticklabels([''] + labels)
ax.set_yticklabels([''] + labels)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
```
<a href="https://colab.research.google.com/github/amathsow/wolof_speech_recognition/blob/master/Speech_recognition_project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip3 install torch
!pip3 install torchvision
!pip3 install torchaudio
!pip install comet_ml
import os
from comet_ml import Experiment
import torch
import torch.nn as nn
import torch.utils.data as data
import torch.optim as optim
import torch.nn.functional as F
import torchaudio
import numpy as np
import pandas as pd
import librosa
```
## ETL process
```
from google.colab import drive
drive.mount('/content/drive')
path_audio= 'drive/My Drive/Speech Recognition project/recordings/'
path_text = 'drive/My Drive/Speech Recognition project/wolof_text/'
wav_text = 'drive/My Drive/Speech Recognition project/Wavtext_dataset2.csv'
```
## Data preparation to create the char.txt file from my dataset.
```
datapath = 'drive/My Drive/Speech Recognition project/data/records'
trainpath = '../drive/My Drive/Speech Recognition project/data/records/train/'
valpath = '../drive/My Drive/Speech Recognition project/data/records/val/'
testpath = '../drive/My Drive/Speech Recognition project/data/records/test/'
```
## Let's create the dataset
```
! git clone https://github.com/facebookresearch/CPC_audio.git
!pip install soundfile
!pip install torchaudio
!mkdir checkpoint_data
!wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_30.pt -P checkpoint_data
!wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_logs.json -P checkpoint_data
!wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_args.json -P checkpoint_data
!ls checkpoint_data
import torch
import torchaudio
%cd CPC_audio/
from cpc.model import CPCEncoder, CPCAR
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
DIM_ENCODER=256
DIM_CONTEXT=256
KEEP_HIDDEN_VECTOR=False
N_LEVELS_CONTEXT=1
CONTEXT_RNN="LSTM"
N_PREDICTIONS=12
LEARNING_RATE=2e-4
N_NEGATIVE_SAMPLE =128
encoder = CPCEncoder(DIM_ENCODER).to(device)
context = CPCAR(DIM_ENCODER, DIM_CONTEXT, KEEP_HIDDEN_VECTOR, 1, mode=CONTEXT_RNN).to(device)
# Several functions that will be necessary to load the data later
from cpc.dataset import findAllSeqs, AudioBatchData, parseSeqLabels
SIZE_WINDOW = 20480
BATCH_SIZE=8
def load_dataset(path_dataset, file_extension='.flac', phone_label_dict=None):
data_list, speakers = findAllSeqs(path_dataset, extension=file_extension)
dataset = AudioBatchData(path_dataset, SIZE_WINDOW, data_list, phone_label_dict, len(speakers))
return dataset
class CPCModel(torch.nn.Module):
def __init__(self,
encoder,
AR):
super(CPCModel, self).__init__()
self.gEncoder = encoder
self.gAR = AR
def forward(self, batch_data):
encoder_output = self.gEncoder(batch_data)
#print(encoder_output.shape)
# The output of the encoder data does not have the good format
# indeed it is Batch_size x Hidden_size x temp size
# while the context requires Batch_size x temp size x Hidden_size
# thus you need to permute
context_input = encoder_output.permute(0, 2, 1)
context_output = self.gAR(context_input)
#print(context_output.shape)
return context_output, encoder_output
datapath ='../drive/My Drive/Speech Recognition project/data/records/'
datapath2 ='../drive/My Drive/Speech Recognition project/data/'
!ls ../checkpoint_data/checkpoint_30.pt
%cd CPC_audio/
from cpc.dataset import parseSeqLabels
from cpc.feature_loader import loadModel
checkpoint_path = '../checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
label_dict, N_PHONES = parseSeqLabels(datapath2+'chars2.txt')
dataset_train = load_dataset(datapath+'train', file_extension='.wav', phone_label_dict=label_dict)
dataset_val = load_dataset(datapath+'val', file_extension='.wav', phone_label_dict=label_dict)
dataset_test = load_dataset(datapath+'test', file_extension='.wav', phone_label_dict=label_dict)
data_loader_train = dataset_train.getDataLoader(BATCH_SIZE, "speaker", True)
data_loader_val = dataset_val.getDataLoader(BATCH_SIZE, "sequence", False)
data_loader_test = dataset_test.getDataLoader(BATCH_SIZE, "sequence", False)
```
## Create Model
```
class PhoneClassifier(torch.nn.Module):
def __init__(self,
input_dim : int,
n_phones : int):
super(PhoneClassifier, self).__init__()
self.linear = torch.nn.Linear(input_dim, n_phones)
def forward(self, x):
return self.linear(x)
phone_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_PHONES).to(device)
loss_criterion = torch.nn.CrossEntropyLoss()
parameters = list(phone_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
optimizer_frozen = torch.optim.Adam(list(phone_classifier.parameters()), lr=LEARNING_RATE)
def train_one_epoch(cpc_model,
phone_classifier,
loss_criterion,
data_loader,
optimizer):
cpc_model.train()
loss_criterion.train()
avg_loss = 0
avg_accuracy = 0
n_items = 0
for step, full_data in enumerate(data_loader):
# Each batch is represented by a Tuple of vectors:
# sequence of size : N x 1 x T
# label of size : N x T
#
# With :
# - N number of sequence in the batch
# - T size of each sequence
sequence, label = full_data
bs = len(sequence)
seq_len = label.size(1)
optimizer.zero_grad()
context_out, enc_out, _ = cpc_model(sequence.to(device),label.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(0,2,1)
loss = loss_criterion(scores,label.to(device))
loss.backward()
optimizer.step()
avg_loss+=loss.item()*bs
n_items+=bs
correct_labels = scores.argmax(1)
avg_accuracy += ((label==correct_labels.cpu()).float()).mean(1).sum().item()
avg_loss/=n_items
avg_accuracy/=n_items
return avg_loss, avg_accuracy
avg_loss, avg_accuracy = train_one_epoch(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer_frozen)
avg_loss, avg_accuracy
def validation_step(cpc_model,
phone_classifier,
loss_criterion,
data_loader):
cpc_model.eval()
phone_classifier.eval()
avg_loss = 0
avg_accuracy = 0
n_items = 0
with torch.no_grad():
for step, full_data in enumerate(data_loader):
# Each batch is represented by a Tuple of vectors:
# sequence of size : N x 1 x T
# label of size : N x T
#
# With :
# - N number of sequence in the batch
# - T size of each sequence
sequence, label = full_data
bs = len(sequence)
seq_len = label.size(1)
context_out, enc_out, _ = cpc_model(sequence.to(device),label.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(0,2,1)
loss = loss_criterion(scores,label.to(device))
avg_loss+=loss.item()*bs
n_items+=bs
correct_labels = scores.argmax(1)
avg_accuracy += ((label==correct_labels.cpu()).float()).mean(1).sum().item()
avg_loss/=n_items
avg_accuracy/=n_items
return avg_loss, avg_accuracy
import matplotlib.pyplot as plt
from google.colab import files
def run(cpc_model,
phone_classifier,
loss_criterion,
data_loader_train,
data_loader_val,
optimizer,
n_epoch):
epoches = []
train_losses = []
train_accuracies = []
val_losses = []
val_accuracies = []
for epoch in range(n_epoch):
epoches.append(epoch)
print(f"Running epoch {epoch + 1} / {n_epoch}")
loss_train, acc_train = train_one_epoch(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer)
print("-------------------")
print(f"Training dataset :")
print(f"Average loss : {loss_train}. Average accuracy {acc_train}")
train_losses.append(loss_train)
train_accuracies.append(acc_train)
print("-------------------")
print("Validation dataset")
loss_val, acc_val = validation_step(cpc_model, phone_classifier, loss_criterion, data_loader_val)
print(f"Average loss : {loss_val}. Average accuracy {acc_val}")
print("-------------------")
print()
val_losses.append(loss_val)
val_accuracies.append(acc_val)
plt.plot(epoches, train_losses, label = "train loss")
plt.plot(epoches, val_losses, label = "val loss")
plt.xlabel('epoches')
plt.ylabel('loss')
plt.title('train and validation loss')
plt.legend()
# Display a figure.
plt.savefig("loss1.png")
files.download("loss1.png")
plt.show()
plt.plot(epoches, train_accuracies, label = "train accuracy")
plt.plot(epoches, val_accuracies, label = "val accuracy")
plt.xlabel('epoches')
plt.ylabel('accuracy')
plt.title('train and validation accuracy')
plt.legend()
plt.savefig("val1.png")
files.download("val1.png")
# Display a figure.
plt.show()
```
## The Training and Evaluating Script
```
run(cpc_model,phone_classifier,loss_criterion,data_loader_train,data_loader_val,optimizer_frozen,n_epoch=10)
loss_ctc = torch.nn.CTCLoss(zero_infinity=True)
%cd CPC_audio/
from cpc.eval.common_voices_eval import SingleSequenceDataset, parseSeqLabels, findAllSeqs
path_train_data_per = datapath+'train'
path_val_data_per = datapath+'val'
path_phone_data_per = datapath2+'chars2.txt'
BATCH_SIZE=8
phone_labels, N_PHONES = parseSeqLabels(path_phone_data_per)
data_train_per, _ = findAllSeqs(path_train_data_per, extension='.wav')
dataset_train_non_aligned = SingleSequenceDataset(path_train_data_per, data_train_per, phone_labels)
data_loader_train = torch.utils.data.DataLoader(dataset_train_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
data_val_per, _ = findAllSeqs(path_val_data_per, extension='.wav')
dataset_val_non_aligned = SingleSequenceDataset(path_val_data_per, data_val_per, phone_labels)
data_loader_val = torch.utils.data.DataLoader(dataset_val_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
from cpc.feature_loader import loadModel
checkpoint_path = '../checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
phone_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_PHONES).to(device)
parameters = list(phone_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
optimizer_frozen = torch.optim.Adam(list(phone_classifier.parameters()), lr=LEARNING_RATE)
import torch.nn.functional as F
def train_one_epoch_ctc(cpc_model,
phone_classifier,
loss_criterion,
data_loader,
optimizer):
cpc_model.train()
loss_criterion.train()
avg_loss = 0
avg_accuracy = 0
n_items = 0
for step, full_data in enumerate(data_loader):
x, x_len, y, y_len = full_data
x_batch_len = x.shape[-1]
x, y = x.to(device), y.to(device)
bs=x.size(0)
optimizer.zero_grad()
context_out, enc_out, _ = cpc_model(x.to(device),y.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(1,0,2)
scores = F.log_softmax(scores,2)
yhat_len = torch.tensor([int(scores.shape[0]*x_len[i]/x_batch_len) for i in range(scores.shape[1])]) # this is an approximation, should be good enough
loss = loss_criterion(scores.float(),y.float().to(device),yhat_len,y_len)
loss.backward()
optimizer.step()
avg_loss+=loss.item()*bs
n_items+=bs
avg_loss/=n_items
return avg_loss
def validation_step(cpc_model,
phone_classifier,
loss_criterion,
data_loader):
cpc_model.eval()
phone_classifier.eval()
avg_loss = 0
avg_accuracy = 0
n_items = 0
with torch.no_grad():
for step, full_data in enumerate(data_loader):
x, x_len, y, y_len = full_data
x_batch_len = x.shape[-1]
x, y = x.to(device), y.to(device)
bs=x.size(0)
context_out, enc_out, _ = cpc_model(x.to(device),y.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(1,0,2)
scores = F.log_softmax(scores,2)
yhat_len = torch.tensor([int(scores.shape[0]*x_len[i]/x_batch_len) for i in range(scores.shape[1])]) # this is an approximation, should be good enough
loss = loss_criterion(scores,y.to(device),yhat_len,y_len)
avg_loss+=loss.item()*bs
n_items+=bs
avg_loss/=n_items
#print(loss)
return avg_loss
def run_ctc(cpc_model,
phone_classifier,
loss_criterion,
data_loader_train,
data_loader_val,
optimizer,
n_epoch):
epoches = []
train_losses = []
val_losses = []
for epoch in range(n_epoch):
print(f"Running epoch {epoch + 1} / {n_epoch}")
loss_train = train_one_epoch_ctc(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer)
print("-------------------")
print(f"Training dataset :")
print(f"Average loss : {loss_train}.")
print("-------------------")
print("Validation dataset")
loss_val = validation_step(cpc_model, phone_classifier, loss_criterion, data_loader_val)
print(f"Average loss : {loss_val}")
print("-------------------")
print()
epoches.append(epoch)
train_losses.append(loss_train)
val_losses.append(loss_val)
plt.plot(epoches, train_losses, label = "ctc_train loss")
plt.plot(epoches, val_losses, label = "ctc_val loss")
plt.xlabel('epoches')
plt.ylabel('loss')
plt.title('train and validation ctc loss')
plt.legend()
# Display and save a figure.
plt.savefig("ctc_loss.png")
files.download("ctc_loss.png")
plt.show()
run_ctc(cpc_model,phone_classifier,loss_ctc,data_loader_train,data_loader_val,optimizer_frozen,n_epoch=10)
import numpy as np
def get_PER_sequence(ref_seq, target_seq):
# re = g.split()
# h = h.split()
n = len(ref_seq)
m = len(target_seq)
D = np.zeros((n+1,m+1))
for i in range(1,n+1):
D[i,0] = D[i-1,0]+1
for j in range(1,m+1):
D[0,j] = D[0,j-1]+1
# compute the alignment via dynamic programming (edit distance)
for i in range(1,n+1):
for j in range(1,m+1):
D[i,j] = min(
D[i-1,j]+1,
D[i-1,j-1]+1,
D[i,j-1]+1,
D[i-1,j-1]+ 0 if ref_seq[i-1]==target_seq[j-1] else float("inf")
)
return D[n,m]/len(ref_seq)
#return PER
ref_seq = [0, 1, 1, 2, 0, 2, 2]
pred_seq = [1, 1, 2, 2, 0, 0]
expected_PER = 4. / 7.
print(get_PER_sequence(ref_seq, pred_seq) == expected_PER)
import progressbar
from multiprocessing import Pool
def cut_data(seq, sizeSeq):
maxSeq = sizeSeq.max()
return seq[:, :maxSeq]
def prepare_data(data):
seq, sizeSeq, phone, sizePhone = data
seq = seq.cuda()
phone = phone.cuda()
sizeSeq = sizeSeq.cuda().view(-1)
sizePhone = sizePhone.cuda().view(-1)
seq = cut_data(seq.permute(0, 2, 1), sizeSeq).permute(0, 2, 1)
return seq, sizeSeq, phone, sizePhone
def get_per(test_dataloader,
cpc_model,
phone_classifier):
downsampling_factor = 160
cpc_model.eval()
phone_classifier.eval()
avgPER = 0
nItems = 0
per = []
Item = []
print("Starting the PER computation through beam search")
bar = progressbar.ProgressBar(maxval=len(test_dataloader))
bar.start()
for index, data in enumerate(test_dataloader):
bar.update(index)
with torch.no_grad():
seq, sizeSeq, phone, sizePhone = prepare_data(data)
c_feature, _, _ = cpc_model(seq.to(device),phone.to(device))
sizeSeq = sizeSeq / downsampling_factor
predictions = torch.nn.functional.softmax(
phone_classifier(c_feature), dim=2).cpu()
phone = phone.cpu()
sizeSeq = sizeSeq.cpu()
sizePhone = sizePhone.cpu()
bs = c_feature.size(0)
data_per = [(predictions[b].argmax(1), phone[b]) for b in range(bs)]
# data_per = [(predictions[b], sizeSeq[b], phone[b], sizePhone[b],
# "criterion.module.BLANK_LABEL") for b in range(bs)]
with Pool(bs) as p:
poolData = p.starmap(get_PER_sequence, data_per)
avgPER += sum([x for x in poolData])
nItems += len(poolData)
per.append(sum([x for x in poolData]))
Item.append(index)
bar.finish()
avgPER /= nItems
print(f"Average CER {avgPER}")
plt.plot(Item, per, label = "Per by item")
plt.xlabel('Items')
plt.ylabel('PER')
plt.title('trends of the PER')
plt.legend()
# Display and save a figure.
plt.savefig("Per.png")
files.download("Per.png")
plt.show()
return avgPER
get_per(data_loader_val,cpc_model,phone_classifier)
# Load a dataset labelled with the letters of each sequence.
%cd /content/CPC_audio
from cpc.eval.common_voices_eval import SingleSequenceDataset, parseSeqLabels, findAllSeqs
path_train_data_cer = datapath+'train'
path_val_data_cer = datapath+'val'
path_letter_data_cer = datapath2+'chars2.txt'
BATCH_SIZE=8
letters_labels, N_LETTERS = parseSeqLabels(path_letter_data_cer)
data_train_cer, _ = findAllSeqs(path_train_data_cer, extension='.wav')
dataset_train_non_aligned = SingleSequenceDataset(path_train_data_cer, data_train_cer, letters_labels)
data_val_cer, _ = findAllSeqs(path_val_data_cer, extension='.wav')
dataset_val_non_aligned = SingleSequenceDataset(path_val_data_cer, data_val_cer, letters_labels)
# The data loader will generate a tuple of tensors data, labels for each batch
# data : size N x T1 x 1 : the audio sequence
# label : size N x T2 the sequence of letters corresponding to the audio data
# IMPORTANT NOTE: just like the PER the CER is computed with non-aligned phone data.
data_loader_train_letters = torch.utils.data.DataLoader(dataset_train_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
data_loader_val_letters = torch.utils.data.DataLoader(dataset_val_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
from cpc.feature_loader import loadModel
checkpoint_path = '../checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
character_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_LETTERS).to(device)
parameters = list(character_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
optimizer_frozen = torch.optim.Adam(list(character_classifier.parameters()), lr=LEARNING_RATE)
loss_ctc = torch.nn.CTCLoss(zero_infinity=True)
run_ctc(cpc_model,character_classifier,loss_ctc,data_loader_train_letters,data_loader_val_letters,optimizer_frozen,n_epoch=10)
get_per(data_loader_val_letters,cpc_model,character_classifier)
```
<a href="https://colab.research.google.com/github/lmcanavals/algorithmic_complexity/blob/main/05_01_UCS_dijkstra.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Uniform Cost Search (Dijkstra)
UCS for friends
```
import graphviz as gv
import numpy as np
import pandas as pd
import heapq as hq
import math
def readAdjl(fn, haslabels=False, weighted=False, sep="|"):
with open(fn) as f:
labels = None
if haslabels:
labels = f.readline().strip().split()
L = []
for line in f:
if weighted:
L.append([tuple(map(int, p.split(sep))) for p in line.strip().split()])
# line => "1|3 2|5 4|4" ==> [(1, 3), (2, 5), (4, 4)]
else:
L.append(list(map(int, line.strip().split()))) # "1 3 5" => [1, 3, 5]
# L.append([int(x) for x in line.strip().split()])
return L, labels
def adjlShow(L, labels=None, directed=False, weighted=False, path=[],
layout="sfdp"):
g = gv.Digraph("G") if directed else gv.Graph("G")
g.graph_attr["layout"] = layout
g.edge_attr["color"] = "gray"
g.node_attr["color"] = "orangered"
g.node_attr["width"] = "0.1"
g.node_attr["height"] = "0.1"
g.node_attr["fontsize"] = "8"
g.node_attr["fontcolor"] = "mediumslateblue"
g.node_attr["fontname"] = "monospace"
g.edge_attr["fontsize"] = "8"
g.edge_attr["fontname"] = "monospace"
n = len(L)
for u in range(n):
g.node(str(u), labels[u] if labels else str(u))
added = set()
for v, u in enumerate(path):
if u != None:
if weighted:
for vi, w in G[u]:
if vi == v:
break
g.edge(str(u), str(v), str(w), dir="forward", penwidth="2", color="orange")
else:
g.edge(str(u), str(v), dir="forward", penwidth="2", color="orange")
added.add(f"{u},{v}")
added.add(f"{v},{u}")
if weighted:
for u in range(n):
for v, w in L[u]:
if not directed and not f"{u},{v}" in added:
added.add(f"{u},{v}")
added.add(f"{v},{u}")
g.edge(str(u), str(v), str(w))
elif directed:
g.edge(str(u), str(v), str(w))
else:
for u in range(n):
for v in L[u]:
if not directed and not f"{u},{v}" in added:
added.add(f"{u},{v}")
added.add(f"{v},{u}")
g.edge(str(u), str(v))
elif directed:
g.edge(str(u), str(v))
return g
```
## Dijkstra
```
def dijkstra(G, s):
    n = len(G)
    visited = [False]*n
    path = [None]*n
    cost = [math.inf]*n
    cost[s] = 0
    queue = [(0, s)]
    while queue:
        g_u, u = hq.heappop(queue)
        if not visited[u]:
            visited[u] = True
            for v, w in G[u]:
                f = g_u + w
                if f < cost[v]:
                    cost[v] = f
                    path[v] = u
                    hq.heappush(queue, (f, v))
    return path, cost
%%file 1.in
2|4 7|8 14|3
2|7 5|7
0|4 1|7 3|5 6|1
2|5
7|7
1|7 6|1 8|5
2|1 5|1
0|8 4|7 8|8
5|5 7|8 9|8 11|9 12|6
8|8 10|8 12|9 13|7
9|8 13|3
8|9
8|6 9|9 13|2 15|5
9|7 10|13 12|2 16|9
0|3 15|9
12|5 14|9 17|7
13|9 17|8
15|7 16|8
G, _ = readAdjl("1.in", weighted=True)
for i, edges in enumerate(G):
print(f"{i:2}: {edges}")
adjlShow(G, weighted=True)
path, cost = dijkstra(G, 8)
print(path)
adjlShow(G, weighted=True, path=path)
```
# [Introduction to Data Science: A Comp-Math-Stat Approach](https://lamastex.github.io/scalable-data-science/as/2019/)
## YOIYUI001, Summer 2019
©2019 Raazesh Sainudiin. [Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
# 08. Pseudo-Random Numbers, Simulating from Some Discrete and Continuous Random Variables
- The $Uniform(0,1)$ RV
- The $Bernoulli(\theta)$ RV
- Simulating from the $Bernoulli(\theta)$ RV
- The Equi-Probable $de\,Moivre(k)$ RV
- Simulating from the Equi-Probable $de\,Moivre(k)$ RV
- The $Uniform(\theta_1, \theta_2)$ RV
- Simulating from the $Uniform(\theta_1, \theta_2)$ RV
- The $Exponential(\lambda)$ RV
- Simulating from the $Exponential(\lambda)$ RV
- The standard $Cauchy$ RV
- Simulating from the standard $Cauchy$ RV
- Investigating running means
- Replicable samples
- A simple simulation
In the last notebook, we started to look at how we can produce realisations from the most elementary $Uniform(0,1)$ random variable.
i.e., how can we produce samples $(x_1, x_2, \ldots, x_n)$ from $X_1, X_2, \ldots, X_n$ $\overset{IID}{\thicksim}$ $Uniform(0,1)$?
What is SageMath doing when we ask for random()?
```
random()
```
We looked at how modular arithmetic and number theory give us pseudo-random number generators.
We used linear congruential generators (LCG) as simple pseudo-random number generators.
Remember that "pseudo-random" means that the numbers are not really random. We saw that some linear congruential generators (LCG) have much shorter, more predictable, patterns than others and we learned what makes a good LCG.
We introduced the pseudo-random number generator (PRNG) called the Mersenne Twister that we will use for simulation purposes in this course. It is based on more sophisticated theory than that of LCG but the basic principles of recurrence relations are the same.
# The $Uniform(0,1)$ Random Variable
Recall that the $Uniform(0,1)$ random variable is the fundamental model as we can transform it to any other random variable, random vector or random structure. The PDF $f$ and DF $F$ of $X \sim Uniform(0,1)$ are:
$f(x) = \begin{cases} 0 & \text{if} \ x \notin [0,1] \\ 1 & \text{if} \ x \in [0,1] \end{cases}$
$F(x) = \begin{cases} 0 & \text{if} \ x < 0 \\ 1 & \text{if} \ x > 1 \\ x & \text{if} \ x \in [0,1] \end{cases}$
We use the Mersenne twister pseudo-random number generator to mimic independent and identically distributed draws from the $uniform(0,1)$ RV.
In Sage, we use the python random module to generate pseudo-random numbers for us. (We have already used it: remember randint?)
random() will give us one simulation from the $Uniform(0,1)$ RV:
```
random()
```
If we want a whole simulated sample we can use a list comprehension. We will be using this technique frequently so make sure you understand what is going on. "for i in range(3)" is acting like a counter to give us 3 simulated values in the list we are making
```
[random() for i in range(3)]
listOfUniformSamples = [random() for i in range(3) ]
listOfUniformSamples
```
If we do this again, we will get a different sample:
```
listOfUniformSamples2 = [random() for i in range(3) ]
listOfUniformSamples2
```
Often it is useful to be able to replicate the same random sample. For example, if we were writing some code to do some simulations using samples from a PRNG, and we "improved" the way that we were doing it, how would we want to test our improvement? If we could replicate the same samples then we could show that our new code was equivalent to our old code, just more efficient.
Remember when we were using the LCGs, and we could set the seed $x_0$? More sophisticated PRNGs like the Mersenne Twister also have a seed. By setting this seed to a specified value we can make sure that we can replicate samples.
```
?set_random_seed
set_random_seed(256526)
listOfUniformSamples = [random() for i in range(3) ]
listOfUniformSamples
initial_seed()
```
Now we can replicate the same sample again by setting the seed to the same value:
```
set_random_seed(256526)
listOfUniformSamples2 = [random() for i in range(3) ]
listOfUniformSamples2
initial_seed()
set_random_seed(2676676766)
listOfUniformSamples2 = [random() for i in range(3) ]
listOfUniformSamples2
initial_seed()
```
We can compare some samples visually by plotting them:
```
set_random_seed(256526)
listOfUniformSamples = [(i,random()) for i in range(100)]
plotsSeed1 = points(listOfUniformSamples)
t1 = text('Seed 1 = 256526', (60,1.2), rgbcolor='blue',fontsize=10)
set_random_seed(2676676766)
plotsSeed2 = points([(i,random()) for i in range(100)],rgbcolor="red")
t2 = text('Seed 2 = 2676676766', (60,1.2), rgbcolor='red',fontsize=10)
bothSeeds = plotsSeed1 + plotsSeed2
t31 = text('Seed 1 and', (30,1.2), rgbcolor='blue',fontsize=10)
t32 = text('Seed 2', (65,1.2), rgbcolor='red',fontsize=10)
show(graphics_array( (plotsSeed1+t1,plotsSeed2+t2, bothSeeds+t31+t32)),figsize=[9,3])
```
### YouTry
Try looking at the more advanced documentation and play a bit.
```
#?sage.misc.randstate
```
(end of You Try)
---
---
### Question:
What can we do with samples from a $Uniform(0,1)$ RV? Why bother?
### Answer:
We can use them to sample or simulate from other, more complex, random variables.
# The $Bernoulli(\theta)$ Random Variable
The $Bernoulli(\theta)$ RV $X$ with PMF $f(x;\theta)$ and DF $F(x;\theta)$ parameterised by some real $\theta\in [0,1]$ is a discrete random variable with only two possible outcomes.
$f(x;\theta)= \theta^x (1-\theta)^{1-x} \mathbf{1}_{\{0,1\}}(x) =
\begin{cases}
\theta & \text{if} \ x=1,\\
1-\theta & \text{if} \ x=0,\\
0 & \text{otherwise}
\end{cases}$
$F(x;\theta) =
\begin{cases}
1 & \text{if} \ 1 \leq x,\\
1-\theta & \text{if} \ 0 \leq x < 1,\\
0 & \text{otherwise}
\end{cases}$
Here are some functions for the PMF and DF for a $Bernoulli$ RV along with various useful functions for us in the sequel. Let's take a quick look at them.
```
def bernoulliPMF(x, theta):
'''Probability mass function for Bernoulli(theta).
Param x is the value to find the Bernoulli probability mass of.
Param theta is the theta parameterising this Bernoulli RV.'''
retValue = 0
if x == 1:
retValue = theta
elif x == 0:
retValue = 1 - theta
return retValue
def bernoulliCDF(x, theta):
'''DF for Bernoulli(theta).
Param x is the value to find the Bernoulli cumulative density function of.
Param theta is the theta parameterising this Bernoulli RV.'''
retValue = 0
if x >= 1:
retValue = 1
elif x >= 0:
retValue = 1 - theta
# in the case where x < 0, retValue is the default of 0
return retValue
# PFM plot
def pmfPlot(outcomes, pmf_values):
'''Returns a pmf plot for a discrete distribution.'''
pmf = points(zip(outcomes,pmf_values), rgbcolor="blue", pointsize='20')
for i in range(len(outcomes)):
pmf += line([(outcomes[i], 0),(outcomes[i], pmf_values[i])], rgbcolor="blue", linestyle=":")
# padding
pmf += point((0,1), rgbcolor="black", pointsize="0")
return pmf
# CDF plot
def cdfPlot(outcomes, cdf_values):
'''Returns a DF plot for a discrete distribution.'''
cdf_pairs = zip(outcomes, cdf_values)
cdf = point(cdf_pairs, rgbcolor = "red", faceted = false, pointsize="20")
for k in range(len(cdf_pairs)):
x, kheight = cdf_pairs[k] # unpack tuple
previous_x = 0
previous_height = 0
if k > 0:
previous_x, previous_height = cdf_pairs[k-1] # unpack previous tuple
cdf += line([(previous_x, previous_height),(x, previous_height)], rgbcolor="grey")
cdf += points((x, previous_height),rgbcolor = "white", faceted = true, pointsize="20")
cdf += line([(x, previous_height),(x, kheight)], rgbcolor="blue", linestyle=":")
# padding
max_index = len(outcomes)-1
cdf += line([(outcomes[0]-0.2, 0),(outcomes[0], 0)], rgbcolor="grey")
cdf += line([(outcomes[max_index],cdf_values[max_index]),(outcomes[max_index]+0.2, cdf_values[max_index])], \
rgbcolor="grey")
return cdf
def makeFreqDictHidden(myDataList):
'''Make a frequency mapping out of a list of data.
Param myDataList, a list of data.
Return a dictionary mapping each data value from min to max in steps of 1 to its frequency count.'''
freqDict = {} # start with an empty dictionary
sortedMyDataList = sorted(myDataList)
for k in sortedMyDataList:
freqDict[k] = myDataList.count(k)
return freqDict # return the dictionary created
def makeEMFHidden(myDataList):
'''Make an empirical mass function from a data list.
Param myDataList, list of data to make emf from.
Return list of tuples comprising (data value, relative frequency) ordered by data value.'''
freqs = makeFreqDictHidden(myDataList) # make the frequency counts mapping
totalCounts = sum(freqs.values())
relFreqs = [fr/(1.0*totalCounts) for fr in freqs.values()] # use a list comprehension
numRelFreqPairs = zip(freqs.keys(), relFreqs) # zip the keys and relative frequencies together
numRelFreqPairs.sort() # sort the list of tuples
return numRelFreqPairs
from pylab import array
def makeEDFHidden(myDataList):
'''Make an empirical distribution function from a data list.
Param myDataList, list of data to make emf from.
Return list of tuples comprising (data value, cumulative relative frequency) ordered by data value.'''
freqs = makeFreqDictHidden(myDataList) # make the frequency counts mapping
totalCounts = sum(freqs.values())
relFreqs = [fr/(1.0*totalCounts) for fr in freqs.values()] # use a list comprehension
relFreqsArray = array(relFreqs)
cumFreqs = list(relFreqsArray.cumsum())
numCumFreqPairs = zip(freqs.keys(), cumFreqs) # zip the keys and culm relative frequencies together
numCumFreqPairs.sort() # sort the list of tuples
return numCumFreqPairs
# EPMF plot
def epmfPlot(samples):
'''Returns an empirical probability mass function plot from samples data.'''
epmf_pairs = makeEMFHidden(samples)
epmf = point(epmf_pairs, rgbcolor = "blue", pointsize="20")
for k in epmf_pairs: # for each tuple in the list
kkey, kheight = k # unpack tuple
epmf += line([(kkey, 0),(kkey, kheight)], rgbcolor="blue", linestyle=":")
# padding
epmf += point((0,1), rgbcolor="black", pointsize="0")
return epmf
# ECDF plot
def ecdfPlot(samples):
'''Returns an empirical probability mass function plot from samples data.'''
ecdf_pairs = makeEDFHidden(samples)
ecdf = point(ecdf_pairs, rgbcolor = "red", faceted = false, pointsize="20")
for k in range(len(ecdf_pairs)):
x, kheight = ecdf_pairs[k] # unpack tuple
previous_x = 0
previous_height = 0
if k > 0:
previous_x, previous_height = ecdf_pairs[k-1] # unpack previous tuple
ecdf += line([(previous_x, previous_height),(x, previous_height)], rgbcolor="grey")
ecdf += points((x, previous_height),rgbcolor = "white", faceted = true, pointsize="20")
ecdf += line([(x, previous_height),(x, kheight)], rgbcolor="blue", linestyle=":")
# padding
ecdf += line([(ecdf_pairs[0][0]-0.2, 0),(ecdf_pairs[0][0], 0)], rgbcolor="grey")
max_index = len(ecdf_pairs)-1
ecdf += line([(ecdf_pairs[max_index][0], ecdf_pairs[max_index][1]),(ecdf_pairs[max_index][0]+0.2, \
ecdf_pairs[max_index][1])],rgbcolor="grey")
return ecdf
```
We can see the effect of varying $\theta$ interactively:
```
@interact
def _(theta=(0.5)):
'''Interactive function to plot the bernoulli pmf and cdf.'''
if theta <=1 and theta >= 0:
outcomes = (0, 1) # define the bernoulli outcomes
print "Bernoulli (", RR(theta).n(digits=2), ") pmf and cdf"
# pmf plot
pmf_values = [bernoulliPMF(x, theta) for x in outcomes]
pmf = pmfPlot(outcomes, pmf_values) # this is one of our own, hidden, functions
# cdf plot
cdf_values = [bernoulliCDF(x, theta) for x in outcomes]
cdf = cdfPlot(outcomes, cdf_values) # this is one of our own, hidden, functions
show(graphics_array([pmf, cdf]),figsize=[8,3])
else:
print "0 <= theta <= 1"
```
Don't worry about how these plots are done: you are not expected to be able to understand all of these details now.
Just use them to see the effect of varying $\theta$.
## Simulating a sample from the $Bernoulli(\theta)$ RV
We can simulate a sample from a $Bernoulli$ distribution by transforming input from a $Uniform(0,1)$ distribution using the floor() function in Sage. In maths, $\lfloor x \rfloor$, the 'floor of $x$' is the largest integer that is smaller than or equal to $x$. For example, $\lfloor 3.8 \rfloor = 3$.
```
z=3.8
floor(z)
```
Using floor, we can do inversion sampling from the $Bernoulli(\theta)$ RV using the $Uniform(0,1)$ random variable that we said is the fundamental model.
We will introduce inversion sampling more formally later. In general, inversion sampling means using the inverse of the CDF $F$, $F^{[-1]}$, to transform input from a $Uniform(0,1)$ distribution.
To simulate from the $Bernoulli(\theta)$, we can use the following algorithm:
### Input:
- $u \thicksim Uniform(0,1)$ from a PRNG, $\qquad \qquad \text{where, } \sim$ means "sample from"
- $\theta$, the parameter
### Output:
$x \thicksim Bernoulli(\theta)$
### Steps:
- $u \leftarrow Uniform(0,1)$
- $x \leftarrow \lfloor u + \theta \rfloor$
- Return $x$
We can illustrate this with SageMath:
```
theta = 0.5 # theta must be such that 0 <= theta <= 1
u = random()
x = floor(u + theta)
x
```
To make a number of simulations, we can use list comprehensions again:
```
theta = 0.5
n = 20
randomUs = [random() for i in range(n)]
simulatedBs = [floor(u + theta) for u in randomUs]
simulatedBs
```
To make modular reusable code we can package up what we have done as functions.
The function `bernoulliFInverse(u, theta)` codes the inverse of the CDF of a Bernoulli distribution parameterised by `theta`. The function `bernoulliSample(n, theta)` uses `bernoulliFInverse(...)` in a list comprehension to simulate n samples from a Bernoulli distribution parameterised by theta, i.e., the distribution of our $Bernoulli(\theta)$ RV.
```
def bernoulliFInverse(u, theta):
'''A function to evaluate the inverse CDF of a bernoulli.
Param u is the value to evaluate the inverse CDF at.
Param theta is the distribution parameters.
Returns inverse CDF under theta evaluated at u'''
return floor(u + theta)
def bernoulliSample(n, theta):
'''A function to simulate samples from a bernoulli distribution.
Param n is the number of samples to simulate.
Param theta is the bernoulli distribution parameter.
Returns a simulated Bernoulli sample as a list'''
us = [random() for i in range(n)]
# use bernoulliFInverse in a list comprehension
return [bernoulliFInverse(u, theta) for u in us]
```
Note that we are using a list comprehension and the built-in SageMath `random()` function to make a list of pseudo-random simulations from the $Uniform(0,1)$. The length of the list is determined by the value of n. Inside the body of the function we assign this list to a variable named `us` (i.e., u plural). We then use another list comprehension to make our simulated sample. This list comprehension works by calling our function `bernoulliFInverse(...)` and passing in values for theta together with each u in us in turn.
Let's try a small number of samples:
```
theta = 0.2
n = 10
samples = bernoulliSample(n, theta)
samples
```
Now let's explore the effect of interactively varying n and $\theta$:
```
@interact
def _(theta=(0.5), n=(10,(0..1000))):
'''Interactive function to plot samples from bernoulli distribution.'''
if theta >= 0 and theta <= 1:
print "epmf and ecdf for ", n, " samples from Bernoulli (", theta, ")"
samples = bernoulliSample(n, theta)
# epmf plot
epmf = epmfPlot(samples) # this is one of our hidden functions
# ecdf plot
ecdf = ecdfPlot(samples) # this is one of our hidden functions
show(graphics_array([epmf, ecdf]),figsize=[8,3])
else:
print "0 <= theta <=1, n>0"
```
You can vary $\theta$ and $n$ on the interactive plot. You should be able to see that as $n$ increases, the empirical plots should get closer to the theoretical $f$ and $F$.
### YouTry
Check that you understand what `floor` is doing. We have put some extra print statements into our demonstration of floor so that you can see what is going on in each step. Try evaluating this cell several times so that you see what happens with different values of `u`.
```
theta = 0.5 # theta must be such that 0 <= theta <= 1
u = random()
print "u is", u
print "u + theta is", (u + theta)
print "floor(u + theta) is", floor(u + theta)
```
In the cell below we use floor to get 1's and 0's from the pseudo-random u's given by random(). It is effectively doing exactly the same thing as the functions above that we use to simulate a specified number of $Bernoulli(\theta)$ RVs, but the way that it is written may be easier to understand. If `floor` is doing what we want it to, then when `n` is sufficiently large, we'd expect our proportion of `1`s to be close to `theta` (remember Kolmogorov's axiomatic motivations for probability!). Try changing the value assigned to the variable `theta` and re-evaluating the cell to check this.
```
theta = 0.7 # theta must be such that 0 <= theta <= 1
listFloorResults = [] # an empty list to store results in
n = 100000 # how many iterations to do
for i in range(n): # a for loop to do something n times
u = random() # generate u
x = floor(u + theta) # use floor
listFloorResults.append(x) # add x to the list of results
listFloorResults.count(1)*1.0/len(listFloorResults) # proportion of 1s in the results
```
# The equi-probable $de~Moivre(k)$ Random Variable
The $de~Moivre(\theta_1,\theta_2,\ldots,\theta_k)$ RV is the natural generalisation of the $Bernoulli (\theta)$ RV to more than two outcomes. Take a die (i.e. one of a pair of dice): there are 6 possible outcomes from tossing a die if the die is a normal six-sided one (the outcome is which face is on the top). To start with we can allow the possibility that the different faces could be loaded so that they have different probabilities of being the face on the top if we throw the die. In this case, k=6 and the parameters $\theta_1$, $\theta_2$, ...$\theta_6$ specify how the die is loaded, and the number on the upper-most face if the die is tossed is a $de\,Moivre$ random variable parameterised by $\theta_1,\theta_2,\ldots,\theta_6$.
If $\theta_1=\theta_2=\ldots=\theta_6= \frac{1}{6}$ then we have a fair die.
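Before specialising to the equi-probable case, here is one possible sketch of how a loaded die could be sampled by accumulating the $\theta_i$'s and inverting at a $Uniform(0,1)$ value (the function name and the example probabilities are made up purely for illustration):
```
def loadedDieSample(thetas):
    '''A sketch of inversion sampling for a general de Moivre RV.
    thetas is a list of k probabilities (summing to 1) for faces 1..k.
    Returns one simulated face.'''
    u = random()
    cumulative = 0
    for face, theta in enumerate(thetas):
        cumulative += theta
        if u <= cumulative:
            return face + 1    # faces are numbered 1..k
    return len(thetas)         # guard against floating-point round-off

loadedDieSample([0.1, 0.1, 0.1, 0.1, 0.1, 0.5])   # an unfairly loaded six-sided die
```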
Here are some functions for the equi-probable $de\, Moivre$ PMF and CDF where we code the possible outcomes as the numbers on the faces of a k-sided die, i.e, 1,2,...k.
```
def deMoivrePMF(x, k):
'''Probability mass function for equi-probable de Moivre(k).
Param x is the value to evaluate the deMoirve pmf at.
Param k is the k parameter for an equi-probable deMoivre.
Returns the evaluation of the deMoivre(k) pmf at x.'''
if (int(x)==x) & (x > 0) & (x <= k):
return 1.0/k
else:
return 0
def deMoivreCDF(x, k):
'''DF for equi-probable de Moivre(k).
Param x is the value to evaluate the deMoirve cdf at.
Param k is the k parameter for an equi-probable deMoivre.
Returns the evaluation of the deMoivre(k) cdf at x.'''
return 1.0*x/k
@interact
def _(k=(6)):
'''Interactive function to plot the de Moivre pmf and cdf.'''
if (int(k) == k) and (k >= 1):
outcomes = range(1,k+1,1) # define the outcomes
pmf_values = [deMoivrePMF(x, k) for x in outcomes]
print "equi-probable de Moivre (", k, ") pmf and cdf"
# pmf plot
pmf = pmfPlot(outcomes, pmf_values) # this is one of our hidden functions
# cdf plot
cdf_values = [deMoivreCDF(x, k) for x in outcomes]
cdf = cdfPlot(outcomes, cdf_values) # this is one of our hidden functions
show(graphics_array([pmf, cdf]),figsize=[8,3])
else:
print "k must be an integer, k>0"
```
### YouTry
Try changing the value of k in the above interact.
## Simulating a sample from the equi-probable $de\,Moivre(k)$ random variable
We use floor ($\lfloor \, \rfloor$) again for simulating from the equi-probable $de \, Moivre(k)$ RV, but because we are defining our outcomes as 1, 2, ... k, we just add 1 to the result.
```
k = 6
u = random()
x = floor(u*k)+1
x
```
To simulate from the equi-probable $de\,Moivre(k)$, we can use the following algorithm:
#### Input:
- $u \thicksim Uniform(0,1)$ from a PRNG
- $k$, the parameter
#### Output:
- $x \thicksim \text{equi-probable } de \, Moivre(k)$
#### Steps:
- $u \leftarrow Uniform(0,1)$
- $x \leftarrow \lfloor uk \rfloor + 1$
- return $x$
We can illustrate this with SageMath:
```
def deMoivreFInverse(u, k):
'''A function to evaluate the inverse CDF of an equi-probable de Moivre.
Param u is the value to evaluate the inverse CDF at.
Param k is the distribution parameter.
Returns the inverse CDF for a de Moivre(k) distribution evaluated at u.'''
return floor(k*u) + 1
def deMoivreSample(n, k):
'''A function to simulate samples from an equi-probable de Moivre.
Param n is the number of samples to simulate.
Param k is the bernoulli distribution parameter.
Returns a simulated sample of size n from an equi-probable de Moivre(k) distribution as a list.'''
us = [random() for i in range(n)]
return [deMoivreFInverse(u, k) for u in us]
```
A small sample:
```
deMoivreSample(15,6)
```
You should understand the `deMoivreFInverse` and `deMoivreSample` functions and be able to write something like them if you were asked to.
You are not expected to be able to make the interactive plots below (but this is not too hard to do by syntactic mimicry and google searches!).
Now let's do some interactive sampling where you can vary $k$ and the sample size $n$:
```
@interact
def _(k=(6), n=(10,(0..500))):
'''Interactive function to plot samples from equi-probable de Moivre distribution.'''
if n > 0 and k >= 0 and int(k) == k:
print "epmf and ecdf for ", n, " samples from equi-probable de Moivre (", k, ")"
outcomes = range(1,k+1,1) # define the outcomes
samples = deMoivreSample(n, k) # get the samples
epmf = epmfPlot(samples) # this is one of our hidden functions
ecdf = ecdfPlot(samples) # this is one of our hidden functions
show(graphics_array([epmf, ecdf]),figsize=[10,3])
else:
print "k>0 must be an integer, n>0"
```
Try changing $n$ and/or $k$. With $k = 40$ for example, you could be simulating the number on the first ball for $n$ Lotto draws.
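For instance, a quick non-interactive version of that Lotto idea, reusing `deMoivreSample` from above (the sample size here is arbitrary):
```
deMoivreSample(10, 40)   # 10 simulated 'first balls' from a 40-ball Lotto
```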
### YouTry
A useful counterpart to the floor of a number is the ceiling, denoted $\lceil \, \rceil$. In maths, $\lceil x \rceil$, the 'ceiling of $x$' is the smallest integer that is larger than or equal to $x$. For example, $\lceil 3.8 \rceil = 4$. We can use the ceil function to do this in Sage:
```
ceil(3.8)
```
Try using `ceil` to check that you understand what it is doing. What would `ceil(0)` be?
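If you want something to evaluate while you think about it, a few quick illustrative calls might look like this:
```
[ceil(3.8), ceil(4.0), ceil(-3.8), ceil(0)]
```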
# Inversion Sampler for Continuous Random Variables
When we simulated from the discrete RVs above, the $Bernoulli(\theta)$ and the equi-probable $de\,Moivre(k)$, we transformed some $u \thicksim Uniform(0,1)$ into some value for the RV.
Now we will look at the formal idea of an inversion sampler for continuous random variables. Inversion sampling for continuous random variables is a way to simulate values for a continuous random variable $X$ using $u \thicksim Uniform(0,1)$.
The idea of the inversion sampler is to treat $u \thicksim Uniform(0,1)$ as some value taken by the CDF $F$ and find the value $x$ at which $F(x) = P(X \le x) = u$.
To find $x$ where $F(x) = u$ we need to use the inverse of $F$, $F^{[-1]}$. This is why it is called an **inversion sampler**.
Formalising this,
### Proposition
Let $F(x) := \int_{- \infty}^{x} f(y) \,d y : \mathbb{R} \rightarrow [0,1]$ be a continuous DF with density $f$, and let its inverse $F^{[-1]} $ be:
$$ F^{[-1]}(u) := \inf \{ x : F(x) = u \} : [0,1] \rightarrow \mathbb{R} $$
Then, $F^{[-1]}(U)$ has the distribution function $F$, provided $U \thicksim Uniform(0,1)$ ($U$ is a $Uniform(0,1)$ RV).
Note:
The infimum of a set $A$ of real numbers, denoted by $\inf(A)$, is the greatest lower bound of $A$.
### Proof
The "one-line proof" of the proposition is due to the following equalities:
$$P(F^{[-1]}(U) \leq x) = P(\inf \{ y : F(y) = U \} \leq x ) = P(U \leq F(x)) = F(x), \quad \text{for all } x \in \mathbb{R} . $$
# Algorithm for Inversion Sampler
#### Input:
- A PRNG for $Uniform(0,1)$ samples
- A procedure to give us $F^{[-1]}(u)$, inverse of the DF of the target RV $X$ evaluated at $u$
#### Output:
- A sample $x$ from $X$ distributed according to $F$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u)$
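Notice that nothing in these steps is specific to any particular RV: any distribution for which we can code $F^{[-1]}$ can be sampled this way. A minimal sketch of such a generic sampler (the function name and the idea of passing $F^{[-1]}$ in as a function argument are just illustrative choices) could look like this:
```
def inversionSample(n, finverse):
    '''A sketch of a generic inversion sampler.
    n is the number of samples to simulate.
    finverse is a function that evaluates the inverse CDF at a value u in [0,1].
    Returns a list of n simulated values.'''
    us = [random() for i in range(n)]
    return [finverse(u) for u in us]

# for example, reusing the Bernoulli inverse CDF from earlier with theta = 0.3
inversionSample(10, lambda u: bernoulliFInverse(u, 0.3))
```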
# The $Uniform(\theta_1, \theta_2)$ RV
We have already met the $Uniform(\theta_1, \theta_2)$ RV.
Given two real parameters $\theta_1,\theta_2 \in \mathbb{R}$, such that $\theta_1 < \theta_2$, the PDF of the $Uniform(\theta_1,\theta_2)$ RV $X$ is:
$$f(x;\theta_1,\theta_2) =
\begin{cases}
\frac{1}{\theta_2 - \theta_1} & \text{if }\theta_1 \leq x \leq \theta_2\text{,}\\
0 & \text{otherwise}
\end{cases}
$$
and its DF given by $F(x;\theta_1,\theta_2) = \int_{- \infty}^x f(y; \theta_1,\theta_2) \, dy$ is:
$$
F(x; \theta_1,\theta_2) =
\begin{cases}
0 & \text{if }x < \theta_1 \\
\frac{x-\theta_1}{\theta_2-\theta_1} & \text{if}~\theta_1 \leq x \leq \theta_2,\\
1 & \text{if } x > \theta_2
\end{cases}
$$
For example, here are the PDF, CDF and inverse CDF for the $Uniform(-1,1)$:
<img src="images/UniformMinus11ThreeCharts.png" width=800>
As usual, we can make some SageMath functions for the PDF and CDF:
```
# uniform pdf
def uniformPDF(x, theta1, theta2):
'''Uniform(theta1, theta2) pdf function f(x; theta1, theta2).
x is the value to evaluate the pdf at.
theta1, theta2 are the distribution parameters.'''
retvalue = 0 # default return value
if x >= theta1 and x <= theta2:
retvalue = 1.0/(theta2-theta1)
return retvalue
# uniform cdf
def uniformCDF(x, theta1, theta2):
'''Uniform(theta1, theta2) CDF or DF function F(x; theta1, theta2).
x is the value to evaluate the cdf at.
theta1, theta2 are the distribution parameters.'''
retvalue = 0 # default return value
if (x > theta2):
retvalue = 1
elif (x > theta1): # else-if
retvalue = (x - theta1) / (theta2-theta1)
# if (x < theta1), retvalue will be 0
return retvalue
```
Using these functions in an interactive plot, we can see the effect of changing the distribution parameters $\theta_1$ and $\theta_2$.
```
@interact
def InteractiveUniformPDFCDFPlots(theta1=0,theta2=1):
if theta2 > theta1:
print "Uniform(", + RR(theta1).n(digits=2), ",", RR(theta2).n(digits=2), ") pdf and cdf"
p1 = line([(theta1-1,0), (theta1,0)], rgbcolor='blue')
p1 += line([(theta1,1/(theta2-theta1)), (theta2,1/(theta2-theta1))], rgbcolor='blue')
p1 += line([(theta2,0), (theta2+1,0)], rgbcolor='blue')
p2 = line([(theta1-1,0), (theta1,0)], rgbcolor='red')
p2 += line([(theta1,0), (theta2,1)], rgbcolor='red')
p2 += line([(theta2,1), (theta2+1,1)], rgbcolor='red')
show(graphics_array([p1, p2]),figsize=[8,3])
else:
print "theta2 must be greater than theta1"
```
# Simulating from the $Uniform(\theta_1, \theta_2)$ RV
We can simulate from the $Uniform(\theta_1,\theta_2)$ using the inversion sampler, provided that we can get an expression for $F^{[-1]}$ that can be implemented as a procedure.
We can get this by solving for $x$ in terms of $u=F(x;\theta_1,\theta_2)$:
$$
u = \frac{x-\theta_1}{\theta_2-\theta_1} \quad \iff \quad x = (\theta_2-\theta_1)u+\theta_1 \quad \iff \quad F^{[-1]}(u;\theta_1,\theta_2) = \theta_1+(\theta_2-\theta_1)u
$$
<img src="images/Week7InverseUniformSampler.png" width=600>
## Algorithm for Inversion Sampler for the $Uniform(\theta_1, \theta_2)$ RV
#### Input:
- $u \thicksim Uniform(0,1)$
- $F^{[-1]}(u)$
- $\theta_1$, $\theta_2$
#### Output:
- A sample $x \thicksim Uniform(\theta_1, \theta_2)$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u) = (\theta_1 + u(\theta_2 - \theta_1))$
- Return $x$
We can illustrate this with SageMath by writing a function to calculate the inverse of the CDF of a uniform distribution parameterised by theta1 and theta2. Given a value between 0 and 1 for the parameter u, it returns the height of the inverse CDF at this point, i.e. the value in the range theta1 to theta2 where the CDF evaluates to u.
```
def uniformFInverse(u, theta1, theta2):
'''A function to evaluate the inverse CDF of a uniform(theta1, theta2) distribution.
u, u should be 0 <= u <= 1, is the value to evaluate the inverse CDF at.
theta1, theta2, theta2 > theta1, are the uniform distribution parameters.'''
return theta1 + (theta2 - theta1)*u
```
This function transforms a single $u$ into a single simulated value from the $Uniform(\theta_1, \theta_2)$, for example:
```
u = random()
theta1, theta2 = 3, 6
uniformFInverse(u, theta1, theta2)
```
Then we can use this function inside another function to generate a number of samples:
```
def uniformSample(n, theta1, theta2):
'''A function to simulate samples from a uniform distribution.
n > 0 is the number of samples to simulate.
theta1, theta2 (theta2 > theta1) are the uniform distribution parameters.'''
us = [random() for i in range(n)]
return [uniformFInverse(u, theta1, theta2) for u in us]
```
The basic strategy is the same as for simulating $Bernoulli$ and $de \, Moivre$ samples: we are using a list comprehension and the built-in SAGE random() function to make a list of pseudo-random simulations from the $Uniform(0,1)$. The length of the list is determined by the value of n. Inside the body of the function we assign this list to a variable named us (i.e., u plural). We then use another list comprehension to make our simulated sample. This list comprehension works by calling our function uniformFInverse(...) and passing in values for theta1 and theta2 together with each u in us in turn.
You should be able to write simple functions like uniformFInverse and uniformSample yourself.
Try this for a small sample:
```
param1 = -5
param2 = 5
nToGenerate = 30
myUniformSample = uniformSample(nToGenerate, param1, param2)
print(myUniformSample)
```
Much more fun, we can make an interactive plot which uses the uniformSample(...) function to generate and plot while you choose the parameters and number to generate (you are not expected to be able to make interactive plots like this):
```
@interact
def _(theta1=-1, theta2=1, n=(1..5000)):
'''Interactive function to plot samples from uniform distribution.'''
if theta2 > theta1:
if n == 1:
print n, "uniform(", + RR(theta1).n(digits=2), ",", RR(theta2).n(digits=2), ") sample"
else:
print n, "uniform(", + RR(theta1).n(digits=2), ",", RR(theta2).n(digits=2), ") samples"
sample = uniformSample(n, theta1, theta2)
pts = zip(range(1,n+1,1),sample) # plot so that first sample is at x=1
p=points(pts)
p+= text(str(theta1), (0, theta1), fontsize=10, color='black') # add labels manually
p+= text(str(theta2), (0, theta2), fontsize=10, color='black')
p.show(xmin=0, xmax = n+1, ymin=theta1, ymax = theta2, axes=false, gridlines=[[0,n+1],[theta1,theta2]], \
figsize=[7,3])
else:
print "Theta1 must be less than theta2"
```
We can get a better idea of the distribution of our sample using a histogram (the minimum sample size has been set to 50 here because the automatic histogram generation does not do a very good job with small samples).
```
import pylab
@interact
def _(theta1=0, theta2=1, n=(50..5000), Bins=5):
'''Interactive function to plot samples from uniform distribution as a histogram.'''
if theta2 > theta1:
sample = uniformSample(n, theta1, theta2)
pylab.clf() # clear current figure
n, bins, patches = pylab.hist(sample, Bins, density=true)
pylab.ylabel('normalised count')
pylab.title('Normalised histogram')
pylab.savefig('myHist') # to actually display the figure
pylab.show()
else:
print "Theta1 must be less than theta2"
```
# The $Exponential(\lambda)$ Random Variable
For a given $\lambda$ > 0, an $Exponential(\lambda)$ Random Variable has the following PDF $f$ and DF $F$:
$$
f(x;\lambda) =\begin{cases}\lambda e^{-\lambda x} & \text{if }x \ge 0\text{,}\\ 0 & \text{otherwise}\end{cases}
$$
$$
F(x;\lambda) =\begin{cases}1 - e^{-\lambda x} & \text{if }x \ge 0\text{,}\\ 0 & \text{otherwise}\end{cases}
$$
An exponential distribution is useful because it can often be used to model inter-arrival times or making inter-event measurements (if you are familiar with the $Poisson$ distribution, a discrete distribution, you may have also met the $Exponential$ distribution as the time between $Poisson$ events). Here are some examples of random variables which are sometimes modelled with an exponential distribution:
- time between the arrival of buses at a bus-stop
- distance between roadkills on a stretch of highway
In SageMath, we can use `exp(x)` to calculate $e^x$, for example:
```
x = 3.0
exp(x)
```
We can code some functions for the PDF and DF of an $Exponential$ RV parameterised by $\lambda$.
**Note** that we cannot or should not use the name `lambda` for the parameter because in SageMath (and Python), the term `lambda` has a special meaning. Do you recall lambda expressions?
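For example, `lambda` is reserved for making small anonymous functions, which is why we use the name `lam` for the parameter instead (the function below is just a throwaway illustration):
```
squareIt = lambda x: x^2   # `lambda` makes a small anonymous function
squareIt(4)
```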
```
def exponentialPDF(x, lam):
'''Exponential pdf function.
x is the value we want to evaluate the pdf at.
lam is the exponential distribution parameter.'''
return lam*exp(-lam*x)
def exponentialCDF(x, lam):
'''Exponential cdf or df function.
x is the value we want to evaluate the cdf at.
lam is the exponential distribution parameter.'''
return 1 - exp(-lam*x)
```
You should be able to write simple functions like `exponentialPDF` and `exponentialCDF` yourself, but you are not expected to be able to make the interactive plots.
You can see the shapes of the PDF and CDF for different values of $\lambda$ using the interactive plot below.
```
@interact
def _(lam=('lambda',0.5),Xmax=(5..100)):
'''Interactive function to plot the exponential pdf and cdf.'''
if lam > 0:
print "Exponential(", RR(lam).n(digits=2), ") pdf and cdf"
from pylab import arange
xvalues = list(arange(0.1, Xmax, 0.1))
p1 = line(zip(xvalues, [exponentialPDF(y, lam) for y in xvalues]), rgbcolor='blue')
p2 = line(zip(xvalues, [exponentialCDF(y, lam) for y in xvalues]), rgbcolor='red')
show(graphics_array([p1, p2]),figsize=[8,3])
else:
print "Lambda must be greater than 0"
```
We are going to write some functions to help us to do inversion sampling from the $Exponential(\lambda)$ RV.
As before, we need an expression for $F^{[-1]}$ that can be implemented as a procedure.
We can get this by solving for $x$ in terms of $u=F(x;\lambda)$
### YouTry later
Show that
$$
F^{[-1]}(u;\lambda) =\frac{-1}{\lambda} \ln(1-u)
$$
$\ln = \log_e$ is the natural logarithm.
(end of You try)
---
---
# Simulating from the $Exponential(\lambda)$ RV
## Algorithm for Inversion Sampler for the $Exponential(\lambda)$ RV
#### Input:
- $u \thicksim Uniform(0,1)$
- $F^{[-1]}(u)$
- $\lambda$
#### Output:
- sample $x \thicksim Exponential(\lambda)$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u) = \frac{-1}{\lambda}\ln(1-u)$
- Return $x$
The function `exponentialFInverse(u, lam)` codes the inverse of the CDF of an exponential distribution parameterised by `lam`. Given a value between 0 and 1 for the parameter `u`, it returns the height of the inverse CDF of the exponential distribution at this point, i.e. the value where the CDF evaluates to `u`. The function `exponentialSample(n, lam)` uses `exponentialFInverse(...)` to simulate `n` samples from an exponential distribution parameterised by `lam`.
```
def exponentialFInverse(u, lam):
'''A function to evaluate the inverse CDF of a exponential distribution.
u is the value to evaluate the inverse CDF at.
lam is the exponential distribution parameter.'''
# log without a base is the natural logarithm
return (-1.0/lam)*log(1 - u)
def exponentialSample(n, lam):
'''A function to simulate samples from an exponential distribution.
n is the number of samples to simulate.
lam is the exponential distribution parameter.'''
us = [random() for i in range(n)]
return [exponentialFInverse(u, lam) for u in us]
```
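As a rough sanity check that `exponentialFInverse` really does invert `exponentialCDF`, we can push a few $u$ values through both functions (the particular values of `lam` and `u` below are arbitrary):
```
lam = 0.5
checkUs = [0.1, 0.5, 0.9]
[(u, exponentialCDF(exponentialFInverse(u, lam), lam)) for u in checkUs]  # each pair should have matching entries
```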
We can have a look at a small sample:
```
lam = 0.5
nToGenerate = 30
sample = exponentialSample(nToGenerate, lam)
print(sorted(sample)) # recall that sorted makes a new sorted list
```
You should be able to write simple functions like `exponentialFInverse` and `exponentialSample` yourself by now.
The best way to visualise the results is to use a histogram. With this interactive plot you can explore the effect of varying lambda and n:
```
import pylab
@interact
def _(lam=('lambda',0.5), n=(50,(10..10000)), Bins=(5,(1,1000))):
'''Interactive function to plot samples from exponential distribution.'''
if lam > 0:
pylab.clf() # clear current figure
n, bins, patches = pylab.hist(exponentialSample(n, lam), Bins, density=true)
pylab.ylabel('normalised count')
pylab.title('Normalised histogram')
pylab.savefig('myHist') # to actually display the figure
pylab.show()
else:
print "Lambda must be greater than 0"
```
# The Standard $Cauchy$ Random Variable
A standard $Cauchy$ Random Variable has the following PDF $f$ and DF $F$:
$$
f(x) =\frac{1}{\pi(1+x^2)}\text{,}\,\, -\infty < x < \infty
$$
$$
F(x) = \frac{1}{\pi}\tan^{-1}(x) + 0.5
$$
The $Cauchy$ distribution is an interesting distribution because the expectation does not exist:
$$
\int \left|x\right|\,dF(x) = \frac{2}{\pi} \int_0^{\infty} \frac{x}{1+x^2}\,dx = \left(x \tan^{-1}(x) \right]_0^{\infty} - \int_0^{\infty} \tan^{-1}(x)\, dx = \infty \ .
$$
In SageMath, we can use the `arctan` function for $tan^{-1}$, and `pi` for $\pi$ and code some functions for the PDF and DF of the standard Cauchy as follows.
```
def cauchyPDF(x):
'''Standard Cauchy pdf function.
x is the value to evaluate the pdf at.'''
return 1.0/(pi.n()*(1+x^2))
def cauchyCDF(x):
'''Standard Cauchy cdf function.
x is the value to evaluate the cdf at.'''
return (1.0/pi.n())*arctan(x) + 0.5
```
You can see the shapes of the PDF and CDF using the plot below. Note that the PDF $f$ above is defined for $-\infty < x < \infty$. This means we should set some arbitrary limits on the minimum and maximum values to use for the x-axis on the plots. You can change these limits interactively.
```
@interact
def _(lower=(-4), upper=(4)):
'''Interactive function to plot the Cauchy pdf and cdf.'''
if lower < upper:
print "Standard Cauchy pdf and cdf"
p1 = plot(cauchyPDF, lower,upper, rgbcolor='blue')
p2 = plot(cauchyCDF, lower,upper, rgbcolor='red')
show(graphics_array([p1, p2]),figsize=[8,3])
else:
print "Upper must be greater than lower"
```
#### Constructing a standard $Cauchy$ RV
- Place a double light sabre (i.e., one that can shoot its laser beam from both ends, like that of Darth Maul in Star Wars) on a cartesian axis so that it is centred on $(1, 0)$.
- Randomly spin it (so that its spin angle to the x-axis is $\theta \thicksim Uniform (0, 2\pi)$).
- Let it come to rest.
- The y-coordinate of the point of intersection with the y-axis is a standard Cauchy RV.
You can see that we are equally likely to get positive and negative values (the density function of the standard $Cauchy$ RV is symmetrical about 0) and whenever the spin angle is close to $\frac{\pi}{2}$ ($90^{\circ}$) or $\frac{3\pi}{2}$ ($270^{\circ}$), the intersections will be a long way out up or down the y-axis, i.e. very negative or very positive values. If the light sabre is exactly parallel to the y-axis there will be no intersection: a $Cauchy$ RV $X$ can take values $-\infty < x < \infty$.
<img src="images/Week7CauchyLightSabre.png" width=300>
## Simulating from the standard $Cauchy$
We can perform inversion sampling on the $Cauchy$ RV by transforming a $Uniform(0,1)$ random variable into a $Cauchy$ random variable using the inverse CDF.
We can get this by replacing $F(x)$ by $u$ in the expression for $F(x)$:
$$
\frac{1}{\pi}tan^{-1}(x) + 0.5 = u
$$
and solving for $x$:
$$
\begin{array}{lcl} \frac{1}{\pi}tan^{-1}(x) + 0.5 = u & \iff & \frac{1}{\pi} tan^{-1}(x) = u - \frac{1}{2}\\ & \iff & tan^{-1}(x) = (u - \frac{1}{2})\pi\\ & \iff & tan(tan^{-1}(x)) = tan((u - \frac{1}{2})\pi)\\ & \iff & x = tan((u - \frac{1}{2})\pi) \end{array}
$$
## Inversion Sampler for the standard $Cauchy$ RV
#### Input:
- $u \thicksim Uniform(0,1)$
- $F^{[-1]}(u)$
#### Output:
- A sample $x \thicksim \text{standard } Cauchy$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u) = tan((u - \frac{1}{2})\pi)$
- Return $x$
The function `cauchyFInverse(u)` codes the inverse of the CDF of the standard Cauchy distribution. Given a value between 0 and 1 for the parameter u, it returns the height of the inverse CDF of the standard $Cauchy$ at this point, i.e. the value where the CDF evaluates to u. The function `cauchySample(n)` uses `cauchyFInverse(...)` to simulate `n` samples from a standard Cauchy distribution.
```
def cauchyFInverse(u):
'''A function to evaluate the inverse CDF of a standard Cauchy distribution.
u is the value to evaluate the inverse CDF at.'''
return RR(tan(pi*(u-0.5)))
def cauchySample(n):
'''A function to simulate samples from a standard Cauchy distribution.
n is the number of samples to simulate.'''
us = [random() for i in range(n)]
return [cauchyFInverse(u) for u in us]
```
And we can visualise these simulated samples with an interactive plot:
```
@interact
def _(n=(50,(0..5000))):
'''Interactive function to plot samples from standard Cauchy distribution.'''
if n == 1:
print n, "Standard Cauchy sample"
else:
print n, "Standard Cauchy samples"
sample = cauchySample(n)
pts = zip(range(1,n+1,1),sample)
p=points(pts)
p+= text(str(floor(min(sample))), (0, floor(min(sample))), \
fontsize=10, color='black') # add labels manually
p+= text(str(ceil(max(sample))), (0, ceil(max(sample))), \
fontsize=10, color='black')
p.show(xmin=0, xmax = n+1, ymin=floor(min(sample)), \
ymax = ceil(max(sample)), axes=false, \
gridlines=[[0,n+1],[floor(min(sample)),ceil(max(sample))]],\
figsize=[7,3])
```
Notice how we can get some very extreme values. This is because of the 'thick tails' of the density function of the $Cauchy$ RV. Think about this in relation to the double light sabre visualisation. We can see the effect of the extreme values with a histogram visualisation as well. The interactive plot below will only use values between lower and upper in the histogram. Try increasing the sample size to something like 1000 and then gradually widening the limits:
```
import pylab
@interact
def _(n=(50,(0..5000)), lower=(-4), upper=(4), Bins=(5,(1,100))):
'''Interactive function to plot samples from
standard Cauchy distribution.'''
if lower < upper:
if n == 1:
print n, "Standard Cauchy sample"
else:
print n, "Standard Cauchy samples"
sample = cauchySample(n) # the whole sample
sampleToShow=[c for c in sample if (c >= lower and c <= upper)]
pylab.clf() # clear current figure
n, bins, patches = pylab.hist(sampleToShow, Bins, density=true)
pylab.ylabel('normalised count')
pylab.title('Normalised histogram, values between ' \
+ str(floor(lower)) + ' and ' + str(ceil(upper)))
pylab.savefig('myHist') # to actually display the figure
pylab.show()
else:
print "lower must be less than upper"
```
# Running means
When we introduced the $Cauchy$ distribution, we noted that the expectation of the $Cauchy$ RV does not exist. This means that attempts to estimate the mean of a $Cauchy$ RV by looking at a sample mean will not be successful: as you take larger and larger samples, the effect of the extreme values will still cause the sample mean to swing around wildly (we will cover estimation properly soon). You are going to investigate the sample mean of simulated $Cauchy$ samples of steadily increasing size and show how unstable this is. A convenient way of doing this is to look at a running mean. We will start by working through the process of calculating some running means for the $Uniform(0,10)$, which do stabilise. You will then do the same thing for the $Cauchy$ and be able to see the instability.
We will be using the pylab.cumsum function, so we make sure that we have it available. We then generate a sample from the $Uniform(0,10)$
```
from pylab import cumsum
nToGenerate = 10 # sample size to generate
theta1, theta2 = 0, 10 # uniform parameters
uSample = uniformSample(nToGenerate, theta1, theta2)
print(uSample)
```
We are going to treat this sample as though it is actually 10 samples of increasing size:
- sample 1 is the first element in uSample
- sample 2 contains the first 2 elements in uSample
- sample 3 contains the first 3 elements in uSample
- ...
- sample 10 contains the first 10 elements in uSample
We know that a sample mean is the sum of the elements in the sample divided by the number of elements in the sample $n$:
$$
\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i
$$
We can get the sum of the elements in each of our 10 samples with the cumulative sum of `uSample`.
We use `cumsum` to get the cumulative sum. This will be a `pylab.array` (or `numpy.array`) type, so we use the `list` function to turn it back into a list:
```
csUSample = list(cumsum(uSample))
print(csUSample)
```
What we have now is effectively a list
$$\left[\displaystyle\sum_{i=1}^1x_i, \sum_{i=1}^2x_i, \sum_{i=1}^3x_i, \ldots, \sum_{i=1}^{10}x_i\right]$$
So all we have to do is divide each element in `csUSample` by the number of elements that were summed to make it, and we have a list of running means
$$\left[\frac{1}{1}\displaystyle\sum_{i=1}^1x_i, \frac{1}{2}\sum_{i=1}^2x_i, \frac{1}{3}\sum_{i=1}^3x_i, \ldots, \frac{1}{10}\sum_{i=1}^{10}x_i\right]$$
We can get the running sample sizes using the `range` function:
```
samplesizes = range(1, len(uSample)+1,1)
samplesizes
```
And we can do the division with list comprehension:
```
uniformRunningMeans = [csUSample[i]/samplesizes[i] for i in range(nToGenerate)]
print(uniformRunningMeans)
```
We could pull all of this together into a function which produced a list of running means for sample sizes 1 to $n$.
```
def uniformRunningMeans(n, theta1, theta2):
'''Function to give a list of n running means from uniform(theta1, theta2).
n is the number of running means to generate.
theta1, theta2 are the uniform distribution parameters.
return a list of n running means.'''
sample = uniformSample(n, theta1, theta2)
from pylab import cumsum # we can import in the middle of code!
csSample = list(cumsum(sample))
samplesizes = range(1, n+1,1)
return [csSample[i]/samplesizes[i] for i in range(n)]
```
Have a look at the running means of 10 incrementally-sized samples:
```
nToGenerate = 10
theta1, theta2 = 0, 10
uRunningMeans = uniformRunningMeans(nToGenerate, theta1, theta2)
pts = zip(range(1, len(uRunningMeans)+1,1),uRunningMeans)
p = points(pts)
show(p, figsize=[5,3])
```
Recall that the expectation of $X \thicksim Uniform(\theta_1, \theta_2)$ is $E_{(\theta_1, \theta_2)}(X) = \frac{\theta_1 +\theta_2}{2}$.
In our simulations we are using $\theta_1 = 0$, $\theta_2 = 10$, so if $X \thicksim Uniform(0,10)$, $E(X) = 5$
To show that the running means of different simulations from a $Uniform$ distribution settle down to be close to the expectation, we can plot say 5 different groups of running means for sample sizes $1, \ldots, 1000$. We will use a line plot rather than plotting individual points.
```
nToGenerate = 1000
theta1, theta2 = 0, 10
iterations = 5
xvalues = range(1, nToGenerate+1,1)
for i in range(iterations):
redshade = 0.5*(iterations - 1 - i)/iterations # to get different colours for the lines
uRunningMeans = uniformRunningMeans(nToGenerate, theta1, theta2)
pts = zip(xvalues,uRunningMeans)
if (i == 0):
p = line(pts, rgbcolor = (redshade,0,1))
else:
p += line(pts, rgbcolor = (redshade,0,1))
show(p, figsize=[5,3])
```
### YouTry!
Your task is to now do the same thing for some standard Cauchy running means.
To start with, do not put everything into a function, just put statements into the cell(s) below to:
- Make a variable for the number of running means to generate; assign it a small value like 10 at this stage
- Use the cauchySample function to generate the sample from the standard $Cauchy$; have a look at your sample
- Make a named list of cumulative sums of your $Cauchy$ sample using list and cumsum, as we did above; have a look at your cumulative sums
- Make a named list of sample sizes, as we did above
- Use a list comprehension to turn the cumulative sums and sample sizes into a list of running means, as we did above
- Have a look at your running means; do they make sense to you given the individual sample values?
Add more cells as you need them.
When you are happy that you are doing the right things, **write a function**, parameterised by the number of running means to do, that returns a list of running means. Try to make your own function rather than copying and changing the one we used for the $Uniform$: you will learn more by trying to do it yourself. Please call your function `cauchyRunningMeans`, so that (if you have done everything else right), you'll be able to use some code we will supply you with to plot the results.
Try checking your function by using it to create a small list of running means. Check that the function does not report an error and gives you the kind of list you expect.
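If you would like something to compare your own attempt against once you have tried it, one possible sketch (closely mirroring `uniformRunningMeans` above) is:
```
def cauchyRunningMeans(n):
    '''One possible function to give a list of n running means from a standard Cauchy.
    n is the number of running means to generate.'''
    sample = cauchySample(n)
    from pylab import cumsum
    csSample = list(cumsum(sample))
    samplesizes = range(1, n+1, 1)
    return [csSample[i]/samplesizes[i] for i in range(n)]

cauchyRunningMeans(10)
```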
When you think that your function is working correctly, try evaluating the cell below: this will put the plot of 5 groups of $Uniform(0,10)$ running means beside a plot of 5 groups of standard $Cauchy$ running means produced by your function.
```
nToGenerate = 10000
theta1, theta2 = 0, 10
iterations = 5
xvalues = range(1, nToGenerate+1,1)
for i in range(iterations):
shade = 0.5*(iterations - 1 - i)/iterations # to get different colours for the lines
uRunningMeans = uniformRunningMeans(nToGenerate, theta1, theta2)
problemStr="" # an empty string
# use try to catch problems with cauchyRunningMeans functions
try:
cRunningMeans = cauchyRunningMeans(nToGenerate)
##cRunningMeans = hiddenCauchyRunningMeans(nToGenerate)
cPts = zip(xvalues, cRunningMeans)
except NameError, e:
# cauchyRunningMeans is not defined
cRunningMeans = [1 for c in range(nToGenerate)] # default value
problemStr = "No "
except Exception, e:
# some other problem with cauchyRunningMeans
cRunningMeans = [1 for c in range(nToGenerate)]
problemStr = "Problem with "
uPts = zip(xvalues, uRunningMeans)
cPts = zip(xvalues, cRunningMeans)
if (i < 1):
p1 = line(uPts, rgbcolor = (shade, 0, 1))
p2 = line(cPts, rgbcolor = (1-shade, 0, shade))
cauchyTitleMax = max(cRunningMeans) # for placement of cauchy title
else:
p1 += line(uPts, rgbcolor = (shade, 0, 1))
p2 += line(cPts, rgbcolor = (1-shade, 0, shade))
if max(cRunningMeans) > cauchyTitleMax:
cauchyTitleMax = max(cRunningMeans)
titleText1 = "Uniform(" + str(theta1) + "," + str(theta2) + ") running means" # make title text
t1 = text(titleText1, (nToGenerate/2,theta2), rgbcolor='blue',fontsize=10)
titleText2 = problemStr + "standard Cauchy running means" # make title text
t2 = text(titleText2, (nToGenerate/2,ceil(cauchyTitleMax)+1), rgbcolor='red',fontsize=10)
show(graphics_array((p1+t1,p2+t2)),figsize=[10,5])
```
# Replicable samples
Remember that we know how to set the seed of the PRNG used by `random()` with `set_random_seed`? If we wanted our sampling functions to give repeatable samples, we could also pass the functions the seed to use. Try making a new version of `uniformSample` which has a parameter for a value to use as the random number generator seed. Call your new version `uniformSampleSeeded` to distinguish it from the original one.
Try out your new `uniformSampleSeeded` function: if you generate two samples using the same seed they should be exactly the same. You could try using a large sample and checking on sample statistics such as the mean, min, max, variance etc, rather than comparing small samples by eye.
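If you get stuck, one possible sketch of such a seeded sampler (yours may well look different) is:
```
def uniformSampleSeeded(n, theta1, theta2, seed):
    '''A sketch of a replicable uniform sampler.
    n is the number of samples to simulate.
    theta1, theta2 (theta2 > theta1) are the uniform distribution parameters.
    seed is the value used to seed the PRNG before sampling.'''
    set_random_seed(seed)
    us = [random() for i in range(n)]
    return [uniformFInverse(u, theta1, theta2) for u in us]

uniformSampleSeeded(5, 2, 4, 1234) == uniformSampleSeeded(5, 2, 4, 1234)   # should be True
```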
Recall that you can also give parameters default values in SageMath. Using a default value means that if no value is passed to the function for that parameter, the default value is used. Here is an example with a very simple function:
```
# we already saw default parameters in use - here's a careful walkthrough of how it works
def simpleDefaultExample(x, y=0):
'''A simple function to demonstrate default parameter values.
x is the first parameter, with no default value.
y is the second parameter, defaulting to 0.'''
return x + y
```
Note that parameters with default values need to come after parameters without default values when we define the function.
Now you can try the function - evaluate the following cells to see what you get:
```
simpleDefaultExample (1,3) # specifying two arguments for the function
simpleDefaultExample (1) # specifying one argument for the function
# another way to specify one argument for the function
simpleDefaultExample (x=6)
# uncomment next line and evaluate - but this will give an error because x has no default value
#simpleDefaultExample()
# uncomment next line and evaluate - but this will also give an error because x has no default value
# simpleDefaultExample (y=9)
```
Try making yet another version of the uniform sampler which takes a value to be used as a random number generator seed, but defaults to `None` if no value is supplied for that parameter. `None` is a special Python type.
```
x = None
type(x)
```
Using `set_random_seed(None)` will mean that the random seed is actually reset to a new ('random') value. You can see this by testing what happens when you do this twice in succession and then check what seed is being used with `initial_seed`:
```
set_random_seed(None)
initial_seed()
set_random_seed(None)
initial_seed()
```
Do another version of the `uniformSampleSeeded` function with a default value of `None` for the seed.
Check your function again by testing it both when you supply a value for the seed and when you don't.
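For reference, a sketch of the default-seed version could look like this (same assumptions about the parameters as before):
```
def uniformSampleSeeded(n, theta1, theta2, seed=None):
    '''Simulate n Uniform(theta1, theta2) samples.
    If seed is None, set_random_seed(None) re-seeds the PRNG 'randomly'.'''
    set_random_seed(seed)
    return [theta1 + (theta2 - theta1)*random() for i in range(n)]
uniformSampleSeeded(5, 0, 1)             # unseeded: different each time you evaluate it
uniformSampleSeeded(5, 0, 1, seed=1234)  # seeded: replicable
```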
---
## Assignment 2, PROBLEM 4
Maximum Points = 1
First read and understand the following simple simulation (originally written by Jenny Harlow). Then you will modify the simulation to find the solution to this problem.
### A Simple Simulation
We could use the samplers we have made to do a very simple simulation. Suppose the inter-arrival times, in minutes, of Orbiter buses at an Orbiter stop in Christchurch follow an $Exponential(\lambda = 0.1)$ distribution. Also suppose that this is quite a popular bus stop, and the arrival of people is very predictable: one new person will arrive in each whole minute. This means that the longer the next bus takes to come, the more people arrive to join the queue. Also suppose that the number of free seats available on any bus follows a $de\, Moivre(k=40)$ distribution, i.e., it is equally likely that there are 1, or 2, or 3, ..., or 40 spare seats. If there are more spare seats than people in the queue, everyone can get onto the bus and nobody is left waiting, but if there are not enough spare seats some people will be left waiting for the next bus. As they wait, more people arrive to join the queue....
This is not very realistic - we would want a better model for how many people arrive at the stop at least, and for the number of spare seats there will be on the bus. However, we are just using this as a simple example that you can do using the random variables you already know how to simulate samples from.
Try to code this example yourself, using our suggested steps. We have put our version of the code into a cell below, but you will get more out of this example by trying to do it yourself first.
#### Suggested steps:
- Get a list of 100 $Exponential(\lambda = 0.1)$ samples using the `exponentialSample` function. Assign the list to a variable named something like `busTimes`. These are your 100 simulated bus inter-arrival times.
- Choose a value for the number of people who will be waiting at the bus stop when you start the simulation. Call this something like `waiting`.
- Make a list called something like `leftWaiting`, which to begin with contains just the value assigned to `waiting`.
- Make an empty list called something like `boardBus`.
- Start a for loop which takes each element in `busTimes` in turn, i.e. each bus inter-arrival time, and within the for loop:
- Calculate the number of people arriving at the stop as the floor of the time taken for that bus to arrive (i.e., one person for each whole minute until the bus arrives)
- Add this to the number of people waiting (e.g., if the number of arrivals is assigned to a variable arrivals, then waiting = waiting + arrivals will increment the value assigned to the waiting variable by the value of arrivals).
- Simulate a value for the number of seats available on the bus as one simulation from a $de \, Moivre(k=40)$ RV (it may be easier to use `deMoivreFInverse` rather than `deMoivreSample` because you only need one value - remember that you will have to pass a simulated $u \thicksim Uniform(0,1)$ to `deMoivreFInverse` as well as the value of the parameter $k$).
- The number of people who can get on the bus is the minimum of the number of people waiting in the queue and the number of seats on the bus. Calculate this value and assign it to a variable called something like `getOnBus`.
- Append `getOnBus` to the list `boardBus`.
- Subtract `getOnBus` from the number of people waiting (e.g., `waiting = waiting - getOnBus` will decrement `waiting` by the number of people who get on the bus).
- Append the new value of `waiting` to the list `leftWaiting`.
- That is the end of the for loop: you now have two lists, one for the number of people waiting at the stop and one for the number of people who can board each bus as it arrives.
## YouTry
Here is our code to do the bus stop simulation.
Yours may be different - maybe it will be better!
*You are expected to find the needed functions from the latest notebook this assignment came from and be able to answer this question. Unless you can do it in your head.*
```
def busStopSimulation(buses, lam, seats):
'''A Simple Simulation - see description above!'''
BusTimes = exponentialSample(buses,lam)
waiting = 0 # how many people are waiting at the start of the simulation
BoardBus = [] # empty list
LeftWaiting = [waiting] # list with just waiting in it
for time in BusTimes: # for each bus inter-arrival time
arrivals = floor(time) # people who arrive at the stop before the bus gets there
waiting = waiting + arrivals # add them to the queue
busSeats = deMoivreFInverse(random(), seats) # how many seats available on the bus
getOnBus = min(waiting, busSeats) # how many people can get on the bus
BoardBus.append(getOnBus) # add to the list
waiting = waiting - getOnBus # take the people who board the bus out of the queue
LeftWaiting.append(waiting) # add to the list
return [LeftWaiting, BoardBus, BusTimes]
# let's simulate the people left waiting at the bus stop
set_random_seed(None) # replace None by an integer to fix the seed and the output of the simulation
buses = 100
lam = 0.1
seats = 40
leftWaiting, boardBus, busTimes = busStopSimulation(buses, lam, seats)
print(leftWaiting) # look at the leftWaiting list
print(boardBus) # look at the boardBus list
print(busTimes)
```
We could do an interactive visualisation of this by evaluating the next cell. The heights of the lines on the plot show the number of people able to board each bus and the number of people left waiting at the bus stop.
```
@interact
def _(seed=[0,123,456], lam=[0.1,0.01], seats=[40,10,1000]):
set_random_seed(seed)
buses=100
leftWaiting, boardBus, busTimes = busStopSimulation(buses, lam,seats)
p1 = line([(0.5,0),(0.5,leftWaiting[0])])
from pylab import cumsum
csBusTimes=list(cumsum(busTimes))
for i in range(1, len(leftWaiting), 1):
p1+= line([(csBusTimes[i-1],0),(csBusTimes[i-1],boardBus[i-1])], rgbcolor='green')
p1+= line([(csBusTimes[i-1]+.01,0),(csBusTimes[i-1]+.01,leftWaiting[i])], rgbcolor='red')
t1 = text("Boarding the bus", (csBusTimes[len(busTimes)-1]/3,max(max(boardBus),max(leftWaiting))+1), \
rgbcolor='green',fontsize=10)
t2 = text("Waiting", (csBusTimes[len(busTimes)-1]*(2/3),max(max(boardBus),max(leftWaiting))+1), \
rgbcolor='red',fontsize=10)
xaxislabel = text("Time", (csBusTimes[len(busTimes)-1],-10),fontsize=10,color='black')
yaxislabel = text("People", (-50,max(max(boardBus),max(leftWaiting))+1),fontsize=10,color='black')
show(p1+t1+t2+xaxislabel+yaxislabel,figsize=[8,5])
```
Very briefly explain the effect of varying one of the three parameters:
- `seed`
- `lam`
- `seats`
while holding the other two parameters fixed on:
- the number of people waiting at the bus stop and
- the number of people boarding the bus
by using the dropdown menus in the `@interact` above. Think about whether the simulation makes sense and explain why. You can write down your answers by double-clicking this cell and writing between `---` and `---`.
---
---
#### Solution for CauchyRunningMeans
```
def hiddenCauchyRunningMeans(n):
'''Function to give a list of n running means from standardCauchy.
n is the number of running means to generate.'''
sample = cauchySample(n)
from pylab import cumsum
csSample = list(cumsum(sample))
samplesizes = range(1, n+1,1)
return [csSample[i]/samplesizes[i] for i in range(n)]
```
```
%load_ext autoreload
%autoreload 2
```
# Generate images
```
from pathlib import Path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
SMALL_SIZE = 15
MEDIUM_SIZE = 20
BIGGER_SIZE = 25
plt.rc("font", size=SMALL_SIZE)
plt.rc("axes", titlesize=SMALL_SIZE)
plt.rc("axes", labelsize=MEDIUM_SIZE)
plt.rc("xtick", labelsize=SMALL_SIZE)
plt.rc("ytick", labelsize=SMALL_SIZE)
plt.rc("legend", fontsize=SMALL_SIZE)
plt.rc("figure", titlesize=BIGGER_SIZE)
DATA_PATH = Path("../thesis/img/")
```
# DTW
```
from fastdtw import fastdtw
ts_0 = np.sin(np.logspace(0, np.log10(2 * np.pi), 30))
ts_1 = np.sin(np.linspace(1, 2 * np.pi, 30))
distance, warping_path = fastdtw(ts_0, ts_1)
fig, axs = plt.subplots(2, 1, figsize=(8, 8), sharex=True)
for name, ax in zip(["Euclidean distance", "Dynamic Time Warping"], axs):
ax.plot(ts_0 + 1, "o-", linewidth=3)
ax.plot(ts_1, "o-", linewidth=3)
ax.set_yticks([])
ax.set_xticks([])
ax.set_title(name)
for x, y in zip(zip(np.arange(30), np.arange(30)), zip(ts_0 + 1, ts_1)):
axs[0].plot(x, y, "r--", linewidth=2, alpha=0.5)
for x_0, x_1 in warping_path:
axs[1].plot([x_0, x_1], [ts_0[x_0] + 1, ts_1[x_1]], "r--", linewidth=2, alpha=0.5)
plt.tight_layout()
plt.savefig(DATA_PATH / "dtw_vs_euclid.svg")
plt.show()
matrix = (ts_0.reshape(-1, 1) - ts_1) ** 2
x = [x for x, _ in warping_path]
y = [y for _, y in warping_path]
# plt.close('all')
fig = plt.figure(figsize=(8, 8))
gs = fig.add_gridspec(
2,
2,
width_ratios=(1, 8),
height_ratios=(8, 1),
left=0.1,
right=0.9,
bottom=0.1,
top=0.9,
wspace=0.01,
hspace=0.01,
)
fig.tight_layout()
ax_ts_x = fig.add_subplot(gs[0, 0])
ax_ts_y = fig.add_subplot(gs[1, 1])
ax = fig.add_subplot(gs[0, 1], sharex=ax_ts_y, sharey=ax_ts_x)
ax.set_xticks([])
ax.set_yticks([])
ax.tick_params(axis="x", labelbottom=False)
ax.tick_params(axis="y", labelleft=False)
fig.suptitle("DTW calculated optimal warping path")
im = ax.imshow(np.log1p(matrix), origin="lower", cmap="bone_r")
ax.plot(y, x, "r", linewidth=4, label="Optimal warping path")
ax.plot(
[0, 29], [0, 29], "--", linewidth=3, color="black", label="Default warping path"
)
ax.legend()
ax_ts_x.plot(ts_0 * -1, np.arange(30), linewidth=4, color="#1f77b4")
# ax_ts_x.set_yticks(np.arange(30))
ax_ts_x.set_ylim(-0.5, 29.5)
ax_ts_x.set_xlim(-1.5, 1.5)
ax_ts_x.set_xticks([])
ax_ts_y.plot(ts_1, linewidth=4, color="#ff7f0e")
# ax_ts_y.set_xticks(np.arange(30))
ax_ts_y.set_xlim(-0.5, 29.5)
ax_ts_y.set_ylim(-1.5, 1.5)
ax_ts_y.set_yticks([])
# cbar = plt.colorbar(im, ax=ax, use_gridspec=False, panchor=False)
plt.savefig(DATA_PATH / "dtw_warping_path.svg")
plt.show()
```
# TSNE
```
import mpl_toolkits.mplot3d.axes3d as p3
from sklearn.datasets import make_s_curve, make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
n_samples = 1500
X, y = make_swiss_roll(n_samples, noise=0.1)
X, y = make_s_curve(n_samples, random_state=42)
fig = plt.figure(figsize=(10, 10))
ax = fig.gca(projection="3d")
ax.view_init(20, -60)
# ax.set_title("S curve dataset", fontsize=18)
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=y)
ax.set_yticklabels([])
ax.set_xticklabels([])
ax.set_zticklabels([])
fig.tight_layout()
plt.savefig(DATA_PATH / "s_dataset.svg", bbox_inches=0)
plt.show()
X_pca = PCA(n_components=2, random_state=42).fit_transform(X)
X_tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=42).fit_transform(
X
)
fig = plt.figure(figsize=(10, 10))
# plt.title("PCA transformation", fontsize=18)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y)
plt.xticks([])
plt.yticks([])
plt.savefig(DATA_PATH / "s_dataset_pca.svg")
plt.show()
fig = plt.figure(figsize=(10, 10))
# plt.title("t-SNE transformation", fontsize=18)
plt.scatter(X_tsne[:, 0], X_tsne[:, 1], c=y)
plt.xticks([])
plt.yticks([])
plt.savefig(DATA_PATH / "s_dataset_tsne.svg")
plt.show()
```
# Datashader
```
import datashader as ds
import datashader.transfer_functions as tf
import matplotlib.patches as mpatches
from lttb import downsample
np.random.seed(42)
signal = np.random.normal(0, 10, size=10 ** 6).cumsum() + np.sin(
np.linspace(0, 100 * np.pi, 10 ** 6)
) * np.random.normal(0, 1, size=10 ** 6)
s_frame = pd.DataFrame(signal, columns=["signal"]).reset_index()
x = 1500
y = 500
cvs = ds.Canvas(plot_height=y, plot_width=x)
line = cvs.line(s_frame, "index", "signal")
img = tf.shade(line).to_pil()
trans = downsample(s_frame.values, 100)
trans[:, 0] /= trans[:, 0].max()
trans[:, 0] *= x
trans[:, 1] *= -1
trans[:, 1] -= trans[:, 1].min()
trans[:, 1] /= trans[:, 1].max()
trans[:, 1] *= y
fig, ax = plt.subplots(figsize=(x / 60, y / 60))
plt.imshow(img, origin="upper")
plt.plot(*trans.T, "r", alpha=0.6, linewidth=2)
plt.legend(
handles=[
mpatches.Patch(color="blue", label="Datashader (10^6 points)"),
mpatches.Patch(color="red", label="LTTB (10^3 points)"),
],
prop={"size": 25},
)
ax.set_xticklabels([])
ax.set_yticklabels([])
plt.ylabel("Value", fontsize=25)
plt.xlabel("Time", fontsize=25)
plt.tight_layout()
plt.savefig(DATA_PATH / "datashader.png")
plt.show()
```
# LTTB
```
from matplotlib import cm
from matplotlib.colors import Normalize
from matplotlib.patches import Polygon
np.random.seed(42)
ns = np.random.normal(0, 1, size=26).cumsum()
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
plt.plot(ns, "-o", linewidth=2)
mapper = cm.ScalarMappable(Normalize(vmin=0, vmax=15, clip=True), cmap="autumn_r")
areas = []
for i, data in enumerate(ns[:-2], 1):
cors = [[i + ui, ns[i + ui]] for ui in range(-1, 2)]
x = [m[0] for m in cors]
y = [m[1] for m in cors]
ea = 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))) * 10
areas.append(ea)
color = mapper.to_rgba(ea)
plt.plot([i], [ns[i]], "o", color=color)
ax.add_patch(
Polygon(
cors,
closed=True,
fill=True,
alpha=0.3,
color=color,
)
)
cbar = plt.colorbar(mapper, alpha=0.3)
cbar.set_label("Effective Area Size")
fig.suptitle("Effective Area of Data Points")
plt.ylabel("Value")
plt.xlabel("Time")
plt.tight_layout()
plt.savefig(DATA_PATH / "effective-area.svg")
plt.savefig(DATA_PATH / "effective-area.png")
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
plt.plot(ns, "--o", linewidth=2, label="Original time series")
mapper = cm.ScalarMappable(Normalize(vmin=0, vmax=15, clip=True), cmap="autumn_r")
lotb = np.concatenate(
[[0], np.arange(1, 25, 3) + np.array(areas).reshape(-1, 3).argmax(axis=1), [25]]
)
for i, data in enumerate(ns[:-2], 1):
cors = [[i + ui, ns[i + ui]] for ui in range(-1, 2)]
x = [m[0] for m in cors]
y = [m[1] for m in cors]
ea = 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))) * 10
color = mapper.to_rgba(ea) # cm.tab10.colors[i % 5 + 1]
plt.plot([i], [ns[i]], "o", color=color)
ax.add_patch(
Polygon(
cors,
closed=True,
fill=True,
alpha=0.3,
color=color,
)
)
plt.plot(
lotb, ns[lotb], "-x", linewidth=2, color="tab:purple", label="LTOB approximation"
)
cbar = plt.colorbar(mapper, alpha=0.3)
cbar.set_label("Effective Area Size")
plt.vlines(np.linspace(0.5, 24.5, 9), ns.min(), ns.max(), "black", "--", alpha=0.5)
plt.ylabel("Value")
plt.xlabel("Time")
fig.suptitle("LTOB downsampling")
plt.legend()
plt.tight_layout()
plt.savefig(DATA_PATH / "ltob.svg")
plt.savefig(DATA_PATH / "ltob.png")
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
plt.plot(ns, "--o", linewidth=2, label="Original time series")
lttb_points = downsample(np.vstack([np.arange(26), ns]).T, 10)  # renamed to avoid shadowing "import datashader as ds"
plt.plot(*lttb_points.T, "-x", linewidth=2, label="LTTB approximation")
# plt.plot(ns, "x")
plt.vlines(np.linspace(0.5, 24.5, 9), ns.min(), ns.max(), "black", "--", alpha=0.5)
plt.ylabel("Value")
plt.xlabel("Time")
fig.suptitle("LTTB downsampling")
plt.legend()
plt.tight_layout()
plt.savefig(DATA_PATH / "lttb.svg")
plt.savefig(DATA_PATH / "lttb.png")
plt.show()
```
**Chapter 5 – Support Vector Machines**
_This notebook contains all the sample code and solutions to the exercises in chapter 5._
# Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
```
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "svm"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
```
# Large margin classification
The next few code cells generate the first figures in chapter 5. The first actual code sample comes after:
```
from sklearn.svm import SVC
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
setosa_or_versicolor = (y == 0) | (y == 1)
X = X[setosa_or_versicolor]
y = y[setosa_or_versicolor]
# SVM Classifier model
svm_clf = SVC(kernel="linear", C=float("inf"))
svm_clf.fit(X, y)
# Bad models
x0 = np.linspace(0, 5.5, 200)
pred_1 = 5*x0 - 20
pred_2 = x0 - 1.8
pred_3 = 0.1 * x0 + 0.5
def plot_svc_decision_boundary(svm_clf, xmin, xmax):
w = svm_clf.coef_[0]
b = svm_clf.intercept_[0]
# At the decision boundary, w0*x0 + w1*x1 + b = 0
# => x1 = -w0/w1 * x0 - b/w1
x0 = np.linspace(xmin, xmax, 200)
decision_boundary = -w[0]/w[1] * x0 - b/w[1]
margin = 1/w[1]
gutter_up = decision_boundary + margin
gutter_down = decision_boundary - margin
svs = svm_clf.support_vectors_
plt.scatter(svs[:, 0], svs[:, 1], s=180, facecolors='#FFAAAA')
plt.plot(x0, decision_boundary, "k-", linewidth=2)
plt.plot(x0, gutter_up, "k--", linewidth=2)
plt.plot(x0, gutter_down, "k--", linewidth=2)
plt.figure(figsize=(12,2.7))
plt.subplot(121)
plt.plot(x0, pred_1, "g--", linewidth=2)
plt.plot(x0, pred_2, "m-", linewidth=2)
plt.plot(x0, pred_3, "r-", linewidth=2)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris-Versicolor")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris-Setosa")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 5.5, 0, 2])
plt.subplot(122)
plot_svc_decision_boundary(svm_clf, 0, 5.5)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo")
plt.xlabel("Petal length", fontsize=14)
plt.axis([0, 5.5, 0, 2])
save_fig("large_margin_classification_plot")
plt.show()
```
# Sensitivity to feature scales
```
Xs = np.array([[1, 50], [5, 20], [3, 80], [5, 60]]).astype(np.float64)
ys = np.array([0, 0, 1, 1])
svm_clf = SVC(kernel="linear", C=100)
svm_clf.fit(Xs, ys)
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(Xs[:, 0][ys==1], Xs[:, 1][ys==1], "bo")
plt.plot(Xs[:, 0][ys==0], Xs[:, 1][ys==0], "ms")
plot_svc_decision_boundary(svm_clf, 0, 6)
plt.xlabel("$x_0$", fontsize=20)
plt.ylabel("$x_1$ ", fontsize=20, rotation=0)
plt.title("Unscaled", fontsize=16)
plt.axis([0, 6, 0, 90])
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_scaled = scaler.fit_transform(Xs)
svm_clf.fit(X_scaled, ys)
plt.subplot(122)
plt.plot(X_scaled[:, 0][ys==1], X_scaled[:, 1][ys==1], "bo")
plt.plot(X_scaled[:, 0][ys==0], X_scaled[:, 1][ys==0], "ms")
plot_svc_decision_boundary(svm_clf, -2, 2)
plt.xlabel("$x_0$", fontsize=20)
plt.title("Scaled", fontsize=16)
plt.axis([-2, 2, -2, 2])
save_fig("sensitivity_to_feature_scales_plot")
```
# Sensitivity to outliers
```
X_outliers = np.array([[3.4, 1.3], [3.2, 0.8]])
y_outliers = np.array([0, 0])
Xo1 = np.concatenate([X, X_outliers[:1]], axis=0)
yo1 = np.concatenate([y, y_outliers[:1]], axis=0)
Xo2 = np.concatenate([X, X_outliers[1:]], axis=0)
yo2 = np.concatenate([y, y_outliers[1:]], axis=0)
svm_clf2 = SVC(kernel="linear", C=10**9)
svm_clf2.fit(Xo2, yo2)
plt.figure(figsize=(12,2.7))
plt.subplot(121)
plt.plot(Xo1[:, 0][yo1==1], Xo1[:, 1][yo1==1], "bs")
plt.plot(Xo1[:, 0][yo1==0], Xo1[:, 1][yo1==0], "yo")
plt.text(0.3, 1.0, "Impossible!", fontsize=24, color="red")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.annotate("Outlier",
xy=(X_outliers[0][0], X_outliers[0][1]),
xytext=(2.5, 1.7),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=16,
)
plt.axis([0, 5.5, 0, 2])
plt.subplot(122)
plt.plot(Xo2[:, 0][yo2==1], Xo2[:, 1][yo2==1], "bs")
plt.plot(Xo2[:, 0][yo2==0], Xo2[:, 1][yo2==0], "yo")
plot_svc_decision_boundary(svm_clf2, 0, 5.5)
plt.xlabel("Petal length", fontsize=14)
plt.annotate("Outlier",
xy=(X_outliers[1][0], X_outliers[1][1]),
xytext=(3.2, 0.08),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=16,
)
plt.axis([0, 5.5, 0, 2])
save_fig("sensitivity_to_outliers_plot")
plt.show()
```
# Large margin *vs* margin violations
This is the first code example in chapter 5:
```
import numpy as np
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
svm_clf = Pipeline([
("scaler", StandardScaler()),
("linear_svc", LinearSVC(C=1, loss="hinge", random_state=42)),
])
svm_clf.fit(X, y)
svm_clf.predict([[5.5, 1.7]])
```
Now let's generate the graph comparing different regularization settings:
```
scaler = StandardScaler()
svm_clf1 = LinearSVC(C=1, loss="hinge", random_state=42)
svm_clf2 = LinearSVC(C=100, loss="hinge", random_state=42)
scaled_svm_clf1 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf1),
])
scaled_svm_clf2 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf2),
])
scaled_svm_clf1.fit(X, y)
scaled_svm_clf2.fit(X, y)
# Convert to unscaled parameters
b1 = svm_clf1.decision_function([-scaler.mean_ / scaler.scale_])
b2 = svm_clf2.decision_function([-scaler.mean_ / scaler.scale_])
w1 = svm_clf1.coef_[0] / scaler.scale_
w2 = svm_clf2.coef_[0] / scaler.scale_
svm_clf1.intercept_ = np.array([b1])
svm_clf2.intercept_ = np.array([b2])
svm_clf1.coef_ = np.array([w1])
svm_clf2.coef_ = np.array([w2])
# Find support vectors (LinearSVC does not do this automatically)
t = y * 2 - 1
support_vectors_idx1 = (t * (X.dot(w1) + b1) < 1).ravel()
support_vectors_idx2 = (t * (X.dot(w2) + b2) < 1).ravel()
svm_clf1.support_vectors_ = X[support_vectors_idx1]
svm_clf2.support_vectors_ = X[support_vectors_idx2]
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^", label="Iris-Virginica")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs", label="Iris-Versicolor")
plot_svc_decision_boundary(svm_clf1, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.title("$C = {}$".format(svm_clf1.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
plt.subplot(122)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plot_svc_decision_boundary(svm_clf2, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.title("$C = {}$".format(svm_clf2.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
save_fig("regularization_plot")
```
# Non-linear classification
```
X1D = np.linspace(-4, 4, 9).reshape(-1, 1)
X2D = np.c_[X1D, X1D**2]
y = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0])
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.plot(X1D[:, 0][y==0], np.zeros(4), "bs")
plt.plot(X1D[:, 0][y==1], np.zeros(5), "g^")
plt.gca().get_yaxis().set_ticks([])
plt.xlabel(r"$x_1$", fontsize=20)
plt.axis([-4.5, 4.5, -0.2, 0.2])
plt.subplot(122)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(X2D[:, 0][y==0], X2D[:, 1][y==0], "bs")
plt.plot(X2D[:, 0][y==1], X2D[:, 1][y==1], "g^")
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plt.gca().get_yaxis().set_ticks([0, 4, 8, 12, 16])
plt.plot([-4.5, 4.5], [6.5, 6.5], "r--", linewidth=3)
plt.axis([-4.5, 4.5, -1, 17])
plt.subplots_adjust(right=1)
save_fig("higher_dimensions_plot", tight_layout=False)
plt.show()
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, noise=0.15, random_state=42)
def plot_dataset(X, y, axes):
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.axis(axes)
plt.grid(True, which='both')
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.show()
from sklearn.datasets import make_moons
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
polynomial_svm_clf = Pipeline([
("poly_features", PolynomialFeatures(degree=3)),
("scaler", StandardScaler()),
("svm_clf", LinearSVC(C=10, loss="hinge", random_state=42))
])
polynomial_svm_clf.fit(X, y)
def plot_predictions(clf, axes):
x0s = np.linspace(axes[0], axes[1], 100)
x1s = np.linspace(axes[2], axes[3], 100)
x0, x1 = np.meshgrid(x0s, x1s)
X = np.c_[x0.ravel(), x1.ravel()]
y_pred = clf.predict(X).reshape(x0.shape)
y_decision = clf.decision_function(X).reshape(x0.shape)
plt.contourf(x0, x1, y_pred, cmap=plt.cm.brg, alpha=0.2)
plt.contourf(x0, x1, y_decision, cmap=plt.cm.brg, alpha=0.1)
plot_predictions(polynomial_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
save_fig("moons_polynomial_svc_plot")
plt.show()
from sklearn.svm import SVC
poly_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5))
])
poly_kernel_svm_clf.fit(X, y)
poly100_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=10, coef0=100, C=5))
])
poly100_kernel_svm_clf.fit(X, y)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_predictions(poly_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=3, r=1, C=5$", fontsize=18)
plt.subplot(122)
plot_predictions(poly100_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=10, r=100, C=5$", fontsize=18)
save_fig("moons_kernelized_polynomial_svc_plot")
plt.show()
def gaussian_rbf(x, landmark, gamma):
return np.exp(-gamma * np.linalg.norm(x - landmark, axis=1)**2)
gamma = 0.3
x1s = np.linspace(-4.5, 4.5, 200).reshape(-1, 1)
x2s = gaussian_rbf(x1s, -2, gamma)
x3s = gaussian_rbf(x1s, 1, gamma)
XK = np.c_[gaussian_rbf(X1D, -2, gamma), gaussian_rbf(X1D, 1, gamma)]
yk = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0])
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.scatter(x=[-2, 1], y=[0, 0], s=150, alpha=0.5, c="red")
plt.plot(X1D[:, 0][yk==0], np.zeros(4), "bs")
plt.plot(X1D[:, 0][yk==1], np.zeros(5), "g^")
plt.plot(x1s, x2s, "g--")
plt.plot(x1s, x3s, "b:")
plt.gca().get_yaxis().set_ticks([0, 0.25, 0.5, 0.75, 1])
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"Similarity", fontsize=14)
plt.annotate(r'$\mathbf{x}$',
xy=(X1D[3, 0], 0),
xytext=(-0.5, 0.20),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=18,
)
plt.text(-2, 0.9, "$x_2$", ha="center", fontsize=20)
plt.text(1, 0.9, "$x_3$", ha="center", fontsize=20)
plt.axis([-4.5, 4.5, -0.1, 1.1])
plt.subplot(122)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(XK[:, 0][yk==0], XK[:, 1][yk==0], "bs")
plt.plot(XK[:, 0][yk==1], XK[:, 1][yk==1], "g^")
plt.xlabel(r"$x_2$", fontsize=20)
plt.ylabel(r"$x_3$ ", fontsize=20, rotation=0)
plt.annotate(r'$\phi\left(\mathbf{x}\right)$',
xy=(XK[3, 0], XK[3, 1]),
xytext=(0.65, 0.50),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=18,
)
plt.plot([-0.1, 1.1], [0.57, -0.1], "r--", linewidth=3)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplots_adjust(right=1)
save_fig("kernel_method_plot")
plt.show()
x1_example = X1D[3, 0]
for landmark in (-2, 1):
k = gaussian_rbf(np.array([[x1_example]]), np.array([[landmark]]), gamma)
print("Phi({}, {}) = {}".format(x1_example, landmark, k))
rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=5, C=0.001))
])
rbf_kernel_svm_clf.fit(X, y)
from sklearn.svm import SVC
gamma1, gamma2 = 0.1, 5
C1, C2 = 0.001, 1000
hyperparams = (gamma1, C1), (gamma1, C2), (gamma2, C1), (gamma2, C2)
svm_clfs = []
for gamma, C in hyperparams:
rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=gamma, C=C))
])
rbf_kernel_svm_clf.fit(X, y)
svm_clfs.append(rbf_kernel_svm_clf)
plt.figure(figsize=(11, 7))
for i, svm_clf in enumerate(svm_clfs):
plt.subplot(221 + i)
plot_predictions(svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
gamma, C = hyperparams[i]
plt.title(r"$\gamma = {}, C = {}$".format(gamma, C), fontsize=16)
save_fig("moons_rbf_svc_plot")
plt.show()
```
# Regression
```
np.random.seed(42)
m = 50
X = 2 * np.random.rand(m, 1)
y = (4 + 3 * X + np.random.randn(m, 1)).ravel()
from sklearn.svm import LinearSVR
svm_reg = LinearSVR(epsilon=1.5, random_state=42)
svm_reg.fit(X, y)
svm_reg1 = LinearSVR(epsilon=1.5, random_state=42)
svm_reg2 = LinearSVR(epsilon=0.5, random_state=42)
svm_reg1.fit(X, y)
svm_reg2.fit(X, y)
def find_support_vectors(svm_reg, X, y):
y_pred = svm_reg.predict(X)
off_margin = (np.abs(y - y_pred) >= svm_reg.epsilon)
return np.argwhere(off_margin)
svm_reg1.support_ = find_support_vectors(svm_reg1, X, y)
svm_reg2.support_ = find_support_vectors(svm_reg2, X, y)
eps_x1 = 1
eps_y_pred = svm_reg1.predict([[eps_x1]])
def plot_svm_regression(svm_reg, X, y, axes):
x1s = np.linspace(axes[0], axes[1], 100).reshape(100, 1)
y_pred = svm_reg.predict(x1s)
plt.plot(x1s, y_pred, "k-", linewidth=2, label=r"$\hat{y}$")
plt.plot(x1s, y_pred + svm_reg.epsilon, "k--")
plt.plot(x1s, y_pred - svm_reg.epsilon, "k--")
plt.scatter(X[svm_reg.support_], y[svm_reg.support_], s=180, facecolors='#FFAAAA')
plt.plot(X, y, "bo")
plt.xlabel(r"$x_1$", fontsize=18)
plt.legend(loc="upper left", fontsize=18)
plt.axis(axes)
plt.figure(figsize=(9, 4))
plt.subplot(121)
plot_svm_regression(svm_reg1, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
#plt.plot([eps_x1, eps_x1], [eps_y_pred, eps_y_pred - svm_reg1.epsilon], "k-", linewidth=2)
plt.annotate(
'', xy=(eps_x1, eps_y_pred), xycoords='data',
xytext=(eps_x1, eps_y_pred - svm_reg1.epsilon),
textcoords='data', arrowprops={'arrowstyle': '<->', 'linewidth': 1.5}
)
plt.text(0.91, 5.6, r"$\epsilon$", fontsize=20)
plt.subplot(122)
plot_svm_regression(svm_reg2, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg2.epsilon), fontsize=18)
save_fig("svm_regression_plot")
plt.show()
np.random.seed(42)
m = 100
X = 2 * np.random.rand(m, 1) - 1
y = (0.2 + 0.1 * X + 0.5 * X**2 + np.random.randn(m, 1)/10).ravel()
```
**Warning**: the default value of `gamma` will change from `'auto'` to `'scale'` in version 0.22 to better account for unscaled features. To preserve the same results as in the book, we explicitly set it to `'auto'`, but you should probably just use the default in your own code.
```
from sklearn.svm import SVR
svm_poly_reg = SVR(kernel="poly", degree=2, C=100, epsilon=0.1, gamma="auto")
svm_poly_reg.fit(X, y)
from sklearn.svm import SVR
svm_poly_reg1 = SVR(kernel="poly", degree=2, C=100, epsilon=0.1, gamma="auto")
svm_poly_reg2 = SVR(kernel="poly", degree=2, C=0.01, epsilon=0.1, gamma="auto")
svm_poly_reg1.fit(X, y)
svm_poly_reg2.fit(X, y)
plt.figure(figsize=(9, 4))
plt.subplot(121)
plot_svm_regression(svm_poly_reg1, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg1.degree, svm_poly_reg1.C, svm_poly_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
plt.subplot(122)
plot_svm_regression(svm_poly_reg2, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg2.degree, svm_poly_reg2.C, svm_poly_reg2.epsilon), fontsize=18)
save_fig("svm_with_polynomial_kernel_plot")
plt.show()
```
# Under the hood
```
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
from mpl_toolkits.mplot3d import Axes3D
def plot_3D_decision_function(ax, w, b, x1_lim=[4, 6], x2_lim=[0.8, 2.8]):
x1_in_bounds = (X[:, 0] > x1_lim[0]) & (X[:, 0] < x1_lim[1])
X_crop = X[x1_in_bounds]
y_crop = y[x1_in_bounds]
x1s = np.linspace(x1_lim[0], x1_lim[1], 20)
x2s = np.linspace(x2_lim[0], x2_lim[1], 20)
x1, x2 = np.meshgrid(x1s, x2s)
xs = np.c_[x1.ravel(), x2.ravel()]
df = (xs.dot(w) + b).reshape(x1.shape)
m = 1 / np.linalg.norm(w)
boundary_x2s = -x1s*(w[0]/w[1])-b/w[1]
margin_x2s_1 = -x1s*(w[0]/w[1])-(b-1)/w[1]
margin_x2s_2 = -x1s*(w[0]/w[1])-(b+1)/w[1]
ax.plot_surface(x1s, x2, np.zeros_like(x1),
color="b", alpha=0.2, cstride=100, rstride=100)
ax.plot(x1s, boundary_x2s, 0, "k-", linewidth=2, label=r"$h=0$")
ax.plot(x1s, margin_x2s_1, 0, "k--", linewidth=2, label=r"$h=\pm 1$")
ax.plot(x1s, margin_x2s_2, 0, "k--", linewidth=2)
ax.plot(X_crop[:, 0][y_crop==1], X_crop[:, 1][y_crop==1], 0, "g^")
ax.plot_wireframe(x1, x2, df, alpha=0.3, color="k")
ax.plot(X_crop[:, 0][y_crop==0], X_crop[:, 1][y_crop==0], 0, "bs")
ax.axis(x1_lim + x2_lim)
ax.text(4.5, 2.5, 3.8, "Decision function $h$", fontsize=15)
ax.set_xlabel(r"Petal length", fontsize=15)
ax.set_ylabel(r"Petal width", fontsize=15)
ax.set_zlabel(r"$h = \mathbf{w}^T \mathbf{x} + b$", fontsize=18)
ax.legend(loc="upper left", fontsize=16)
fig = plt.figure(figsize=(11, 6))
ax1 = fig.add_subplot(111, projection='3d')
plot_3D_decision_function(ax1, w=svm_clf2.coef_[0], b=svm_clf2.intercept_[0])
#save_fig("iris_3D_plot")
plt.show()
```
# Small weight vector results in a large margin
```
def plot_2D_decision_function(w, b, ylabel=True, x1_lim=[-3, 3]):
x1 = np.linspace(x1_lim[0], x1_lim[1], 200)
y = w * x1 + b
m = 1 / w
plt.plot(x1, y)
plt.plot(x1_lim, [1, 1], "k:")
plt.plot(x1_lim, [-1, -1], "k:")
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot([m, m], [0, 1], "k--")
plt.plot([-m, -m], [0, -1], "k--")
plt.plot([-m, m], [0, 0], "k-o", linewidth=3)
plt.axis(x1_lim + [-2, 2])
plt.xlabel(r"$x_1$", fontsize=16)
if ylabel:
plt.ylabel(r"$w_1 x_1$ ", rotation=0, fontsize=16)
plt.title(r"$w_1 = {}$".format(w), fontsize=16)
plt.figure(figsize=(12, 3.2))
plt.subplot(121)
plot_2D_decision_function(1, 0)
plt.subplot(122)
plot_2D_decision_function(0.5, 0, ylabel=False)
save_fig("small_w_large_margin_plot")
plt.show()
from sklearn.svm import SVC
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
svm_clf = SVC(kernel="linear", C=1)
svm_clf.fit(X, y)
svm_clf.predict([[5.3, 1.3]])
```
# Hinge loss
```
t = np.linspace(-2, 4, 200)
h = np.where(1 - t < 0, 0, 1 - t) # max(0, 1-t)
plt.figure(figsize=(5,2.8))
plt.plot(t, h, "b-", linewidth=2, label="$max(0, 1 - t)$")
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.yticks(np.arange(-1, 2.5, 1))
plt.xlabel("$t$", fontsize=16)
plt.axis([-2, 4, -1, 2.5])
plt.legend(loc="upper right", fontsize=16)
save_fig("hinge_plot")
plt.show()
```
# Extra material
## Training time
```
X, y = make_moons(n_samples=1000, noise=0.4, random_state=42)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
import time
tol = 0.1
tols = []
times = []
for i in range(10):
svm_clf = SVC(kernel="poly", gamma=3, C=10, tol=tol, verbose=1)
t1 = time.time()
svm_clf.fit(X, y)
t2 = time.time()
times.append(t2-t1)
tols.append(tol)
print(i, tol, t2-t1)
tol /= 10
plt.semilogx(tols, times)
```
## Linear SVM classifier implementation using Batch Gradient Descent
```
# Training set
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64).reshape(-1, 1) # Iris-Virginica
from sklearn.base import BaseEstimator
class MyLinearSVC(BaseEstimator):
def __init__(self, C=1, eta0=1, eta_d=10000, n_epochs=1000, random_state=None):
self.C = C
self.eta0 = eta0
self.n_epochs = n_epochs
self.random_state = random_state
self.eta_d = eta_d
def eta(self, epoch):
return self.eta0 / (epoch + self.eta_d)
def fit(self, X, y):
# Random initialization
if self.random_state:
np.random.seed(self.random_state)
w = np.random.randn(X.shape[1], 1) # n feature weights
b = 0
m = len(X)
t = y * 2 - 1 # -1 if t==0, +1 if t==1
X_t = X * t
self.Js=[]
# Training
for epoch in range(self.n_epochs):
support_vectors_idx = (X_t.dot(w) + t * b < 1).ravel()
X_t_sv = X_t[support_vectors_idx]
t_sv = t[support_vectors_idx]
J = 1/2 * np.sum(w * w) + self.C * (np.sum(1 - X_t_sv.dot(w)) - b * np.sum(t_sv))
self.Js.append(J)
w_gradient_vector = w - self.C * np.sum(X_t_sv, axis=0).reshape(-1, 1)
b_derivative = -self.C * np.sum(t_sv)
w = w - self.eta(epoch) * w_gradient_vector
b = b - self.eta(epoch) * b_derivative
self.intercept_ = np.array([b])
self.coef_ = np.array([w])
support_vectors_idx = (X_t.dot(w) + t * b < 1).ravel()
self.support_vectors_ = X[support_vectors_idx]
return self
def decision_function(self, X):
return X.dot(self.coef_[0]) + self.intercept_[0]
def predict(self, X):
return (self.decision_function(X) >= 0).astype(np.float64)
C=2
svm_clf = MyLinearSVC(C=C, eta0 = 10, eta_d = 1000, n_epochs=60000, random_state=2)
svm_clf.fit(X, y)
svm_clf.predict(np.array([[5, 2], [4, 1]]))
plt.plot(range(svm_clf.n_epochs), svm_clf.Js)
plt.axis([0, svm_clf.n_epochs, 0, 100])
print(svm_clf.intercept_, svm_clf.coef_)
svm_clf2 = SVC(kernel="linear", C=C)
svm_clf2.fit(X, y.ravel())
print(svm_clf2.intercept_, svm_clf2.coef_)
yr = y.ravel()
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^", label="Iris-Virginica")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs", label="Not Iris-Virginica")
plot_svc_decision_boundary(svm_clf, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.title("MyLinearSVC", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
plt.subplot(122)
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs")
plot_svc_decision_boundary(svm_clf2, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.title("SVC", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(loss="hinge", alpha = 0.017, max_iter = 50, tol=-np.infty, random_state=42)
sgd_clf.fit(X, y.ravel())
m = len(X)
t = y * 2 - 1 # -1 if t==0, +1 if t==1
X_b = np.c_[np.ones((m, 1)), X] # Add bias input x0=1
X_b_t = X_b * t
sgd_theta = np.r_[sgd_clf.intercept_[0], sgd_clf.coef_[0]]
print(sgd_theta)
support_vectors_idx = (X_b_t.dot(sgd_theta) < 1).ravel()
sgd_clf.support_vectors_ = X[support_vectors_idx]
sgd_clf.C = C
plt.figure(figsize=(5.5,3.2))
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs")
plot_svc_decision_boundary(sgd_clf, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.title("SGDClassifier", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
```
# Exercise solutions
## 1. to 7.
See appendix A.
# 8.
_Exercise: train a `LinearSVC` on a linearly separable dataset. Then train an `SVC` and a `SGDClassifier` on the same dataset. See if you can get them to produce roughly the same model._
Let's use the Iris dataset: the Iris Setosa and Iris Versicolor classes are linearly separable.
```
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
setosa_or_versicolor = (y == 0) | (y == 1)
X = X[setosa_or_versicolor]
y = y[setosa_or_versicolor]
from sklearn.svm import SVC, LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler
C = 5
alpha = 1 / (C * len(X))
lin_clf = LinearSVC(loss="hinge", C=C, random_state=42)
svm_clf = SVC(kernel="linear", C=C)
sgd_clf = SGDClassifier(loss="hinge", learning_rate="constant", eta0=0.001, alpha=alpha,
max_iter=100000, tol=-np.infty, random_state=42)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
lin_clf.fit(X_scaled, y)
svm_clf.fit(X_scaled, y)
sgd_clf.fit(X_scaled, y)
print("LinearSVC: ", lin_clf.intercept_, lin_clf.coef_)
print("SVC: ", svm_clf.intercept_, svm_clf.coef_)
print("SGDClassifier(alpha={:.5f}):".format(sgd_clf.alpha), sgd_clf.intercept_, sgd_clf.coef_)
```
Let's plot the decision boundaries of these three models:
```
# Compute the slope and bias of each decision boundary
w1 = -lin_clf.coef_[0, 0]/lin_clf.coef_[0, 1]
b1 = -lin_clf.intercept_[0]/lin_clf.coef_[0, 1]
w2 = -svm_clf.coef_[0, 0]/svm_clf.coef_[0, 1]
b2 = -svm_clf.intercept_[0]/svm_clf.coef_[0, 1]
w3 = -sgd_clf.coef_[0, 0]/sgd_clf.coef_[0, 1]
b3 = -sgd_clf.intercept_[0]/sgd_clf.coef_[0, 1]
# Transform the decision boundary lines back to the original scale
line1 = scaler.inverse_transform([[-10, -10 * w1 + b1], [10, 10 * w1 + b1]])
line2 = scaler.inverse_transform([[-10, -10 * w2 + b2], [10, 10 * w2 + b2]])
line3 = scaler.inverse_transform([[-10, -10 * w3 + b3], [10, 10 * w3 + b3]])
# Plot all three decision boundaries
plt.figure(figsize=(11, 4))
plt.plot(line1[:, 0], line1[:, 1], "k:", label="LinearSVC")
plt.plot(line2[:, 0], line2[:, 1], "b--", linewidth=2, label="SVC")
plt.plot(line3[:, 0], line3[:, 1], "r-", label="SGDClassifier")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs") # label="Iris-Versicolor"
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo") # label="Iris-Setosa"
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper center", fontsize=14)
plt.axis([0, 5.5, 0, 2])
plt.show()
```
Close enough!
# 9.
_Exercise: train an SVM classifier on the MNIST dataset. Since SVM classifiers are binary classifiers, you will need to use one-versus-all to classify all 10 digits. You may want to tune the hyperparameters using small validation sets to speed up the process. What accuracy can you reach?_
First, let's load the dataset and split it into a training set and a test set. We could use `train_test_split()` but people usually just take the first 60,000 instances for the training set, and the last 10,000 instances for the test set (this makes it possible to compare your model's performance with others):
```
try:
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, cache=True)
except ImportError:
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
X = mnist["data"]
y = mnist["target"]
X_train = X[:60000]
y_train = y[:60000]
X_test = X[60000:]
y_test = y[60000:]
```
Many training algorithms are sensitive to the order of the training instances, so it's generally good practice to shuffle them first:
```
np.random.seed(42)
rnd_idx = np.random.permutation(60000)
X_train = X_train[rnd_idx]
y_train = y_train[rnd_idx]
```
Let's start simple, with a linear SVM classifier. It will automatically use the One-vs-All (also called One-vs-the-Rest, OvR) strategy, so there's nothing special we need to do. Easy!
```
lin_clf = LinearSVC(random_state=42)
lin_clf.fit(X_train, y_train)
```
Let's make predictions on the training set and measure the accuracy (we don't want to measure it on the test set yet, since we have not selected and trained the final model yet):
```
from sklearn.metrics import accuracy_score
y_pred = lin_clf.predict(X_train)
accuracy_score(y_train, y_pred)
```
Wow, 86% accuracy on MNIST is a really bad performance. This linear model is certainly too simple for MNIST, but perhaps we just needed to scale the data first:
```
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float32))
X_test_scaled = scaler.transform(X_test.astype(np.float32))
lin_clf = LinearSVC(random_state=42)
lin_clf.fit(X_train_scaled, y_train)
y_pred = lin_clf.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
```
That's much better (we cut the error rate in two), but still not great at all for MNIST. If we want to use an SVM, we will have to use a kernel. Let's try an `SVC` with an RBF kernel (the default).
**Warning**: if you are using a version of Scikit-Learn earlier than 0.19, the `SVC` class will use the One-vs-One (OvO) strategy by default, so you must explicitly set `decision_function_shape="ovr"` if you want to use the OvR strategy instead (OvR is the default since 0.19).
```
svm_clf = SVC(decision_function_shape="ovr", gamma="auto")
svm_clf.fit(X_train_scaled[:10000], y_train[:10000])
y_pred = svm_clf.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
```
That's promising: we get better performance even though we trained the model on only one sixth of the data. Let's tune the hyperparameters by doing a randomized search with cross-validation. We will do this on a small dataset just to speed up the process:
```
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import reciprocal, uniform
param_distributions = {"gamma": reciprocal(0.001, 0.1), "C": uniform(1, 10)}
rnd_search_cv = RandomizedSearchCV(svm_clf, param_distributions, n_iter=10, verbose=2, cv=3)
rnd_search_cv.fit(X_train_scaled[:1000], y_train[:1000])
rnd_search_cv.best_estimator_
rnd_search_cv.best_score_
```
This looks pretty low but remember we only trained the model on 1,000 instances. Let's retrain the best estimator on the whole training set (run this at night, it will take hours):
```
rnd_search_cv.best_estimator_.fit(X_train_scaled, y_train)
y_pred = rnd_search_cv.best_estimator_.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
```
Ah, this looks good! Let's select this model. Now we can test it on the test set:
```
y_pred = rnd_search_cv.best_estimator_.predict(X_test_scaled)
accuracy_score(y_test, y_pred)
```
Not too bad, but apparently the model is overfitting slightly. It's tempting to tweak the hyperparameters a bit more (e.g. decreasing `C` and/or `gamma`), but we would run the risk of overfitting the test set. Other people have found that the hyperparameters `C=5` and `gamma=0.005` yield even better performance (over 98% accuracy). By running the randomized search for longer and on a larger part of the training set, you may be able to find this as well.
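For example, a follow-up run with those reported values might look like the sketch below; it is not executed here because fitting on all 60,000 scaled instances is slow, and the exact accuracy you get is not guaranteed.
```
# Hypothetical follow-up, not run here: train an RBF SVC with the reported
# hyperparameters on the full scaled training set, then evaluate on the test set.
svm_clf = SVC(kernel="rbf", gamma=0.005, C=5)
svm_clf.fit(X_train_scaled, y_train)
y_pred = svm_clf.predict(X_test_scaled)
accuracy_score(y_test, y_pred)
```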
## 10.
_Exercise: train an SVM regressor on the California housing dataset._
Let's load the dataset using Scikit-Learn's `fetch_california_housing()` function:
```
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
X = housing["data"]
y = housing["target"]
```
Split it into a training set and a test set:
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```
Don't forget to scale the data:
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
```
Let's train a simple `LinearSVR` first:
```
from sklearn.svm import LinearSVR
lin_svr = LinearSVR(random_state=42)
lin_svr.fit(X_train_scaled, y_train)
```
Let's see how it performs on the training set:
```
from sklearn.metrics import mean_squared_error
y_pred = lin_svr.predict(X_train_scaled)
mse = mean_squared_error(y_train, y_pred)
mse
```
Let's look at the RMSE:
```
np.sqrt(mse)
```
In this training set, the targets are tens of thousands of dollars. The RMSE gives a rough idea of the kind of error you should expect (with a higher weight for large errors): so with this model we can expect errors somewhere around $10,000. Not great. Let's see if we can do better with an RBF Kernel. We will use randomized search with cross validation to find the appropriate hyperparameter values for `C` and `gamma`:
```
from sklearn.svm import SVR
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import reciprocal, uniform
param_distributions = {"gamma": reciprocal(0.001, 0.1), "C": uniform(1, 10)}
rnd_search_cv = RandomizedSearchCV(SVR(), param_distributions, n_iter=10, verbose=2, cv=3, random_state=42)
rnd_search_cv.fit(X_train_scaled, y_train)
rnd_search_cv.best_estimator_
```
Now let's measure the RMSE on the training set:
```
y_pred = rnd_search_cv.best_estimator_.predict(X_train_scaled)
mse = mean_squared_error(y_train, y_pred)
np.sqrt(mse)
```
Looks much better than the linear model. Let's select this model and evaluate it on the test set:
```
y_pred = rnd_search_cv.best_estimator_.predict(X_test_scaled)
mse = mean_squared_error(y_test, y_pred)
np.sqrt(mse)
cmap = matplotlib.cm.get_cmap("jet")
from sklearn.datasets import fetch_openml
mnist = fetch_openml("mnist_784", version=1)
print(mnist.data.shape)
```
<a href="http://landlab.github.io"><img style="float: left" src="../../../landlab_header.png"></a>
# The Implicit Kinematic Wave Overland Flow Component
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
## Overview
This notebook demonstrates the `KinwaveImplicitOverlandFlow` Landlab component. The component implements a two-dimensional kinematic wave model of overland flow, using a digital elevation model or other source of topography as the surface over which water flows.
### Theory
The kinematic wave equations are a simplified form of the 2D shallow-water equations in which energy slope is assumed to equal bed slope. Conservation of water mass is expressed in terms of the time derivative of the local water depth, $H$, and the spatial derivative (divergence) of the unit discharge vector $\mathbf{q} = UH$ (where $U$ is the 2D depth-averaged velocity vector):
$$\frac{\partial H}{\partial t} = R - \nabla\cdot \mathbf{q}$$
where $R$ is the local runoff rate [L/T] and $\mathbf{q}$ has dimensions of volume flow per time per width [L$^2$/T]. The discharge depends on the local depth, bed-surface gradient $\mathbf{S}=-\nabla\eta$ (this is the kinematic wave approximation; $\eta$ is land surface height), and a roughness factor $C_r$:
$$\mathbf{q} = \frac{1}{C_r} \mathbf{S} H^\alpha |S|^{-1/2}$$
Readers may recognize this as a form of the Manning, Chezy, or Darcy-Weisbach equation. If $\alpha = 5/3$ then we have the Manning equation, and $C_r = n$ is "Manning's n". If $\alpha = 3/2$ then we have the Chezy/Darcy-Weisbach equation, and $C_r = 1/C = (f/8g)^{1/2}$ represents the Chezy roughness factor $C$ and the equivalent Darcy-Weisbach factor $f$.
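As a quick numerical illustration of the Manning form ($\alpha = 5/3$, $C_r = n$), the magnitude of the unit discharge for a given depth and slope can be computed directly. The values below are arbitrary and are not taken from the examples later in this notebook.
```
# Illustrative only: magnitude of unit discharge q = (1/n) * H**(5/3) * sqrt(S)
n = 0.01    # Manning roughness coefficient (s/m^(1/3)), arbitrary
H = 0.005   # water depth (m), arbitrary
S = 0.01    # bed-surface slope (dimensionless), arbitrary
q = (1.0 / n) * H ** (5.0 / 3.0) * S ** 0.5
print(q)    # unit discharge (m^2/s), roughly 1.5e-3 for these values
```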
### Numerical solution
The solution method used by this component is locally implicit, and works as follows. At each time step, we iterate from upstream to downstream over the topography. Because we are working downstream, we can assume that we know the total water inflow to a given cell. We solve the following mass conservation equation at each cell:
$$\frac{H^{t+1} - H^t}{\Delta t }= \frac{Q_{in}}{A} - \frac{Q_{out}}{A} + R$$
where $H$ is water depth at a given grid node, $t$ indicates time step number, $\Delta t$ is time step duration, $Q_{in}$ is total inflow discharge, $Q_{out}$ is total outflow discharge, $A$ is cell area, and $R$ is local runoff rate (precipitation minus infiltration; could be negative if runon infiltration is occurring).
The specific outflow discharge leaving a cell along one of its faces is:
$$q = (1/C_r) H^\alpha S^{1/2}$$
where $S$ is the downhill-positive gradient of the link that crosses this particular face. Outflow discharge is zero for links that are flat or "uphill" from the given node. Total discharge out of a cell is then the sum of (specific discharge x face width) over all outflow faces:
$$Q_{out} = \sum_{i=1}^N (1/C_r) H^\alpha S_i^{1/2} W_i$$
where $N$ is the number of outflow faces (i.e., faces where the ground slopes downhill away from the cell's node), and $W_i$ is the width of face $i$.
Because we use the same depth for every outflow face (the depth at the cell's node), this simplifies to:
$$Q_{out} = (1/C_r) H'^\alpha \sum_{i=1}^N S_i^{1/2} W_i$$
Notice that we know everything here except $H'$. The reason we know $Q_{out}$ is that it equals $Q_{in}$ (which is either zero or we calculated it previously) plus $RA$.
We define $H'$ in the above as a weighted sum of the "old" (time step $t$) and "new" (time step $t+1$) depth values:
$$H' = w H^{t+1} + (1-w) H^t$$
If $w=1$, the method is fully implicit. If $w=0$, it is a simple forward explicit method.
When we combine these equations, we have an equation that includes the unknown $H^{t+1}$ and a bunch of terms that are known. If $w\ne 0$, it is a nonlinear equation in $H^{t+1}$, and must be solved iteratively. We do this using a root-finding method in the scipy.optimize library.
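To make the algebra concrete, here is a minimal stand-alone sketch (not the component's actual code) of solving the nonlinear update for $H^{t+1}$ at a single cell with `scipy.optimize.brentq`. All of the numbers are made up, and the Manning exponent $\alpha = 5/3$ is assumed.
```
from scipy.optimize import brentq
# Made-up values for a single cell (illustrative only)
H_old, dt, A, R = 0.01, 10.0, 4.0, 2.0e-5    # depth (m), time step (s), cell area (m^2), runoff (m/s)
Q_in = 1.0e-4                                # total inflow discharge (m^3/s)
n, alpha, w = 0.01, 5.0 / 3.0, 1.0           # roughness, depth exponent, implicitness weight
sum_SW = 0.2                                 # sum of S_i**0.5 * W_i over the outflow faces (m)
def residual(H_new):
    '''Mass-balance residual: zero when H_new satisfies the implicit update.'''
    H_eff = w * H_new + (1.0 - w) * H_old          # weighted depth H'
    Q_out = (1.0 / n) * H_eff ** alpha * sum_SW    # total outflow discharge
    return (H_new - H_old) / dt - (Q_in - Q_out) / A - R
H_new = brentq(residual, 0.0, 1.0)   # root within the bracketing interval [0, 1] m
print(H_new)
```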
In order to implement the algorithm, we must already know which neighbors of each node are lower than it, and what the slopes between them are. We accomplish this using the `FlowAccumulator` and `FlowDirectorMFD` components. Running the `FlowAccumulator` also generates a sorted list (array) of nodes in drainage order.
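As a rough sketch of how that preprocessing step might be done on its own (the component handles it internally when you run it), one could do something like the following; it assumes a grid that already has a `topographic__elevation` field, like the grids created later in this notebook.
```
# Sketch only: route flow with multiple flow directions and get the drainage ordering
from landlab.components import FlowAccumulator
fa = FlowAccumulator(grid, flow_director='FlowDirectorMFD')
fa.run_one_step()
# node IDs sorted in drainage order (see the Landlab documentation for the ordering convention)
ordered_nodes = grid.at_node['flow__upstream_node_order']
```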
### The component
Import the needed libraries, then inspect the component's docstring:
```
import copy
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from landlab import RasterModelGrid, imshow_grid
from landlab.components.overland_flow import KinwaveImplicitOverlandFlow
from landlab.io.esri_ascii import read_esri_ascii
print(KinwaveImplicitOverlandFlow.__doc__)
```
The docstring for the `__init__` method will give us a list of parameters:
```
print(KinwaveImplicitOverlandFlow.__init__.__doc__)
```
## Example 1: downpour on a plane
The first example tests that the component can reproduce the expected steady flow pattern on a sloping plane with gradient $S$. We'll adopt the Manning equation. Once the system comes into equilibrium, the discharge should increase with distance down the plane according to $q = Rx$. We can use this fact to solve for the corresponding water depth:
$$(1/n) H^{5/3} S^{1/2} = R x$$
which implies
$$H = \left( \frac{nRx}{S^{1/2}} \right)^{3/5}$$
This is the analytical solution against which to test the model.
Pick the initial and run conditions
```
# Process parameters
n = 0.01 # roughness coefficient, (s/m^(1/3))
dep_exp = 5.0 / 3.0 # depth exponent
S = 0.01 # slope of plane
R = 72.0 # runoff rate, mm/hr
# Run-control parameters
run_time = 240.0 # duration of run, (s)
nrows = 5 # number of node rows
ncols = 11 # number of node columns
dx = 2.0 # node spacing, m
dt = 10.0 # time-step size, s
plot_every = 60.0 # plot interval, s
# Derived parameters
num_steps = int(run_time / dt)
```
Create grid and fields:
```
# create and set up grid
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
grid.set_closed_boundaries_at_grid_edges(False, True, True, True) # open only on east
# add required field
elev = grid.add_zeros('topographic__elevation', at='node')
# set topography
elev[grid.core_nodes] = S * (np.amax(grid.x_of_node) - grid.x_of_node[grid.core_nodes])
```
Plot topography, first in plan view...
```
imshow_grid(grid, elev)
```
...then as a cross-section:
```
plt.plot(grid.x_of_node, elev)
plt.xlabel('Distance (m)')
plt.ylabel('Height (m)')
plt.grid(True)
# Instantiate the component
olflow = KinwaveImplicitOverlandFlow(grid,
runoff_rate=R,
roughness=n,
depth_exp=dep_exp
)
# Helpful function to plot the profile
def plot_flow_profile(grid, olflow):
"""Plot the middle row of topography and water surface
for the overland flow model olflow."""
nc = grid.number_of_node_columns
nr = grid.number_of_node_rows
startnode = nc * (nr // 2) + 1
midrow = np.arange(startnode, startnode + nc - 1, dtype=int)
topo = grid.at_node['topographic__elevation']
plt.plot(grid.x_of_node[midrow],
topo[midrow] + grid.at_node['surface_water__depth'][midrow],
'b'
)
plt.plot(grid.x_of_node[midrow],
topo[midrow],
'k'
)
plt.xlabel('Distance (m)')
plt.ylabel('Ground and water surface height (m)')
```
Run the component forward in time, plotting the output in the form of a profile:
```
next_plot = plot_every
for i in range(num_steps):
olflow.run_one_step(dt)
if (i + 1) * dt >= next_plot:
plot_flow_profile(grid, olflow)
next_plot += plot_every
# Compare with analytical solution for depth
Rms = R / 3.6e6 # convert to m/s
nc = grid.number_of_node_columns
x = grid.x_of_node[grid.core_nodes][:nc - 2]
Hpred = (n * Rms * x / (S ** 0.5)) ** 0.6
plt.plot(x, Hpred, 'r', label='Analytical')
plt.plot(x,
grid.at_node['surface_water__depth'][grid.core_nodes][:nc - 2],
'b--',
label='Numerical'
)
plt.xlabel('Distance (m)')
plt.ylabel('Water depth (m)')
plt.grid(True)
plt.legend()
```
## Example 2: overland flow on a DEM
For this example, we'll import a small digital elevation model (DEM) for a site in New Mexico, USA.
```
# Process parameters
n = 0.1 # roughness coefficient, (s/m^(1/3))
dep_exp = 5.0 / 3.0 # depth exponent
R = 72.0 # runoff rate, mm/hr
# Run-control parameters
rain_duration = 240.0 # duration of rainfall, s
run_time = 480.0 # duration of run, s
dt = 10.0 # time-step size, s
dem_filename = '../hugo_site_filled.asc'
# Derived parameters
num_steps = int(run_time / dt)
# set up arrays to hold discharge and time
time_since_storm_start = np.arange(0.0, dt * (2 * num_steps + 1), dt)
discharge = np.zeros(2 * num_steps + 1)
# Read the DEM file as a grid with a 'topographic__elevation' field
(grid, elev) = read_esri_ascii(dem_filename, name='topographic__elevation')
# Configure the boundaries: valid right-edge nodes will be open;
# all NODATA (= -9999) nodes will be closed.
grid.status_at_node[grid.nodes_at_right_edge] = grid.BC_NODE_IS_FIXED_VALUE
grid.status_at_node[np.isclose(elev, -9999.)] = grid.BC_NODE_IS_CLOSED
# display the topography
cmap = copy.copy(mpl.cm.get_cmap('pink'))
imshow_grid(grid, elev, colorbar_label='Elevation (m)', cmap=cmap)
```
It would be nice to track discharge at the watershed outlet, but how do we find the outlet location? There are actually several valid nodes along the right-hand edge. We can identify them by the fact that they are (a) on the right-hand edge of the grid, and (b) have positive elevations (nodes with -9999 lie outside the watershed). We'll then keep track of the field `surface_water_inflow__discharge` at these nodes.
```
indices = np.where(elev[grid.nodes_at_right_edge] > 0.0)[0]
outlet_nodes = grid.nodes_at_right_edge[indices]
print('Outlet nodes:')
print(outlet_nodes)
print('Elevations of the outlet nodes:')
print(elev[outlet_nodes])
# Instantiate the component
olflow = KinwaveImplicitOverlandFlow(grid,
runoff_rate=R,
roughness=n,
depth_exp=dep_exp
)
discharge_field = grid.at_node['surface_water_inflow__discharge']
for i in range(num_steps):
olflow.run_one_step(dt)
discharge[i+1] = np.sum(discharge_field[outlet_nodes])
plt.plot(time_since_storm_start[:num_steps], discharge[:num_steps])
plt.xlabel('Time (s)')
plt.ylabel('Discharge (cms)')
plt.grid(True)
cmap = copy.copy(mpl.cm.get_cmap('Blues'))
imshow_grid(grid,
grid.at_node['surface_water__depth'],
cmap=cmap,
colorbar_label='Water depth (m)'
)
```
Now turn down the rain and run it a bit longer...
```
olflow.runoff_rate = 1.0 # just 1 mm/hr
for i in range(num_steps, 2 * num_steps):
olflow.run_one_step(dt)
discharge[i+1] = np.sum(discharge_field[outlet_nodes])
plt.plot(time_since_storm_start, discharge)
plt.xlabel('Time (s)')
plt.ylabel('Discharge (cms)')
plt.grid(True)
cmap = copy.copy(mpl.cm.get_cmap('Blues'))
imshow_grid(grid,
grid.at_node['surface_water__depth'],
cmap=cmap,
colorbar_label='Water depth (m)'
)
```
Voila! A fine hydrograph, and a water-depth map that shows deeper water in the channels (and highlights depressions in the topography).
### Click here for more <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">Landlab tutorials</a>
# SageMaker Debugger Profiling Report
SageMaker Debugger auto-generated this report. You can generate similar reports for all supported training jobs. The report provides a summary of the training job, system resource usage statistics, framework metrics, a rules summary, and a detailed analysis from each rule. The graphs and tables are interactive.
**Legal disclaimer:** This report and any recommendations are provided for informational purposes only and are not definitive. You are responsible for making your own independent assessment of the information.
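For context (this cell is not part of the auto-generated report itself), a report like this one is produced by attaching the built-in `ProfilerReport` rule to a training job. The sketch below is a minimal, illustrative example using the SageMaker Python SDK; the entry point, instance type, and framework/Python version strings are placeholders, not values taken from this report.
```
import sagemaker
from sagemaker.debugger import ProfilerRule, rule_configs
from sagemaker.pytorch import PyTorch

# Attach the built-in ProfilerReport rule; Debugger then emits a report like this one.
rules = [ProfilerRule.sagemaker(rule_configs.ProfilerReport())]

estimator = PyTorch(
    entry_point="train.py",          # placeholder training script
    role=sagemaker.get_execution_role(),
    instance_count=1,
    instance_type="ml.p3.2xlarge",   # placeholder instance type
    framework_version="1.8",         # placeholder framework/Python versions
    py_version="py36",
    rules=rules,
)
estimator.fit()                       # add data channels as needed
```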
```
import json
import pandas as pd
import glob
import matplotlib.pyplot as plt
import numpy as np
import datetime
from smdebug.profiler.utils import us_since_epoch_to_human_readable_time, ns_since_epoch_to_human_readable_time
from smdebug.core.utils import setup_profiler_report
import bokeh
from bokeh.io import output_notebook, show
from bokeh.layouts import column, row
from bokeh.plotting import figure
from bokeh.models.widgets import DataTable, DateFormatter, TableColumn
from bokeh.models import ColumnDataSource, PreText
from math import pi
from bokeh.transform import cumsum
import warnings
from bokeh.models.widgets import Paragraph
from bokeh.models import Legend
from bokeh.util.warnings import BokehDeprecationWarning, BokehUserWarning
warnings.simplefilter('ignore', BokehDeprecationWarning)
warnings.simplefilter('ignore', BokehUserWarning)
output_notebook(hide_banner=True)
processing_job_arn = ""
# Parameters
processing_job_arn = "arn:aws:sagemaker:us-east-1:264082167679:processing-job/pytorch-training-2022-01-2-profilerreport-73c47060"
setup_profiler_report(processing_job_arn)
def create_piechart(data_dict, title=None, height=400, width=400, x1=0, x2=0.1, radius=0.4, toolbar_location='right'):
plot = figure(plot_height=height,
plot_width=width,
toolbar_location=toolbar_location,
tools="hover,wheel_zoom,reset,pan",
tooltips="@phase:@value",
title=title,
x_range=(-radius-x1, radius+x2))
data = pd.Series(data_dict).reset_index(name='value').rename(columns={'index':'phase'})
data['angle'] = data['value']/data['value'].sum() * 2*pi
data['color'] = bokeh.palettes.viridis(len(data_dict))
plot.wedge(x=0, y=0., radius=radius,
start_angle=cumsum('angle', include_zero=True),
end_angle=cumsum('angle'),
line_color="white",
source=data,
fill_color='color',
legend='phase'
)
plot.legend.label_text_font_size = "8pt"
plot.legend.location = 'center_right'
plot.axis.axis_label=None
plot.axis.visible=False
plot.grid.grid_line_color = None
plot.outline_line_color = "white"
return plot
from IPython.display import display, HTML, Markdown, Image
def pretty_print(df):
raw_html = df.to_html().replace("\\n","<br>").replace('<tr>','<tr style="text-align: left;">')
return display(HTML(raw_html))
```
## Training job summary
```
def load_report(rule_name):
try:
report = json.load(open('/opt/ml/processing/output/rule/profiler-output/profiler-reports/'+rule_name+'.json'))
return report
except FileNotFoundError:
print (rule_name + ' not triggered')
job_statistics = {}
report = load_report('MaxInitializationTime')
if report:
if "first" in report['Details']["step_num"] and "last" in report['Details']["step_num"]:
first_step = report['Details']["step_num"]["first"]
last_step = report['Details']["step_num"]["last"]
tmp = us_since_epoch_to_human_readable_time(report['Details']['job_start'] * 1000000)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["Start time"] = f"{hour} {day}"
tmp = us_since_epoch_to_human_readable_time(report['Details']['job_end'] * 1000000)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["End time"] = f"{hour} {day}"
job_duration_in_seconds = int(report['Details']['job_end'] - report['Details']['job_start'])
job_statistics["Job duration"] = f"{job_duration_in_seconds} seconds"
if "first" in report['Details']["step_num"] and "last" in report['Details']["step_num"]:
tmp = us_since_epoch_to_human_readable_time(first_step)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["Training loop start"] = f"{hour} {day}"
tmp = us_since_epoch_to_human_readable_time(last_step)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["Training loop end"] = f"{hour} {day}"
training_loop_duration_in_seconds = int((last_step - first_step) / 1000000)
job_statistics["Training loop duration"] = f"{training_loop_duration_in_seconds} seconds"
initialization_in_seconds = int(first_step/1000000 - report['Details']['job_start'])
job_statistics["Initialization time"] = f"{initialization_in_seconds} seconds"
finalization_in_seconds = int(np.abs(report['Details']['job_end'] - last_step/1000000))
job_statistics["Finalization time"] = f"{finalization_in_seconds} seconds"
initialization_perc = int(initialization_in_seconds / job_duration_in_seconds * 100)
job_statistics["Initialization"] = f"{initialization_perc} %"
training_loop_perc = int(training_loop_duration_in_seconds / job_duration_in_seconds * 100)
job_statistics["Training loop"] = f"{training_loop_perc} %"
finalization_perc = int(finalization_in_seconds / job_duration_in_seconds * 100)
job_statistics["Finalization"] = f"{finalization_perc} %"
if report:
text = """The following table gives a summary about the training job. The table includes information about when the training job started and ended, how much time initialization, training loop and finalization took."""
if len(job_statistics) > 0:
df = pd.DataFrame.from_dict(job_statistics, orient='index')
start_time = us_since_epoch_to_human_readable_time(report['Details']['job_start'] * 1000000)
date = datetime.datetime.strptime(start_time, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
duration = job_duration_in_seconds
text = f"""{text} \n Your training job started on {day} at {hour} and ran for {duration} seconds."""
#pretty_print(df)
if "first" in report['Details']["step_num"] and "last" in report['Details']["step_num"]:
if finalization_perc < 0:
job_statistics["Finalization%"] = 0
if training_loop_perc < 0:
job_statistics["Training loop"] = 0
if initialization_perc < 0:
job_statistics["Initialization"] = 0
else:
text = f"""{text} \n Your training job started on {day} at {hour} and ran for {duration} seconds."""
if len(job_statistics) > 0:
df2 = df.reset_index()
df2.columns = ["0", "1"]
source = ColumnDataSource(data=df2)
columns = [TableColumn(field='0', title=""),
TableColumn(field='1', title="Job Statistics"),]
table = DataTable(source=source, columns=columns, width=450, height=380)
plot = None
if "Initialization" in job_statistics:
piechart_data = {}
piechart_data["Initialization"] = initialization_perc
piechart_data["Training loop"] = training_loop_perc
piechart_data["Finalization"] = finalization_perc
plot = create_piechart(piechart_data,
height=350,
width=500,
x1=0.15,
x2=0.15,
radius=0.15,
toolbar_location=None)
if plot != None:
paragraph = Paragraph(text=f"""{text}""", width = 800)
show(column(paragraph, row(table, plot)))
else:
paragraph = Paragraph(text=f"""{text}. No step information was profiled from your training job. The time spent on initialization and finalization cannot be computed.""" , width = 800)
show(column(paragraph, row(table)))
```
## System usage statistics
```
report = load_report('OverallSystemUsage')
text1 = ''
if report:
if "GPU" in report["Details"]:
for node_id in report["Details"]["GPU"]:
gpu_p95 = report["Details"]["GPU"][node_id]["p95"]
gpu_p50 = report["Details"]["GPU"][node_id]["p50"]
cpu_p95 = report["Details"]["CPU"][node_id]["p95"]
cpu_p50 = report["Details"]["CPU"][node_id]["p50"]
if gpu_p95 < 70 and cpu_p95 < 70:
text1 = f"""{text1}The 95th percentile of the total GPU utilization on node {node_id} is only {int(gpu_p95)}%.
The 95th percentile of the total CPU utilization is only {int(cpu_p95)}%. Node {node_id} is underutilized.
You may want to consider switching to a smaller instance type."""
elif gpu_p95 < 70 and cpu_p95 > 70:
text1 = f"""{text1}The 95th percentile of the total GPU utilization on node {node_id} is only {int(gpu_p95)}%.
However, the 95th percentile of the total CPU utilization is {int(cpu_p95)}%. GPUs on node {node_id} are underutilized,
likely because of CPU bottlenecks."""
elif gpu_p50 > 70:
text1 = f"""{text1}The median total GPU utilization on node {node_id} is {int(gpu_p50)}%.
GPUs on node {node_id} are well utilized."""
else:
text1 = f"""{text1}The median total GPU utilization on node {node_id} is {int(gpu_p50)}%.
The median total CPU utilization is {int(cpu_p50)}%."""
else:
for node_id in report["Details"]["CPU"]:
cpu_p95 = report["Details"]["CPU"][node_id]["p95"]
if cpu_p95 > 70:
text1 = f"""{text1}The 95th percentile of the total CPU utilization on node {node_id} is {int**(cpu_p95)}%. CPUs on node {node_id} are well utilized."""
text1 = Paragraph(text=f"""{text1}""", width=1100)
text2 = Paragraph(text=f"""The following table shows statistics of resource utilization per worker (node),
such as the total CPU and GPU utilization, and the memory utilization on CPU and GPU.
The table also includes the total I/O wait time and the total amount of data sent or received in bytes.
The table shows min and max values as well as p99, p90 and p50 percentiles.""", width=900)
pd.set_option('display.float_format', lambda x: '%.2f' % x)
rows = []
units = {"CPU": "percentage", "CPU memory": "percentage", "GPU": "percentage", "Network": "bytes", "GPU memory": "percentage", "I/O": "percentage"}
if report:
for metric in report['Details']:
for node_id in report['Details'][metric]:
values = report['Details'][metric][node_id]
rows.append([node_id, metric, units[metric], values['max'], values['p99'], values['p95'], values['p50'], values['min']])
df = pd.DataFrame(rows)
df.columns = ['Node', 'metric', 'unit', 'max', 'p99', 'p95', 'p50', 'min']
df2 = df.reset_index()
source = ColumnDataSource(data=df2)
columns = [TableColumn(field='Node', title="node"),
TableColumn(field='metric', title="metric"),
TableColumn(field='unit', title="unit"),
TableColumn(field='max', title="max"),
TableColumn(field='p99', title="p99"),
TableColumn(field='p95', title="p95"),
TableColumn(field='p50', title="p50"),
TableColumn(field='min', title="min"),]
table = DataTable(source=source, columns=columns, width=800, height=df2.shape[0]*30)
show(column( text1, text2, row(table)))
report = load_report('OverallFrameworkMetrics')
if report:
if 'Details' in report:
display(Markdown(f"""## Framework metrics summary"""))
plots = []
text = ''
if 'phase' in report['Details']:
text = f"""The following two pie charts show the time spent on the TRAIN phase, the EVAL phase,
and others. The 'others' includes the time spent between steps (after one step has finished and before
the next step has started). Ideally, most of the training time should be spent on the
TRAIN and EVAL phases. If TRAIN/EVAL were not specified in the training script, steps will be recorded as
GLOBAL."""
if 'others' in report['Details']['phase']:
others = float(report['Details']['phase']['others'])
if others > 25:
text = f"""{text} Your training job spent quite a significant amount of time ({round(others,2)}%) in phase "others".
You should check what is happening in between the steps."""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on the TRAIN/EVAL phase and others")
plots.append(plot)
if 'forward_backward' in report['Details']:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie chart on the right shows a more detailed breakdown.
It shows that {int(perc)}% of the time was spent in event "{event}"."""
if perc > 70:
text = f"""There is quite a significant difference between the time spent on forward and backward
pass."""
else:
text = f"""{text} It shows that {int(perc)}% of the training time
was spent on "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=1100)
show(column(paragraph, row(plots)))
plots = []
text=''
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following piechart shows a breakdown of the CPU/GPU operators.
It shows that {int(ratio)}% of training time was spent on executing the "{key}" operator."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details']:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General framework operations")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=1100)
show(column(paragraph, row(plots)))
plots = []
text = ''
if 'horovod' in report['Details']:
display(Markdown(f"""#### Overview: Horovod metrics"""))
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""{text} The following pie chart shows a detailed breakdown of the Horovod metrics profiled
from your training job. The most expensive function was "{event}" with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Horovod metrics ")
paragraph = Paragraph(text=text, width=1100)
show(column(paragraph, row(plot)))
pd.set_option('display.float_format', lambda x: '%.2f' % x)
rows = []
values = []
if report:
if 'CPU_total' in report['Details']:
display(Markdown(f"""#### Overview: CPU operators"""))
event = max(report['Details']['CPU'], key=report['Details']['CPU'].get)
perc = report['Details']['CPU'][event]
for function in report['Details']['CPU']:
percentage = round(report['Details']['CPU'][function],2)
time = report['Details']['CPU_total'][function]
rows.append([percentage, time, function])
df = pd.DataFrame(rows)
df.columns = ['percentage', 'time', 'operator']
df = df.sort_values(by=['percentage'], ascending=False)
source = ColumnDataSource(data=df)
columns = [TableColumn(field='percentage', title="Percentage"),
TableColumn(field='time', title="Cumulative time in microseconds"),
TableColumn(field='operator', title="CPU operator"),]
table = DataTable(source=source, columns=columns, width=550, height=350)
text = Paragraph(text=f"""The following table shows a list of operators that ran on the CPUs.
The most expensive operator on the CPUs was "{event}" with {int(perc)} %.""")
plot = create_piechart(report['Details']['CPU'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
)
show(column(text, row(table, plot)))
pd.set_option('display.float_format', lambda x: '%.2f' % x)
rows = []
values = []
if report:
if 'GPU_total' in report['Details']:
display(Markdown(f"""#### Overview: GPU operators"""))
event = max(report['Details']['GPU'], key=report['Details']['GPU'].get)
perc = report['Details']['GPU'][event]
for function in report['Details']['GPU']:
percentage = round(report['Details']['GPU'][function],2)
time = report['Details']['GPU_total'][function]
rows.append([percentage, time, function])
df = pd.DataFrame(rows)
df.columns = ['percentage', 'time', 'operator']
df = df.sort_values(by=['percentage'], ascending=False)
source = ColumnDataSource(data=df)
columns = [TableColumn(field='percentage', title="Percentage"),
TableColumn(field='time', title="Cumulative time in microseconds"),
TableColumn(field='operator', title="GPU operator"),]
table = DataTable(source=source, columns=columns, width=450, height=350)
text = Paragraph(text=f"""The following table shows a list of operators that your training job ran on GPU.
The most expensive operator on GPU was "{event}" with {int(perc)}%.""")
plot = create_piechart(report['Details']['GPU'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
)
show(column(text, row(table, plot)))
```
## Rules summary
```
description = {}
description['CPUBottleneck'] = 'Checks if the CPU utilization is high and the GPU utilization is low. \
It might indicate CPU bottlenecks, where the GPUs are waiting for data to arrive \
from the CPUs. The rule evaluates the CPU and GPU utilization rates, and triggers the issue \
if the time spent on the CPU bottlenecks exceeds a threshold percent of the total training time. The default threshold is 50 percent.'
description['IOBottleneck'] = 'Checks if the data I/O wait time is high and the GPU utilization is low. \
It might indicate IO bottlenecks where GPU is waiting for data to arrive from storage. \
The rule evaluates the I/O and GPU utilization rates and triggers the issue \
if the time spent on the IO bottlenecks exceeds a threshold percent of the total training time. The default threshold is 50 percent.'
description['Dataloader'] = 'Checks how many data loaders are running in parallel and whether the total number is equal to the number \
of available CPU cores. The rule triggers if the number is much smaller or larger than the number of available cores. \
If too small, it might lead to low GPU utilization. If too large, it might impact other compute intensive operations on CPU.'
description['GPUMemoryIncrease'] = 'Measures the average GPU memory footprint and triggers if there is a large increase.'
description['BatchSize'] = 'Checks if GPUs are underutilized because the batch size is too small. \
To detect this problem, the rule analyzes the average GPU memory footprint, \
the CPU and the GPU utilization. '
description['LowGPUUtilization'] = 'Checks if the GPU utilization is low or fluctuating. \
This can happen due to bottlenecks, blocking calls for synchronizations, \
or a small batch size.'
description['MaxInitializationTime'] = 'Checks if the time spent on initialization exceeds a threshold percent of the total training time. \
The rule waits until the first step of training loop starts. The initialization can take longer \
if downloading the entire dataset from Amazon S3 in File mode. The default threshold is 20 minutes.'
description['LoadBalancing'] = 'Detects workload balancing issues across GPUs. \
Workload imbalance can occur in training jobs with data parallelism. \
The gradients are accumulated on a primary GPU, and this GPU might be overused \
relative to other GPUs, reducing the efficiency of data parallelization.'
description['StepOutlier'] = 'Detects outliers in step duration. The step duration for forward and backward pass should be \
roughly the same throughout the training. If there are significant outliers, \
it may indicate a system stall or bottleneck issues.'
recommendation = {}
recommendation['CPUBottleneck'] = 'Consider increasing the number of data loaders \
or applying data pre-fetching.'
recommendation['IOBottleneck'] = 'Pre-fetch data or choose different file formats, such as binary formats that \
improve I/O performance.'
recommendation['Dataloader'] = 'Change the number of data loader processes.'
recommendation['GPUMemoryIncrease'] = 'Choose a larger instance type with more memory if footprint is close to maximum available memory.'
recommendation['BatchSize'] = 'The batch size is too small, and GPUs are underutilized. Consider running on a smaller instance type or increasing the batch size.'
recommendation['LowGPUUtilization'] = 'Check if there are bottlenecks, minimize blocking calls, \
change distributed training strategy, or increase the batch size.'
recommendation['MaxInitializationTime'] = 'Initialization takes too long. \
If using File mode, consider switching to Pipe mode in case you are using TensorFlow framework.'
recommendation['LoadBalancing'] = 'Choose a different distributed training strategy or \
a different distributed training framework.'
recommendation['StepOutlier'] = 'Check if there are any bottlenecks (CPU, I/O) correlated to the step outliers.'
files = glob.glob('/opt/ml/processing/output/rule/profiler-output/profiler-reports/*json')
summary = {}
for i in files:
rule_name = i.split('/')[-1].replace('.json','')
if rule_name == "OverallSystemUsage" or rule_name == "OverallFrameworkMetrics":
continue
rule_report = json.load(open(i))
summary[rule_name] = {}
summary[rule_name]['Description'] = description[rule_name]
summary[rule_name]['Recommendation'] = recommendation[rule_name]
summary[rule_name]['Number of times rule triggered'] = rule_report['RuleTriggered']
#summary[rule_name]['Number of violations'] = rule_report['Violations']
summary[rule_name]['Number of datapoints'] = rule_report['Datapoints']
summary[rule_name]['Rule parameters'] = rule_report['RuleParameters']
df = pd.DataFrame.from_dict(summary, orient='index')
df = df.sort_values(by=['Number of times rule triggered'], ascending=False)
display(Markdown(f"""The following table shows a profiling summary of the Debugger built-in rules.
The table is sorted by the rules that triggered the most frequently. During your training job, the {df.index[0]} rule
was the most frequently triggered. It processed {df.values[0,3]} datapoints and was triggered {df.values[0,2]} times."""))
with pd.option_context('display.colheader_justify','left'):
pretty_print(df)
analyse_phase = "training"
if job_statistics and "initialization_in_seconds" in job_statistics:
if job_statistics["initialization_in_seconds"] > job_statistics["training_loop_duration_in_seconds"]:
analyse_phase = "initialization"
time = job_statistics["initialization_in_seconds"]
perc = job_statistics["initialization_%"]
display(Markdown(f"""The initialization phase took {int(time)} seconds, which is {int(perc)}%*
of the total training time. Since the training loop has taken the most time,
we dive deep into the events occurring during this phase"""))
display(Markdown("""## Analyzing initialization\n\n"""))
time = job_statistics["training_loop_duration_in_seconds"]
perc = job_statistics["training_loop_%"]
display(Markdown(f"""The training loop lasted for {int(time)} seconds which is {int(perc)}% of the training job time.
Since the training loop has taken the most time, we dive deep into the events that occurred during this phase."""))
if analyse_phase == 'training':
display(Markdown("""## Analyzing the training loop\n\n"""))
if analyse_phase == "initialization":
display(Markdown("""### MaxInitializationTime\n\nThis rule helps to detect if the training initialization is taking too much time. \nThe rule waits until first step is available. The rule takes the parameter `threshold` that defines how many minutes to wait for the first step to become available. Default is 20 minutes.\nYou can run the rule locally in the following way:
"""))
_ = load_report("MaxInitializationTime")
if analyse_phase == "training":
display(Markdown("""### Step duration analysis"""))
report = load_report('StepOutlier')
if report:
parameters = report['RuleParameters']
params = report['RuleParameters'].split('\n')
stddev = params[3].split(':')[1]
mode = params[1].split(':')[1]
n_outlier = params[2].split(':')[1]
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text = f"""The StepOutlier rule measures step durations and checks for outliers. The rule
returns True if duration is larger than {stddev} times the standard deviation. The rule
also takes the parameter mode, that specifies whether steps from training or validation phase
should be checked. In your processing job mode was specified as {mode}.
Typically the first step takes significantly more time, and to avoid the
rule triggering immediately, one can use n_outliers to specify the number of outliers to ignore.
n_outliers was set to {n_outlier}.
The rule analysed {datapoints} datapoints and triggered {triggered} times.
"""
paragraph = Paragraph(text=text, width=900)
show(column(paragraph))
if report and len(report['Details']['step_details']) > 0:
for node_id in report['Details']['step_details']:
tmp = report['RuleParameters'].split('threshold:')
threshold = tmp[1].split('\n')[0]
n_outliers = report['Details']['step_details'][node_id]['number_of_outliers']
mean = report['Details']['step_details'][node_id]['step_stats']['mean']
stddev = report['Details']['step_details'][node_id]['stddev']
phase = report['Details']['step_details'][node_id]['phase']
display(Markdown(f"""**Step durations on node {node_id}:**"""))
display(Markdown(f"""The following table is a summary of the statistics of step durations measured on node {node_id}.
The rule has analyzed the step duration from {phase} phase.
The average step duration on node {node_id} was {round(mean, 2)}s.
The rule detected {n_outliers} outliers, where step duration was larger than {threshold} times the standard deviation of {stddev}s
\n"""))
step_stats_df = pd.DataFrame.from_dict(report['Details']['step_details'][node_id]['step_stats'], orient='index').T
step_stats_df.index = ['Step Durations in [s]']
pretty_print(step_stats_df)
display(Markdown(f"""The following histogram shows the step durations measured on the different nodes.
You can turn on or turn off the visualization of histograms by selecting or unselecting the labels in the legend."""))
plot = figure(plot_height=450,
plot_width=850,
title=f"""Step durations""")
colors = bokeh.palettes.viridis(len(report['Details']['step_details']))
for index, node_id in enumerate(report['Details']['step_details']):
probs = report['Details']['step_details'][node_id]['probs']
binedges = report['Details']['step_details'][node_id]['binedges']
plot.quad( top=probs,
bottom=0,
left=binedges[:-1],
right=binedges[1:],
line_color="white",
fill_color=colors[index],
fill_alpha=0.7,
legend=node_id)
plot.add_layout(Legend(), 'right')
plot.y_range.start = 0
plot.xaxis.axis_label = f"""Step durations in [s]"""
plot.yaxis.axis_label = "Occurrences"
plot.grid.grid_line_color = "white"
plot.legend.click_policy="hide"
plot.legend.location = 'center_right'
show(plot)
if report['RuleTriggered'] > 0:
text=f"""To get a better understanding of what may have caused those outliers,
we correlate the timestamps of step outliers with other framework metrics that happened at the same time.
The left chart shows how much time was spent in the different framework
metrics aggregated by event phase. The chart on the right shows the histogram of normal step durations (without
outliers). The following chart shows how much time was spent in the different
framework metrics when step outliers occurred. In this chart, framework metrics are not aggregated by phase."""
plots = []
if 'phase' in report['Details']:
text = f"""{text} The chart (in the middle) shows whether step outliers mainly happened during TRAIN or EVAL phase.
"""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on the TRAIN/EVAL phase")
plots.append(plot)
if 'forward_backward' in report['Details'] and len(report['Details']['forward_backward']) > 0:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie chart on the right shows a detailed breakdown.
It shows that {int(perc)}% of the training time was spent on event "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The Ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following pie chart shows a breakdown of the CPU/GPU operators executed during the step outliers.
It shows that {int(ratio)}% of the training time was spent on executing operators in "{key}"."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details'] and len(report['Details']['general']) > 0:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'horovod' in report['Details'] and len(report['Details']['horovod']) > 0:
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""The following pie chart shows a detailed breakdown of the Horovod metrics that have been
recorded when step outliers happened. The most expensive function was "{event}" with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plot)))
if analyse_phase == "training":
display(Markdown("""### GPU utilization analysis\n\n"""))
display(Markdown("""**Usage per GPU** \n\n"""))
report = load_report('LowGPUUtilization')
if report:
params = report['RuleParameters'].split('\n')
threshold_p95 = params[0].split(':')[1]
threshold_p5 = params[1].split(':')[1]
window = params[2].split(':')[1]
patience = params[3].split(':')[1]
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text=Paragraph(text=f"""The LowGPUUtilization rule checks for a low and fluctuating GPU usage. If the GPU usage is
consistently low, it might be caused by bottlenecks or a small batch size. If usage is heavily
fluctuating, it can be due to bottlenecks or blocking calls. The rule computed the 95th and 5th
percentile of GPU utilization on {window} continuous datapoints and found {violations} cases where
p95 was above {threshold_p95}% and p5 was below {threshold_p5}%. If p95 is high and p5 is low,
it might indicate that the GPU usage is highly fluctuating. If both values are very low,
it would mean that the machine is underutilized. During initialization, the GPU usage is likely zero,
so the rule skipped the first {patience} data points.
The rule analysed {datapoints} datapoints and triggered {triggered} times.""", width=800)
show(text)
if len(report['Details']) > 0:
timestamp = us_since_epoch_to_human_readable_time(report['Details']['last_timestamp'])
date = datetime.datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
text = Paragraph(text=f"""Your training job is underutilizing the instance. You may want to consider
either switching to a smaller instance type or increasing the batch size.
The last time that the LowGPUUtilization rule was triggered in your training job was on {day} at {hour}.
The following boxplots are a snapshot from the timestamps.
They show the utilization per GPU (without outliers).
To get a better understanding of the workloads throughout the whole training,
you can check the workload histogram in the next section.""", width=800)
show(text)
del report['Details']['last_timestamp']
for node_id in report['Details']:
plot = figure(plot_height=350,
plot_width=1000,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
title=f"Node {node_id}",
x_range=(0,17),
)
for index, key in enumerate(report['Details'][node_id]):
display(Markdown(f"""**GPU utilization of {key} on node {node_id}:**"""))
text = ""
gpu_max = report['Details'][node_id][key]['gpu_max']
p_95 = report['Details'][node_id][key]['gpu_95']
p_5 = report['Details'][node_id][key]['gpu_5']
text = f"""{text} The max utilization of {key} on node {node_id} was {gpu_max}%"""
if p_95 < int(threshold_p95):
text = f"""{text} and the 95th percentile was only {p_95}%.
{key} on node {node_id} is underutilized"""
if p_5 < int(threshold_p5):
text = f"""{text} and the 5th percentile was only {p_5}%"""
if p_95 - p_5 > 50:
text = f"""{text} The difference between 5th percentile {p_5}% and 95th percentile {p_95}% is quite
significant, which means that utilization on {key} is fluctuating quite a lot.\n"""
upper = report['Details'][node_id][key]['upper']
lower = report['Details'][node_id][key]['lower']
p75 = report['Details'][node_id][key]['p75']
p25 = report['Details'][node_id][key]['p25']
p50 = report['Details'][node_id][key]['p50']
plot.segment(index+1, upper, index+1, p75, line_color="black")
plot.segment(index+1, lower, index+1, p25, line_color="black")
plot.vbar(index+1, 0.7, p50, p75, fill_color="#FDE725", line_color="black")
plot.vbar(index+1, 0.7, p25, p50, fill_color="#440154", line_color="black")
plot.rect(index+1, lower, 0.2, 0.01, line_color="black")
plot.rect(index+1, upper, 0.2, 0.01, line_color="black")
plot.xaxis.major_label_overrides[index+1] = key
plot.xgrid.grid_line_color = None
plot.ygrid.grid_line_color = "white"
plot.grid.grid_line_width = 0
plot.xaxis.major_label_text_font_size="10px"
text=Paragraph(text=f"""{text}""", width=900)
show(text)
plot.yaxis.axis_label = "Utilization in %"
plot.xaxis.ticker = np.arange(index+2)
show(plot)
if analyse_phase == "training":
display(Markdown("""**Workload balancing**\n\n"""))
report = load_report('LoadBalancing')
if report:
params = report['RuleParameters'].split('\n')
threshold = params[0].split(':')[1]
patience = params[1].split(':')[1]
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
paragraph = Paragraph(text=f"""The LoadBalancing rule helps to detect issues in workload balancing
between multiple GPUs.
It computes a histogram of GPU utilization values for each GPU and then compares the
similarity between histograms. The rule checked if the distance of histograms is larger than the
threshold of {threshold}.
During initialization utilization is likely zero, so the rule skipped the first {patience} data points.
""", width=900)
show(paragraph)
if len(report['Details']) > 0:
for node_id in report['Details']:
text = f"""The following histogram shows the workload per GPU on node {node_id}.
You can enable/disable the visualization of a workload by clicking on the label in the legend.
"""
if len(report['Details']) == 1 and len(report['Details'][node_id]['workloads']) == 1:
text = f"""{text} Your training job only used one GPU so there is no workload balancing issue."""
plot = figure(plot_height=450,
plot_width=850,
x_range=(-1,100),
title=f"""Workloads on node {node_id}""")
colors = bokeh.palettes.viridis(len(report['Details'][node_id]['workloads']))
for index, gpu_id2 in enumerate(report['Details'][node_id]['workloads']):
probs = report['Details'][node_id]['workloads'][gpu_id2]
plot.quad( top=probs,
bottom=0,
left=np.arange(0,98,2),
right=np.arange(2,100,2),
line_color="white",
fill_color=colors[index],
fill_alpha=0.8,
legend=gpu_id2 )
plot.y_range.start = 0
plot.xaxis.axis_label = f"""Utilization"""
plot.yaxis.axis_label = "Occurrences"
plot.grid.grid_line_color = "white"
plot.legend.click_policy="hide"
paragraph = Paragraph(text=text)
show(column(paragraph, plot))
if "distances" in report['Details'][node_id]:
text = f"""The rule identified workload balancing issues on node {node_id}
where workloads differed by more than threshold {threshold}.
"""
for index, gpu_id2 in enumerate(report['Details'][node_id]['distances']):
for gpu_id1 in report['Details'][node_id]['distances'][gpu_id2]:
distance = round(report['Details'][node_id]['distances'][gpu_id2][gpu_id1], 2)
text = f"""{text} The difference of workload between {gpu_id2} and {gpu_id1} is: {distance}."""
paragraph = Paragraph(text=f"""{text}""", width=900)
show(column(paragraph))
if analyse_phase == "training":
display(Markdown("""### Dataloading analysis\n\n"""))
report = load_report('Dataloader')
if report:
params = report['RuleParameters'].split("\n")
min_threshold = params[0].split(':')[1]
max_threshold = params[1].split(':')[1]
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text=f"""The number of dataloader workers can greatly affect the overall performance
of your training job. The rule analyzed the number of dataloading processes that have been running in
parallel on the training instance and compared it against the total number of cores.
The rule checked if the number of processes is smaller than {min_threshold}% or larger than
{max_threshold}% of the total number of cores. Having too few dataloader workers can slow down data preprocessing and lead to GPU
underutilization. Having too many dataloader workers may hurt the
overall performance if you are running other compute intensive tasks on the CPU.
The rule analysed {datapoints} datapoints and triggered {triggered} times."""
paragraph = Paragraph(text=f"{text}", width=900)
show(paragraph)
text = ""
if 'cores' in report['Details']:
cores = int(report['Details']['cores'])
dataloaders = report['Details']['dataloaders']
if dataloaders < cores:
text=f"""{text} Your training instance provided {cores} CPU cores, however your training job only
ran on average {dataloaders} dataloader workers in parallel. We recommend increasing the number of
dataloader workers."""
if dataloaders > cores:
text=f"""{text} Your training instance provided {cores} CPU cores, however your training job ran
on average {dataloaders} dataloader workers. We recommend decreasing the number of dataloader
workers."""
if 'pin_memory' in report['Details'] and report['Details']['pin_memory'] == False:
text=f"""{text} Using pinned memory also improves performance because it enables fast data transfer to CUDA-enabled GPUs.
The rule detected that your training job was not using pinned memory.
In case of using PyTorch Dataloader, you can enable this by setting pin_memory=True."""
if 'prefetch' in report['Details'] and report['Details']['prefetch'] == False:
text=f"""{text} It appears that your training job did not perform any data pre-fetching. Pre-fetching can improve your
data input pipeline as it produces the data ahead of time."""
paragraph = Paragraph(text=f"{text}", width=900)
show(paragraph)
colors=bokeh.palettes.viridis(10)
if "dataloading_time" in report['Details']:
median = round(report['Details']["dataloading_time"]['p50'],4)
p95 = round(report['Details']["dataloading_time"]['p95'],4)
p25 = round(report['Details']["dataloading_time"]['p25'],4)
binedges = report['Details']["dataloading_time"]['binedges']
probs = report['Details']["dataloading_time"]['probs']
text=f"""The following histogram shows the distribution of dataloading times that have been measured throughout your training job. The median dataloading time was {median}s.
The 95th percentile was {p95}s and the 25th percentile was {p25}s"""
plot = figure(plot_height=450,
plot_width=850,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
x_range=(binedges[0], binedges[-1])
)
plot.quad( top=probs,
bottom=0,
left=binedges[:-1],
right=binedges[1:],
line_color="white",
fill_color=colors[0],
fill_alpha=0.8,
legend="Dataloading events" )
plot.y_range.start = 0
plot.xaxis.axis_label = f"""Dataloading in [s]"""
plot.yaxis.axis_label = "Occurrences"
plot.grid.grid_line_color = "white"
plot.legend.click_policy="hide"
paragraph = Paragraph(text=f"{text}", width=900)
show(column(paragraph, plot))
if analyse_phase == "training":
display(Markdown(""" ### Batch size"""))
report = load_report('BatchSize')
if report:
params = report['RuleParameters'].split('\n')
cpu_threshold_p95 = int(params[0].split(':')[1])
gpu_threshold_p95 = int(params[1].split(':')[1])
gpu_memory_threshold_p95 = int(params[2].split(':')[1])
patience = int(params[3].split(':')[1])
window = int(params[4].split(':')[1])
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text = Paragraph(text=f"""The BatchSize rule helps to detect if GPU is underutilized because of the batch size being
too small. To detect this the rule analyzes the GPU memory footprint, CPU and GPU utilization. The rule checked if the 95th percentile of CPU utilization is below cpu_threshold_p95 of
{cpu_threshold_p95}%, the 95th percentile of GPU utilization is below gpu_threshold_p95 of {gpu_threshold_p95}%, and the 95th percentile of the memory footprint is \
below gpu_memory_threshold_p95 of {gpu_memory_threshold_p95}%. In your training job this happened {violations} times. \
The rule skipped the first {patience} datapoints. The rule computed the percentiles over window size of {window} continuous datapoints.\n
The rule analysed {datapoints} datapoints and triggered {triggered} times.
""", width=800)
show(text)
if len(report['Details']) >0:
timestamp = us_since_epoch_to_human_readable_time(report['Details']['last_timestamp'])
date = datetime.datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
del report['Details']['last_timestamp']
text = Paragraph(text=f"""Your training job is underutilizing the instance. You may want to consider
either switching to a smaller instance type or increasing the batch size.
The last time the BatchSize rule triggered in your training job was on {day} at {hour}.
The following boxplots are a snapshot from the timestamps. They show the total
CPU utilization, the GPU utilization, and the GPU memory usage per GPU (without outliers).""",
width=800)
show(text)
for node_id in report['Details']:
xmax = max(20, len(report['Details'][node_id]))
plot = figure(plot_height=350,
plot_width=1000,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
title=f"Node {node_id}",
x_range=(0,xmax)
)
for index, key in enumerate(report['Details'][node_id]):
upper = report['Details'][node_id][key]['upper']
lower = report['Details'][node_id][key]['lower']
p75 = report['Details'][node_id][key]['p75']
p25 = report['Details'][node_id][key]['p25']
p50 = report['Details'][node_id][key]['p50']
plot.segment(index+1, upper, index+1, p75, line_color="black")
plot.segment(index+1, lower, index+1, p25, line_color="black")
plot.vbar(index+1, 0.7, p50, p75, fill_color="#FDE725", line_color="black")
plot.vbar(index+1, 0.7, p25, p50, fill_color="#440154", line_color="black")
plot.rect(index+1, lower, 0.2, 0.01, line_color="black")
plot.rect(index+1, upper, 0.2, 0.01, line_color="black")
plot.xaxis.major_label_overrides[index+1] = key
plot.xgrid.grid_line_color = None
plot.ygrid.grid_line_color = "white"
plot.grid.grid_line_width = 0
plot.xaxis.major_label_text_font_size="10px"
plot.xaxis.ticker = np.arange(index+2)
plot.yaxis.axis_label = "Utilization in %"
show(plot)
if analyse_phase == "training":
display(Markdown("""### CPU bottlenecks\n\n"""))
report = load_report('CPUBottleneck')
if report:
params = report['RuleParameters'].split('\n')
threshold = int(params[0].split(':')[1])
cpu_threshold = int(params[1].split(':')[1])
gpu_threshold = int(params[2].split(':')[1])
patience = int(params[3].split(':')[1])
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
if report['Violations'] > 0:
perc = int(report['Violations']/report['Datapoints']*100)
else:
perc = 0
if perc < threshold:
string = 'below'
else:
string = 'above'
text = f"""The CPUBottleneck rule checked when the CPU utilization was above cpu_threshold of {cpu_threshold}%
and GPU utilization was below gpu_threshold of {gpu_threshold}%.
During initialization utilization is likely to be zero, so the rule skipped the first {patience} datapoints.
With this configuration the rule found {violations} CPU bottlenecks, which is {perc}% of the total time. This is {string} the threshold of {threshold}%.
The rule analysed {datapoints} data points and triggered {triggered} times."""
paragraph = Paragraph(text=text, width=900)
show(paragraph)
if report:
plots = []
text = ""
if report['RuleTriggered'] > 0:
low_gpu = report['Details']['low_gpu_utilization']
cpu_bottleneck = {}
cpu_bottleneck["GPU usage above threshold"] = report["Datapoints"] - report["Details"]["low_gpu_utilization"]
cpu_bottleneck["GPU usage below threshold"] = report["Details"]["low_gpu_utilization"] - len(report["Details"])
cpu_bottleneck["Low GPU usage due to CPU bottlenecks"] = len(report["Details"]["bottlenecks"])
n_bottlenecks = round(len(report['Details']['bottlenecks'])/datapoints * 100, 2)
text = f"""The following chart (left) shows how many datapoints were below the gpu_threshold of {gpu_threshold}%
and how many of those datapoints were likely caused by a CPU bottleneck. The rule found {low_gpu} out of {datapoints} datapoints which had a GPU utilization
below {gpu_threshold}%. Out of those datapoints {n_bottlenecks}% were likely caused by CPU bottlenecks.
"""
plot = create_piechart(cpu_bottleneck,
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Low GPU usage caused by CPU bottlenecks")
plots.append(plot)
if 'phase' in report['Details']:
text = f"""{text} The chart (in the middle) shows whether CPU bottlenecks mainly
happened during train/validation phase.
"""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between time spent on TRAIN/EVAL phase")
plots.append(plot)
if 'forward_backward' in report['Details'] and len(report['Details']['forward_backward']) > 0:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie charts on the right shows a more detailed breakdown.
It shows that {int(perc)}% of the training time was spent on event {event}"""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following pie chart shows a breakdown of the CPU/GPU operators that happened during CPU bottlenecks.
It shows that {int(ratio)}% of the training time was spent on executing operators in "{key}"."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details'] and len(report['Details']['general']) > 0:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'horovod' in report['Details'] and len(report['Details']['horovod']) > 0:
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""The following pie chart shows a detailed breakdown of the Horovod metrics
that have been recorded when the CPU bottleneck happened. The most expensive function was
"{event}" with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plot)))
if analyse_phase == "training":
display(Markdown("""### I/O bottlenecks\n\n"""))
report = load_report('IOBottleneck')
if report:
params = report['RuleParameters'].split('\n')
threshold = int(params[0].split(':')[1])
io_threshold = int(params[1].split(':')[1])
gpu_threshold = int(params[2].split(':')[1])
patience = int(params[3].split(':')[1])
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
if report['Violations'] > 0:
perc = int(report['Violations']/report['Datapoints']*100)
else:
perc = 0
if perc < threshold:
string = 'below'
else:
string = 'above'
text = f"""The IOBottleneck rule checked when I/O wait time was above io_threshold of {io_threshold}%
and GPU utilization was below gpu_threshold of {gpu_threshold}%. During initialization utilization is likely to be zero, so the rule skipped the first {patience} datapoints.
With this configuration the rule found {violations} I/O bottlenecks which is {perc}% of the total time. This is {string} the threshold of {threshold}%.
The rule analysed {datapoints} datapoints and triggered {triggered} times."""
paragraph = Paragraph(text=text, width=900)
show(paragraph)
if report:
plots = []
text = ""
if report['RuleTriggered'] > 0:
low_gpu = report['Details']['low_gpu_utilization']
cpu_bottleneck = {}
cpu_bottleneck["GPU usage above threshold"] = report["Datapoints"] - report["Details"]["low_gpu_utilization"]
cpu_bottleneck["GPU usage below threshold"] = report["Details"]["low_gpu_utilization"] - len(report["Details"])
cpu_bottleneck["Low GPU usage due to I/O bottlenecks"] = len(report["Details"]["bottlenecks"])
n_bottlenecks = round(len(report['Details']['bottlenecks'])/datapoints * 100, 2)
text = f"""The following chart (left) shows how many datapoints were below the gpu_threshold of {gpu_threshold}%
and how many of those datapoints were likely caused by an I/O bottleneck. The rule found {low_gpu} out of {datapoints} datapoints which had a GPU utilization
below {gpu_threshold}%. Out of those datapoints {n_bottlenecks}% were likely caused by I/O bottlenecks.
"""
plot = create_piechart(cpu_bottleneck,
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Low GPU usage caused by I/O bottlenecks")
plots.append(plot)
if 'phase' in report['Details']:
text = f"""{text} The chart (in the middle) shows whether I/O bottlenecks mainly happened during the training or validation phase.
"""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on the TRAIN/EVAL phase")
plots.append(plot)
if 'forward_backward' in report['Details'] and len(report['Details']['forward_backward']) > 0:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie charts on the right shows a more detailed breakdown.
It shows that {int(perc)}% of the training time was spent on event "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following pie chart shows a breakdown of the CPU/GPU operators that happened
during I/O bottlenecks. It shows that {int(ratio)}% of the training time was spent on executing operators in "{key}"."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Ratio between CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details'] and len(report['Details']['general']) > 0:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'horovod' in report['Details'] and len(report['Details']['horovod']) > 0:
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""The following pie chart shows a detailed breakdown of the Horovod metrics that have been
recorded when an I/O bottleneck happened. The most expensive function was "{event}" with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plot)))
if analyse_phase == "training":
display(Markdown("""### GPU memory\n\n"""))
report = load_report('GPUMemoryIncrease')
if report:
params = report['RuleParameters'].split('\n')
increase = float(params[0].split(':')[1])
patience = params[1].split(':')[1]
window = params[2].split(':')[1]
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text=Paragraph(text=f"""The GPUMemoryIncrease rule helps to detect large increase in memory usage on GPUs.
The rule checked if the moving average of memory increased by more than {increase}%.
So if the moving average increased for instance from 10% to {11+increase}%,
the rule would have triggered. During initialization utilization is likely 0, so the rule skipped the first {patience} datapoints.
The moving average was computed on a window size of {window} continuous datapoints. The rule detected {violations} violations
where the moving average between previous and current time window increased by more than {increase}%.
The rule analysed {datapoints} datapoints and triggered {triggered} times.""",
width=900)
show(text)
if len(report['Details']) > 0:
timestamp = us_since_epoch_to_human_readable_time(report['Details']['last_timestamp'])
date = datetime.datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
text = Paragraph(text=f"""Your training job triggered memory spikes.
The last time the GPUMemoryIncrease rule triggered in your training job was on {day} at {hour}.
The following boxplots are a snapshot from the timestamps. They show for each node and GPU the corresponding
memory utilization (without outliers).""", width=900)
show(text)
del report['Details']['last_timestamp']
for node_id in report['Details']:
plot = figure(plot_height=350,
plot_width=1000,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
title=f"Node {node_id}",
x_range=(0,17),
)
for index, key in enumerate(report['Details'][node_id]):
display(Markdown(f"""**Memory utilization of {key} on node {node_id}:**"""))
text = ""
gpu_max = report['Details'][node_id][key]['gpu_max']
text = f"""{text} The max memory utilization of {key} on node {node_id} was {gpu_max}%."""
p_95 = int(report['Details'][node_id][key]['p95'])
p_5 = report['Details'][node_id][key]['p05']
if p_95 < int(50):
text = f"""{text} The 95th percentile was only {p_95}%."""
if p_5 < int(5):
text = f"""{text} The 5th percentile was only {p_5}%."""
if p_95 - p_5 > 50:
text = f"""{text} The difference between 5th percentile {p_5}% and 95th percentile {p_95}% is quite
significant, which means that memory utilization on {key} is fluctuating quite a lot."""
text = Paragraph(text=f"""{text}""", width=900)
show(text)
upper = report['Details'][node_id][key]['upper']
lower = report['Details'][node_id][key]['lower']
p75 = report['Details'][node_id][key]['p75']
p25 = report['Details'][node_id][key]['p25']
p50 = report['Details'][node_id][key]['p50']
plot.segment(index+1, upper, index+1, p75, line_color="black")
plot.segment(index+1, lower, index+1, p25, line_color="black")
plot.vbar(index+1, 0.7, p50, p75, fill_color="#FDE725", line_color="black")
plot.vbar(index+1, 0.7, p25, p50, fill_color="#440154", line_color="black")
plot.rect(index+1, lower, 0.2, 0.01, line_color="black")
plot.rect(index+1, upper, 0.2, 0.01, line_color="black")
plot.xaxis.major_label_overrides[index+1] = key
plot.xgrid.grid_line_color = None
plot.ygrid.grid_line_color = "white"
plot.grid.grid_line_width = 0
plot.xaxis.major_label_text_font_size="10px"
plot.xaxis.ticker = np.arange(index+2)
plot.yaxis.axis_label = "Utilization in %"
show(plot)
```
| true |
code
| 0.356202 | null | null | null | null |
|
# Initialize a game
```
from ConnectN import ConnectN
game_setting = {'size':(6,6), 'N':4, 'pie_rule':True}
game = ConnectN(**game_setting)
% matplotlib notebook
from Play import Play
gameplay=Play(ConnectN(**game_setting),
player1=None,
player2=None)
```
# Define our policy
Please go ahead and define your own policy! See if you can train it under 1000 games and with only 1000 steps of exploration in each move.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from math import *
import numpy as np
from ConnectN import ConnectN
game_setting = {'size':(6,6), 'N':4}
game = ConnectN(**game_setting)
class Policy(nn.Module):
def __init__(self, game):
super(Policy, self).__init__()
# input = 6x6 board
# convert to 5x5x8
self.conv1 = nn.Conv2d(1, 16, kernel_size=2, stride=1, bias=False)
# 5x5x16 to 3x3x32
self.conv2 = nn.Conv2d(16, 32, kernel_size=3, stride=1, bias=False)
self.size=3*3*32
# the part for actions
self.fc_action1 = nn.Linear(self.size, self.size//4)
self.fc_action2 = nn.Linear(self.size//4, 36)
# the part for the value function
self.fc_value1 = nn.Linear(self.size, self.size//6)
self.fc_value2 = nn.Linear(self.size//6, 1)
self.tanh_value = nn.Tanh()
def forward(self, x):
y = F.leaky_relu(self.conv1(x))
y = F.leaky_relu(self.conv2(y))
y = y.view(-1, self.size)
# action head
a = self.fc_action2(F.leaky_relu(self.fc_action1(y)))
avail = (torch.abs(x.squeeze())!=1).type(torch.FloatTensor)
avail = avail.view(-1, 36)
maxa = torch.max(a)
exp = avail*torch.exp(a-maxa)
prob = exp/torch.sum(exp)
# value head
value = self.tanh_value(self.fc_value2(F.leaky_relu( self.fc_value1(y) )))
return prob.view(6,6), value
policy = Policy(game)
```
# Define a MCTS player for Play
```
import MCTS
from copy import copy
def Policy_Player_MCTS(game):
mytree = MCTS.Node(copy(game))
for _ in range(1000):
mytree.explore(policy)
mytreenext, (v, nn_v, p, nn_p) = mytree.next(temperature=0.1)
return mytreenext.game.last_move
import random
def Random_Player(game):
return random.choice(game.available_moves())
```
# Play a game against a random policy
```
% matplotlib notebook
from Play import Play
gameplay=Play(ConnectN(**game_setting),
player1=Policy_Player_MCTS,
player2=None)
```
# Training
```
# initialize our alphazero agent and optimizer
import torch.optim as optim
game=ConnectN(**game_setting)
policy = Policy(game)
optimizer = optim.Adam(policy.parameters(), lr=.01, weight_decay=1.e-5)
! pip install progressbar
```
Beware, training is **VERY VERY** slow!!
```
# train our agent
from collections import deque
import MCTS
# try a higher number
episodes = 2000
import progressbar as pb
widget = ['training loop: ', pb.Percentage(), ' ',
pb.Bar(), ' ', pb.ETA() ]
timer = pb.ProgressBar(widgets=widget, maxval=episodes).start()
outcomes = []
policy_loss = []
Nmax = 1000
for e in range(episodes):
mytree = MCTS.Node(game)
logterm = []
vterm = []
while mytree.outcome is None:
for _ in range(Nmax):
mytree.explore(policy)
if mytree.N >= Nmax:
break
current_player = mytree.game.player
mytree, (v, nn_v, p, nn_p) = mytree.next()
mytree.detach_mother()
loglist = torch.log(nn_p)*p
constant = torch.where(p>0, p*torch.log(p),torch.tensor(0.))
logterm.append(-torch.sum(loglist-constant))
vterm.append(nn_v*current_player)
# we compute the "policy_loss" for computing gradient
outcome = mytree.outcome
outcomes.append(outcome)
loss = torch.sum( (torch.stack(vterm)-outcome)**2 + torch.stack(logterm) )
optimizer.zero_grad()
loss.backward()
policy_loss.append(float(loss))
optimizer.step()
if e%10==0:
print("game: ",e+1, ", mean loss: {:3.2f}".format(np.mean(policy_loss[-20:])),
", recent outcomes: ", outcomes[-10:])
if e%500==0:
torch.save(policy,'6-6-4-pie-{:d}.mypolicy'.format(e))
del loss
timer.update(e+1)
timer.finish()
```
# Set up the environment to pit your AI against the challenge policy '6-6-4-pie.policy'
```
challenge_policy = torch.load('6-6-4-pie.policy')
def Challenge_Player_MCTS(game):
mytree = MCTS.Node(copy(game))
for _ in range(1000):
mytree.explore(challenge_policy)
mytreenext, (v, nn_v, p, nn_p) = mytree.next(temperature=0.1)
return mytreenext.game.last_move
```
# Let the game begin!
```
% matplotlib notebook
gameplay=Play(ConnectN(**game_setting),
player2=Policy_Player_MCTS,
player1=Challenge_Player_MCTS)
```
| true |
code
| 0.803598 | null | null | null | null |
|
# Logistic Regression With Linear Boundary Demo
> ☝Before moving on with this demo you might want to take a look at:
> - 📗[Math behind the Logistic Regression](https://github.com/trekhleb/homemade-machine-learning/tree/master/homemade/logistic_regression)
> - ⚙️[Logistic Regression Source Code](https://github.com/trekhleb/homemade-machine-learning/blob/master/homemade/logistic_regression/logistic_regression.py)
**Logistic regression** is the appropriate regression analysis to conduct when the dependent variable is dichotomous (binary). Like all regression analyses, the logistic regression is a predictive analysis. Logistic regression is used to describe data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables.
Logistic Regression is used when the dependent variable (target) is categorical.
For example:
- To predict whether an email is spam (`1`) or not (`0`).
- Whether online transaction is fraudulent (`1`) or not (`0`).
- Whether the tumor is malignant (`1`) or not (`0`).
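As a minimal, framework-independent sketch of this idea (the `LogisticRegression` class used later in this notebook wraps this up for the multi-class case, so the helper below is purely illustrative), the model squashes a linear combination of the features through a sigmoid to obtain a probability:
```
import numpy as np

def sigmoid(z):
    # squashes any real number into the (0, 1) interval
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(X, theta):
    # probability that each example belongs to the positive class
    return sigmoid(X @ theta)

# tiny illustration with made-up (not learned) parameters
X = np.array([[1.0, 1.4, 0.2],    # [bias term, petal_length, petal_width]
              [1.0, 5.1, 1.8]])
theta = np.array([-3.0, 0.5, 1.0])  # hypothetical values for illustration only
print(predict_proba(X, theta))      # one probability per example
```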
> **Demo Project:** In this example we will try to classify Iris flowers into three categories (`Iris setosa`, `Iris virginica` and `Iris versicolor`) based on the `petal_length` and `petal_width` parameters.
```
# To make debugging of logistic_regression module easier we enable imported modules autoreloading feature.
# By doing this you may change the code of logistic_regression library and all these changes will be available here.
%load_ext autoreload
%autoreload 2
# Add project root folder to module loading paths.
import sys
sys.path.append('../..')
```
### Import Dependencies
- [pandas](https://pandas.pydata.org/) - library that we will use for loading and displaying the data in a table
- [numpy](http://www.numpy.org/) - library that we will use for linear algebra operations
- [matplotlib](https://matplotlib.org/) - library that we will use for plotting the data
- [logistic_regression](https://github.com/trekhleb/homemade-machine-learning/blob/master/homemade/logistic_regression/logistic_regression.py) - custom implementation of logistic regression
```
# Import 3rd party dependencies.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Import custom logistic regression implementation.
from homemade.logistic_regression import LogisticRegression
```
### Load the Data
In this demo we will use [Iris data set](http://archive.ics.uci.edu/ml/datasets/Iris).
The data set consists of 50 samples from each of three species of Iris (`Iris setosa`, `Iris virginica` and `Iris versicolor`). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. Based on the combination of these four features, [Ronald Fisher](https://en.wikipedia.org/wiki/Iris_flower_data_set) developed a linear discriminant model to distinguish the species from each other.
```
# Load the data.
data = pd.read_csv('../../data/iris.csv')
# Print the data table.
data.head(10)
```
### Plot the Data
Let's take two parameters `petal_length` and `petal_width` for each flower into consideration and plot the dependency of the Iris class on these two parameters.
```
# List of supported Iris classes.
iris_types = ['SETOSA', 'VERSICOLOR', 'VIRGINICA']
# Pick the Iris parameters for consideration.
x_axis = 'petal_length'
y_axis = 'petal_width'
# Plot the scatter for every type of Iris.
for iris_type in iris_types:
plt.scatter(
data[x_axis][data['class'] == iris_type],
data[y_axis][data['class'] == iris_type],
label=iris_type
)
# Plot the data.
plt.xlabel(x_axis + ' (cm)')
plt.ylabel(y_axis + ' (cm)')
plt.title('Iris Types')
plt.legend()
plt.show()
```
### Prepare the Data for Training
Let's extract the `petal_length` and `petal_width` data to form a training feature set, and let's also form our training labels set.
```
# Get total number of Iris examples.
num_examples = data.shape[0]
# Get features.
x_train = data[[x_axis, y_axis]].values.reshape((num_examples, 2))
# Get labels.
y_train = data['class'].values.reshape((num_examples, 1))
```
### Init and Train Logistic Regression Model
> ☝🏻This is the place where you might want to play with model configuration.
- `polynomial_degree` - the degree of additional polynomial features (`x1^2 * x2, x1^2 * x2^2, ...`). The more features, the more curved the decision line will be.
- `max_iterations` - the maximum number of iterations that the gradient descent algorithm will use to find the minimum of the cost function. Low numbers may prevent gradient descent from reaching the minimum. High numbers will make the algorithm work longer without improving its accuracy.
- `regularization_param` - parameter that fights overfitting. The higher the parameter, the simpler the model will be (a conceptual sketch of the regularized cost follows this list).
- `sinusoid_degree` - the degree of sinusoid parameter multipliers of additional features (`sin(x), sin(2*x), ...`). This will allow you to curve the predictions by adding a sinusoidal component to the prediction curve.
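As referenced in the list above, here is a conceptual sketch of how `regularization_param` (written as $\lambda$ below) enters the logistic regression cost; the homemade implementation may differ in details such as scaling:

$$
J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Big[y^{(i)}\log h_\theta\big(x^{(i)}\big) + \big(1-y^{(i)}\big)\log\big(1-h_\theta\big(x^{(i)}\big)\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2
$$

The larger $\lambda$, the more strongly large parameter values are penalized and the simpler the resulting decision boundary tends to be.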
```
# Set up linear regression parameters.
max_iterations = 1000 # Max number of gradient descent iterations.
regularization_param = 0 # Helps to fight model overfitting.
polynomial_degree = 0 # The degree of additional polynomial features.
sinusoid_degree = 0 # The degree of sinusoid parameter multipliers of additional features.
# Init logistic regression instance.
logistic_regression = LogisticRegression(x_train, y_train, polynomial_degree, sinusoid_degree)
# Train logistic regression.
(thetas, costs) = logistic_regression.train(regularization_param, max_iterations)
# Print model parameters that have been learned.
pd.DataFrame(thetas, columns=['Theta 1', 'Theta 2', 'Theta 3'], index=['SETOSA', 'VERSICOLOR', 'VIRGINICA'])
```
### Analyze Gradient Descent Progress
The plot below illustrates how the cost function value changes over each iteration. You should see it decreasing.
If the cost function value increases, it may mean that gradient descent missed the cost function minimum and that with each step it moves further away from it.
From this plot you may also get an understanding of how many iterations you need to get an optimal value of the cost function.
```
# Draw gradient descent progress for each label.
labels = logistic_regression.unique_labels
plt.plot(range(len(costs[0])), costs[0], label=labels[0])
plt.plot(range(len(costs[1])), costs[1], label=labels[1])
plt.plot(range(len(costs[2])), costs[2], label=labels[2])
plt.xlabel('Gradient Steps')
plt.ylabel('Cost')
plt.legend()
plt.show()
```
### Calculate Model Training Precision
Calculate how many flowers from the training set have been guessed correctly.
```
# Make training set predictions.
y_train_predictions = logistic_regression.predict(x_train)
# Check what percentage of them are actually correct.
precision = np.sum(y_train_predictions == y_train) / y_train.shape[0] * 100
print('Precision: {:5.4f}%'.format(precision))
```
### Draw Decision Boundaries
Let's build our decision boundaries. These are the lines that distinguish classes from each other. This will give us a pretty clear overview of how successful our training process was. You should see a clear distinction between the three sectors on the data plane.
```
# Get the number of training examples.
num_examples = x_train.shape[0]
# Set up how many calculations we want to do along every axis.
samples = 150
# Generate test ranges for x and y axis.
x_min = np.min(x_train[:, 0])
x_max = np.max(x_train[:, 0])
y_min = np.min(x_train[:, 1])
y_max = np.max(x_train[:, 1])
X = np.linspace(x_min, x_max, samples)
Y = np.linspace(y_min, y_max, samples)
# z axis will contain our predictions. So let's get predictions for every pair of x and y.
Z_setosa = np.zeros((samples, samples))
Z_versicolor = np.zeros((samples, samples))
Z_virginica = np.zeros((samples, samples))
for x_index, x in enumerate(X):
for y_index, y in enumerate(Y):
data = np.array([[x, y]])
prediction = logistic_regression.predict(data)[0][0]
if prediction == 'SETOSA':
Z_setosa[x_index][y_index] = 1
elif prediction == 'VERSICOLOR':
Z_versicolor[x_index][y_index] = 1
elif prediction == 'VIRGINICA':
Z_virginica[x_index][y_index] = 1
# Now, when we have x, y and z axes being setup and calculated we may print decision boundaries.
for iris_type in iris_types:
plt.scatter(
x_train[(y_train == iris_type).flatten(), 0],
x_train[(y_train == iris_type).flatten(), 1],
label=iris_type
)
plt.contour(X, Y, Z_setosa)
plt.contour(X, Y, Z_versicolor)
plt.contour(X, Y, Z_virginica)
plt.xlabel(x_axis + ' (cm)')
plt.ylabel(y_axis + ' (cm)')
plt.title('Iris Types')
plt.legend()
plt.show()
```
| true |
code
| 0.698509 | null | null | null | null |
|
# Working with Tensorforce to Train a Reinforcement-Learning Agent
This notebook serves as an educational introduction to the usage of Tensorforce using a gym-electric-motor (GEM) environment. The goal of this notebook is to give an understanding of what tensorforce is and how to use it to train and evaluate a reinforcement learning agent that can solve a current control problem of the GEM toolbox.
## 1. Installation
Before you can start you need to make sure that you have both gym-electric-motor and tensorforce installed. You can install both easily using pip:
- ```pip install gym-electric-motor```
- ```pip install tensorforce```
Alternatively, you can install their latest developer version directly from GitHub:
- [GitHub Gym-Electric-Motor](https://github.com/upb-lea/gym-electric-motor)
- [GitHub Tensorforce](https://github.com/tensorforce/tensorforce)
For this notebook, the following cell will do the job:
```
!pip install -q git+https://github.com/upb-lea/gym-electric-motor.git tensorforce==0.5.5
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
```
## 2. Setting up a GEM Environment
The basic idea behind reinforcement learning is to create a so-called agent that learns by itself to solve a specified task in a given environment.
This environment gives the agent feedback on its actions and reinforces the targeted behavior.
In this notebook, the task is to train a controller for the current control of a *permanent magnet synchronous motor* (*PMSM*).
In the following, the used GEM-environment is briefly presented, but this notebook does not focus directly on the detailed usage of GEM. If you are new to the used environment and interested in finding out what it does and how to use it, you should take a look at the [GEM cookbook](https://colab.research.google.com/github/upb-lea/gym-electric-motor/blob/master/examples/example_notebooks/GEM_cookbook.ipynb).
To save some space in this notebook, there is a function defined in an external python file called **getting_environment.py**. If you want to know how the environment's parameters are defined you can take a look at that file. By simply calling the **get_env()** function from the external file, you can set up an environment for a *PMSM* with discrete inputs.
The basic idea of the control setup from the GEM-environment is displayed in the following figure.

The agent controls the converter which converts the supply currents to the currents flowing into the motor - for the *PMSM*: $i_{sq}$ and $i_{sd}$
In the continuous case, the agent's action equals a duty cycle which will be modulated into a corresponding voltage.
In the discrete case, the agent's actions denote switching states of the converter at the given instant. Here, only a discrete number of options is available. In this notebook, the *discrete B6 bridge converter* with six switches is used for the PMSM by default. This converter provides a total of eight possible actions (see the short sketch after the figure below).

The motor schematic is the following:

And the electrical ODEs for that motor are:
<h3 align="center">
<!-- $\frac{\mathrm{d}i_{sq}}{\mathrm{d}t} = \frac{u_{sq}-pL_d\omega_{me}i_{sd}-R_si_{sq}}{L_q}$
$\frac{\mathrm{d}i_{sd}}{\mathrm{d}t} = \frac{u_{sd}-pL_q\omega_{me}i_{sq}-R_si_{sd}}{L_d}$
$\frac{\mathrm{d}\epsilon_{el}}{\mathrm{d}t} = p\omega_{me}$
-->
$ \frac{\mathrm{d}i_{sd}}{\mathrm{d}t}=\frac{u_{sd} + p\omega_{me}L_q i_{sq} - R_s i_{sd}}{L_d} $ <br><br>
$\frac{\mathrm{d} i_{sq}}{\mathrm{d} t}=\frac{u_{sq} - p \omega_{me} (L_d i_{sd} + \mathit{\Psi}_p) - R_s i_{sq}}{L_q}$ <br><br>
$\frac{\mathrm{d}\epsilon_{el}}{\mathrm{d}t} = p\omega_{me}$
</h3>
The target for the agent is now to learn to control the currents. For this, a reference generator produces a trajectory that the agent has to follow.
Therefore, it has to learn a function (policy) from given states, references and rewards to appropriate actions.
For a deeper understanding of the used models behind the environment see the [documentation](https://upb-lea.github.io/gym-electric-motor/).
Comprehensive learning material to RL is also [freely available](https://github.com/upb-lea/reinforcement_learning_course_materials).
```
import numpy as np
from pathlib import Path
import gym_electric_motor as gem
from gym_electric_motor.reference_generators import \
MultipleReferenceGenerator,\
WienerProcessReferenceGenerator
from gym_electric_motor.visualization import MotorDashboard
from gym_electric_motor.core import Callback
from gym.spaces import Discrete, Box
from gym.wrappers import FlattenObservation, TimeLimit
from gym import ObservationWrapper
# helper functions and classes
class FeatureWrapper(ObservationWrapper):
"""
Wrapper class which wraps the environment to change its observation. Serves
the purpose to improve the agent's learning speed.
It changes epsilon to cos(epsilon) and sin(epsilon). This serves the purpose
to have the angles -pi and pi close to each other numerically without losing
any information on the angle.
Additionally, this wrapper adds a new observation i_sd**2 + i_sq**2. This should
help the agent to easier detect incoming limit violations.
"""
def __init__(self, env, epsilon_idx, i_sd_idx, i_sq_idx):
"""
Changes the observation space to fit the new features
Args:
env(GEM env): GEM environment to wrap
epsilon_idx(integer): Epsilon's index in the observation array
i_sd_idx(integer): I_sd's index in the observation array
i_sq_idx(integer): I_sq's index in the observation array
"""
super(FeatureWrapper, self).__init__(env)
self.EPSILON_IDX = epsilon_idx
self.I_SQ_IDX = i_sq_idx
self.I_SD_IDX = i_sd_idx
new_low = np.concatenate((self.env.observation_space.low[
:self.EPSILON_IDX], np.array([-1.]),
self.env.observation_space.low[
self.EPSILON_IDX:], np.array([0.])))
new_high = np.concatenate((self.env.observation_space.high[
:self.EPSILON_IDX], np.array([1.]),
self.env.observation_space.high[
self.EPSILON_IDX:],np.array([1.])))
self.observation_space = Box(new_low, new_high)
def observation(self, observation):
"""
Gets called at each return of an observation. Adds the new features to the
observation and removes original epsilon.
"""
cos_eps = np.cos(observation[self.EPSILON_IDX] * np.pi)
sin_eps = np.sin(observation[self.EPSILON_IDX] * np.pi)
currents_squared = observation[self.I_SQ_IDX]**2 + observation[self.I_SD_IDX]**2
observation = np.concatenate((observation[:self.EPSILON_IDX],
np.array([cos_eps, sin_eps]),
observation[self.EPSILON_IDX + 1:],
np.array([currents_squared])))
return observation
# define motor arguments
motor_parameter = dict(p=3, # [p] = 1, nb of pole pairs
r_s=17.932e-3, # [r_s] = Ohm, stator resistance
l_d=0.37e-3, # [l_d] = H, d-axis inductance
l_q=1.2e-3, # [l_q] = H, q-axis inductance
psi_p=65.65e-3, # [psi_p] = Vs, magnetic flux of the permanent magnet
)
# supply voltage
u_sup = 350
# nominal and absolute state limitations
nominal_values=dict(omega=4000*2*np.pi/60,
i=230,
u=u_sup
)
limit_values=dict(omega=4000*2*np.pi/60,
i=1.5*230,
u=u_sup
)
# defining reference-generators
q_generator = WienerProcessReferenceGenerator(reference_state='i_sq')
d_generator = WienerProcessReferenceGenerator(reference_state='i_sd')
rg = MultipleReferenceGenerator([q_generator, d_generator])
# defining sampling interval
tau = 1e-5
# defining maximal episode steps
max_eps_steps = 10_000
motor_initializer={'random_init': 'uniform', 'interval': [[-230, 230], [-230, 230], [-np.pi, np.pi]]}
reward_function=gem.reward_functions.WeightedSumOfErrors(
reward_weights={'i_sq': 10, 'i_sd': 10},
gamma=0.99, # discount rate
reward_power=1)
# creating gem environment
env = gem.make( # define a PMSM with discrete action space
"PMSMDisc-v1",
# visualize the results
visualization=MotorDashboard(state_plots=['i_sq', 'i_sd'], reward_plot=True),
# parameterize the PMSM and update limitations
motor_parameter=motor_parameter,
limit_values=limit_values, nominal_values=nominal_values,
# define the random initialisation for load and motor
load='ConstSpeedLoad',
load_initializer={'random_init': 'uniform', },
motor_initializer=motor_initializer,
reward_function=reward_function,
# define the duration of one sampling step
tau=tau, u_sup=u_sup,
# turn off terminations via limit violation, parameterize the rew-fct
reference_generator=rg, ode_solver='euler',
)
# remove one action from the action space to help the agent speed up its training
# this can be done because the switching states (1,1,1) and (-1,-1,-1) - which are encoded
# by actions 0 and 7 - both lead to the same zero voltage vector in alpha/beta-coordinates
env.action_space = Discrete(7)
# applying wrappers
eps_idx = env.physical_system.state_names.index('epsilon')
i_sd_idx = env.physical_system.state_names.index('i_sd')
i_sq_idx = env.physical_system.state_names.index('i_sq')
env = TimeLimit(FeatureWrapper(FlattenObservation(env),
eps_idx, i_sd_idx, i_sq_idx),
max_eps_steps)
```
## 3. Using Tensorforce
To take advantage of some already implemented deep-RL agents, we use the *tensorforce-framework*. It is built on *TensorFlow* and offers agents based on deep Q-networks, policy gradients, or actor-critic algorithms.
For more information to specific agents or different modules that can be used, some good explanations can be found in the corresponding [documentation](https://tensorforce.readthedocs.io/en/latest/).
For the control task with a discrete action space we will use a [deep Q-network (DQN)](https://www.nature.com/articles/nature14236).
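Before wiring things up, a brief, implementation-agnostic reminder of what a DQN does may help (tensorforce's internals can differ in detail). The agent fits an action-value network $Q_\theta$ to bootstrapped targets produced by a periodically synchronized target network $Q_{\theta^-}$:

$$
y_t = r_t + \gamma \max_{a'} Q_{\theta^-}(s_{t+1}, a'), \qquad
\mathcal{L}(\theta) = \mathbb{E}_{(s_t, a_t, r_t, s_{t+1}) \sim \mathcal{D}}\Big[\big(Q_\theta(s_t, a_t) - y_t\big)^2\Big]
$$

where $\mathcal{D}$ is the replay memory. In the agent configuration further below, `discount` corresponds to $\gamma$ and `target_sync_frequency` to how often $Q_{\theta^-}$ is synchronized with $Q_\theta$.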
### 3.1 Defining a Tensorforce-Environment
Tensorforce requires you to define a *tensorforce-environment*. This is done simply by using the ```Environment.create``` interface, which acts as a wrapper around usual [gym](https://github.com/openai/gym) instances.
```
from tensorforce.environments import Environment
# creating tensorforce environment
tf_env = Environment.create(environment=env,
max_episode_timesteps=max_eps_steps)
```
### 3.2 Setting-up a Tensorforce-Agent
The Agent is created just like the environment. The agent's parameters can be passed as arguments to the ```create()``` function or via a configuration as a dictionary or as *.json* file.
In the following, the way via a dictionary is demonstrated.
With the *tensorforce-framework* it is possible to define own network-architectures like it is shown below.
For some parameters, it can be useful to have a decaying value during the training. A possible way for this is also shown in the following code.
The exact meaning of the used parameters can be found in the already mentioned tensorforce documentation.
```
# using a parameter decay for the exploration
epsilon_decay = {'type': 'decaying',
'decay': 'polynomial',
'decay_steps': 50000,
'unit': 'timesteps',
'initial_value': 1.0,
'decay_rate': 5e-2,
'final_value': 5e-2,
'power': 3.0}
# defining a simple network architecture: 2 dense-layers with 64 nodes each
net = [
dict(type='dense', size=64, activation='relu'),
dict(type='dense', size=64, activation='relu'),
]
# defining the parameters of a dqn-agent
agent_config = {
'agent': 'dqn',
'memory': 200000,
'batch_size': 25,
'network': net,
'update_frequency': 1,
'start_updating': 10000,
'learning_rate': 1e-4,
'discount': 0.99,
'exploration': epsilon_decay,
'target_sync_frequency': 1000,
'target_update_weight': 1.0,
}
from tensorforce.agents import Agent
tau = 1e-5
simulation_time = 2 # seconds
training_steps = int(simulation_time // tau)
# creating agent via dictionary
dqn_agent = Agent.create(agent=agent_config, environment=tf_env)
```
### 3.3 Training the Agent
Training the agent is executed with the **tensorforce-runner**. The runner stores metrics during the training, like the reward per episode, and can be used to save learned agents. If you just want to experiment a little with an already trained agent, it is possible to skip the next cells and just load a pre-trained agent.
```
from tensorforce.execution import Runner
# create and train the agent
runner = Runner(agent=dqn_agent, environment=tf_env)
runner.run(num_timesteps=training_steps)
```
By accessing the metrics saved by the runner, it is possible to have a look at the mean reward per episode or the corresponding episode length.
```
# accessing the metrics from the runner
rewards = np.asarray(runner.episode_rewards)
episode_length = np.asarray(runner.episode_timesteps)
# calculating the mean-reward per episode
mean_reward = rewards/episode_length
num_episodes = len(mean_reward)
# plotting mean-reward over episodes
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(20,10))
ax1.plot(range(num_episodes), mean_reward, linewidth=3)
#plt.xticks(fontsize=15)
ax1.set_ylabel('mean-reward', fontsize=22)
ax1.grid(True)
ax1.tick_params(axis="y", labelsize=15)
# plotting episode length over episodes
ax2.plot(range(num_episodes), episode_length, linewidth=3)
ax2.set_xlabel('# episodes', fontsize=22)
ax2.set_ylabel('episode-length', fontsize=22)
ax2.tick_params(axis="y", labelsize=15)
ax2.tick_params(axis="x", labelsize=15)
ax2.grid(True)
plt.show()
print('number of episodes during training: ', len(rewards))
```
Saving the agent's trained model makes it available for separate evaluation and further usage.
```
agent_path = Path('saved_agents')
agent_path.mkdir(parents=True, exist_ok=True)
agent_name = 'dqn_agent_tensorforce'
runner.agent.save(directory=str(agent_path), filename=agent_name)
print('\n agent saved \n')
runner.close()
```
## 4. Evaluating the Trained Agent
### 4.1 Loading a Model
If a previously saved agent is available, it can be restored by loading the model with the ```Agent.load()``` function. To load the agent it is necessary to pass the directory, the filename, the environment, and the agent configuration used for the training.
```
from tensorforce import Agent
dqn_agent = Agent.load(
directory=str(agent_path),
filename=agent_name,
environment=tf_env,
**agent_config
)
print('\n agent loaded \n')
```
### 4.2 Evaluating the Agent
To use the trained agent as a controller, a typical loop to interact with the environment can be used, which is displayed in the cell below.
Now the agent takes the observations from the environment and reacts with an action, which is used to control the environment. To get an impression of how the trained agent performs, the trajectory of the control-states can be observed. A live-plot will be displayed in a jupyter-notebook. If you are using jupyter-lab, the following cell could cause problems regarding the visualization.
```
%matplotlib notebook
# currently the visualization crashes for values larger than the one defined here
visualization_steps = int(9e4)
obs = env.reset()
for step in range(visualization_steps):
# getting the next action from the agent
actions = dqn_agent.act(obs, evaluation=True)
# the env returns the next state, the reward and the information whether the state is terminal
obs, reward, done, _ = env.step(action=actions)
# activating the visualization
env.render()
if done:
# resetting the env if a terminal state is reached
obs = env.reset()
```
In the next example a classic *environment-interaction loop* can be extended to access different metrics and values, e.g. the cumulated reward over all steps. The number of evaluation-steps can be reduced, but a higher variance of the evaluation result must then be accepted.
```
# test agent
steps = 250000
rewards = []
episode_lens = []
obs = env.reset()
terminal = False
cumulated_rew = 0
step_counter = 0
episode_rew = 0
for step in (range(steps)):
actions = dqn_agent.act(obs, evaluation=True)
obs, reward, done, _ = env.step(action=actions)
cumulated_rew += reward
episode_rew += reward
step_counter += 1
if done:
rewards.append(episode_rew)
episode_lens.append(step_counter)
episode_rew = 0
step_counter = 0
obs = env.reset()
done = False
print(f' \n Cumulated reward per step is {cumulated_rew/steps} \n')
print(f' \n Number of completed episodes: {len(episode_lens)} \n')
%matplotlib inline
# accessing the metrics from the evaluation loop
rewards = np.asarray(rewards)
episode_length = np.asarray(episode_lens)
# calculating the mean-reward per episode
mean_reward = rewards/episode_length
num_episodes = len(rewards)
# plotting mean-reward over episodes
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(20, 10))
ax1.plot(range(num_episodes), mean_reward, linewidth=3)
#plt.xticks(fontsize=15)
ax1.set_ylabel('reward', fontsize=22)
ax1.grid(True)
ax1.tick_params(axis="y", labelsize=15)
# plotting episode length over episodes
ax2.plot(range(num_episodes), episode_length, linewidth=3)
ax2.set_xlabel('# episodes', fontsize=22)
ax2.set_ylabel('episode-length', fontsize=20)
ax2.tick_params(axis="y", labelsize=15)
ax2.tick_params(axis="x", labelsize=15)
ax2.grid(True)
plt.show()
print('number of episodes during evaluation: ', len(episode_lens))
```
| true |
code
| 0.821886 | null | null | null | null |
|
<a href="https://colab.research.google.com/github/martin-fabbri/colab-notebooks/blob/master/deeplearning.ai/nlp/c3_w1_03_trax_intro_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Trax : Ungraded Lecture Notebook
In this notebook you'll get to know about the Trax framework and learn about some of its basic building blocks.
## Background
### Why Trax and not TensorFlow or PyTorch?
TensorFlow and PyTorch are both extensive frameworks that can do almost anything in deep learning. They offer a lot of flexibility, but that often means verbosity of syntax and extra time to code.
Trax is much more concise. It runs on a TensorFlow backend but allows you to train models with one-line commands. Trax also runs end to end, allowing you to get the data, build the model and train it with single terse statements. This means you can focus on learning, instead of spending hours on the idiosyncrasies of a big framework implementation.
### Why not Keras then?
Keras is now part of TensorFlow itself from 2.0 onwards. Also, Trax is good for implementing new state-of-the-art algorithms like Transformers, Reformers and BERT because it is actively maintained by the Google Brain team for advanced deep learning tasks. It runs smoothly on CPUs, GPUs and TPUs with comparatively few code modifications.
### How to Code in Trax
Building models in Trax relies on two key concepts: **layers** and **combinators**.
Trax layers are simple objects that process data and perform computations. They can be chained together into composite layers using Trax combinators, allowing you to build layers and models of any complexity.
### Trax, JAX, TensorFlow and Tensor2Tensor
You already know that Trax uses TensorFlow as a backend, but it also uses the JAX library to speed up computation. You can view JAX as an enhanced and optimized version of numpy.
**Watch out for assignments which import `import trax.fastmath.numpy as np`. If you see this line, remember that when calling `np` you are really calling Trax’s version of numpy that is compatible with JAX.**
As a result of this, where you used to encounter the type `numpy.ndarray` now you will find the type `jax.interpreters.xla.DeviceArray`.
Tensor2Tensor is another name you might have heard. It started as an end to end solution much like how Trax is designed, but it grew unwieldy and complicated. So you can view Trax as the new improved version that operates much faster and simpler.
### Resources
- Trax source code can be found on Github: [Trax](https://github.com/google/trax)
- JAX library: [JAX](https://jax.readthedocs.io/en/latest/index.html)
## Installing Trax
Trax depends on JAX and some other libraries that are not yet supported on [Windows](https://github.com/google/jax/blob/1bc5896ee4eab5d7bb4ec6f161d8b2abb30557be/README.md#installation) but work well on Ubuntu and macOS. If you are working on Windows, we suggest installing Trax on WSL2.
Official maintained documentation - [trax-ml](https://trax-ml.readthedocs.io/en/latest/) not to be confused with this [TraX](https://trax.readthedocs.io/en/latest/index.html)
```
%%capture
!pip install trax==1.3.1
```
## Imports
```
%%capture
import numpy as np # regular ol' numpy
from trax import layers as tl # core building block
from trax import shapes # data signatures: dimensionality and type
from trax import fastmath # uses jax, offers numpy on steroids
# Trax version 1.3.1 or better
!pip list | grep trax
```
## Layers
Layers are the core building blocks in Trax or as mentioned in the lectures, they are the base classes.
They take inputs, compute functions/custom calculations and return outputs.
You can also inspect layer properties. Let me show you some examples.
### Relu Layer
First I'll show you how to build a relu activation function as a layer. A layer like this is one of the simplest types. Notice there is no object initialization so it works just like a math function.
**Note: Activation functions are also layers in Trax, which might look odd if you have been using other frameworks for a longer time.**
```
# Layers
# Create a relu trax layer
relu = tl.Relu()
# Inspect properties
print("-- Properties --")
print("name :", relu.name)
print("expected inputs :", relu.n_in)
print("promised outputs :", relu.n_out, "\n")
# Inputs
x = np.array([-2, -1, 0, 1, 2])
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = relu(x)
print("-- Outputs --")
print("y :", y)
```
### Concatenate Layer
Now I'll show you how to build a layer that takes 2 inputs. Notice the change in the expected inputs property from 1 to 2.
```
# Create a concatenate trax layer
concat = tl.Concatenate()
print("-- Properties --")
print("name :", concat.name)
print("expected inputs :", concat.n_in)
print("promised outputs :", concat.n_out, "\n")
# Inputs
x1 = np.array([-10, -20, -30])
x2 = x1 / -10
print("-- Inputs --")
print("x1 :", x1)
print("x2 :", x2, "\n")
# Outputs
y = concat([x1, x2])
print("-- Outputs --")
print("y :", y)
```
## Layers are Configurable
You can change the default settings of layers. For example, you can change the expected inputs for a concatenate layer from 2 to 3 using the optional parameter `n_items`.
```
# Configure a concatenate layer
concat_3 = tl.Concatenate(n_items=3) # configure the layer's expected inputs
print("-- Properties --")
print("name :", concat_3.name)
print("expected inputs :", concat_3.n_in)
print("promised outputs :", concat_3.n_out, "\n")
# Inputs
x1 = np.array([-10, -20, -30])
x2 = x1 / -10
x3 = x2 * 0.99
print("-- Inputs --")
print("x1 :", x1)
print("x2 :", x2)
print("x3 :", x3, "\n")
# Outputs
y = concat_3([x1, x2, x3])
print("-- Outputs --")
print("y :", y)
```
**Note: At any point, if you want to check how a function works, look up the [documentation](https://trax-ml.readthedocs.io/en/latest/) or use the `help` function.**
```
#help(tl.Concatenate) # Uncomment this to see the function docstring with explanation
```
## Layers can have Weights
Some layer types include mutable weights and biases that are used in computation and training. Layers of this type require initialization before use.
For example the `LayerNorm` layer calculates normalized data, that is also scaled by weights and biases. During initialization you pass the data shape and data type of the inputs, so the layer can initialize compatible arrays of weights and biases.
```
# Uncomment any of them to see information regarding the function
# help(tl.LayerNorm)
# help(shapes.signature)
# Layer initialization
norm = tl.LayerNorm()
# You first must know what the input data will look like
x = np.array([0, 1, 2, 3], dtype="float")
# Use the input data signature to get shape and type for initializing weights and biases
# We need to convert the input datatype from usual tuple to trax ShapeDtype
norm.init(shapes.signature(x))
print("Normal shape:",x.shape, "Data Type:",type(x.shape))
print("Shapes Trax:",shapes.signature(x),"Data Type:",type(shapes.signature(x)))
# Inspect properties
print("-- Properties --")
print("name :", norm.name)
print("expected inputs :", norm.n_in)
print("promised outputs :", norm.n_out)
# Weights and biases
print("weights :", norm.weights[0])
print("biases :", norm.weights[1], "\n")
# Inputs
print("-- Inputs --")
print("x :", x)
# Outputs
y = norm(x)
print("-- Outputs --")
print("y :", y)
```
## Custom Layers
This is where things start getting more interesting!
You can create your own custom layers too and define custom functions for computations by using `tl.Fn`. Let me show you how.
```
#help(tl.Fn)
# Define a custom layer
# In this example you will create a layer to calculate the input times 2
def TimesTwo():
layer_name = "TimesTwo"
def func(x):
return x * 2
return tl.Fn(layer_name, func)
# Test it
times_two = TimesTwo()
# Inspect properties
print("-- Properties --")
print("name :", times_two.name)
print("expected inputs :", times_two.n_in)
print("promised outputs :", times_two.n_out, "\n")
# Inputs
x = np.array([1, 2, 3])
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = times_two(x)
print("-- Outputs --")
print("y :", y)
```
## Combinators
You can combine layers to build more complex layers. Trax provides a set of objects named combinator layers to make this happen. Combinators are themselves layers, so they behave just like any other layer.
### Serial Combinator
This is the most common and easiest to use. For example, you could build a simple neural network by combining layers into a single layer using the `Serial` combinator. This new layer then acts just like a single layer, so you can inspect its inputs, outputs and weights. Or even combine it into another layer! Combinators can then be used as trainable models. _Try adding more layers._
**Note: As you may have guessed, if there is a `Serial` combinator, there is a `Parallel` combinator as well (a small sketch follows below). Do explore combinators and other layers in the trax documentation and look at the repo to understand how these layers are written.**
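As a small sketch of the `Parallel` combinator mentioned in the note above (illustrative only; it reuses the weight-less `times_two` layer defined earlier):
```
# Parallel applies its first sublayer to the first input and its second sublayer to the second input.
parallel = tl.Parallel(tl.Relu(), times_two)

print("name :", parallel.name)
print("expected inputs :", parallel.n_in)    # 2
print("promised outputs :", parallel.n_out)  # 2

x1 = np.array([-1, 0, 1])
x2 = np.array([1, 2, 3])

# Initialization is cheap here since neither sublayer has weights,
# and on some trax versions it is required before the first call.
parallel.init(shapes.signature((x1, x2)))

y1, y2 = parallel((x1, x2))
print("y1 :", y1)  # Relu applied to x1
print("y2 :", y2)  # x2 doubled by the custom TimesTwo layer
```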
```
# help(tl.Serial)
# help(tl.Parallel)
# Serial combinator
serial = tl.Serial(
tl.LayerNorm(),
tl.Relu(),
times_two,
)
# Initialization
x = np.array([-2, -1, 0, 1, 2])
serial.init(shapes.signature(x))
print("-- Serial Model --")
print(serial,"\n")
print("-- Properties --")
print("name :", serial.name)
print("sublayers :", serial.sublayers)
print("expected inputs :", serial.n_in)
print("promised outputs :", serial.n_out)
print("weights & biases:", serial.weights, "\n")
# Inputs
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = serial(x)
print("-- Outputs --")
print("y :", y)
```
## JAX
Just remember to look out for which numpy you are using: the regular ol' numpy or Trax's JAX-compatible numpy. Both tend to use the alias np, so watch those import blocks.
**Note: There are certain things that are still not possible in fastmath.numpy but can be done in numpy, so in the assignments you will see us switch between them to get our work done.**
```
# Numpy vs fastmath.numpy have different data types
# Regular ol' numpy
x_numpy = np.array([1, 2, 3])
print("good old numpy : ", type(x_numpy), "\n")
# Fastmath and jax numpy
x_jax = fastmath.numpy.array([1, 2, 3])
print("jax trax numpy : ", type(x_jax))
```
## Summary
Trax is a concise framework, built on TensorFlow, for end-to-end machine learning. The key building blocks are layers and combinators. This notebook is just a taste, but it sets you up with some key intuitions to take forward into the rest of the course and assignments, where you will build end-to-end models.
| true |
code
| 0.444866 | null | null | null | null |
|
# XGBoost model for Bike sharing dataset
```
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# preprocessing methods
from sklearn.preprocessing import StandardScaler
# accuracy measures and data spliting
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
# gradient boosting and tree visualization libraries
import xgboost as xgb
import graphviz
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = 15, 7
```
## 1. Data import
```
DATADIR = '../data/bike/'
MODELDIR = '../checkpoints/bike-sharing/xgb/'
data_path = os.path.join(DATADIR, 'bike-sharing-processed.csv')
data = pd.read_csv(data_path)
data.set_index('date', inplace=True)
data.sort_index(inplace=True)
data.head()
plt.plot(data.cnt, '.')
plt.title('Bike sharing count')
plt.xlabel('sample id')
plt.ylabel('count')
plt.show()
```
## 2. Train test split
```
y = data[['cnt']].copy()
X = data.drop(columns=['cnt'], axis=1)
print(f'X and y shape:')
print(X.shape, y.shape)
# date selection
datelist = data.index.unique()
# two month data for testset
print(f'Test start date: {datelist[-61]}')
# Train test split : last 60 days for test set
X_train = X[X.index < datelist[-61]]
X_test = X[X.index >= datelist[-61]]
y_train = y[y.index < datelist[-61]]
y_test = y[y.index >= datelist[-61]]
print(f'Size of train and test set respectively:')
print(X_train.shape,X_test.shape, y_train.shape, y_test.shape)
```
## 3. Parameter selection using grid search
```
def xgb_parameter_selection(X, y, grid_param, xgb_param):
xgb_grid = GridSearchCV(estimator=xgb.XGBRegressor(**xgb_param, seed=seed),
param_grid=grid_param, cv=3)
xgb_grid.fit(X, y)
return xgb_grid
```
### 3.1 Depth and child weight selection
```
seed = 42
# max depth and child weight selection
grid_param_1 = {'max_depth': [3, 5],
'min_child_weight': [3, 5, 7]
}
xgb_param_1 = {'objective' :'reg:linear',
'silent' : 1,
'n_estimators': 100,
'learning_rate' : 0.1}
model_1 = xgb_parameter_selection(X_train, y_train, grid_param_1, xgb_param_1)
# print(f'Best estimator : {model_1.best_estimator_}')
print(f'Best parameter : {model_1.best_params_}')
print(f'Best score : {model_1.best_score_}')
```
### 3.2 colsample_bytree and subsample selection
```
# column and sample selection parameter
grid_param_2 = {'colsample_bytree' : [0.7, 1.0],
'subsample' : [0.8, 1]
}
xgb_param_2 = {'objective' :'reg:linear',
'silent' : 1,
'max_depth': 5,
'min_child_weight':7,
'n_estimators': 100,
'learning_rate' : 0.1,
'eval_metric' : 'mae' }
model_2 = xgb_parameter_selection(X_train, y_train, grid_param_2, xgb_param_2)
print(f'Best parameter : {model_2.best_params_}')
print(f'Best score : {model_2.best_score_}')
```
### 3.3 gamma selection
```
# gamma selection
grid_param_3 = {'gamma' : [0, 0.1, 0.2, 5]
}
xgb_param_3 = {'objective' :'reg:linear',
'silent' : 1,
'max_depth': 5,
'min_child_weight': 7,
'n_estimators': 100,
'learning_rate' : 0.1,
'colsample_bytree' : 0.7,
'subsample' : 1}
model_3 = xgb_parameter_selection(X_train, y_train, grid_param_3, xgb_param_3)
print(f'Best parameter : {model_3.best_params_}')
print(f'Best score : {model_3.best_score_}')
```
### 3.4 learning rate
```
# learning_rate selection
grid_param_4 = {'learning_rate' : [0.1, 0.01, 0.001]
}
xgb_param_4 = {'objective' :'reg:linear',
'silent' : 1,
'max_depth': 5,
'min_child_weight': 7,
'n_estimators': 100,
'learning_rate' : 0.1,
'colsample_bytree' : 0.7,
'subsample' : 1,
'gamma' : 0}
model_4 = xgb_parameter_selection(X_train, y_train, grid_param_4, xgb_param_4)
print(f'Best parameter : {model_4.best_params_}')
print(f'Best score : {model_4.best_score_}')
```
## 4. Final model training
```
final_param = {'objective' :'reg:linear',
'silent' : 1,
'max_depth': 5,
'min_child_weight': 7,
'n_estimators': 100,
'learning_rate' : 0.1,
'colsample_bytree' : 0.7,
'subsample' : 1,
'gamma' : 0,
'eval_metric' : 'mae'}
def xgb_final(X_train, y_train, param, MODELDIR):
model = xgb.XGBRegressor(**param)
model.fit(X_train, y_train, verbose=True)
# directory for saving model
if os.path.exists(MODELDIR):
pass
else:
os.makedirs(MODELDIR)
model.save_model(os.path.join(MODELDIR, 'xgb-v1.model'))
return model
model = xgb_final(X_train, y_train, final_param, MODELDIR)
```
## 5. Model evaluation
```
def model_evaluation(X_train, X_test, y_train, y_test):
# predictions on train and test sets
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
# MAE and NRMSE calculation
train_rmse = np.sqrt(mean_squared_error(y_train, y_train_pred))
train_mae = mean_absolute_error(y_train, y_train_pred)
train_nrmse = train_rmse/np.std(y_train.values)
test_rmse = np.sqrt(mean_squared_error(y_test, y_test_pred))
test_mae = mean_absolute_error(y_test, y_test_pred)
test_nrmse = test_rmse/np.std(y_test.values)
print(f'Training MAE: {np.round(train_mae, 3)}')
print(f'Training NRMSE: {np.round(train_nrmse, 3)}')
print()
print(f'Test MAE: {np.round(test_mae, 3)}')
print(f'Test NRMSE: {np.round(test_nrmse, 3)}')
return y_train_pred, y_test_pred
y_train_pred, y_test_pred = model_evaluation(X_train, X_test, y_train, y_test)
```
## 6. Result plotting
```
plt.plot(y_train.values, label='actual')
plt.plot(y_train_pred, label='predicted')
plt.ylabel('count')
plt.xlabel('sample id')
plt.title('Actual vs Predicted on training data using XGBoost')
plt.legend()
plt.tight_layout()
plt.show()
plt.plot(y_test.values, label='actual')
plt.plot(y_test_pred, label='predicted')
plt.ylabel('count')
plt.xlabel('sample id')
plt.title('Actual vs Predicted on test data using XGBoost', fontsize=14)
plt.legend()
plt.tight_layout()
plt.show()
```
## 7. Variable importance
```
xgb.plot_importance(model)
plt.show()
```
| true |
code
| 0.474266 | null | null | null | null |
|
# Advanced Matplotlib Concepts Lecture
In this lecture we cover some more advanced topics which you won't use as often. You can always reference the documentation for more resources!
### Logarithmic Scale
* It is also possible to set a logarithmic scale for one or both axes. This functionality is in fact only one application of a more general transformation system in Matplotlib. Each of the axes' scales is set separately using the `set_xscale` and `set_yscale` methods, which accept one parameter (the value "log" in this case):
```
import matplotlib.pyplot as plt
import matplotlib as mp
%matplotlib inline
import numpy as np
x = np.linspace(0,5,11) # We go from 0 to 5 and grab 11 points which are linearly spaced.
y = x ** 2
fig, axes = plt.subplots(1, 2, figsize=(10,4))
axes[0].plot(x, x**2, x, np.exp(x))
axes[0].set_title("Normal scale")
axes[1].plot(x, x**2, x, np.exp(x))
axes[1].set_yscale("log")
axes[1].set_title("Logarithmic scale (y)");
```
### Placement of ticks and custom tick labels
* We can explicitly determine where we want the axis ticks with `set_xticks` and `set_yticks`, which both take a list of values for where on the axis the ticks are to be placed. We can also use the `set_xticklabels` and `set_yticklabels` methods to provide a list of custom text labels for each tick location:
```
fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(x, x**2, x, x**3, lw=2)
ax.set_xticks([1, 2, 3, 4, 5])
ax.set_xticklabels([r'$\alpha$', r'$\beta$', r'$\gamma$', r'$\delta$', r'$\epsilon$'], fontsize=18)
yticks = [0, 50, 100, 150]
ax.set_yticks(yticks)
ax.set_yticklabels(["$%.1f$" % y for y in yticks], fontsize=18); # use LaTeX formatted labels
```
There are a number of more advanced methods for controlling major and minor tick placement in matplotlib figures, such as automatic placement according to different policies. See http://matplotlib.org/api/ticker_api.html for details.
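For example, tick locations can also be placed automatically according to a policy by attaching locator objects from `matplotlib.ticker` to an axis (a small sketch reusing the `x` array from above):
```
from matplotlib import ticker

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(x, x**2, x, x**3, lw=2)

# a major tick every 1.0 units and a minor tick every 0.25 units on the x axis
ax.xaxis.set_major_locator(ticker.MultipleLocator(1.0))
ax.xaxis.set_minor_locator(ticker.MultipleLocator(0.25))

# let matplotlib choose at most 5 "nice" major tick locations on the y axis
ax.yaxis.set_major_locator(ticker.MaxNLocator(5))
```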
#### Scientific notation
With large numbers on axes, it is often better to use scientific notation:
```
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_title("scientific notation")
ax.set_yticks([0, 50, 100, 150])
from matplotlib import ticker
formatter = ticker.ScalarFormatter(useMathText=True)
formatter.set_scientific(True)
formatter.set_powerlimits((-1,1))
ax.yaxis.set_major_formatter(formatter)
```
## Axis number and axis label spacing
```
# distance between x and y axis and the numbers on the axes
mp.rcParams['xtick.major.pad'] = 5
mp.rcParams['ytick.major.pad'] = 5
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_yticks([0, 50, 100, 150])
ax.set_title("label and axis spacing")
# padding between axis label and axis numbers
ax.xaxis.labelpad = 5
ax.yaxis.labelpad = 5
ax.set_xlabel("x")
ax.set_ylabel("y")
# restore defaults
mp.rcParams['xtick.major.pad'] = 3
mp.rcParams['ytick.major.pad'] = 3
```
#### Axis position adjustments
Unfortunately, when saving figures the labels are sometimes clipped, and it can be necessary to adjust the positions of axes a little bit. This can be done using `subplots_adjust`:
```
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_yticks([0, 50, 100, 150])
ax.set_title("title")
ax.set_xlabel("x")
ax.set_ylabel("y")
fig.subplots_adjust(left=0.15, right=.9, bottom=0.1, top=0.9);
```
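An alternative (or complement) to adjusting the axes manually is to let `savefig` compute a tight bounding box when writing the file, which crops the saved image around all drawn artists so labels are not clipped:
```
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_xlabel("x")
ax.set_ylabel("y")

# the filename here is just an example
fig.savefig("figure_with_labels.png", dpi=150, bbox_inches="tight")
```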
### Axis grid
With the `grid` method in the axis object, we can turn on and off grid lines. We can also customize the appearance of the grid lines using the same keyword arguments as the `plot` function:
```
fig, axes = plt.subplots(1, 2, figsize=(10,3))
# default grid appearance
axes[0].plot(x, x**2, x, x**3, lw=2)
axes[0].grid(True)
# custom grid appearance
axes[1].plot(x, x**2, x, x**3, lw=2)
axes[1].grid(color='b', alpha=0.5, linestyle='dashed', linewidth=0.5)
```
### Axis spines
* We can also change the properties of axis spines:
```
fig, ax = plt.subplots(figsize=(6,2))
ax.spines['bottom'].set_color('blue')
ax.spines['top'].set_color('blue')
ax.spines['left'].set_color('red')
ax.spines['left'].set_linewidth(2)
# turn off axis spine to the right
ax.spines['right'].set_color("none")
ax.yaxis.tick_left() # only ticks on the left side
```
### Twin axes
Sometimes it is useful to have dual x or y axes in a figure; for example, when plotting curves with different units together. Matplotlib supports this with the `twinx` and `twiny` functions:
```
fig, ax1 = plt.subplots()
ax1.plot(x, x**2, lw=2, color="blue")
ax1.set_ylabel(r"area $(m^2)$", fontsize=18, color="blue")
for label in ax1.get_yticklabels():
label.set_color("blue")
ax2 = ax1.twinx()
ax2.plot(x, x**3, lw=2, color="red")
ax2.set_ylabel(r"volume $(m^3)$", fontsize=18, color="red")
for label in ax2.get_yticklabels():
label.set_color("red")
```
### Axes where x and y is zero
```
fig, ax = plt.subplots()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0)) # set position of x spine to x=0
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0)) # set position of y spine to y=0
xx = np.linspace(-0.75, 1., 100)
ax.plot(xx, xx**3);
```
## Other 2D plot styles
In addition to the regular `plot` method, there are a number of other functions for generating different kinds of plots. See the matplotlib plot gallery for a complete list of available plot types: http://matplotlib.org/gallery.html. Some of the more useful ones are shown below:
```
n = np.array([0,1,2,3,4,5])
fig, axes = plt.subplots(1, 4, figsize=(12,3))
axes[0].scatter(xx, xx + 0.25*np.random.randn(len(xx)))
axes[0].set_title("scatter")
axes[1].step(n, n**2, lw=2)
axes[1].set_title("step")
axes[2].bar(n, n**2, align="center", width=0.5, alpha=0.5)
axes[2].set_title("bar")
axes[3].fill_between(x, x**2, x**3, color="green", alpha=0.5);
axes[3].set_title("fill_between");
```
### Text annotation
* Annotating text in matplotlib figures can be done using the `text` function. It supports LaTeX formatting just like axis label texts and titles:
```
fig, ax = plt.subplots()
ax.plot(xx, xx**2, xx, xx**3)
ax.text(0.15, 0.2, r"$y=x^2$", fontsize=20, color="blue")
ax.text(0.65, 0.1, r"$y=x^3$", fontsize=20, color="green");
```
### Figures with multiple subplots and insets
* Axes can be added to a matplotlib Figure canvas manually using `fig.add_axes` or using a sub-figure layout manager such as `subplots`, `subplot2grid`, or `gridspec`:
#### subplots
```
fig,ax = plt.subplots(2,3)
fig.tight_layout()
```
### subplot2grid
```
fig = plt.figure()
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1,2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2,0))
ax5 = plt.subplot2grid((3,3), (2,1))
fig.tight_layout()
```
## gridspec
```
import matplotlib.gridspec as gridspec
fig = plt.figure()
gs = gridspec.GridSpec(2, 3, height_ratios=[2,1], width_ratios=[1,2,1])
for g in gs:
ax = fig.add_subplot(g)
fig.tight_layout()
```
### add axes
* Manually adding axes with `add_axes` is useful for adding insets to figures:
```
fig, ax = plt.subplots()
ax.plot(xx, xx**2, xx, xx**3)
fig.tight_layout()
# inset
inset_ax = fig.add_axes([0.2, 0.55, 0.35, 0.35]) # X, Y, width, height
inset_ax.plot(xx, xx**2, xx, xx**3)
inset_ax.set_title('zoom near origin')
# set axis range
inset_ax.set_xlim(-.2, .2)
inset_ax.set_ylim(-.005, .01)
# set axis tick locations
inset_ax.set_yticks([0, 0.005, 0.01])
inset_ax.set_xticks([-0.1,0,.1]);
```
### Colormap and contour figures
* Colormaps and contour figures are useful for plotting functions of two variables. In most of these functions we will use a colormap to encode one dimension of the data. There are a number of predefined colormaps. It is relatively straightforward to define custom colormaps. For a list of pre-defined colormaps, see: http://www.scipy.org/Cookbook/Matplotlib/Show_colormaps
```
alpha = 0.7
phi_ext = 2 * np.pi * 0.5
def flux_qubit_potential(phi_m, phi_p):
return 2 + alpha - 2 * np.cos(phi_p) * np.cos(phi_m) - alpha * np.cos(phi_ext - 2*phi_p)
phi_m = np.linspace(0, 2*np.pi, 100)
phi_p = np.linspace(0, 2*np.pi, 100)
X,Y = np.meshgrid(phi_p, phi_m)
Z = flux_qubit_potential(X, Y).T
```
#### pcolor
```
fig, ax = plt.subplots()
p = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=mp.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max())
cb = fig.colorbar(p, ax=ax)
```
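As mentioned above, it is relatively straightforward to define a custom colormap as well; here is a minimal sketch using `LinearSegmentedColormap.from_list` (the colormap name and colors are arbitrary choices for illustration):
```
from matplotlib.colors import LinearSegmentedColormap
# a simple blue -> white -> red colormap built from a list of colors
my_cmap = LinearSegmentedColormap.from_list("blue_white_red", ["blue", "white", "red"], N=256)
fig, ax = plt.subplots()
p = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=my_cmap)
cb = fig.colorbar(p, ax=ax)
```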
#### imshow
```
fig, ax = plt.subplots()
im = ax.imshow(Z, cmap=mp.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])
im.set_interpolation('bilinear')
cb = fig.colorbar(im, ax=ax)
```
#### Contour
```
fig, ax = plt.subplots()
cnt = ax.contour(Z, cmap=mp.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])
```
## 3D figures
* To use 3D graphics in matplotlib, we first need to create an instance of the `Axes3D` class. 3D axes can be added to a matplotlib figure canvas in exactly the same way as 2D axes; or, more conveniently, by passing a `projection='3d'` keyword argument to the `add_axes` or `add_subplot` methods.
```
from mpl_toolkits.mplot3d.axes3d import Axes3D
```
#### Surface plots
```
fig = plt.figure(figsize=(14,6))
# `ax` is a 3D-aware axis instance because of the projection='3d' keyword argument to add_subplot
ax = fig.add_subplot(1, 2, 1, projection='3d')
p = ax.plot_surface(X, Y, Z, rstride=4, cstride=4, linewidth=0)
# surface_plot with color grading and color bar
ax = fig.add_subplot(1, 2, 2, projection='3d')
p = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=mp.cm.coolwarm, linewidth=0, antialiased=False)
cb = fig.colorbar(p, shrink=0.5)
```
#### Wire-frame plot
```
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1, 1, 1, projection='3d')
p = ax.plot_wireframe(X, Y, Z, rstride=4, cstride=4,color='teal')
```
#### Contour plots with projections
```
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1,1,1, projection='3d')
ax.plot_surface(X, Y, Z, rstride=4, cstride=4, alpha=0.25)
cset = ax.contour(X, Y, Z, zdir='z', offset=-np.pi, cmap=mp.cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='x', offset=-np.pi, cmap=mp.cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='y', offset=3*np.pi, cmap=mp.cm.coolwarm)
ax.set_xlim3d(-np.pi, 2*np.pi);
ax.set_ylim3d(0, 3*np.pi);
ax.set_zlim3d(-np.pi, 2*np.pi);
```
## Further reading
* http://www.matplotlib.org - The project web page for matplotlib.
* https://github.com/matplotlib/matplotlib - The source code for matplotlib.
* http://matplotlib.org/gallery.html - A large gallery showcasing various types of plots matplotlib can create. Highly recommended!
* http://www.loria.fr/~rougier/teaching/matplotlib - A good matplotlib tutorial.
* http://scipy-lectures.github.io/matplotlib/matplotlib.html - Another good matplotlib reference.
# Quantization of Image Classification Models
This tutorial demonstrates how to apply INT8 quantization to an image classification model using the [Post-training Optimization Tool API](../../compression/api/README.md). The MobileNet V2 model trained on the CIFAR10 dataset is used as an example. The code of this tutorial is designed to be extendable to custom models and datasets. It is assumed that OpenVINO is already installed. This tutorial consists of the following steps:
- Prepare the model for quantization
- Define data loading and accuracy validation functionality
- Run optimization pipeline
- Compare accuracy of the original and quantized models
- Compare performance of the original and quantized models
- Compare results on four pictures
```
import os
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import torch
from addict import Dict
from compression.api import DataLoader, Metric
from compression.engines.ie_engine import IEEngine
from compression.graph import load_model, save_model
from compression.graph.model_utils import compress_model_weights
from compression.pipeline.initializer import create_pipeline
from openvino.runtime import Core
from torchvision import transforms
from torchvision.datasets import CIFAR10
# Set the data and model directories
DATA_DIR = 'data'
MODEL_DIR = 'model'
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(MODEL_DIR, exist_ok=True)
```
## Prepare the Model
The model preparation stage has the following steps:
- Download PyTorch model from Torchvision repository
- Convert it to ONNX format
- Run OpenVINO Model Optimizer tool to convert ONNX to OpenVINO Intermediate Representation (IR)
```
model = torch.hub.load("chenyaofo/pytorch-cifar-models", "cifar10_mobilenetv2_x1_0", pretrained=True)
model.eval()
dummy_input = torch.randn(1, 3, 32, 32)
onnx_model_path = Path(MODEL_DIR) / 'mobilenet_v2.onnx'
ir_model_xml = onnx_model_path.with_suffix('.xml')
ir_model_bin = onnx_model_path.with_suffix('.bin')
torch.onnx.export(model, dummy_input, onnx_model_path, verbose=True)
# Run OpenVINO Model Optimization tool to convert ONNX to OpenVINO IR
!mo --framework=onnx --data_type=FP16 --input_shape=[1,3,32,32] -m $onnx_model_path --output_dir $MODEL_DIR
```
## Define Data Loader
In this step, the `DataLoader` interface from the POT API is implemented.
```
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.247, 0.243, 0.261))])
dataset = CIFAR10(root=DATA_DIR, train=False, transform=transform, download=True)
# create DataLoader from CIFAR10 dataset
class CifarDataLoader(DataLoader):
def __init__(self, config):
"""
Initialize config and dataset.
:param config: created config with DATA_DIR path.
"""
if not isinstance(config, Dict):
config = Dict(config)
super().__init__(config)
self.indexes, self.pictures, self.labels = self.load_data(dataset)
def __len__(self):
return len(self.labels)
def __getitem__(self, index):
"""
Return one sample of index, label and picture.
:param index: index of the taken sample.
"""
if index >= len(self):
raise IndexError
return (self.indexes[index], self.labels[index]), self.pictures[index].numpy()
def load_data(self, dataset):
"""
Load dataset in needed format.
:param dataset: downloaded dataset.
"""
pictures, labels, indexes = [], [], []
for idx, sample in enumerate(dataset):
pictures.append(sample[0])
labels.append(sample[1])
indexes.append(idx)
return indexes, pictures, labels
```
## Define Accuracy Metric Calculation
In this step, the `Metric` interface for the Top-1 accuracy metric is implemented. It is used to validate the accuracy of the quantized model.
```
# Custom implementation of classification accuracy metric.
class Accuracy(Metric):
# Required methods
def __init__(self, top_k=1):
super().__init__()
self._top_k = top_k
self._name = 'accuracy@top{}'.format(self._top_k)
self._matches = []
@property
def value(self):
""" Returns accuracy metric value for the last model output. """
return {self._name: self._matches[-1]}
@property
def avg_value(self):
""" Returns accuracy metric value for all model outputs. """
return {self._name: np.ravel(self._matches).mean()}
def update(self, output, target):
""" Updates prediction matches.
:param output: model output
:param target: annotations
"""
if len(output) > 1:
raise Exception('The accuracy metric cannot be calculated '
'for a model with multiple outputs')
if isinstance(target, dict):
target = list(target.values())
predictions = np.argsort(output[0], axis=1)[:, -self._top_k:]
match = [float(t in predictions[i]) for i, t in enumerate(target)]
self._matches.append(match)
def reset(self):
""" Resets collected matches """
self._matches = []
def get_attributes(self):
"""
Returns a dictionary of metric attributes {metric_name: {attribute_name: value}}.
Required attributes: 'direction': 'higher-better' or 'higher-worse'
'type': metric type
"""
return {self._name: {'direction': 'higher-better',
'type': 'accuracy'}}
```
## Run Quantization Pipeline and compare the accuracy of the original and quantized models
Here we define a configuration for our quantization pipeline and run it.
NOTE: we use the built-in `IEEngine` implementation of the `Engine` interface from the POT API for model inference. `IEEngine` is built on top of the OpenVINO Python* API for inference and provides basic functionality for inference of simple models. If you have a more complicated inference flow for your model/models, you should create your own implementation of the `Engine` interface, for example by inheriting from `IEEngine` and extending it.
```
model_config = Dict({
'model_name': 'mobilenet_v2',
'model': ir_model_xml,
'weights': ir_model_bin
})
engine_config = Dict({
'device': 'CPU',
'stat_requests_number': 2,
'eval_requests_number': 2
})
dataset_config = {
'data_source': DATA_DIR
}
algorithms = [
{
'name': 'DefaultQuantization',
'params': {
'target_device': 'CPU',
'preset': 'performance',
'stat_subset_size': 300
}
}
]
# Steps 1-7: Model optimization
# Step 1: Load the model.
model = load_model(model_config)
# Step 2: Initialize the data loader.
data_loader = CifarDataLoader(dataset_config)
# Step 3 (Optional. Required for AccuracyAwareQuantization): Initialize the metric.
metric = Accuracy(top_k=1)
# Step 4: Initialize the engine for metric calculation and statistics collection.
engine = IEEngine(engine_config, data_loader, metric)
# Step 5: Create a pipeline of compression algorithms.
pipeline = create_pipeline(algorithms, engine)
# Step 6: Execute the pipeline.
compressed_model = pipeline.run(model)
# Step 7 (Optional): Compress model weights quantized precision
# in order to reduce the size of final .bin file.
compress_model_weights(compressed_model)
# Step 8: Save the compressed model to the desired path.
compressed_model_paths = save_model(model=compressed_model, save_path=MODEL_DIR, model_name="quantized_mobilenet_v2")
compressed_model_xml = compressed_model_paths[0]["model"]
compressed_model_bin = Path(compressed_model_paths[0]["model"]).with_suffix(".bin")
# Step 9: Compare accuracy of the original and quantized models.
metric_results = pipeline.evaluate(model)
if metric_results:
for name, value in metric_results.items():
print(f"Accuracy of the original model: {name}: {value}")
metric_results = pipeline.evaluate(compressed_model)
if metric_results:
for name, value in metric_results.items():
print(f"Accuracy of the optimized model: {name}: {value}")
```
## Compare Performance of the Original and Quantized Models
Finally, we will measure the inference performance of the FP32 and INT8 models. To do this, we use [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html) - OpenVINO's inference performance measurement tool.
NOTE: For more accurate performance measurements, we recommend running `benchmark_app` in a terminal/command prompt after closing other applications. Run `benchmark_app -m model.xml -d CPU` to benchmark async inference on CPU for one minute. Change `CPU` to `GPU` to benchmark on GPU. Run `benchmark_app --help` to see an overview of all command-line options.
```
# Inference FP16 model (IR)
!benchmark_app -m $ir_model_xml -d CPU -api async
# Inference INT8 model (IR)
!benchmark_app -m $compressed_model_xml -d CPU -api async
```
## Compare Results on Four Pictures
```
ie = Core()
# read and load float model
float_model = ie.read_model(
model=ir_model_xml, weights=ir_model_bin
)
float_compiled_model = ie.compile_model(model=float_model, device_name="CPU")
# read and load quantized model
quantized_model = ie.read_model(
model=compressed_model_xml, weights=compressed_model_bin
)
quantized_compiled_model = ie.compile_model(model=quantized_model, device_name="CPU")
# define all possible labels from CIFAR10
labels_names = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
all_pictures = []
all_labels = []
# get all pictures and their labels
for i, batch in enumerate(data_loader):
all_pictures.append(batch[1])
all_labels.append(batch[0][1])
def plot_pictures(indexes: list, all_pictures=all_pictures, all_labels=all_labels):
"""Plot 4 pictures.
:param indexes: a list of indexes of pictures to be displayed.
:param all_pictures: list of pictures from the dataset.
:param all_labels: list of labels corresponding to the pictures.
"""
images, labels = [], []
num_pics = len(indexes)
assert num_pics == 4, f'Not enough indexes for pictures to be displayed, got {num_pics}'
for idx in indexes:
assert idx < 10000, 'Cannot get such index, there are only 10000'
pic = np.rollaxis(all_pictures[idx].squeeze(), 0, 3)
images.append(pic)
labels.append(labels_names[all_labels[idx]])
f, axarr = plt.subplots(1, 4)
axarr[0].imshow(images[0])
axarr[0].set_title(labels[0])
axarr[1].imshow(images[1])
axarr[1].set_title(labels[1])
axarr[2].imshow(images[2])
axarr[2].set_title(labels[2])
axarr[3].imshow(images[3])
axarr[3].set_title(labels[3])
def infer_on_pictures(model, indexes: list, all_pictures=all_pictures):
""" Inference model on a few pictures.
:param net: model on which do inference
:param indexes: list of indexes
"""
predicted_labels = []
request = model.create_infer_request()
for idx in indexes:
assert idx < 10000, 'Cannot get such index, there are only 10000'
request.infer(inputs={'input.1': all_pictures[idx][None,]})
result = request.get_output_tensor(0).data
result = labels_names[np.argmax(result[0])]
predicted_labels.append(result)
return predicted_labels
indexes_to_infer = [7, 12, 15, 20] # to plot specify 4 indexes
plot_pictures(indexes_to_infer)
results_float = infer_on_pictures(float_compiled_model, indexes_to_infer)
results_quantized = infer_on_pictures(quantized_compiled_model, indexes_to_infer)
print(f"Labels for picture from float model : {results_float}.")
print(f"Labels for picture from quantized model : {results_quantized}.")
```
# Activation Functions
This notebook introduces activation functions in TensorFlow.
We start by loading the necessary libraries for this script.
```
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
# from tensorflow.python.framework import ops
# ops.reset_default_graph()
tf.reset_default_graph()
```
### Start a graph session
```
sess = tf.Session()
```
### Initialize the X range values for plotting
```
x_vals = np.linspace(start=-10., stop=10., num=100)
```
### Activation Functions:
ReLU activation
```
print(sess.run(tf.nn.relu([-3., 3., 10.])))
y_relu = sess.run(tf.nn.relu(x_vals))
```
ReLU-6 activation
```
print(sess.run(tf.nn.relu6([-3., 3., 10.])))
y_relu6 = sess.run(tf.nn.relu6(x_vals))
```
ReLU-6 refers to the following function
\begin{equation}
\min\left(\max(0, x), 6\right)
\end{equation}
Sigmoid activation
```
print(sess.run(tf.nn.sigmoid([-1., 0., 1.])))
y_sigmoid = sess.run(tf.nn.sigmoid(x_vals))
```
Hyperbolic tangent activation
```
print(sess.run(tf.nn.tanh([-1., 0., 1.])))
y_tanh = sess.run(tf.nn.tanh(x_vals))
```
Softsign activation
```
print(sess.run(tf.nn.softsign([-1., 0., 1.])))
y_softsign = sess.run(tf.nn.softsign(x_vals))
```
softsign refers to the following function
\begin{equation}
\frac{x}{1 + |x|}
\end{equation}
<br>
<img src="http://tecmemo.wpblog.jp/wp-content/uploads/2017/01/activation_04.png" width=40%>
Softplus activation
```
print(sess.run(tf.nn.softplus([-1., 0., 1.])))
y_softplus = sess.run(tf.nn.softplus(x_vals))
```
Softplus refers to the following function
\begin{equation}
\log\left(\exp(x) + 1\right)
\end{equation}
Exponential linear activation
```
print(sess.run(tf.nn.elu([-1., 0., 1.])))
y_elu = sess.run(tf.nn.elu(x_vals))
```
ELU refers to the following function
\begin{equation}
f(x) = \begin{cases}
\exp(x) - 1 &(x < 0)\\
x &(x \geq 0)\\
\end{cases}
\end{equation}
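A quick NumPy sanity check of this definition (mirroring the `tf.nn.elu` call above; values are approximate):
```
# elu(x) = exp(x) - 1 for x < 0 and x otherwise; compare with the tf.nn.elu output above
vals = np.array([-1., 0., 1.])
print(np.where(vals < 0, np.exp(vals) - 1, vals))  # approx. [-0.6321, 0., 1.]
```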
### Plot the different functions
```
plt.style.use('ggplot')
plt.plot(x_vals, y_softplus, 'r--', label='Softplus', linewidth=2)
plt.plot(x_vals, y_relu, 'b:', label='ReLU', linewidth=2)
plt.plot(x_vals, y_relu6, 'g-.', label='ReLU6', linewidth=2)
plt.plot(x_vals, y_elu, 'k-', label='ExpLU', linewidth=0.5)
plt.ylim([-1.5,7])
plt.legend(loc='upper left', shadow=True, edgecolor='k')
plt.show()
plt.plot(x_vals, y_sigmoid, 'r--', label='Sigmoid', linewidth=2)
plt.plot(x_vals, y_tanh, 'b:', label='Tanh', linewidth=2)
plt.plot(x_vals, y_softsign, 'g-.', label='Softsign', linewidth=2)
plt.ylim([-1.3,1.3])
plt.legend(loc='upper left', shadow=True, edgecolor='k')
plt.show()
```
# Bayesian Regression Using NumPyro
In this tutorial, we will explore how to do bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](#References)]. In particular, we would like to explore the following:
- Write a simple model using the `sample` NumPyro primitive.
- Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest.
- Learn about inference utilities such as `Predictive` and `log_likelihood`.
- Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC. e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc.
## Tutorial Outline:
1. [Dataset](#Dataset)
2. [Regression Model to Predict Divorce Rate](#Regression-Model-to-Predict-Divorce-Rate)
- [Model-1: Predictor-Marriage Rate](#Model-1:-Predictor---Marriage-Rate)
- [Posterior Distribution over the Regression Parameters](#Posterior-Distribution-over-the-Regression-Parameters)
- [Posterior Predictive Distribution](#Posterior-Predictive-Distribution)
- [Predictive Utility With Effect Handlers](#Predictive-Utility-With-Effect-Handlers)
- [Model Predictive Density](#Model-Predictive-Density)
- [Model-2: Predictor-Median Age of Marriage](#Model-2:-Predictor---Median-Age-of-Marriage)
- [Model-3: Predictor-Marriage Rate and Median Age of Marriage](#Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage)
- [Divorce Rate Residuals by State](#Divorce-Rate-Residuals-by-State)
3. [Regression Model with Measurement Error](#Regression-Model-with-Measurement-Error)
- [Effect of Incorporating Measurement Noise on Residuals](#Effect-of-Incorporating-Measurement-Noise-on-Residuals)
4. [References](#References)
```
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use("bmh")
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
assert numpyro.__version__.startswith("0.8.0")
```
## Dataset
For this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](#References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
```
DATASET_URL = "https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv"
dset = pd.read_csv(DATASET_URL, sep=";")
dset
```
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
```
vars = [
"Population",
"MedianAgeMarriage",
"Marriage",
"WaffleHouses",
"South",
"Divorce",
]
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette="husl");
```
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage.
There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouse` and plot the results.
```
sns.regplot(x="WaffleHouses", y="Divorce", data=dset);
```
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](#References)] which explores the problem of causal association in the presence of multiple predictors.
For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial.
## Regression Model to Predict Divorce Rate
Let us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states.
First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
```
standardize = lambda x: (x - x.mean()) / x.std()
dset["AgeScaled"] = dset.MedianAgeMarriage.pipe(standardize)
dset["MarriageScaled"] = dset.Marriage.pipe(standardize)
dset["DivorceScaled"] = dset.Divorce.pipe(standardize)
```
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following:
- In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](#Posterior-Predictive-Distribution) on new data.
- In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](#References)], [[4](#References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data.
- The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](#Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
```
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample("a", dist.Normal(0.0, 0.2))
M, A = 0.0, 0.0
if marriage is not None:
bM = numpyro.sample("bM", dist.Normal(0.0, 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample("bA", dist.Normal(0.0, 0.5))
A = bA * age
sigma = numpyro.sample("sigma", dist.Exponential(1.0))
mu = a + M + A
numpyro.sample("obs", dist.Normal(mu, sigma), obs=divorce)
```
### Model 1: Predictor - Marriage Rate
We first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](#References)] for more details on the NUTS algorithm) to run inference on this simple model.
The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which given the parameter values returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.MCMC) that has the following methods:
- `run(...)`: runs warmup, adapts step size and mass matrix, and does sampling using the sample from the warmup phase.
- `print_summary()`: print diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic.
- `get_samples()`: gets samples from the posterior distribution.
Note the following:
- JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed.
- We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.HMC) class.
```
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
# Run NUTS.
kernel = NUTS(model)
num_samples = 2000
mcmc = MCMC(kernel, num_warmup=1000, num_samples=num_samples)
mcmc.run(
rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values
)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
```
#### Posterior Distribution over the Regression Parameters
We notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability which is 0.8, by default. We were able to successfully adapt our step size to achieve this target in the warmup phase.
During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](#References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.
At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [gelman rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The value for these diagnostics indicates that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a collection (in our case, a `dict` since `init_samples` was a `dict`) containing samples from the posterior distribution for each of the latent parameters in the model.
To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
```
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, "o")
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = (
jnp.expand_dims(samples_1["a"], -1)
+ jnp.expand_dims(samples_1["bM"], -1) * dset.MarriageScaled.values
)
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(
xlabel="Marriage rate", ylabel="Divorce rate", title="Regression line with 90% CI"
);
```
We can see from the plot, that the CI broadens towards the tails where the data is relatively sparse, as can be expected.
#### Prior Predictive Distribution
Let us check that we have set sensible priors by sampling from the prior predictive distribution. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.html#numpyro.infer.util.Predictive) utility for this purpose.
```
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
prior_predictive = Predictive(model, num_samples=100)
prior_predictions = prior_predictive(rng_key_, marriage=dset.MarriageScaled.values)[
"obs"
]
mean_prior_pred = jnp.mean(prior_predictions, axis=0)
hpdi_prior_pred = hpdi(prior_predictions, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_prior_pred, hpdi_prior_pred)
ax.set(xlabel="Marriage rate", ylabel="Divorce rate", title="Predictions with 90% CI");
```
#### Posterior Predictive Distribution
Let us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
```
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)["obs"]
df = dset.filter(["Location"])
df["Mean Predictions"] = jnp.mean(predictions, axis=0)
df.head()
```
#### Predictive Utility With Effect Handlers
To remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
```
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.seed(handlers.condition(model, post_samples), rng_key)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace["obs"]["value"]
# vectorize predictions via vmap
predict_fn = vmap(
lambda rng_key, samples: predict(
rng_key, samples, model, marriage=dset.MarriageScaled.values
)
)
```
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function.
- The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model.
- The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC.
- The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density.
It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the `mcmc` function) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions. Note that if we didn't use `vmap`, we would have to use a native for loop which for each sample which is much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
```
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(["Location"])
df["Mean Predictions"] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel="Marriage rate", ylabel="Divorce rate", title="Predictions with 90% CI");
```
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit.
#### Posterior Predictive Density
Likewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](#References)] which is given by
$$ \log \prod_{i=1}^{n} \int p(y_i \mid \theta) \, p_{post}(\theta) \, d\theta
\approx \sum_{i=1}^{n} \log \frac{\sum_s p(y_i \mid \theta^{s})}{S}
= \sum_{i=1}^{n} \left( \log \sum_s p(y_i \mid \theta^{s}) - \log S \right) $$
Here, $i$ indexes the observed data points $y$ and $s$ indexes the posterior samples over the latent parameters $\theta$. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data-points have higher probability under the given model.
```
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace["obs"]
return obs_node["fn"].log_prob(obs_node["value"])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(
lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs)
)
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
```
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.html#log-likelihood) utility function that can be used directly for computing `log likelihood` as in the first function for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll out your own inference utilities using NumPyro's effect handling stack.
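For comparison, here is a minimal sketch that computes the same quantity for Model 1 with the built-in utility (we assume, per the documentation, that it returns a dict keyed by the observed site name, with one row per posterior sample):
```
from numpyro.infer import log_likelihood
# per-observation log likelihoods for the "obs" site, shape (num_samples, 50)
log_lk_vals = log_likelihood(
    model, samples_1, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values
)["obs"]
# log posterior predictive density, computed as in `log_pred_density` above
print((logsumexp(log_lk_vals, 0) - jnp.log(log_lk_vals.shape[0])).sum())
```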
```
rng_key, rng_key_ = random.split(rng_key)
print(
"Log posterior predictive density: {}".format(
log_pred_density(
rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values,
)
)
)
```
### Model 2: Predictor - Median Age of Marriage
We will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following:
- Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate.
- We get a higher log posterior predictive density as compared to Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.
```
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = (
jnp.expand_dims(samples_2["a"], -1)
+ jnp.expand_dims(samples_2["bA"], -1) * dset.AgeScaled.values
)
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(
xlabel="Median marriage age",
ylabel="Divorce rate",
title="Regression line with 90% CI",
);
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_, age=dset.AgeScaled.values)["obs"]
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel="Median Age", ylabel="Divorce rate", title="Predictions with 90% CI");
rng_key, rng_key_ = random.split(rng_key)
print(
"Log posterior predictive density: {}".format(
log_pred_density(
rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values,
)
)
)
```
### Model 3: Predictor - Marriage Rate and Median Age of Marriage
Finally, we will also model divorce rate as depending on both marriage rate as well as the median age of marriage. Note that the model's posterior predictive density is similar to Model 2 which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
```
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(
rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values,
)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print(
"Log posterior predictive density: {}".format(
log_pred_density(
rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values,
)
)
)
```
### Divorce Rate Residuals by State
The regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
```
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(
rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values
)["obs"]
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, "--")
ax[0].errorbar(
pred_mean[idx],
y,
xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker="o",
ms=5,
mew=4,
ls="none",
alpha=0.8,
)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker="o", ls="none", color="gray")
ax[0].set(
xlabel="Posterior Predictive (red) vs. Actuals (gray)",
ylabel="State",
title="Posterior Predictive with 90% CI",
)
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10)
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, "--")
ax[1].errorbar(
residuals_mean[idx], y, xerr=err[idx], marker="o", ms=5, mew=4, ls="none", alpha=0.8
)
ax[1].set(xlabel="Residuals", ylabel="State", title="Residuals with 90% CI")
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
```
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.
Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors that we are missing out on in our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section.
## Regression Model with Measurement Error
Note that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate measurement error given by `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 14 of [[1](#References)].
To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
```
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample("a", dist.Normal(0.0, 0.2))
bM = numpyro.sample("bM", dist.Normal(0.0, 0.5))
M = bM * marriage
bA = numpyro.sample("bA", dist.Normal(0.0, 0.5))
A = bA * age
sigma = numpyro.sample("sigma", dist.Exponential(1.0))
mu = a + M + A
divorce_rate = numpyro.sample("divorce_rate", dist.Normal(mu, sigma))
numpyro.sample("obs", dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset["DivorceScaledSD"] = dset["Divorce SE"] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(
rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values,
divorce=dset.DivorceScaled.values,
)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
```
### Effect of Incorporating Measurement Noise on Residuals
Notice that our values for the regression coefficients are very similar to Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
```
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(
rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values,
)["obs"]
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, "--")
ax.errorbar(
residuals_mean[idx], y, xerr=err[idx], marker="o", ms=5, mew=4, ls="none", alpha=0.8
)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx], ls="none", color="orange", alpha=0.9)
# Plot earlier mean residual
ax.plot(
jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx],
y,
ls="none",
marker="o",
ms=6,
color="black",
alpha=0.6,
)
ax.set(xlabel="Residuals", ylabel="State", title="Residuals with 90% CI")
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10)
ax.text(
-2.8,
-7,
"Residuals (with error-bars) from current model (in red). "
"Black marker \nshows residuals from the previous model (Model 3). "
"Measurement \nerror is indicated by orange bar.",
);
```
The plot above shows the residuals for each of the states, along with the measurement noise given by the inner error bar. The black dots are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model.
To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
```
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls="none", marker="o")
ax.plot(x, y2, ls="none", marker="o")
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], "--", color="gray")
ax.set(
xlabel="Measurement Noise",
ylabel="Residual",
title="Mean residuals (Model 4: red, Model 3: blue)",
);
```
The plot above shows what has happened in more detail - the regression line itself has moved to ensure a better fit for observations with low measurement noise (left of the plot) where the residuals have shrunk very close to 0. That is to say that data points with low measurement error have a concomitantly higher contribution in determining the regression line. On the other hand, for states with high measurement error (right of the plot), incorporating measurement noise allows us to move our posterior distribution mass closer to the observations, resulting in a shrinkage of residuals as well.
## References
1. McElreath, R. (2016). Statistical Rethinking: A Bayesian Course with Examples in R and Stan CRC Press.
2. Stan Development Team. [Stan User's Guide](https://mc-stan.org/docs/2_19/stan-users-guide/index.html)
3. Goodman, N.D., and StuhlMueller, A. (2014). [The Design and Implementation of Probabilistic Programming Languages](http://dippl.org/)
4. Pyro Development Team. [Poutine: A Guide to Programming with Effect Handlers in Pyro](http://pyro.ai/examples/effect_handlers.html)
5. Hoffman, M.D., Gelman, A. (2011). The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo.
6. Betancourt, M. (2017). A Conceptual Introduction to Hamiltonian Monte Carlo.
7. JAX Development Team (2018). [Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more](https://github.com/google/jax)
8. Gelman, A., Hwang, J., and Vehtari A. [Understanding predictive information criteria for Bayesian models](https://arxiv.org/pdf/1307.5928.pdf)
# Customizing datasets in fastai
```
from fastai import *
from fastai.gen_doc.nbdoc import *
from fastai.vision import *
```
In this tutorial, we'll see how to create custom subclasses of [`ItemBase`](/core.html#ItemBase) or [`ItemList`](/data_block.html#ItemList) while retaining everything the fastai library has to offer. To allow basic functions to work consistently across various applications, the fastai library delegates several tasks to one of those specific objects, and we'll see here which methods you have to implement to be able to have everything work properly. But first let's take a step back to see where you'll use your end result.
## Links with the data block API
The data block API works by allowing you to pick a class that is responsible to get your items and another class that is charged with getting your targets. Combined together, they create a pytorch [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) that is then wrapped inside a [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). The training set, validation set and maybe test set are then all put in a [`DataBunch`](/basic_data.html#DataBunch).
The data block API allows you to mix and match what class your inputs have, what class your targets have, how to do the split between train and validation set, then how to create the [`DataBunch`](/basic_data.html#DataBunch). But if you have a very specific kind of input/target, the fastai classes might not be sufficient for you. This tutorial explains what is needed to create a new class of items and what methods are important to implement or override.
It goes in two phases: first we focus on what you need to create a custom [`ItemBase`](/core.html#ItemBase) class (which is the type of your inputs/targets), then on how to create your custom [`ItemList`](/data_block.html#ItemList) (which is basically a set of [`ItemBase`](/core.html#ItemBase)) while highlighting which methods are called by the library.
## Creating a custom [`ItemBase`](/core.html#ItemBase) subclass
The fastai library contains three basic type of [`ItemBase`](/core.html#ItemBase) that you might want to subclass:
- [`Image`](/vision.image.html#Image) for vision applications
- [`Text`](/text.data.html#Text) for text applications
- [`TabularLine`](/tabular.data.html#TabularLine) for tabular applications
Whether you decide to create your own item class or to subclass one of the above, here is what you need to implement:
### Basic attributes
These are the most important attributes your custom [`ItemBase`](/core.html#ItemBase) needs, as they're used everywhere in the fastai library (a minimal sketch is given after the example below):
- `ItemBase.data` is the thing that is passed to pytorch when you want to create a [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). This is what needs to be fed to your model. Note that it might be different from the representation of your item since you might want something that is more understandable.
- `ItemBase.obj` is the thing that truly represents the underlying object behind your item. It should be sufficient to create a copy of your item. For instance, when creating the test set, the basic label is the `obj` attribute of the first label (or y) in the training set.
- `__str__` representation: if applicable, this is what will be displayed when the fastai library has to show your item.
If we take the example of a [`MultiCategory`](/core.html#MultiCategory) object `o` for instance:
- `o.obj` is the list of tags that object has
- `o.data` is a tensor where the tags are one-hot encoded
- `str(o)` returns the tags separated by ;
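As a minimal sketch (the class name and the scalar example are purely illustrative), a custom item wrapping a single float could look like this:
``` python
class MyScalarItem(ItemBase):
    "Illustrative only: a custom item wrapping a single float value."
    def __init__(self, val):
        self.obj = val                    # the underlying object
        self.data = tensor(val).float()   # what is fed to the model
    def __str__(self): return f'{self.obj:.2f}'
```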
If you want to code the way data augmentation should be applied to your custom `Item`, you should write an `apply_tfms` method. This is what will be called if you apply a [`transform`](/vision.transform.html#vision.transform) block in the data block API.
### Advanced show methods
If you want to use methods such as `data.show_batch()` or `learn.show_results()` with a brand new kind of [`ItemBase`](/core.html#ItemBase) you will need to implement two other methods. In both cases, the generic function will grab the tensors of inputs, targets and predictions (if applicable), reconstruct the corresponding [`ItemBase`](/core.html#ItemBase) (see below), but it will delegate to the [`ItemBase`](/core.html#ItemBase) the way to display the results.
``` python
def show_xys(self, xs, ys, **kwargs)->None:
def show_xyzs(self, xs, ys, zs, **kwargs)->None:
```
In both cases `xs` and `ys` represent the inputs and the targets, in the second case `zs` represent the predictions. They are lists of the same length that depend on the `rows` argument you passed. The kwargs are passed from `data.show_batch()` / `learn.show_results()`. As an example, here is the source code of those methods in [`Image`](/vision.image.html#Image):
``` python
def show_xys(self, xs, ys, figsize:Tuple[int,int]=(9,10), **kwargs):
"Show the `xs` and `ys` on a figure of `figsize`. `kwargs` are passed to the show method."
rows = int(math.sqrt(len(xs)))
fig, axs = plt.subplots(rows,rows,figsize=figsize)
for i, ax in enumerate(axs.flatten() if rows > 1 else [axs]):
xs[i].show(ax=ax, y=ys[i], **kwargs)
plt.tight_layout()
def show_xyzs(self, xs, ys, zs, figsize:Tuple[int,int]=None, **kwargs):
"""Show `xs` (inputs), `ys` (targets) and `zs` (predictions) on a figure of `figsize`.
`kwargs` are passed to the show method."""
figsize = ifnone(figsize, (6,3*len(xs)))
fig,axs = plt.subplots(len(xs), 2, figsize=figsize)
fig.suptitle('Ground truth / Predictions', weight='bold', size=14)
for i,(x,y,z) in enumerate(zip(xs,ys,zs)):
x.show(ax=axs[i,0], y=y, **kwargs)
x.show(ax=axs[i,1], y=z, **kwargs)
```
### Example: ImageTuple
For cycleGANs, we need to create a custom type of items since we feed the model tuples of images. Let's look at how to code this. The basis is to code the `obj` and [`data`](/vision.data.html#vision.data) attributes. We do that in the init. The object is the tuple of images and the data their underlying tensors normalized between -1 and 1.
```
class ImageTuple(ItemBase):
def __init__(self, img1, img2):
self.img1,self.img2 = img1,img2
self.obj,self.data = (img1,img2),[-1+2*img1.data,-1+2*img2.data]
```
Then we want to apply data augmentation to our tuple of images. That's done by writing an `apply_tfms` method, as we saw before. Here we just pass that call to the two underlying images then update the data.
```
def apply_tfms(self, tfms, **kwargs):
self.img1 = self.img1.apply_tfms(tfms, **kwargs)
self.img2 = self.img2.apply_tfms(tfms, **kwargs)
self.data = [-1+2*self.img1.data,-1+2*self.img2.data]
return self
```
We define a last method to stack the two images next to each other, which we will use later for a customized `show_batch`/`show_results` behavior.
```
def to_one(self): return Image(0.5+torch.cat(self.data,2)/2)
```
This is all you need to create your custom [`ItemBase`](/core.html#ItemBase). You won't be able to use it until you have put it inside your custom [`ItemList`](/data_block.html#ItemList) though, so you should continue reading the next section.
## Creating a custom [`ItemList`](/data_block.html#ItemList) subclass
This is the main class that allows you to group your inputs or your targets in the data block API. You can then use any of the splitting or labelling methods before creating a [`DataBunch`](/basic_data.html#DataBunch). To make sure everything is properly working, here is what you need to know.
### Class variables
Whether you're directly subclassing [`ItemList`](/data_block.html#ItemList) or one of the particular fastai ones, make sure to know the content of the following three variables as you may need to adjust them:
- `_bunch` contains the name of the class that will be used to create a [`DataBunch`](/basic_data.html#DataBunch)
- `_processor` contains a class (or a list of classes) of [`PreProcessor`](/data_block.html#PreProcessor) that will then be used as the default to create processor for this [`ItemList`](/data_block.html#ItemList)
- `_label_cls` contains the class that will be used to create the labels by default
`_label_cls` is the first to be used in the data block API, in the labelling function. If this variable is set to `None`, the label class will be guessed between [`CategoryList`](/data_block.html#CategoryList), [`MultiCategoryList`](/data_block.html#MultiCategoryList) and [`FloatList`](/data_block.html#FloatList) depending on the type of the first item. The default can be overridden by passing a `label_cls` in the kwargs of the labelling function.
`_processor` is the second to be used. The processors are called at the end of the labelling to apply some kind of function on your items. The default processor of the inputs can be overridden by passing a `processor` in the kwargs when creating the [`ItemList`](/data_block.html#ItemList), and the default processor of the targets can be overridden by passing a `processor` in the kwargs of the labelling function.
Processors are useful for pre-processing some data, but you also need to put in their state any variable you want to save for the call of `data.export()` before creating a [`Learner`](/basic_train.html#Learner) object for inference: the state of the [`ItemList`](/data_block.html#ItemList) isn't saved there, only their processors. For instance, `SegmentationProcessor`'s only reason to exist is to save the dataset classes, and during the process call, it doesn't do anything apart from setting the `classes` and `c` attributes on its dataset.
``` python
class SegmentationProcessor(PreProcessor):
def __init__(self, ds:ItemList): self.classes = ds.classes
def process(self, ds:ItemList): ds.classes,ds.c = self.classes,len(self.classes)
```
`_bunch` is the last class variable used in the data block. When you type the final `databunch()`, the data block API calls the `_bunch.create` method with the `_bunch` of the inputs.
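As a hedged illustration (the specific classes chosen below are just examples, not a prescribed recipe), a custom [`ItemList`](/data_block.html#ItemList) could override these three class variables like this:
``` python
# Hypothetical sketch of the three class variables on a custom ItemList
class MyCustomItemList(ItemList):
    _bunch = DataBunch       # the (sub)class that `databunch()` will instantiate
    _processor = None        # optional PreProcessor class(es) applied after labelling
    _label_cls = FloatList   # labels default to floats, i.e. a regression task
```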
### Keeping \_\_init\_\_ arguments
If you pass additional arguments in your `__init__` call that you save in the state of your [`ItemList`](/data_block.html#ItemList), be wary to also pass them along in the `new` method as this one is used to create your training and validation set when splitting. The basic scheme is:
``` python
class MyCustomItemList(ItemList):
def __init__(self, items, my_arg, **kwargs):
self.my_arg = my_arg
super().__init__(items, **kwargs)
def new(self, items, **kwargs):
return super().new(items, self.my_arg, **kwargs)
```
Be sure to keep the kwargs as is, as they contain all the additional stuff you can pass to an [`ItemList`](/data_block.html#ItemList).
### Important methods
#### - get
The most important method you have to implement is `get`: this one tells your custom [`ItemList`](/data_block.html#ItemList) how to generate an [`ItemBase`](/core.html#ItemBase) from the thing stored in its `items` array. For instance an [`ImageItemList`](/vision.data.html#ImageItemList) has the following `get` method:
``` python
def get(self, i):
fn = super().get(i)
res = self.open(fn)
self.sizes[i] = res.size
return res
```
The first line basically looks at `self.items[i]` (which is a filename). The second line opens it since the `open` method is just
``` python
def open(self, fn): return open_image(fn)
```
The third line is there for [`ImagePoints`](/vision.image.html#ImagePoints) or [`ImageBBox`](/vision.image.html#ImageBBox) targets that require the size of the input [`Image`](/vision.image.html#Image) to be created. Note that if you are building a custom target class and you need the size of an image, you should call `self.x.size[i]`.
```
jekyll_note("""If you just want to customize the way an `Image` is opened, subclass `Image` and just change the
`open` method.""")
```
#### - reconstruct
This is the method that is called in `data.show_batch()`, `learn.predict()` or `learn.show_results()` to transform a pytorch tensor back into an [`ItemBase`](/core.html#ItemBase). In a way, it does the opposite of calling `ItemBase.data`. It should take a tensor `t` and return the same kind of thing as the `get` method.
In some situations ([`ImagePoints`](/vision.image.html#ImagePoints), [`ImageBBox`](/vision.image.html#ImageBBox) for instance) you need to have a look at the corresponding input to rebuild your item. In this case, you should have a second argument called `x` (don't change that name). For instance, here is the `reconstruct` method of [`PointsItemList`](/vision.data.html#PointsItemList):
```python
def reconstruct(self, t, x): return ImagePoints(FlowField(x.size, t), scale=False)
```
#### - analyze_pred
This is the method that is called in `learn.predict()` or `learn.show_results()` to transform predictions into an output tensor suitable for `reconstruct`. For instance we may need to take the maximum argument (for [`Category`](/core.html#Category)) or the predictions greater than a certain threshold (for [`MultiCategory`](/core.html#MultiCategory)). It should take a tensor, along with optional kwargs, and return a tensor.
For instance, here is the `analyze_pred` method of [`MultiCategoryList`](/data_block.html#MultiCategoryList):
```python
def analyze_pred(self, pred, thresh:float=0.5): return (pred >= thresh).float()
```
`thresh` can then be passed as kwarg during the calls to `learn.predict()` or `learn.show_results()`.
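For example (a hypothetical call, where `item` stands for any input item of the right type), you could lower the threshold at prediction time:
```python
# Hypothetical: pass `thresh` through to analyze_pred when predicting
pred_class, pred_idx, probs = learn.predict(item, thresh=0.4)
```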
### Advanced show methods
If you want to use methods such as `data.show_batch()` or `learn.show_results()` with a brand new kind of [`ItemBase`](/core.html#ItemBase) you will need to implement two other methods. In both cases, the generic function will grab the tensors of inputs, targets and predictions (if applicable), reconstruct the corresponding items (as seen before), but it will delegate to the [`ItemList`](/data_block.html#ItemList) the way to display the results.
``` python
def show_xys(self, xs, ys, **kwargs)->None:
def show_xyzs(self, xs, ys, zs, **kwargs)->None:
```
In both cases `xs` and `ys` represent the inputs and the targets, in the second case `zs` represent the predictions. They are lists of the same length that depend on the `rows` argument you passed. The kwargs are passed from `data.show_batch()` / `learn.show_results()`. As an example, here is the source code of those methods in [`ImageItemList`](/vision.data.html#ImageItemList):
``` python
def show_xys(self, xs, ys, figsize:Tuple[int,int]=(9,10), **kwargs):
"Show the `xs` and `ys` on a figure of `figsize`. `kwargs` are passed to the show method."
rows = int(math.sqrt(len(xs)))
fig, axs = plt.subplots(rows,rows,figsize=figsize)
for i, ax in enumerate(axs.flatten() if rows > 1 else [axs]):
xs[i].show(ax=ax, y=ys[i], **kwargs)
plt.tight_layout()
def show_xyzs(self, xs, ys, zs, figsize:Tuple[int,int]=None, **kwargs):
"""Show `xs` (inputs), `ys` (targets) and `zs` (predictions) on a figure of `figsize`.
`kwargs` are passed to the show method."""
figsize = ifnone(figsize, (6,3*len(xs)))
fig,axs = plt.subplots(len(xs), 2, figsize=figsize)
fig.suptitle('Ground truth / Predictions', weight='bold', size=14)
for i,(x,y,z) in enumerate(zip(xs,ys,zs)):
x.show(ax=axs[i,0], y=y, **kwargs)
x.show(ax=axs[i,1], y=z, **kwargs)
```
Linked to these methods is the class variable `_show_square` of an [`ItemList`](/data_block.html#ItemList). It defaults to `False` but if it's `True`, the `show_batch` method will send `rows * rows` `xs` and `ys` to `show_xys` (so that it shows a square of inputs/targets), like here for images.
### Example: ImageTupleList
Continuing our custom item example, we create a custom [`ItemList`](/data_block.html#ItemList) class that will wrap those `ImageTuple` properly. The first thing is to write a custom `__init__` method (since we need two lists of filenames here), which means we also have to change the `new` method.
```
class ImageTupleList(ImageItemList):
def __init__(self, items, itemsB=None, **kwargs):
self.itemsB = itemsB
super().__init__(items, **kwargs)
def new(self, items, **kwargs):
return super().new(items, itemsB=self.itemsB, **kwargs)
```
We then specify how to get one item. Here we take the image from the first list of items, and pick one randomly from the second list.
```
def get(self, i):
img1 = super().get(i)
fn = self.itemsB[random.randint(0, len(self.itemsB)-1)]
return ImageTuple(img1, open_image(fn))
```
We also add a custom factory method to directly create an `ImageTupleList` from two folders.
```
@classmethod
def from_folders(cls, path, folderA, folderB, **kwargs):
itemsB = ImageItemList.from_folder(path/folderB).items
res = super().from_folder(path/folderA, itemsB=itemsB, **kwargs)
res.path = path
return res
```
Finally, we have to specify how to reconstruct the `ImageTuple` from tensors if we want `show_batch` to work. We recreate the images and denormalize.
```
def reconstruct(self, t:Tensor):
return ImageTuple(Image(t[0]/2+0.5),Image(t[1]/2+0.5))
```
There is no need to write an `analyze_pred` method since the default behavior (returning the output tensor) is what we need here. However `show_results` won't work properly unless the target (which we don't really care about here) has the right `reconstruct` method: the fastai library uses the `reconstruct` method of the target on the outputs. That's why we create another custom [`ItemList`](/data_block.html#ItemList) with just that `reconstruct` method. The first line is to reconstruct our dummy targets, and the second one is the same as in `ImageTupleList`.
```
class TargetTupleList(ItemList):
def reconstruct(self, t:Tensor):
if len(t.size()) == 0: return t
return ImageTuple(Image(t[0]/2+0.5),Image(t[1]/2+0.5))
```
To make sure our `ImageTupleList` uses that for labelling, we pass it in `_label_cls` and this is what the result looks like.
```
class ImageTupleList(ImageItemList):
_label_cls=TargetTupleList
def __init__(self, items, itemsB=None, **kwargs):
self.itemsB = itemsB
super().__init__(items, **kwargs)
def new(self, items, **kwargs):
return super().new(items, itemsB=self.itemsB, **kwargs)
def get(self, i):
img1 = super().get(i)
fn = self.itemsB[random.randint(0, len(self.itemsB)-1)]
return ImageTuple(img1, open_image(fn))
def reconstruct(self, t:Tensor):
return ImageTuple(Image(t[0]/2+0.5),Image(t[1]/2+0.5))
@classmethod
def from_folders(cls, path, folderA, folderB, **kwargs):
itemsB = ImageItemList.from_folder(path/folderB).items
res = super().from_folder(path/folderA, itemsB=itemsB, **kwargs)
res.path = path
return res
```
Lastly, we want to customize the behavior of `show_batch` and `show_results`. Remember the `to_one` method just puts the two images next to each other.
```
def show_xys(self, xs, ys, figsize:Tuple[int,int]=(12,6), **kwargs):
"Show the `xs` and `ys` on a figure of `figsize`. `kwargs` are passed to the show method."
rows = int(math.sqrt(len(xs)))
fig, axs = plt.subplots(rows,rows,figsize=figsize)
for i, ax in enumerate(axs.flatten() if rows > 1 else [axs]):
xs[i].to_one().show(ax=ax, **kwargs)
plt.tight_layout()
def show_xyzs(self, xs, ys, zs, figsize:Tuple[int,int]=None, **kwargs):
"""Show `xs` (inputs), `ys` (targets) and `zs` (predictions) on a figure of `figsize`.
`kwargs` are passed to the show method."""
figsize = ifnone(figsize, (12,3*len(xs)))
fig,axs = plt.subplots(len(xs), 2, figsize=figsize)
fig.suptitle('Ground truth / Predictions', weight='bold', size=14)
for i,(x,z) in enumerate(zip(xs,zs)):
x.to_one().show(ax=axs[i,0], **kwargs)
z.to_one().show(ax=axs[i,1], **kwargs)
```
# Creating a Real-Time Inferencing Service
You've spent a lot of time in this course training and registering machine learning models. Now it's time to deploy a model as a real-time service that clients can use to get predictions from new data.
## Connect to Your Workspace
The first thing you need to do is to connect to your workspace using the Azure ML SDK.
> **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
```
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
```
## Deploy a Model as a Web Service
You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.
First, let's determine what models you have registered in the workspace.
```
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
```
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
```
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
```
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
```
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(folder_name, exist_ok=True)
print(folder_name, 'folder created.')
```
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* that will be deployed to the web service:
```
%%writefile $folder_name/score_diabetes.py
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
```
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
```
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = folder_name + "/diabetes_env.yml"
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
```
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:
1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.
> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).
Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
```
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
source_directory = folder_name,
entry_script="score_diabetes.py",
conda_file="diabetes_env.yml")
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
```
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to check the status and get the service logs to help you troubleshoot.
```
print(service.state)
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
```
Take a look at your workspace in [Azure ML Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.
You can also retrieve the names of web services in your workspace by running the following code:
```
for webservice_name in ws.webservices:
print(webservice_name)
```
## Use the Web Service
With the service deployed, now you can consume it from a client application.
```
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
```
You can also send multiple patient observations to the service, and get back a prediction for each one.
```
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
```
The code above uses the Azure ML SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure ML SDK, but simply make HTTP requests to the web service.
Let's determine the URL to which these applications must submit their requests:
```
endpoint = service.scoring_uri
print(endpoint)
```
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON (or binary) format, and receive back the predicted class(es).
```
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
```
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling authentication. This would require REST requests to include an **Authorization** header.
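As a hedged sketch (assuming key-based authentication is enabled on the deployed service), the request would then include a key in the **Authorization** header:
```
# Hypothetical sketch: calling an endpoint that has key-based authentication enabled
primary_key, secondary_key = service.get_keys()   # retrieve the service keys
auth_headers = { 'Content-Type':'application/json',
                 'Authorization': 'Bearer ' + primary_key }
predictions = requests.post(endpoint, input_json, headers = auth_headers)
```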
## Delete the Service
When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
```
service.delete()
print ('Service deleted.')
```
For more information about publishing a model as a service, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where)
<a href="https://colab.research.google.com/github/bhuiyanmobasshir94/Cow-weight-and-Breed-Prediction/blob/main/notebooks/031_dec.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import pandas as pd
import sys
import os
import PIL
import PIL.Image
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import layers
import tensorflow_datasets as tfds
import pathlib
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
## Globals
YT_IMAGE_TO_TAKE = 4
images_dataset_url = "https://cv-datasets-2021.s3.amazonaws.com/images.tar.gz"
images_data_dir = tf.keras.utils.get_file(origin=images_dataset_url,
fname='images',
untar=True)
images_data_dir = pathlib.Path(images_data_dir)
yt_images_dataset_url = "https://cv-datasets-2021.s3.amazonaws.com/yt_images.tar.gz"
yt_images_data_dir = tf.keras.utils.get_file(origin=yt_images_dataset_url,
fname='yt_images',
untar=True)
yt_images_data_dir = pathlib.Path(yt_images_data_dir)
if sys.platform == 'darwin':
os.system(f"dot_clean {images_data_dir}")
os.system(f"dot_clean {yt_images_data_dir}")
elif sys.platform.startswith("lin"):
os.system(f"cd {images_data_dir} && find . -type f -name '._*' -delete")
os.system(f"cd {yt_images_data_dir} && find . -type f -name '._*' -delete")
image_count = len(list(images_data_dir.glob('*/*.jpg')))
print(image_count)
yt_image_count = len(list(yt_images_data_dir.glob('*/*.jpg')))
print(yt_image_count)
df = pd.read_csv("https://cv-datasets-2021.s3.amazonaws.com/dataset.csv")
df.shape
df.columns
df.head(2)
images = list(images_data_dir.glob('*/*.jpg'))
yt_images = list(yt_images_data_dir.glob('*/*.jpg'))
min_height = 0
max_height = 0
min_width = 0
max_width = 0
for i, image in enumerate(images):
w, h = PIL.Image.open(str(image)).size
if i == 0:
min_height = h
max_height = h
min_width = w
max_width = w
if h <= min_height:
min_height = h
if h >= max_height:
max_height = h
if w <= min_width:
min_width = w
if w >= max_width:
max_width = w
print(f"min_height: {min_height}")
print(f"min_width: {min_width}")
print(f"max_height: {max_height}")
print(f"max_width: {max_width}")
min_height = 0
max_height = 0
min_width = 0
max_width = 0
for i, image in enumerate(yt_images):
w, h = PIL.Image.open(str(image)).size
if i == 0:
min_height = h
max_height = h
min_width = w
max_width = w
if h <= min_height:
min_height = h
if h >= max_height:
max_height = h
if w <= min_width:
min_width = w
if w >= max_width:
max_width = w
print(f"min_height: {min_height}")
print(f"min_width: {min_width}")
print(f"max_height: {max_height}")
print(f"max_width: {max_width}")
f_df = pd.DataFrame(columns = ['file_path', 'teeth', 'age_in_year', 'breed', 'height_in_inch', 'weight_in_kg'])
for index, row in df.iterrows():
images = list(images_data_dir.glob(f"{row['sku']}/*.jpg"))
yt_images = list(yt_images_data_dir.glob(f"{row['sku']}/*.jpg"))
for image in images:
f_df = f_df.append({'file_path' : image, 'teeth' : row['teeth'], 'age_in_year' : row['age_in_year'], 'breed': row['breed'], 'height_in_inch': row['height_in_inch'], 'weight_in_kg': row['weight_in_kg']},
ignore_index = True)
for idx, image in enumerate(yt_images):
if idx == (YT_IMAGE_TO_TAKE - 1):
break
f_df = f_df.append({'file_path' : image, 'teeth' : row['teeth'], 'age_in_year' : row['age_in_year'], 'breed': row['breed'], 'height_in_inch': row['height_in_inch'], 'weight_in_kg': row['weight_in_kg']},
ignore_index = True)
f_df.shape
f_df.head(1)
def label_encode(df):
teeth_le = preprocessing.LabelEncoder()
df['teeth']= teeth_le.fit_transform(df['teeth'])
breed_le = preprocessing.LabelEncoder()
df['breed']= breed_le.fit_transform(df['breed'])
age_in_year_le = preprocessing.LabelEncoder()
df['age_in_year']= age_in_year_le.fit_transform(df['age_in_year'])
print(teeth_le.classes_)
print(breed_le.classes_)
print(age_in_year_le.classes_)
return df
def inverse_transform(le, series=[]):
return le.inverse_transform(series)
f_df = label_encode(f_df)
# train_df, valid_test_df = train_test_split(f_df, test_size=0.3)
# validation_df, test_df = train_test_split(valid_test_df, test_size=0.3)
# print(f"train_df: {train_df.shape}")
# print(f"validation_df: {validation_df.shape}")
# print(f"test_df: {test_df.shape}")
train_df, test_df = train_test_split(f_df, test_size=0.1)
print(f"train_df: {train_df.shape}")
print(f"test_df: {test_df.shape}")
# min_height: 450
# min_width: 800
# input: [image, teeth]
# outpur: [age_in_year, breed, height_in_inch, weight_in_kg]
# class CustomDataGen(tf.keras.utils.Sequence):
# def __init__(self, df, X_col, y_col,
# batch_size,
# input_size=(450, 800, 3), # (input_height, input_width, input_channel)
# shuffle=True):
# self.df = df.copy()
# self.X_col = X_col
# self.y_col = y_col
# self.batch_size = batch_size
# self.input_size = input_size
# self.shuffle = shuffle
# self.n = len(self.df)
# # self.n_teeth = df[X_col['teeth']].max()
# # self.n_breed = df[y_col['breed']].nunique()
# def on_epoch_end(self):
# if self.shuffle:
# self.df = self.df.sample(frac=1).reset_index(drop=True)
# def __get_input(self, path, target_size):
# image = tf.keras.preprocessing.image.load_img(path)
# image_arr = tf.keras.preprocessing.image.img_to_array(image)
# # image_arr = image_arr[ymin:ymin+h, xmin:xmin+w]
# image_arr = tf.image.resize(image_arr,(target_size[0], target_size[1])).numpy()
# return image_arr/255.
# def __get_output(self, label, num_classes):
# return tf.keras.utils.to_categorical(label, num_classes=num_classes)
# def __get_data(self, batches):
# # Generates data containing batch_size samples
# path_batch = batches[self.X_col['file_path']]
# # teeth_batch = batches[self.X_col['teeth']]
# # breed_batch = batches[self.y_col['breed']]
# weight_in_kg_batch = batches[self.y_col['weight_in_kg']]
# height_in_inch_batch = batches[self.y_col['height_in_inch']]
# age_in_year_batch = batches[self.y_col['age_in_year']]
# X0 = np.asarray([self.__get_input(x, self.input_size) for x in path_batch])
# # y0_batch = np.asarray([self.__get_output(y, self.n_teeth) for y in teeth_batch])
# # y1_batch = np.asarray([self.__get_output(y, self.n_breed) for y in breed_batch])
# y0 = np.asarray([tf.cast(y, tf.float32) for y in weight_in_kg_batch])
# y1 = np.asarray([tf.cast(y, tf.float32) for y in height_in_inch_batch])
# y2 = np.asarray([tf.cast(y, tf.float32) for y in age_in_year_batch])
# return X0, tuple([y0, y1, y2])
# def __getitem__(self, index):
# batches = self.df[index * self.batch_size:(index + 1) * self.batch_size]
# X, y = self.__get_data(batches)
# return X, y
# def __len__(self):
# return self.n // self.batch_size
# traingen = CustomDataGen(train_df,
# X_col={'file_path':'file_path', 'teeth': 'teeth'},
# y_col={'breed': 'breed', 'weight_in_kg': 'weight_in_kg', 'height_in_inch': 'height_in_inch', 'age_in_year': 'age_in_year'},
# batch_size=128, input_size=(450, 800, 3))
# testgen = CustomDataGen(test_df,
# X_col={'file_path':'file_path', 'teeth': 'teeth'},
# y_col={'breed': 'breed', 'weight_in_kg': 'weight_in_kg', 'height_in_inch': 'height_in_inch', 'age_in_year': 'age_in_year'},
# batch_size=128, input_size=(450, 800, 3))
# validgen = CustomDataGen(validation_df,
# X_col={'file_path':'file_path', 'teeth': 'teeth'},
# y_col={'breed': 'breed', 'weight_in_kg': 'weight_in_kg', 'height_in_inch': 'height_in_inch', 'age_in_year': 'age_in_year'},
# batch_size=128, input_size=(450, 800, 3))
def __get_input(path, target_size):
image = tf.keras.preprocessing.image.load_img(path)
image_arr = tf.keras.preprocessing.image.img_to_array(image)
image_arr = tf.image.resize(image_arr,(target_size[0], target_size[1])).numpy()
return image_arr/255.
def data_loader(df, image_size=(450, 800, 3)):
y0 = tf.cast(df.weight_in_kg, tf.float32)
print(y0.shape)
y1 = tf.cast(df.height_in_inch, tf.float32)
print(y1.shape)
# y2 = tf.cast(df.age_in_year, tf.float32)
y2 = keras.utils.to_categorical(df.age_in_year)
print(y2.shape)
y3 = keras.utils.to_categorical(df.breed)
print(y3.shape)
path_batch = df.file_path
X0 = tf.cast([__get_input(x, image_size) for x in path_batch], tf.float32)
print(X0.shape)
X1 = keras.utils.to_categorical(df.teeth)
print(X1.shape)
return (X0, X1), (y0, y1, y2, y3)
(X0, X1), (y0, y1, y2, y3) = data_loader(train_df, (150, 150, 3))
# input = keras.Input(shape=(128, 128, 3), name="original_img")
# x = layers.Conv2D(64, 3, activation="relu")(input)
# x = layers.Conv2D(128, 3, activation="relu")(x)
# x = layers.MaxPooling2D(3)(x)
# x = layers.Conv2D(128, 3, activation="relu")(x)
# x = layers.Conv2D(64, 3, activation="relu")(x)
# x = layers.GlobalMaxPooling2D()(x)
input0 = keras.Input(shape=(150, 150, 3), name="img")
x = layers.Conv2D(32, 3, activation="relu")(input0)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalMaxPooling2D()(x)
# input1 = keras.Input(shape=(3,), name="teeth")
out_a = keras.layers.Dense(1, activation='linear', name='wt_rg')(x)
out_b = keras.layers.Dense(1, activation='linear', name='ht_rg')(x)
# out_c = keras.layers.Dense(1, activation='linear', name='ag_rg')(x)
out_c = keras.layers.Dense(3, activation='softmax', name='ag_3cls')(x)
out_d = keras.layers.Dense(8, activation='softmax', name='brd_8cls')(x)
encoder = keras.Model( inputs = input0 , outputs = [out_a, out_b, out_c, out_d], name="encoder")
encoder.compile(
loss = {
"wt_rg": tf.keras.losses.MeanSquaredError(),
"ht_rg": tf.keras.losses.MeanSquaredError(),
# "ag_rg": tf.keras.losses.MeanSquaredError()
"ag_3cls": tf.keras.losses.CategoricalCrossentropy(),
"brd_8cls": tf.keras.losses.CategoricalCrossentropy()
},
metrics = {
"wt_rg": 'mse',
"ht_rg": 'mse',
"ag_3cls": 'accuracy',
"brd_8cls": 'accuracy'
},
optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)
)
encoder.fit(X0, [y0, y1, y2, y3], epochs=30, verbose=2, batch_size=32, validation_split=0.2)
# encoder.output
keras.utils.plot_model(encoder, "encoder.png", show_shapes=True)
(tX0, tX1), (ty0, ty1, ty2, ty3) = data_loader(test_df, (150, 150, 3))
test_scores = encoder.evaluate(tX0, [ty0, ty1, ty2, ty3], verbose=2)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])
p0, p1, p2, p3 = encoder.predict(tf.expand_dims(tX0[0], 0))  # the model takes a single image input and returns four outputs
print(p0);ty0[0]
print(p1);ty1[0]
print(p2.argmax());ty2[0].argmax()
print(p3.argmax());ty3[0].argmax()
```
Cattle are commonly raised as livestock for meat (beef or veal, see beef cattle), for milk (see dairy cattle), and for hides, which are used to make leather. They are used as riding animals and draft animals (oxen or bullocks, which pull carts, plows and other implements). Another product of cattle is their dung, which can be used to create manure or fuel.
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/image_displacement.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/image_displacement.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Image/image_displacement.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/image_displacement.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
import math
# Load the two images to be registered.
image1 = ee.Image('SKYSAT/GEN-A/PUBLIC/ORTHO/MULTISPECTRAL/s01_20150502T082736Z')
image2 = ee.Image('SKYSAT/GEN-A/PUBLIC/ORTHO/MULTISPECTRAL/s01_20150305T081019Z')
# Use bicubic resampling during registration.
image1Orig = image1.resample('bicubic')
image2Orig = image2.resample('bicubic')
# Choose to register using only the 'R' band.
image1RedBAnd = image1Orig.select('R')
image2RedBAnd = image2Orig.select('R')
# Determine the displacement by matching only the 'R' bands.
displacement = image2RedBAnd.displacement(**{
'referenceImage': image1RedBAnd,
'maxOffset': 50.0,
'patchWidth': 100.0
})
# Compute image offset and direction.
offset = displacement.select('dx').hypot(displacement.select('dy'))
angle = displacement.select('dx').atan2(displacement.select('dy'))
# Display offset distance and angle.
Map.addLayer(offset, {'min':0, 'max': 20}, 'offset')
Map.addLayer(angle, {'min': -math.pi, 'max': math.pi}, 'angle')
Map.setCenter(37.44,0.58, 15)
# Use the computed displacement to register all original bands.
registered = image2Orig.displace(displacement)
# Show the results of co-registering the images.
visParams = {'bands': ['R', 'G', 'B'], 'max': 4000}
Map.addLayer(image1Orig, visParams, 'Reference')
Map.addLayer(image2Orig, visParams, 'Before Registration')
Map.addLayer(registered, visParams, 'After Registration')
alsoRegistered = image2Orig.register(**{
'referenceImage': image1Orig,
'maxOffset': 50.0,
'patchWidth': 100.0
})
Map.addLayer(alsoRegistered, visParams, 'Also Registered')
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
A [Loss Function](https://www.bualabs.com/archives/2673/what-is-loss-function-cost-function-error-function-loss-function-how-cost-function-work-machine-learning-ep-1/), or Cost Function, measures the error: how far the yhat predicted by the model is from the true y, averaged over the batch. That loss is then used to compute the gradient of the loss with respect to the weights via Backpropagation, and the [Gradient Descent](https://www.bualabs.com/archives/631/what-is-gradient-descent-in-deep-learning-what-is-stochastic-gradient-descent-sgd-optimization-ep-1/) algorithm uses it to reduce the loss in the next training round.
In this case we will cover the most popular loss function for classification tasks (discrete, non-continuous outputs): Cross Entropy Loss.
* yhat is the probability output by a model whose final layer is a [Softmax Function](https://www.bualabs.com/archives/1819/what-is-softmax-function-how-to-use-softmax-function-benefit-of-softmax/)
* y is data in [One Hot Encoding](https://www.bualabs.com/archives/1902/what-is-one-hot-encoding-benefit-one-hot-encoding-why-one-hot-encoding-in-machine-learning/) format
# 0. Import
```
import torch
from torch import tensor
import matplotlib.pyplot as plt
```
# 1. Data
We will create sample data with Dog = 0, Cat = 1, Rat = 2.
## y
Suppose the ground-truth y values of the sample data, the ones we actually want, are as follows:
```
y = tensor([0, 1, 2, 0, 0, 1, 0, 2, 2, 1])
y
n, c = len(y), y.max()+1
y_onehot = torch.zeros(n, c)
y_onehot[torch.arange(n), y] = 1
y_onehot
```
## yhat
Suppose our model (nn) predicts the following outputs:
```
yhat = tensor([[3., 2., 1.],
[5., 6., 2.],
[0., 0., 5.],
[2., 3., 1.],
[5., 4., 3.],
[1., 0., 3.],
[5., 3., 2.],
[2., 2., 4.],
[8., 5., 3.],
[3., 4., 0.]])
```
We will use the [Softmax Function from the previous ep](https://www.bualabs.com/archives/1819/what-is-softmax-function-how-to-use-softmax-function-benefit-of-softmax/) and add a log on top, for use in the next step.
$$\hbox{softmax(x)}_{i} = \frac{e^{x_{i}}}{\sum_{0 \leq j \leq n-1} e^{x_{j}}}$$
```
def log_softmax(z):
z = z - z.max(-1, keepdim=True)[0]
exp_z = torch.exp(z)
sum_exp_z = torch.sum(exp_z, -1, keepdim=True)
return (exp_z / sum_exp_z).log()
```
yhat becomes (log) probabilities over the 3 categories.
```
log_softmax(yhat)
```
## argmax: comparing y and yhat
argmax finds the position of the largest value; here we are interested in the largest value along dimension 1.
```
yhat.argmax(1)
y
```
7 of them match.
```
(yhat.argmax(1) == y).sum()
```
# 2. Cross Entropy Loss
Cross Entropy Loss (Logistic Regression), or Log Loss, measures how far yhat is from y using the predicted probabilities: a correct but unconfident prediction still incurs a sizable loss, and a confidently wrong prediction incurs an even larger one. It is computed over the whole batch and then averaged.
* p(x) lies between 0 and 1 (so its log is negative; the leading minus sign turns the loss positive)
* Cross Entropy Loss lies between 0 and infinity (0 means no error at all)
## 2.1 Cross Entropy Loss formula
This is called the Negative Log Likelihood:
$$ NLL = -\sum x\, \log p(x) $$
Since $x$ is in One Hot Encoding form, we can rewrite this as $-\log(p_{i})$, where i is the index of the target class y.
## 2.2 Negative Log Likelihood code
```
# log_probs = log of probability, target = target
def nll(log_probs, target):
return -(log_probs[torch.arange(log_probs.size()[0]), target]).mean()
```
## 2.3 Using Negative Log Likelihood
```
loss = nll(log_softmax(yhat), y)
loss
```
## 2.4 Optimize
Since
$$\log \left ( \frac{a}{b} \right ) = \log(a) - \log(b)$$
we can split the numerator and the denominator into two log terms and subtract them.
Also, if x is too large, exponentiating it can overflow to nan. From the formula below,
$$\log \left ( \sum_{j=1}^{n} e^{x_{j}} \right ) = \log \left ( e^{a} \sum_{j=1}^{n} e^{x_{j}-a} \right ) = a + \log \left ( \sum_{j=1}^{n} e^{x_{j}-a} \right )$$
with a = max(x), we can compute exp(x-a) so that every exponent is non-positive (each term is at most 1), and add a back afterwards.
From these two formulas, we can optimize the code as follows:
```
def log_softmax2(z):
m = z.max(-1, keepdim=True)[0]
return z - ((z-m).exp().sum(-1, keepdim=True).log()+m)
```
or
```
def log_softmax3(z):
return z - (z).logsumexp(-1, keepdim=True)
```
### Comparing results with PyTorch
```
import torch.nn.functional as F
F.cross_entropy(yhat, y)
nll(log_softmax(yhat), y)
nll(log_softmax2(yhat), y)
nll(log_softmax3(yhat), y)
```
The results are correct and match PyTorch's F.cross_entropy.
## 2.5 Plotting
We will assume Dog = 0, Cat = 1, and that the sample data contains only Dog (0). We will plot the loss as the predicted probability sweeps from 0% to 100%.
We create sample data with y equal to 0 for 100 items, representing 100 Dog images, to use for plotting.
```
y = torch.zeros(100)
y[:10]
```
yhat is the model output: the probability of Dog (column 0) and the probability of Cat (column 1). We sweep the data from (Dog 0%, Cat 100%) to (Dog 100%, Cat 0%).
```
yhat = torch.zeros(100, 2)
yhat[range(0, 100), 0] = torch.arange(0., 1., 0.01)
yhat[:, 1] = 1-yhat[:, 0]
yhat[:10]
```
Compute the expected class value from the probabilities of the 2 classes, for plotting.
```
classes = torch.tensor([0., 1.])
yhat_classes = yhat @ classes.t()
yhat_classes[:10]
```
Take the log of the probabilities (in a real model they would come from Softmax, as in the example above) to plug into the formula.
```
log_probs = yhat.log()
log_probs[:10]
```
Negative Log Likelihood
```
loss = -(log_probs[torch.arange(log_probs.size()[0]), y.long()])
loss[:10]
```
### Plot y, yhat, loss, and log loss
* The sample y, assumed to be all 0 (red line), compared with yhat predictions sweeping from 1 down to 0 (from completely wrong to completely right, green line).
* Note the orange Loss: at the far left the ground truth is 0 (red line) but the model predicts 1 (green line) with 100% confidence, so the loss shoots up toward infinity.
* Moving toward the middle, the loss drops quickly: the model is still wrong, but no longer fully confident.
* On the right, the loss keeps decreasing down to 0 as the model correctly predicts 0 with 100% confidence.
* Log of Loss converts the loss from the range [0, infinity) to a log scale, giving a range of (-infinity, infinity), which is more balanced and easier to read.
```
fig,ax = plt.subplots(figsize=(9, 9))
ax.scatter(yhat[:,0].numpy(), loss.log(), label="Log of Loss")
ax.scatter(yhat[:,0].numpy(), loss, label="Loss")
ax.plot(yhat[:,0].numpy(), yhat_classes.numpy(), label="yhat", color='green')
ax.plot(yhat[:,0].numpy(), y.numpy(), label="y", color='red')
ax.grid(True)
ax.legend(loc='upper right')
```
# 3. Other Loss Functions
We need to understand the background and inner workings of loss functions because, when designing models for more complex problems, we have to design a loss function that fits the task. For example, we might combine several loss functions, such as a [Regression Loss](https://www.bualabs.com/archives/1928/what-is-mean-absolute-error-mae-mean-squared-error-mse-root-mean-squared-error-rmse-loss-function-ep-2/), weighting each one and summing them into the loss we want.
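As a minimal sketch (the function name and weights below are made up for illustration), a combined loss could simply be a weighted sum of a classification term and a regression term:
```
# Hypothetical sketch: weighted combination of two losses
def combined_loss(yhat_cls, y_cls, yhat_reg, y_reg, w_cls=1.0, w_reg=0.5):
    ce = F.cross_entropy(yhat_cls, y_cls)   # classification term
    mse = F.mse_loss(yhat_reg, y_reg)       # regression term
    return w_cls * ce + w_reg * mse
```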
```
```
# CatBoostRegressor with RobustScaler
This code template is for regression analysis using CatBoostRegressor and the Robust Scaler feature scaling technique. CatBoost is an algorithm for gradient boosting on decision trees.
<img src="https://cdn.blobcity.com/assets/gpu_recommended.png" height="25" style="margin-bottom:-15px" />
### Required Packages
```
!pip install catboost
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.preprocessing import LabelEncoder, RobustScaler
from sklearn.model_selection import train_test_split
from catboost import CatBoostRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ''
```
List of features which are required for model training .
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since the majority of the machine learning models doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Data Rescaling
It scales features using statistics that are robust to outliers. This method removes the median and scales the data in the range between 1st quartile and 3rd quartile. i.e., in between 25th quantile and 75th quantile range. This range is also called an Interquartile range.
<a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html">More about Robust Scaler</a>
```
robust = RobustScaler()
x_train = robust.fit_transform(x_train)
x_test = robust.transform(x_test)
```
### Model
CatBoost is an algorithm for gradient boosting on decision trees. Developed by Yandex researchers and engineers, it is the successor of the MatrixNet algorithm that is widely used within the company for ranking tasks, forecasting and making recommendations
#### Tuning parameters
1. **learning_rate**: float, default = defined automatically for Logloss, MultiClass & RMSE loss functions depending on the number of iterations if none of these parameters is set
>The learning rate. Used for reducing the gradient step.
2. **l2_leaf_reg**: float, default = 3.0
>Coefficient at the L2 regularization term of the cost function. Any positive value is allowed.
3. **bootstrap_type**: string, default = depends on the selected mode and processing unit
>Bootstrap type. Defines the method for sampling the weights of objects.
* Supported methods:
* Bayesian
* Bernoulli
* MVS
* Poisson (supported for GPU only)
* No
4. **subsample**: float, default = depends on the dataset size and the bootstrap type
>Sample rate for bagging. This parameter can be used if one of the following bootstrap types is selected:
* Poisson
* Bernoulli
* MVS
For more information refer: [API](https://catboost.ai/docs/concepts/python-reference_catboostregressor.html)
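As a hedged illustration (the specific values below are arbitrary, not tuned for any dataset), these parameters could be passed to the constructor like this:
```
# Hypothetical sketch: passing the tuning parameters described above
tuned_model = CatBoostRegressor(learning_rate=0.05,
                                l2_leaf_reg=3.0,
                                bootstrap_type='Bernoulli',
                                subsample=0.8,
                                verbose=False)
```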
```
# Build Model here
model = CatBoostRegressor(verbose=False)
model.fit(x_train, y_train)
```
#### Model Accuracy
For a regressor, the score() method returns the coefficient of determination (R²) of the prediction on the given test data and labels; the cell below reports it as a percentage.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the proportion of the variance in the target that is explained by our model.
> **mae**: The **mean absolute error** function calculates the total error as the average absolute distance between the real data and the predicted data.
> **mse**: The **mean squared error** function squares the errors, penalizing the model more heavily for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the actual target values for the first 20 test records (x-axis: record number, y-axis: target value).
Then we overlay the model's predictions for the same records so the two curves can be compared.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
## Creator: Abhishek Garg, Github: [Profile](https://github.com/abhishek-252)
```
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
from os import listdir, path
import numpy as np
from collections import defaultdict
import datetime
import random
random.seed(42) # Keep the order stable everytime shuffling the files while creating training datasets
```
## Global variables
```
seq_length = 36 # This will be used to keep the fixed input size for the first CNN layer
dim = 6 # Number of datapoints in a single reading accX,accY,accZ,gyrX,gyrY,gyrZ
num_classes = 10 # Number of output classes [0,9]
```
## Sequence Padding
#### When collecting sequence data, individual samples have different lengths. Since the input data for a convolutional neural network must be a single tensor, samples need to be padded. The sequences are padded at the beginning and at the end with neighboring values.
```
def padding(data):
padded_data = []
noise_level = [ 20, 20, 20, 0.2, 0.2, 0.2 ]
tmp_data = (np.random.rand(seq_length, dim) - 0.5) * noise_level + data[0]
tmp_data[(seq_length - min(len(data), seq_length)):] = data[:min(len(data), seq_length)]
padded_data.append(tmp_data)
tmp_data = (np.random.rand(seq_length, dim) - 0.5) * noise_level + data[-1]
tmp_data[:min(len(data), seq_length)] = data[:min(len(data), seq_length)]
padded_data.append(tmp_data)
return padded_data
```
## Convert to TensorFlow dataset, keeps data and labels together
```
def build_dataset(data, label):
# Add 2 padding, initialize data and label
padded_num = 2
length = len(data) * padded_num
features = np.zeros((length, seq_length, dim))
labels = np.zeros(length)
# Get padding for train, valid and test
for idx, (data, label) in enumerate(zip(data, label)):
padded_data = padding(data)
for num in range(padded_num):
features[padded_num * idx + num] = padded_data[num]
labels[padded_num * idx + num] = label
# Turn into tf.data.Dataset
dataset = tf.data.Dataset.from_tensor_slices((features, labels.astype("int32")))
return length, dataset
```
## Time Warping
```
def time_warping(molecule, denominator, data):
tmp_data = [[0 for i in range(len(data[0]))] for j in range((int(len(data) / molecule) - 1) * denominator)]
for i in range(int(len(data) / molecule) - 1):
for j in range(len(data[i])):
for k in range(denominator):
tmp_data[denominator * i + k][j] = (data[molecule * i + k][j] * (denominator - k)
+ data[molecule * i + k + 1][j] * k) / denominator
return tmp_data
```
## Data augmentation
```
def augment_data(original_data, original_label):
new_data = []
new_label = []
for idx, (data, label) in enumerate(zip(original_data, original_label)): # pylint: disable=unused-variable
# Original data
new_data.append(data)
new_label.append(label)
# Shift Sequence
for num in range(5): # pylint: disable=unused-variable
new_data.append((np.array(data, dtype=np.float32) +
(random.random() - 0.5) * 200).tolist())
new_label.append(label)
# Add Random noise
tmp_data = [[0 for i in range(len(data[0]))] for j in range(len(data))]
for num in range(5):
for i in range(len(tmp_data)):
for j in range(len(tmp_data[i])):
tmp_data[i][j] = data[i][j] + 5 * random.random()
new_data.append(tmp_data)
new_label.append(label)
# Time warping
fractions = [(3, 2), (5, 3), (2, 3), (3, 4), (9, 5), (6, 5), (4, 5)]
for molecule, denominator in fractions:
new_data.append(time_warping(molecule, denominator, data))
new_label.append(label)
# Movement amplification
for molecule, denominator in fractions:
new_data.append(
(np.array(data, dtype=np.float32) * molecule / denominator).tolist())
new_label.append(label)
return new_data, new_label
```
## Load data from files
```
def load_data(data_type, files):
data = []
labels = []
random.shuffle(files)
for file in files:
with open(file) as f:
label = path.splitext(file)[0][-1]
labels.append(label)
readings = []
for line in f:
reading = line.strip().split(',')
readings.append([float(i) for i in reading[0:6]])
data.append(readings)
if data_type == 'train':
data, labels = augment_data(data, labels)
return build_dataset(data, labels)
```
## Prepare training, validation, and test datasets
```
files_path = defaultdict(list)
dir = './data'
for filename in listdir(dir):
if filename.endswith('.csv'):
digit = path.splitext(filename)[0][-1]
files_path[digit].append(path.join(dir, filename))
train_files = []
validation_files = []
test_files = []
for digit in files_path:
random.shuffle(files_path[digit])
train_split = int(len(files_path[digit]) * 0.6) # 60%
validation_split = train_split + int(len(files_path[digit]) * 0.2) # 20%
train_files += files_path[digit][:train_split]
validation_files += files_path[digit][train_split:validation_split]
# remaining 20%
test_files += files_path[digit][validation_split:]
train_length, train_data = load_data('train', train_files)
validation_length, validation_data = load_data('validation', validation_files)
test_length, test_data = load_data('test', test_files )
print('train_length={} validation_length={} test_length={}'.format(train_length, validation_length, test_length))
```
## Build a sequential model
```
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(8, (3, 3), padding="same", activation="relu", input_shape=(seq_length, dim, 1)),
tf.keras.layers.Conv2D(8, (3, 3), padding="same", activation="relu"),
tf.keras.layers.MaxPool2D((2, 2)),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Conv2D(8, (3, 3), padding="same", activation="relu"),
tf.keras.layers.MaxPool2D((2, 2), padding="same"),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
tf.keras.layers.MaxPool2D((2, 2), padding="same"),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation="relu"),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(32, activation="relu"),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(num_classes, activation="softmax")
])
model.summary()
```
## Compile and start training
```
epochs = 100
batch_size = 64
steps_per_epoch=1000
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
def reshape_function(data, label):
reshaped_data = tf.reshape(data, [-1, dim, 1])
return reshaped_data, label
train_data = train_data.map(reshape_function)
validation_data = validation_data.map(reshape_function)
train_data = train_data.batch(batch_size).repeat()
validation_data = validation_data.batch(batch_size)
logdir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)
# Uncomment the lines below if you would like to see how training proceeds
# %load_ext tensorboard
# %tensorboard --logdir logs/fit
model.fit(
train_data,
epochs=epochs,
validation_data=validation_data,
steps_per_epoch=steps_per_epoch,
validation_steps=int((validation_length - 1) / batch_size + 1),
callbacks=[tensorboard_callback])
```
## Evaluate the trained model on test dataset
```
test_data = test_data.map(reshape_function)
test_labels = np.zeros(test_length)
# There is no easy function to get the labels back from the tf.data.Dataset :(
# Need to iterate over dataset
idx = 0
for data, label in test_data:
test_labels[idx] = label.numpy()
idx += 1
test_data = test_data.batch(batch_size)
loss, acc = model.evaluate(test_data)
pred = np.argmax(model.predict(test_data), axis=1)
# Create a confusion matrix to see how model predicts
confusion = tf.math.confusion_matrix(labels=tf.constant(test_labels), predictions=tf.constant(pred), num_classes=num_classes)
print(confusion)
```
## Convert model to TFLite format
### Note: Currently quantized TFLite format does not work with TFLite Micro library
```
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)
# Convert the model to the TensorFlow Lite format with quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()
open("model_quantized.tflite", "wb").write(tflite_model)
```
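As an optional sanity check (not part of the original pipeline), the converted model can be loaded back with the TFLite interpreter and compared against the Keras model on a test sample. The sketch below assumes `model.tflite` was written by the cell above and that `test_data` is still the batched dataset from the evaluation step.
```
# Minimal sketch: run the non-quantized TFLite model on one test sample
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
for batch, labels in test_data.take(1):
    # Feed a single sample; the converted model expects a batch dimension of 1
    sample = tf.expand_dims(tf.cast(batch[0], tf.float32), axis=0).numpy()
    interpreter.set_tensor(input_details["index"], sample)
    interpreter.invoke()
    tflite_pred = np.argmax(interpreter.get_tensor(output_details["index"]))
    print("true label:", labels[0].numpy(), "- tflite prediction:", tflite_pred)
```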
# Introduction
```
# importing packages
import requests
from bs4 import BeautifulSoup
import pandas as pd
```
## Example 01: Extract text
```
url_01 = "https://www.nerdwallet.com/article/investing/cryptocurrency-7-things-to-know#:~:text=A%20cryptocurrency%20(or%20%E2%80%9Ccrypto%E2%80%9D,sell%20or%20trade%20them%20securely."
```
### Write the code for the main steps aiming web scraping
#### Send request and catch response
```
# response =
```
#### get the content of the response
```
# content =
```
#### parse webpage
```
# parser =
```
#### Extra: Use prettify to have a 'prettier' view of the page's code
`parser` is a `BeautifulSoup object`, which represents the document as a nested data structure.
The `prettify()` method will turn a Beautiful Soup parse tree into a nicely formatted Unicode string, making it much easier to visualize the tree structure.
```
def parse_website(url):
"""
Parse content of a website
Args:
        url (str): url of the website of which we want to access the content
Return:
parser: representation of the document as a nested data structure.
"""
# Send request and catch response
response = requests.get(url)
# get the content of the response
content = response.content
# parse webpage
parser = BeautifulSoup(content, "lxml")
return parser
parser_01 = parse_website(url_01)
```
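As a quick illustration of the `prettify()` output described above (only the first characters are printed to keep the output short):
```
print(parser_01.prettify()[:1000])
```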
### Title?
```
# access title of the web page
#obtain text between tags
```
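One possible way to do this, shown only as a hint (try it yourself first): the parsed document exposes the `<title>` tag directly.
```
# access title of the web page and obtain the text between the tags
print(parser_01.title)
print(parser_01.title.get_text())
```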
### Text per section (e.g. 1. What is cryptocurrency?)
1. Access subtitles (titles of sections e.g. "Cryptocurrency definition")

```
# subtitles =
# texts =
# text_01 = texts[0:6]
# text_01
```
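A possible starting point, shown only as a sketch: the `h2` and `p` tags used here are assumptions about the page structure, so verify them against the `prettify()` output before relying on them.
```
# Section titles are often <h2> tags and body text <p> tags on article pages
subtitles = [h2.get_text() for h2 in parser_01.find_all("h2")]
texts = [p.get_text() for p in parser_01.find_all("p")]
print(subtitles[:5])
print(texts[:3])
```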
Apply some cleaning to the piece of text below if you have time...
```
# text_01 = text_01[0:4]
```
## Example 02: Extract table info
```
url_02 = "https://www.worldometers.info/population/countries-in-the-eu-by-population/"
parser_02 = parse_website(url_02)
print(parser_02.prettify())
```

```
# Obtain information from tag <table>
# table =
# table
# tip: prettify table to see better the information you are looking for
# Obtain column names within tag <th> with attribute col
# list_col =
# Clean text
# list_col =
# list_col
# Create a dataframe
# EU_population_data =
```
From the prettified table we see that the rows are located under the `<tr>` tag and the items under the `<td>` tag. Use this information to fill your dataframe; one possible loop is sketched after the next cell.
```
# Create a for loop to fill EU_population_data
# EU_population_data
```
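A minimal sketch of that loop, assuming `table` and `list_col` were obtained in the previous cell (adjust the slicing if the header row is not the first `<tr>`):
```
# Build one row per <tr>, taking the text of each <td> cell
rows = []
for tr in table.find_all("tr")[1:]:  # skip the header row
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if cells:
        rows.append(cells)
EU_population_data = pd.DataFrame(rows, columns=list_col)
EU_population_data.head()
```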
## Example 03: Extract information from hyperlink
Applying web scraping to [`https://jadsmkbdatalab.nl/voorbeeldcases/`](https://jadsmkbdatalab.nl/voorbeeldcases/).
Right click to `inspect` element in the webpage. Notice that the information we look for is between h3's...

In this example, you will face something new. Before proceeding as usual with the parser function, check the response using requests directly.
Which status code did you get?
TIP: Check this [stackoverflow answer](https://stackoverflow.com/questions/38489386/python-requests-403-forbidden)
```
url_03 = 'https://jadsmkbdatalab.nl/voorbeeldcases/'
# response =
# response
# Modify the steps we have learn to solve the issue
# get response
# get the content of the response
# parse webpage
# print(parser_03.prettify())
# find hyperlinks
# links =
# links
# Obtain url of the links
```
Updating function to include headers...
```
def parse_website(url):
"""
Parse content of a website
Args:
        url (str): url of the website of which we want to access the content
Return:
parser: representation of the document as a nested data structure.
"""
# Send request and catch response
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
response = requests.get(url, headers=headers)
# get the content of the response
content = response.content
# parse webpage
parser = BeautifulSoup(content, "lxml")
return parser
# parse and prettify one of the obtained urls
# parser_03_0 =
# find all paragraphs within parser_03_0
# paragraphs =
# paragraphs
# Obtain text of paragraphs
# saving the content of this page
```
# **Neural machine translation with attention**
Today we will train a sequence to sequence (seq2seq) model for Spanish to English translation. This is an advanced example that assumes some knowledge of sequence to sequence models.
After training the model in this notebook, you will be able to input a Spanish sentence, such as *"¿todavia estan en casa?"*, and return the English translation: *"are you still at home?"*
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:
<img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot">
Note: This example takes approximately 10 minutes to run on a single P100 GPU.
```
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
```
## **Download and prepare the dataset**
We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
```
May I borrow this book? ¿Puedo tomar prestado este libro?
```
There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:
1. Add a *start* and *end* token to each sentence.
2. Clean the sentences by removing special characters.
3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
4. Pad each sentence to a maximum length.
```
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.strip()
# adding a start and an end token to the sentence
    # so that the model knows when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence).encode('utf-8'))
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return zip(*word_pairs)
en, sp = create_dataset(path_to_file, None)
print(en[-1])
print(sp[-1])
def tokenize(lang):
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
lang_tokenizer.fit_on_texts(lang)
tensor = lang_tokenizer.texts_to_sequences(lang)
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,
padding='post')
return tensor, lang_tokenizer
def load_dataset(path, num_examples=None):
# creating cleaned input, output pairs
targ_lang, inp_lang = create_dataset(path, num_examples)
input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
```
### **Limit the size of the dataset to experiment faster (optional)**
Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data):
```
# Try experimenting with the size of that dataset
num_examples = 100000
input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples)
# Calculate max_length of the target tensors
max_length_targ, max_length_inp = target_tensor.shape[1], input_tensor.shape[1]
# Creating training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show length
print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))
def convert(lang, tensor):
for t in tensor:
if t!=0:
print ("%d ----> %s" % (t, lang.index_word[t]))
print ("Input Language; index to word mapping")
convert(inp_lang, input_tensor_train[0])
print ()
print ("Target Language; index to word mapping")
convert(targ_lang, target_tensor_train[0])
```
### **Create a tf.data dataset**
```
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
embedding_dim = 256
units = 1024 # better if embedding_dim*4
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
```
## **Write the encoder and decoder model**
Implement an encoder-decoder model with attention which you can read about in the TensorFlow [Neural Machine Translation (seq2seq) tutorial](https://github.com/tensorflow/nmt). This example uses a more recent set of APIs. This notebook implements the [attention equations](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism) from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism, which is then used by the decoder to predict the next word in the sentence. The picture and formulas below are an example of an attention mechanism from [Luong's paper](https://arxiv.org/abs/1508.04025v5).
<img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*.
Here are the equations that are implemented:
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
This tutorial uses [Bahdanau attention](https://arxiv.org/pdf/1409.0473.pdf) for the encoder. Let's decide on notation before writing the simplified form:
* FC = Fully connected (dense) layer
* EO = Encoder output
* H = hidden state
* X = input to the decoder
And the pseudo-code:
* `score = FC(tanh(FC(EO) + FC(H)))`
* `attention weights = softmax(score, axis = 1)`. Softmax by default is applied on the last axis but here we want to apply it on the *1st axis*, since the shape of score is *(batch_size, max_length, 1)*. `Max_length` is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.
* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis as 1.
* `embedding output` = The input to the decoder X is passed through an embedding layer.
* `merged vector = concat(embedding output, context vector)`
* This merged vector is then given to the GRU
The shapes of all the vectors at each step have been specified in the comments in the code:
```
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))
print ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, query, values):
# query hidden state shape == (batch_size, hidden size)
# query_with_time_axis shape == (batch_size, 1, hidden size)
# values shape == (batch_size, max_len, hidden size)
# we are doing this to broadcast addition along the time axis to calculate the score
query_with_time_axis = tf.expand_dims(query, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying score to self.V
# the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(query_with_time_axis) + self.W2(values)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
attention_layer = BahdanauAttention(10)
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units) {}".format(attention_result.shape))
print("Attention weights shape: (batch_size, sequence_length, 1) {}".format(attention_weights.shape))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.attention = BahdanauAttention(self.dec_units)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _, _ = decoder(tf.random.uniform((BATCH_SIZE, 1)),
sample_hidden, sample_output)
print ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))
```
## **Define the optimizer and the loss function**
```
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
```
## **Checkpoints (Object-based saving)**
```
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
```
## **Training**
1. Pass the *input* through the *encoder* which return *encoder output* and the *encoder hidden state*.
2. The encoder output, encoder hidden state and the decoder input (which is the *start token*) is passed to the decoder.
3. The decoder returns the *predictions* and the *decoder hidden state*.
4. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
5. Use *teacher forcing* to decide the next input to the decoder.
6. *Teacher forcing* is the technique where the *target word* is passed as the *next input* to the decoder.
7. The final step is to calculate the gradients and apply them to the optimizer to update the model weights.
```
@tf.function
def train_step(inp, targ, enc_hidden):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step(inp, targ, enc_hidden)
total_loss += batch_loss
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
batch_loss.numpy()))
# saving (checkpoint) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / steps_per_epoch))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
```
## **Translate**
* The evaluate function is similar to the training loop, except we don't use *teacher forcing* here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
* Stop predicting when the model predicts the *end token*.
* And store the *attention weights for every time step*.
Note: The encoder output is calculated only once for one input.
```
def evaluate(sentence):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_inp,
padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input,
dec_hidden,
enc_out)
# storing the attention weights to plot later on
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.index_word[predicted_id] + ' '
if targ_lang.index_word[predicted_id] == '<end>':
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def translate(sentence):
result, sentence, attention_plot = evaluate(sentence)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
```
## **Restore the latest checkpoint and test**
```
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
translate(u'hace mucho frio aqui.')
translate(u'esta es mi vida.')
translate(u'¿todavia estan en casa?')
# as near translation
translate(u'trata de averiguarlo.')
```
# Predict google map review dataset
## model
- kcbert
- fine-tuned with the Naver shopping review dataset (200,000 samples)
- train 5 epochs
- 0.97 accuracy
## dataset
- google map review of tourist places in Daejeon, Korea
```
import torch
from torch import nn, Tensor
from torch.optim import Optimizer
from torch.utils.data import DataLoader, RandomSampler, DistributedSampler, random_split
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from torch.nn import CrossEntropyLoss
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning import LightningModule, Trainer, seed_everything
from pytorch_lightning.metrics.functional import accuracy, precision, recall
from transformers import AdamW, BertForSequenceClassification, AdamW, BertConfig, AutoTokenizer, BertTokenizer, TrainingArguments
from keras.preprocessing.sequence import pad_sequences
import random
import numpy as np
import time
import datetime
import pandas as pd
import os
from tqdm import tqdm
import pandas as pd
from transformers import AutoTokenizer, AutoModelWithLMHead
from keras.preprocessing.sequence import pad_sequences
if torch.cuda.is_available():
device = torch.device("cuda")
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
pj_path = os.getenv('HOME') + '/Projects/JeongCheck'
data_path = pj_path + '/compare'
data_list = os.listdir(data_path)
print(len(data_list))
data_list
file_list = os.listdir(data_path)
file_list
spacing = pd.read_csv(data_path + f'/{file_list[0]}')
spell = pd.read_csv(data_path + f'/{file_list[1]}')
spacing.head()
spell.head()
len(spacing), len(spell)
print(spacing.isna().sum())
print('\n')
print(spell.isna().sum())
print(set(spacing.label))
print(set(spell.label))
print(len(spacing[spacing.label==2]))
print(len(spell[spell.label==2]))
test_spac = spacing.copy()
test_spel = spell.copy()
print(len(test_spac), len(test_spel))
```
Excluding the neutral (label 2) data
```
test_spac = test_spac[test_spac.label != 2]
print(len(test_spac))
test_spel = test_spel[test_spel.label != 2]
print(len(test_spel))
from transformers import BertForSequenceClassification, AdamW, BertConfig
tokenizer = AutoTokenizer.from_pretrained("beomi/kcbert-base")
# Load BertForSequenceClassification, the pretrained BERT model with a single
# linear classification layer on top.
model = BertForSequenceClassification.from_pretrained(
pj_path + "/bert_model/checkpoint-2000",
num_labels = 2,
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.
)
params = list(model.named_parameters())
print('The BERT model has {:} different named parameters.\n'.format(len(params)))
print('==== Embedding Layer ====\n')
for p in params[0:5]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== First Transformer ====\n')
for p in params[5:21]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== Output Layer ====\n')
for p in params[-4:]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
def convert_input_data(sentences):
    tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
    MAX_LEN = 64
    # Convert tokens to numeric indices
    input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
    # Truncate each sentence to MAX_LEN and pad the remainder with zeros
    input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
    # Initialize the attention masks
    attention_masks = []
    # Set the attention mask to 1 for real tokens and 0 for padding
    for seq in input_ids:
        seq_mask = [float(i>0) for i in seq]
        attention_masks.append(seq_mask)
    inputs = torch.tensor(input_ids)
    masks = torch.tensor(attention_masks)
    return inputs, masks
def test_sentences(sentences):
    # Switch the model to evaluation mode
    model.eval()
    inputs, masks = convert_input_data(sentences)
    # Move the data to the GPU
    b_input_ids = inputs.to(device)
    b_input_mask = masks.to(device)
    # Do not compute gradients
    with torch.no_grad():
        # Run the forward pass
        outputs = model(b_input_ids,
                        token_type_ids=None,
                        attention_mask=b_input_mask)
    # Extract the logits
    logits = outputs[0]
    # Move the data back to the CPU
    logits = logits.detach().cpu().numpy()
    return logits
device = "cuda:0"
model = model.to(device)
```
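A quick sanity check of the pipeline on a single made-up review (the sentence below is only an illustrative example, not taken from the dataset):
```
sample = ['여기 정말 좋았어요. 또 방문하고 싶어요.']  # "I really liked it here. I want to visit again."
logits = test_sentences(sample)
print(logits, '-> predicted label:', np.argmax(logits))
```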
## Data transformation
```
def preprocessing(df):
    # Keep only Korean characters, Latin letters and spaces in the comments
    df['comment'] = df['comment'].str.replace('[^A-Za-zㄱ-ㅎㅏ-ㅣ가-힣 ]+', '', regex=True)
    return df
# result = preprocessing(gr_data)
# result = result.dropna()
# print(result)
# Extract the comments to run sentiment analysis on
def export_com(preprocessed_df):
    sens = []
    for sen in preprocessed_df.comment:
        sens.append(sen)
    print('check length :', len(sens), len(preprocessed_df))  # check the counts
    print('sample sentence :', sens[1])
    return sens
def make_predicted_label(sen):
sen = [sen]
score = test_sentences(sen)
result = np.argmax(score)
if result == 0: # negative
return 0
elif result == 1: # positive
return 1
def predict_label(model, df, place_name):
result = preprocessing(df)
result = result.dropna()
sens = export_com(result)
scores_data=[]
for sen in sens:
scores_data.append(make_predicted_label(sen))
df['pred'] = scores_data
cor = df[df.label == df.pred]
uncor = df[df.label != df.pred]
print('correct prediction num :', len(cor))
    print('incorrect prediction num :', len(uncor))
print('correct label check :' ,set(cor.label))
# df.to_csv(pj_path + f'/sentiment_data/{place_name}_pred_kcbert.csv')
return df
print('### spacing ###')
predict_spac = predict_label(model, test_spac, 'total')
print('### spell ###')
predict_spel = predict_label(model, test_spel, 'total')
```
## Loss (RMSE)
```
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
import math
def rmse(y, y_pred):
    from sklearn.metrics import mean_squared_error
    import math
    print('length check (origin, prediction):', len(y), len(y_pred))
    rmse_label = math.sqrt(mean_squared_error(y, y_pred))
    print('rmse of label :', rmse_label)
```
## Accuracy
```
def acc(y, y_pred, total):
correct = (y_pred == y).sum().item()
print(f'Accuracy of the network on the {total} test text: %d %%' % (
100 * correct / total))
```
## f1-score
```
from sklearn.metrics import f1_score, classification_report
def f1(y, y_pred):
score = f1_score(y, y_pred)
report = classification_report(y, y_pred)
print('f1 score :', score)
print('===== classification report =====')
print(report)
```
## calculate performance
- RMSE
- Accuracy
- f1-score
```
def cal_perform(df):
    y = df.label
    y_pred = df.pred
    if len(y) == len(y_pred):
        total = len(y)
        print('label length :', total)
    else:
        print('Labels and predictions have different lengths!')
        return
    rmse(y, y_pred)
    acc(y, y_pred, total)
    f1(y, y_pred)
print('===== spacing =====')
cal_perform(predict_spac)
print('===== spell =====')
cal_perform(predict_spel)
```
# Metadata preprocessing tutorial
Melusine **prepare_data.metadata_engineering subpackage** provides classes to preprocess the metadata :
- **MetaExtension :** a transformer which creates an 'extension' feature extracted with a regex from the metadata. It extracts the extensions of email addresses.
- **MetaDate :** a transformer which creates new features from dates such as: hour, minute, dayofweek.
- **MetaAttachmentType :** a transformer which creates an 'attachment type' feature extracted with a regex from the metadata. It extracts the extensions of attached files.
- **Dummifier :** a transformer to dummify categorical features.
All the classes have **fit_transform** methods.
### Input dataframe
- To use a **MetaExtension** transformer : the dataframe requires a **from** column
- To use a **MetaDate** transformer : the dataframe requires a **date** column
- To use a **MetaAttachmentType** transformer : the dataframe requires a **attachment** column with the list of attached files
```
from melusine.data.data_loader import load_email_data
import ast
df_emails = load_email_data()
df_emails = df_emails[['from','date', 'attachment']]
df_emails['from']
df_emails['date']
df_emails['attachment'] = df_emails['attachment'].apply(ast.literal_eval)
df_emails['attachment']
```
### MetaExtension transformer
A **MetaExtension transformer** creates an *extension* feature extracted with a regex from the metadata. It extracts the extensions of email addresses.
```
from melusine.prepare_email.metadata_engineering import MetaExtension
meta_extension = MetaExtension()
df_emails = meta_extension.fit_transform(df_emails)
df_emails.extension
```
### MetaDate transformer
A **MetaDate transformer** creates new features from dates : hour, minute and dayofweek
```
from melusine.prepare_email.metadata_engineering import MetaDate
meta_date = MetaDate()
df_emails = meta_date.fit_transform(df_emails)
df_emails.date[0]
df_emails.hour[0]
df_emails.loc[0,'min']
df_emails.dayofweek[0]
```
### MetaAttachmentType transformer
A **MetaAttachmentType transformer** creates an *attachment_type* feature extracted from the list of attachment names. It extracts the extensions of the attached files.
```
from melusine.prepare_email.metadata_engineering import MetaAttachmentType
meta_pj = MetaAttachmentType()
df_emails = meta_pj.fit_transform(df_emails)
df_emails.attachment_type
```
### Dummifier transformer
A **Dummifier transformer** dummifies categorical features.
Its arguments are :
- **columns_to_dummify** : a list of the metadata columns to dummify.
```
from melusine.prepare_email.metadata_engineering import Dummifier
dummifier = Dummifier(columns_to_dummify=['extension','attachment_type', 'dayofweek', 'hour', 'min'])
df_meta = dummifier.fit_transform(df_emails)
df_meta.columns
df_meta.head()
df_meta.to_csv('./data/metadata.csv', index=False, encoding='utf-8', sep=';')
```
### Custom metadata transformer
A custom transformer can be implemented to extract metadata from a column :
```python
from sklearn.base import BaseEstimator, TransformerMixin
class MetaDataCustom(BaseEstimator, TransformerMixin):
    """Transformer which creates custom metadata
    Compatible with scikit-learn API.
    """
    def __init__(self):
        """
        arguments
        """
    def fit(self, X, y=None):
        """ Fit method"""
        return self
    def transform(self, X):
        """Transform method"""
        X['custom_metadata'] = X['column'].apply(self.get_metadata)
        return X
    def get_metadata(self, value):
        """User-defined extraction logic applied to each value of 'column'."""
        raise NotImplementedError
```
The name of the output column can then be given as argument to a Dummifier transformer :
```python
dummifier = Dummifier(columns_to_dummify=['custom_metadata'])
```
<p><img alt="DataOwl" width=150 src="http://gwsolutions.cl/Images/dataowl.png", align="left", hspace=0, vspace=5></p>
<h1 align="center">Applications of the derivative</h1>
<h4 align="center">Single-variable equations and optimization</h4>
<pre><div align="center"> The idea of this notebook is to serve as an introduction to the mathematical
concepts needed to apply the numerical derivative to solving single-variable
equations and to optimization.</div>
# Applications of the derivative
In previous sessions we tackled the problem of finding where a function vanishes. In this notebook we will see that derivatives can also help us with this challenge, besides being applicable to other problems, such as approximating a function with polynomials and optimizing a function.
## 4. Single-variable equations (continued)
### 4.1 The Secant Method
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/9/92/Secant_method.svg/450px-Secant_method.svg.png" alt="Secant method" width=280 align="center" hspace=0 vspace=5 padding:5px />
From the previous session we know that a function $f$ can be cut at two of its points by a line called a *secant*. This line has a well-defined equation, this time given by the points $(x_0,f(x_0))$, $(x_1,f(x_1))$ and the formula
$$y\ =\ \frac{f(x_1)-f(x_0)}{x_1-x_0}(x-x_1)+f(x_1)$$
To find a value $x$ where $f(x)=0$, we can approximate the expected result by setting $y=0$ in the formula above. This yields a partial solution
$$x = x_1-f(x_1)\frac{x_1-x_0}{f(x_1)-f(x_0)}$$
This can be extended iteratively, generating a sequence of values $x_n$ that approach the true solution:
$$x_n = x_{n-1}-f(x_{n-1})\frac{x_{n-1}-x_{n-2}}{f(x_{n-1})-f(x_{n-2})}$$
This depends on the choice of two starting points, $x_0$ and $x_1$, as well as on some properties that $f$ must satisfy, which we will mention shortly. This method is known as the **Secant Method**.
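A minimal sketch of this iteration in Python (not part of the original notebook; the tolerance and the iteration cap are arbitrary choices):
```
def secant_method(f, x0, x1, tol=1e-8, max_iter=100):
    """Approximate a zero of f with the Secant Method."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1
```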
### 4.2 The Newton-Raphson Method
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/e/e0/NewtonIteration_Ani.gif/450px-NewtonIteration_Ani.gif" width=450 alt="Newton-Raphson method" align="center"/>
Likewise, if the difference between $x_{n-1}$ and $x_{n-2}$ is "small", the previous recurrence can be approximated by the formula
$$x_n = x_{n-1}-\frac{f(x_{n-1})}{f'(x_{n-1})}$$
where the recurrence now depends on only one previous step, so a single starting point $x_0$ is required. Moreover, this method converges faster than the Secant Method, and it is known as the **Newton-Raphson Method**.
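And a corresponding sketch for this recurrence, using a centered finite difference so that only $f$ itself is needed (again, the step and tolerance are arbitrary choices):
```
def newton_raphson(f, x0, dx=1e-6, tol=1e-8, max_iter=100):
    """Approximate a zero of f with Newton-Raphson and a numerical derivative."""
    x = x0
    for _ in range(max_iter):
        dfdx = (f(x + dx) - f(x - dx)) / (2 * dx)
        x_new = x - f(x) / dfdx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```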
### 4.3 Required hypotheses
Note that both methods have their limitations, the most important being that there is **only one zero** (for the Secant Method), that $f'(x)\neq0$ (for the Newton-Raphson Method), and that the function is twice continuously differentiable.
This last point forces us to define what being "twice continuously differentiable" means. However, the most immediate way to address this is simply to say that the method we used to compute the derivative of $f$ is now applied to compute the derivative of $f'$, and that the result is continuous, in the sense we saw in previous sessions. What we obtain is called the **second derivative** of $f$, and it is denoted $\frac{d^2f}{dx^2}(x)$ or $f''(x)$.
## 5. Optimization
The goal of this branch of mathematics is to find where functions attain their maximum or minimum values, what conditions must hold for these to exist, and how such values can be approximated. In this section we will look at basic ideas of optimization for real differentiable functions with continuous derivative, in unconstrained problems.
Recall that the derivative of a function $f$ at a point $x$ gives the slope of the line tangent to its curve at that point. Therefore, when $f'(x)>0$ the function is increasing around that point, and it is decreasing when $f'(x)<0$. As we saw in the exercise of finding zeros of a continuous function, a sign change in $f'$ implies that there must exist a value $\bar{x}$ where $f'(\bar{x})=0$. A point $\bar{x}$ with this property is called a **stationary point**, since there the function neither increases nor decreases. This means that if we find such a value $\bar{x}$, it will be a *candidate* for a maximum or minimum (beware: *saddle points* exist!).
We already know ways to find the zeros of a function. We can apply them to $f'$ to find, approximately, where it vanishes and thus obtain the candidates for an optimum. To better understand the nature of a candidate $\bar{x}$, we need to compute $f''(\bar{x})$. If $f''(\bar{x})>0$, then $\bar{x}$ is certainly a **minimum**, while if $f''(\bar{x})<0$, $\bar{x}$ is a **maximum**. The case $f''(\bar{x})=0$ is more delicate, although it can be handled mathematically, and even more easily visually.
```
# Importing the libraries
%matplotlib notebook
import numpy as np
import matplotlib.colors as mcolors # Gives us access to a wider color palette
import matplotlib.pyplot as plt
import experimento5 as ex
def f(x): # An example function
    return 0.5 - np.sin(2 * x)
def g(x): # An example function
    return (np.exp(-x ** 2) - x) / ((x + 1) ** 2 + (x - 1) ** 2)
def h(x):
    return np.exp(x)
# We test our derivative functions with different array sizes and dx values
x = np.linspace(-np.pi, np.pi, 1000)
y = g(x)
dydx1 = ex.derivadavec(x, y)
dydx2 = ex.derivadafun(0, np.pi/2, g, 0.001)
x2 = np.linspace(0, np.pi/2, len(dydx2))
plt.plot(x, dydx1, color='gold', label='Vector derivative', zorder=0)
plt.plot(x2, dydx2, color='crimson', label='Function derivative', zorder=1)
plt.plot(x, y, color='gray', label='Function', zorder=2)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Comparison of derivative computations')
plt.legend()
plt.grid()
plt.show()
# Find the zeros of g(x)
x0, y0 = ex.ceros(-3, 3, g)
print(x0)
# We test our derivative functions with different array sizes and dx values
x = np.linspace(-np.pi, np.pi, 10000)
y = g(x)
dydx = ex.derivadavec(x, y)
d2ydx2 = ex.derivadavec(x, dydx)
x0, y0 = ex.ceros(-5, 5, g, x[1]-x[0])
plt.plot(x, y, color='gray', label='g(x)', zorder=0)
plt.plot(x, dydx, color='red', label='g´(x)', zorder=1)
plt.plot(x, d2ydx2, color='blue', label='g´´(x)', zorder=2)
plt.plot(x0, y0, marker='*', color='green', label='Zeros of g', linestyle='', zorder=3)
plt.xlabel('x')
plt.ylabel('y')
plt.title('The function and its derivatives')
plt.legend()
plt.grid()
plt.show()
# Compute the x values where g´(x) = 0 (the stationary points of g)
x1, y1 = ex.cerof(x, dydx, x[1]-x[0])
# We test our derivative functions with different array sizes and dx values
x = np.linspace(-np.pi, np.pi, 10000)
y = g(x)
dydx = ex.derivadavec(x, y)
d2ydx2 = ex.derivadavec(x, dydx)
plt.plot(x, y, color='gray', label='g(x)', zorder=0)
plt.plot(x, dydx, color='red', label='g´(x)', zorder=1)
plt.plot(x, d2ydx2, color='blue', label='g´´(x)', zorder=2)
plt.plot(x1, y1, marker='*', color='green', label='Zeros of g´', linestyle='', zorder=3)
plt.xlabel('x')
plt.ylabel('y')
plt.title('The function and its derivatives')
plt.legend()
plt.grid()
plt.show()
```
How could we find the value of f''(x) at the x values we found? One possible approach is sketched below.
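One possible answer, sketched with interpolation on the arrays computed in the previous cell:
```
# Evaluate the numerical second derivative at the stationary points x1:
# a positive value suggests a minimum, a negative value a maximum.
second_derivs = np.interp(x1, x, d2ydx2)
for xi, f2 in zip(x1, second_derivs):
    kind = 'minimum' if f2 > 0 else ('maximum' if f2 < 0 else 'inconclusive')
    print('x = {:.4f}, f´´(x) ≈ {:.4f} -> {}'.format(xi, f2, kind))
```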
## Exercises
**1.-** Try writing code for the Secant Method and the Newton-Raphson Method and apply it to one of the functions we have seen.
**2.-**
**a)** Regarding the problem of finding $x\in[a,b]$ such that $f(x)\ =\ 0$, look up (for example, on Wikipedia) information on Householder's Method. Note that the Newton-Raphson method is one of these schemes, but that in some cases higher-order derivatives are used. Try writing an algorithm for one of these methods (you can even write an algorithm that lets you choose among them), and apply it to the function
$$f(x)\ =\ \frac{e^{-x^2}-x^3}{(x+1)^2+(x-1)^2}$$
To do this, plot the function on an interval where it is known to vanish. You can help yourself with a grid, by writing
```Python
plt.grid() # To display the grid
plt.show() # To show the plot
```
and take an initial value $x_0$ that is visually close to the solution.
**b)** Do the same as before, this time looking up information on Halley's Method.
**3.-** Use the notebook and any of the methods seen here or defined in class to study the following functions:
<ol style="list-style-type:lower-alpha">
<li>$\qquad f(x) = x^p,\quad p\in\mathbb{R}$. Try different values of $p$ (distinguish between $p\ge0$ and $p<0$) <br><br></li>
<li>$\qquad g(x) = \frac{x}{\sqrt{x^2+1}}$ <br><br></li>
<li>$\qquad h(x) = \frac{\sin^2(x)}{x},\quad x\neq0$</li>
</ol>
**4.-** Try programming an algorithm to find the minima and maxima of a function $f$, if it has any.
# Results summary
| Logistic Regression | LightGBM Classifier | Logistic Regression + ATgfe |
|-------------------------------------------------------------------------|------------------------------------------------------------------------|--------------------------------------------------------------------|
| <ul> <li>10-CV Accuracy: 0.926</li><li>Test-data Accuracy: 0.911</li><li>ROC_AUC: 0.99</li> </ul> | <ul> <li>10-CV Accuracy: 0.946</li><li>Test-data Accuracy: 0.977</li><li>ROC_AUC: 1.0</li> </ul> | <ul> <li>10-CV Accuracy: **0.98**</li><li>Test-data Accuracy: **1.0**</li><li>ROC_AUC: **1.0**</li> </ul> |
# Import packages
```
from atgfe.GeneticFeatureEngineer import GeneticFeatureEngineer
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.compose import make_column_transformer
from sklearn.metrics import accuracy_score, make_scorer, balanced_accuracy_score, recall_score
from yellowbrick.classifier import ClassificationReport, ConfusionMatrix, ROCAUC, PrecisionRecallCurve
from lightgbm import LGBMClassifier
from sklearn import datasets
def prepare_column_names(columns):
return [col.replace(' ', '').replace('(cm)', '_cm') for col in columns]
sklearn_data = datasets.load_iris()
columns = prepare_column_names(sklearn_data.feature_names)
df = pd.DataFrame(data=sklearn_data.data, columns=columns)
df['class'] = sklearn_data.target
df['class'] = df['class'].astype(str)
df.head()
target = 'class'
X = df.drop(target, axis=1).copy()
Y = df.loc[:, target].copy()
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=42)
classes = ['setosa', 'versicolor', 'virginica']
numerical_features = X.columns.tolist()
def classification_report(model):
visualizer = ClassificationReport(model, classes=classes, support=True)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.poof()
def roc_auc(model):
visualizer = ROCAUC(model, classes=classes)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.poof()
def confusion_matrix(model):
visualizer = ConfusionMatrix(model, classes=classes)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.poof()
def precision_recall_curve(model):
visualizer = PrecisionRecallCurve(model)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.poof()
def score_model(model, X, y):
evaluation_metric_scorer = make_scorer(balanced_accuracy_score, greater_is_better=True)
scores = cross_val_score(estimator=model, X=X, y=y, cv=10, scoring=evaluation_metric_scorer, n_jobs=-1)
scores_mean = scores.mean()
score_std = scores.std()
print('Mean of metric: {}, std: {}'.format(scores_mean, score_std))
def score_test_data_for_model(model, X_test, y_test):
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('Balanced Accuracy: {}'.format(balanced_accuracy_score(y_test, y_pred)))
print('Accuracy: {}'.format(accuracy_score(y_test, y_pred)))
def create_new_model():
model = make_pipeline(StandardScaler(), LogisticRegression(random_state=77, n_jobs=-1, solver='saga'))
return model
```
# Using LightGBM
```
lgbm_model = LGBMClassifier(n_estimators=100, random_state=7)
score_model(lgbm_model, X, Y)
classification_report(lgbm_model)
confusion_matrix(lgbm_model)
precision_recall_curve(lgbm_model)
roc_auc(lgbm_model)
lgbm_model.fit(X_train, y_train)
score_test_data_for_model(lgbm_model, X_test, y_test)
```
# Using Logistic Regression
```
model = create_new_model()
score_model(model, X, Y)
classification_report(model)
confusion_matrix(model)
precision_recall_curve(model)
roc_auc(model)
score_test_data_for_model(model, X_test, y_test)
```
# Using ATgfe
```
model = create_new_model()
def micro_recall_score(y_true, y_pred):
return recall_score(y_true, y_pred, average='micro')
gfe = GeneticFeatureEngineer(model, x_train=X_train, y_train=y_train, numerical_features=numerical_features,
number_of_candidate_features=2, number_of_interacting_features=4,
evaluation_metric=micro_recall_score, minimize_metric=False, enable_weights=True,
n_jobs=62, cross_validation_in_objective_func=True, objective_func_cv=3)
gfe.fit(mu=10, lambda_=120, early_stopping_patience=5, mutation_probability=0.4, crossover_probability=0.6)
```
# Apply GFE
```
new_X = gfe.transform(X)
new_X.head(20)
model = create_new_model()
score_model(model, new_X, Y)
X_train, X_test, y_train, y_test = train_test_split(new_X, Y, test_size=0.3, random_state=42)
classification_report(model)
confusion_matrix(model)
precision_recall_curve(model)
roc_auc(model)
score_test_data_for_model(model, X_test, y_test)
```
## In this notebook:
- Using a pre-trained convnet to do feature extraction
- Use ConvBase only for feature extraction, and use a separate machine learning classifier
- Adding ```Dense``` layers to top of a frozen ConvBase, allowing us to leverage data augmentation
- Fine-tuning a pre-trained convnet (Skipped, because I am tired now)
### In previous notebook:
- Training your own small convnets from scratch
- Using data augmentation to mitigate overfitting
```
from datetime import date
date.today()
author = "NirantK. https://github.com/NirantK/keras-practice"
print(author)
import keras
print('Keras Version:', keras.__version__)
import os
if os.name=='nt':
print('We are on Windows')
import os, shutil
pwd = os.getcwd()
```
Feature extraction
---
This consists of using the representations learned by a previous network to extract interesting features from new samples. These features are then run through a new classifier, which is trained from scratch.

Warning: the cell below triggers a download of the VGG16 weights, so you need a reasonably fast Internet connection!
```
from keras.applications import VGG16
conv_base = VGG16(weights='imagenet',
include_top=False,
input_shape=(150, 150, 3))
```
We passed three arguments to the constructor:
- **```weights```**, to specify which weight checkpoint to initialize the model from
- **```include_top```**, which refers to including or not the densely-connected classifier on top of the network. By default, this densely-connected classifier would correspond to the 1000 classes from ImageNet. Since we intend to use our own densely-connected classifier (with only two classes, cat and dog), we don’t need to include it.
- **```input_shape```**, the shape of the image tensors that we will feed to the network. This argument is purely optional: if we don’t pass it, then the network will be able to process inputs of any size.
(from *Deep Learning in Python by F. Chollet*)
What does the **VGG16** thing look like?
```
conv_base.summary()
```
Feature Extraction
---
Pros:
- Fast, and cheap
- Works on CPU
Cons:
- Does not allow us to use data augmentation
- Because we do feature extraction and classification in separate steps
```
import os
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
base_dir = os.path.join(pwd, 'data/cats_and_dogs_small/')
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
datagen = ImageDataGenerator(rescale=1./255)
batch_size = 1
def extract_features(directory, sample_count):
features = np.zeros(shape=(sample_count, 4, 4, 512))
labels = np.zeros(shape=(sample_count))
generator = datagen.flow_from_directory(
directory,
target_size=(150, 150),
batch_size=batch_size,
class_mode='binary')
i = 0
for inputs_batch, labels_batch in generator:
features_batch = conv_base.predict(inputs_batch)
try:
features[i * batch_size : (i + 1) * batch_size] = features_batch
except ValueError:
print(i)
raise ValueError
labels[i * batch_size : (i + 1) * batch_size] = labels_batch
i += 1
if i * batch_size >= sample_count:
# Note that since generators yield data indefinitely in a loop,
# we must `break` after every image has been seen once.
break
return features, labels
%time train_features, train_labels = extract_features(train_dir, 2000)
%time validation_features, validation_labels = extract_features(validation_dir, 1000)
%time test_features, test_labels = extract_features(test_dir, 1000)
train_features = np.reshape(train_features, (2000, 4 * 4 * 512))
validation_features = np.reshape(validation_features, (1000, 4 * 4 * 512))
test_features = np.reshape(test_features, (1000, 4 * 4 * 512))
```
**Model Training:**
```
from keras import models
from keras import layers
from keras import optimizers
model = models.Sequential()
model.add(layers.Dense(256, activation='relu', input_dim=4 * 4 * 512))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.RMSprop(lr=2e-5),
loss='binary_crossentropy',
metrics=['acc'])
%time history = model.fit(train_features, train_labels, epochs=15, batch_size=20, validation_data=(validation_features, validation_labels))
model.save('cats_and_dogs_small_feature_extraction.h5')
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
This is Overfitting!
---
We can see that the training and validation accuracy curves diverge from each other rather quickly. This alone might not be a sure sign of overfitting, but we also observe that the training loss drops smoothly while the validation loss actually increases. Taken together, these two graphs indicate overfitting.
**Why did this overfit despite dropout?**
We did *NOT* use data augmentation.
Extending the ConvBase Model!
---
Pros:
- Better performance (accuracy)
- Better Generalization (less overfitting)
- Because we can use data augmentation
Cons:
- Expensive compute
**Warning: Do not attempt this without a GPU. Your Python process can/will crash after a few hours**
```
from keras import models
from keras import layers
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
```
### Freezing ConvBase model: VGG16
Freezing means we do not update the layer weights in those particular layers. This is important for our present application.
```
print('This is the number of trainable weights '
'before freezing the conv base:', len(model.trainable_weights))
conv_base.trainable = False
print('This is the number of trainable weights '
'after freezing the conv base:', len(model.trainable_weights))
model.summary()
# compare the Trainable Params value from the previous model summary
from keras.preprocessing.image import ImageDataGenerator
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# just for reference, let's calculate the test accuracy
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
%time test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
```
# Gradient Descent Optimizations
Mini-batch and stochastic gradient descent is widely used in deep learning, where the large number of parameters and limited memory make the use of more sophisticated optimization methods impractical. Many methods have been proposed to accelerate gradient descent in this context, and here we sketch the ideas behind some of the most popular algorithms.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
```
## Smoothing with exponentially weighted averages
```
n = 50
x = np.arange(n) * np.pi
y = np.cos(x) * np.exp(x/100) - 10*np.exp(-0.01*x)
```
### Exponentially weighted average
The exponentially weighted average adds a fraction $(1-\beta)$ of the current value to a leaky running sum of past values that is scaled by $\beta$ at each step. Effectively, the contribution from the $t-n$th value is scaled by
$$
\beta^n(1 - \beta)
$$
For example, here are the contributions to the current value after 5 iterations (iteration 5 is the current iteration)
| iteration | contribution |
| --- | --- |
| 1 | $\beta^4(1 - \beta)$ |
| 2 | $\beta^3(1 - \beta)$ |
| 3 | $\beta^2(1 - \beta)$ |
| 4 | $\beta^1(1 - \beta)$ |
| 5 | $(1 - \beta)$ |
Since $\beta \lt 1$, the contribution decreases exponentially with the passage of time. Effectively, this acts as a smoother for a function.
```
def ewa(y, beta):
"""Exponentially weighted average."""
n = len(y)
zs = np.zeros(n)
z = 0
for i in range(n):
z = beta*z + (1 - beta)*y[i]
zs[i] = z
return zs
```
### Exponentially weighted average with bias correction
Since the EWA starts from 0, there is an initial bias. This can be corrected by scaling with
$$
\frac{1}{1 - \beta^t}
$$
where $t$ is the iteration number.
```
def ewabc(y, beta):
"""Exponentially weighted average with bias correction."""
n = len(y)
zs = np.zeros(n)
z = 0
for i in range(n):
z = beta*z + (1 - beta)*y[i]
zc = z/(1 - beta**(i+1))
zs[i] = zc
return zs
beta = 0.9
plt.plot(x, y, 'o-')
plt.plot(x, ewa(y, beta), c='red', label='EWA')
plt.plot(x, ewabc(y, beta), c='orange', label='EWA with bias correction')
plt.legend()
pass
```
## Momentum in 1D
Momentum comes from physics, where the contribution of the gradient is to the velocity, not the position. Hence we create an auxiliary variable $v$ and increment it with the gradient. The position is then updated with the velocity in place of the gradient. The analogy is that we can think of the parameter $x$ as a particle in an energy well with potential energy $U = mgh$ where $h$ is given by our objective function $f$. The force generated is a function of the rate of change of potential energy $F \propto \nabla U \propto \nabla f$, and we use $F = ma$ to get that the acceleration $a \propto \nabla f$. Finally, we integrate $a$ over time to get the velocity $v$ and integrate $v$ to get the displacement $x$. Note that we need to damp the velocity, otherwise the particle would just oscillate forever.
We use a version of the update that simply treats the velocity as an exponentially weighted average popularized by Andrew Ng in his Coursera course. This is the same as the momentum scheme motivated by physics with some rescaling of constants.
```
def f(x):
return x**2
def grad(x):
return 2*x
def gd(x, grad, alpha, max_iter=10):
xs = np.zeros(1 + max_iter)
xs[0] = x
for i in range(max_iter):
x = x - alpha * grad(x)
xs[i+1] = x
return xs
def gd_momentum(x, grad, alpha, beta=0.9, max_iter=10):
xs = np.zeros(1 + max_iter)
xs[0] = x
v = 0
for i in range(max_iter):
v = beta*v + (1-beta)*grad(x)
        vc = v/(1 - beta**(i+1))  # bias correction, cf. ewabc above
x = x - alpha * vc
xs[i+1] = x
return xs
```
### Gradient descent with moderate step size
```
alpha = 0.1
x0 = 1
xs = gd(x0, grad, alpha)
xp = np.linspace(-1.2, 1.2, 100)
plt.plot(xp, f(xp))
plt.plot(xs, f(xs), 'o-', c='red')
for i, (x, y) in enumerate(zip(xs, f(xs)), 1):
plt.text(x, y+0.2, i,
bbox=dict(facecolor='yellow', alpha=0.5), fontsize=14)
pass
```
### Gradient descent with large step size
When the step size is too large, gradient descent can oscillate and even diverge.
```
alpha = 0.95
xs = gd(1, grad, alpha)
xp = np.linspace(-1.2, 1.2, 100)
plt.plot(xp, f(xp))
plt.plot(xs, f(xs), 'o-', c='red')
for i, (x, y) in enumerate(zip(xs, f(xs)), 1):
plt.text(x*1.2, y, i,
bbox=dict(facecolor='yellow', alpha=0.5), fontsize=14)
pass
```
### Gradient descent with momentum
Momentum results in cancellation of gradient changes in opposite directions, and hence damps out oscillations while amplifying consistent changes in the same direction. This is perhaps clearer in the 2D example below.
```
alpha = 0.95
xs = gd_momentum(1, grad, alpha, beta=0.9)
xp = np.linspace(-1.2, 1.2, 100)
plt.plot(xp, f(xp))
plt.plot(xs, f(xs), 'o-', c='red')
for i, (x, y) in enumerate(zip(xs, f(xs)), 1):
plt.text(x, y+0.2, i,
bbox=dict(facecolor='yellow', alpha=0.5), fontsize=14)
pass
```
## Momentum and RMSprop in 2D
```
def f2(x):
return x[0]**2 + 100*x[1]**2
def grad2(x):
return np.array([2*x[0], 200*x[1]])
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = X**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
pass
def gd2(x, grad, alpha, max_iter=10):
xs = np.zeros((1 + max_iter, x.shape[0]))
xs[0,:] = x
for i in range(max_iter):
x = x - alpha * grad(x)
xs[i+1,:] = x
return xs
def gd2_momentum(x, grad, alpha, beta=0.9, max_iter=10):
xs = np.zeros((1 + max_iter, x.shape[0]))
xs[0, :] = x
v = 0
for i in range(max_iter):
v = beta*v + (1-beta)*grad(x)
        vc = v/(1 - beta**(i+1))  # bias correction
x = x - alpha * vc
xs[i+1, :] = x
return xs
```
### Gradient descent with large step size
We get severe oscillations.
```
alpha = 0.01
x0 = np.array([-1,-1])
xs = gd2(x0, grad2, alpha, max_iter=75)
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = X**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
plt.plot(xs[:, 0], xs[:, 1], 'o-', c='red')
plt.title('Vanilla gradient descent')
pass
```
### Gradient descent with momentum
The damping effect is clear.
```
alpha = 0.01
x0 = np.array([-1,-1])
xs = gd2_momentum(x0, grad2, alpha, beta=0.9, max_iter=75)
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = X**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
plt.plot(xs[:, 0], xs[:, 1], 'o-', c='red')
plt.title('Gradient descent with momentum')
pass
```
### Gradient descent with RMSprop
RMSprop scales the learning rate in each direction by the square root of an exponentially weighted average of squared gradients. Near a saddle point or any plateau, there are directions where the gradient is very small; RMSprop encourages larger steps in those directions, allowing faster escape.
```
def gd2_rmsprop(x, grad, alpha, beta=0.9, eps=1e-8, max_iter=10):
xs = np.zeros((1 + max_iter, x.shape[0]))
xs[0, :] = x
v = 0
for i in range(max_iter):
v = beta*v + (1-beta)*grad(x)**2
x = x - alpha * grad(x) / (eps + np.sqrt(v))
xs[i+1, :] = x
return xs
alpha = 0.1
x0 = np.array([-1,-1])
xs = gd2_rmsprop(x0, grad2, alpha, beta=0.9, max_iter=10)
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = X**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
plt.plot(xs[:, 0], xs[:, 1], 'o-', c='red')
plt.title('Gradient descent with RMSprop')
pass
```
### ADAM
ADAM (Adaptive Moment Estimation) combines the ideas of momentum, RMSprop and bias correction. It is probably the most popular gradient descent method in current deep learning practice.
```
def gd2_adam(x, grad, alpha, beta1=0.9, beta2=0.999, eps=1e-8, max_iter=10):
xs = np.zeros((1 + max_iter, x.shape[0]))
xs[0, :] = x
m = 0
v = 0
for i in range(max_iter):
m = beta1*m + (1-beta1)*grad(x)
v = beta2*v + (1-beta2)*grad(x)**2
        mc = m/(1 - beta1**(i+1))  # bias-corrected first moment
        vc = v/(1 - beta2**(i+1))  # bias-corrected second moment
        x = x - alpha * mc / (eps + np.sqrt(vc))
xs[i+1, :] = x
return xs
alpha = 0.1
x0 = np.array([-1,-1])
xs = gd2_adam(x0, grad2, alpha, beta1=0.9, beta2=0.9, max_iter=10)
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = X**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
plt.plot(xs[:, 0], xs[:, 1], 'o-', c='red')
plt.title('Gradient descent with ADAM')
pass
```
<h1 style="text-align:center">Chapter 2</h1>
---
###### Words
---
Take a look at this sentence :
'The quick brown fox jumps over the lazy fox, and took his meal.'
* The sentence has 13 _Words_ if you don't count punctuation, and 15 if you count the punctuation marks.
* Whether to count punctuation as words depends on the task at hand.
* For some tasks like part-of-speech (POS) tagging & speech synthesis, punctuation marks are treated as words. (Hello! and Hello? are different in speech synthesis)
```
len('The quick brown fox jumps over the lazy fox, and took his meal.'.split())
```
##### Utterance
> An utterance is the spoken correlate of a sentence. (Speaking a sentence produces an utterance.)
Take a look at this sentence:
'I am goi- going to the market to buy ummm fruits.'
* This utterance has two kinds of <strong>disfluencies</strong> (disruptions of the smooth flow of speech).
1. Fragment - The broken off word 'goi' is a fragment.
2. Fillers - Words like ummm, uhhh, are called fillers or filled pauses.
##### Lemma
> A lemma is a set of lexical forms having the same stem, the same major part-of-speech, and the same word sense.
* Wordform is the full inflected or derived form of the word.
Example,
Wordforms - cats,cat
Lemma - cat
Wordforms - Moving, move
Lemma - move
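As a small illustration (assuming NLTK is installed and the `wordnet` data has been downloaded with `nltk.download('wordnet')`), `WordNetLemmatizer` maps wordforms to their lemmas:
```
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize('cats'))             # cat
print(lemmatizer.lemmatize('moving', pos='v'))  # move (pos='v' marks it as a verb)
```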
##### Vocabulary, Wordtypes, and Wordtokens
* Vocabulary - It is the set of distinct words in a corpus.
* Wordtypes - It is the size of the vocabulary V i.e. |V|
* Wordtokens - It is the total number of running words.
Take a look at this sentence:
'They picnicked by the pool, then lay back on the grass and looked at the stars.'
Here,
* Vocabulary = V = {They, picnicked, by, the, pool, then, lay, back, on, grass, and, looked, at, stars}
* Wordtypes = |V| = 14
* Wordtokens(ignoring punctuation) = 16
```
def vocTypeToken(sentence):
tokens = sentence.split()
vocabulary = list(set(tokens))
wordtypes = len(vocabulary)
wordtokens = len(tokens)
print("Sentence = {}\n".format(sentence))
print("Tokens = {}\n".format(tokens))
print("Vocabulary = {}\n".format(sorted(vocabulary)))
print("Wordtypes = {}\n".format(wordtypes))
print("Wordtokens = {}".format(wordtokens))
sentence = 'They picnicked by the pool, then lay back on the grass and looked at the stars.'
vocTypeToken(sentence)
```
###### Herdan's Law
> The larger the corpora we look at, the more wordtypes we find. The relationship between wordtypes and tokens is called <strong>Herdan's Law</strong>
\begin{equation*}
|V| = kN^\beta
\end{equation*}
where \\(k\\) and \\(\beta\\) are positive constants.
The value of \\(\beta\\) depends on the corpus size and is in the range of 0 to 1.
* We can say that the vocabulary size for a text goes up significantly faster than the square root of its length in words.
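A rough way to check this empirically (just a sketch; `corpus.txt` is a placeholder for any large text file you happen to have) is to track \\(|V|\\) as \\(N\\) grows and fit \\(\log|V| = \log k + \beta\log N\\):
```
import numpy as np

tokens = open('corpus.txt').read().split()   # placeholder corpus

Ns, Vs = [], []
vocabulary = set()
for i, token in enumerate(tokens, start=1):
    vocabulary.add(token)
    if i % 1000 == 0:          # sample |V| every 1000 tokens
        Ns.append(i)
        Vs.append(len(vocabulary))

# least-squares fit of log|V| = log k + beta * log N
beta, logk = np.polyfit(np.log(Ns), np.log(Vs), 1)
print("estimated beta = {:.3f}, k = {:.3f}".format(beta, np.exp(logk)))
```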
---
- Another rough measure of number of words in a corpus is the number of lemmas.
##### Code switching
> The phenomenon of switching between languages while speaking or writing is called code switching.
Example,
'Tu mera dost hai or rahega, don't worry.' (Hindi-English, roughly: "You are my friend and will remain so, don't worry.")
---
## Text Normalization
---
Before any type of natural language processing, the text has to be brought to a normal, consistent condition or state.
The below mentioned three tasks are common for almost every normalization process.
1. Tokenizing ( breaking into words )
2. Normalizing word formats
3. Segmenting sentences
### Word tokenization
---
> The task of segmenting text into words.
<p style="color:red">Why you should not use split() for tokenizaiton.</p>
Naive splitting mishandles many cases: with plain split() the punctuation stays glued to words (e.g. 'Mr.' in 'Mr. Randolf'), while a punctuation-based splitter may break an email like 'hello@internet.com' into ['hello','@','internet','.','com'].
This is not what we generally want, hence special tokenization algorithms must be used.
* Commas generally mark word boundaries but also appear inside large numbers (540,000).
* Periods generally mark sentence boundaries but also appear in emails, URLs and salutations (Mr., Dr.).
##### Clitic
> Clitics are words that can't stand on their own; they are attached to other words. A tokenizer can be used to expand clitics.
Example of clitics,
What're, Who's, You're.
- Tokenization algorithms can also be used to tokenize multiword expressions like 'New York' or 'rock N roll'.
This kind of tokenization is used in conjunction with <strong>Named Entity Detection</strong> (the task of detecting names, places, dates, and organizations)
Python code for tokenization below
```
from nltk.tokenize import word_tokenize
text = 'The San Francisco-based restaurant," they said, "doesn’t charge $10".'
print(word_tokenize(text))
from nltk.tokenize import wordpunct_tokenize
print(wordpunct_tokenize(text))
```
Since tokenization needs to run before any language processing, it needs to be fast.
Regex-based tokenization is fast, but it is not very smart about tricky punctuation and language-specific quirks.
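For instance, NLTK's `RegexpTokenizer` tokenizes with a hand-written pattern (a quick sketch reusing the example sentence from above); it is fast, but the pattern has to anticipate every special case:
```
from nltk.tokenize import RegexpTokenizer

# keep word characters together, keep dollar amounts, and lump remaining symbols
regex_tokenizer = RegexpTokenizer(r'\w+|\$[\d\.]+|\S+')
print(regex_tokenizer.tokenize('The San Francisco-based restaurant," they said, "doesn\'t charge $10".'))
```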
There are many tokenization algorithms like ByteLevelBPETokenizer, CharBPETokenizer, SentencePieceBPETokenizer.
The exercise below shows a step-by-step guide to a modern way of tokenization using [huggingface's](https://huggingface.co/) ultrafast tokenization library - [Tokenizers](https://github.com/huggingface/tokenizers)
---
#### Notice the speed of huggingface tokenizer and nltk tokenizer
```
!python3 -m pip install tokenizers #install tokenizer
from tokenizers import (BertWordPieceTokenizer)
tokenizer = BertWordPieceTokenizer("bert-base-uncased-vocab.txt", lowercase=True)
from datetime import datetime
def textTokenizer(text):
start = (datetime.now())
print(tokenizer.encode(text).tokens)
end = (datetime.now())
print("Time taken - {}".format((end-start).total_seconds()))
textTokenizer('Number expressions introduce other complications as well; while commas normally appear at word boundaries, commas are used inside numbers in English, every three digits.')
```
* We will discuss [CLS] and [SEP] later
```
from datetime import datetime
def nltkTokenizer(text):
start = (datetime.now())
print(word_tokenize(text))
end = (datetime.now())
print("Time taken - {}".format((end-start).total_seconds()))
nltkTokenizer('Number expressions introduce other complications as well; while commas normally appear at word boundaries, commas are used inside numbers in English, every three digits.')
```
##### Word segmentation
> Some languages (like Chinese) don't have words separated by spaces, so tokenization is not as easy. Word segmentation is instead done using sequence models trained on hand-segmented datasets.
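A classic greedy baseline (not the trained sequence models mentioned above) is <strong>MaxMatch</strong>: repeatedly take the longest dictionary word that matches the start of the remaining text. A toy sketch with a made-up vocabulary:
```
def max_match(text, vocabulary, max_len=5):
    """Greedy longest-match-first segmentation of a string without spaces."""
    words = []
    i = 0
    while i < len(text):
        # try the longest candidate first, fall back to a single character
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in vocabulary or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

# works reasonably for Chinese-style input, but famously segments this English string
# as ['we', 'canon', 'l', 'y', 'hope'] instead of ['we', 'can', 'only', 'hope']
print(max_match('wecanonlyhope', {'we', 'can', 'canon', 'only', 'hope'}))
```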
## Hi, I was having a hard time trying to load this huge data set as a pandas DataFrame on my PC, so I searched for alternative ways of doing it, as I don't want to pay for cloud services and don't have access to better machines.
### The solution was actually pretty simple, so I'm sharing what I ended up with; maybe it helps others struggling with the same problem.
Note: this approach won't let you analyse or summarize the data as pandas DataFrames would (at least not easily).
Any criticism or tips are welcome.
```
import csv
from datetime import datetime
def clean_data(input_data_path='../input/train.csv', output_data_path='../data/train_cleaned.csv'):
"""
Clean the data set, removing any row with missing values,
delimiter longitudes and latitudes to fit only NY city values,
only fare amount greater than 0,
and passenger count greater than 0 and lesser than 7,
i also removed the header as i'm using tensorflow to load data.
:param input_data_path: path containing the raw data set.
:param output_data_path: path to write the cleaned data.
"""
with open(input_data_path, 'r') as inp, open(output_data_path, 'w', newline='') as out:
writer = csv.writer(out)
count = 0
for row in csv.reader(inp):
# Remove header
if count > 0:
# Only rows with non-null values
if len(row) == 8:
try:
fare_amount = float(row[1])
pickup_longitude = float(row[3])
pickup_latitude = float(row[4])
dropoff_longitude = float(row[5])
dropoff_latitude = float(row[6])
passenger_count = float(row[7])
if ((-76 <= pickup_longitude <= -72) and (-76 <= dropoff_longitude <= -72) and
(38 <= pickup_latitude <= 42) and (38 <= dropoff_latitude <= 42) and
(1 <= passenger_count <= 6) and fare_amount > 0):
writer.writerow(row)
except:
pass
count += 1
def pre_process_train_data(input_data_path='data/train_cleaned.csv', output_data_path='data/train_processed.csv'):
"""
Pre process the train data, deriving, year, month, day and hour for each row.
:param input_data_path: path containing the full data set.
:param output_data_path: path to write the pre processed set.
"""
with open(input_data_path, 'r') as inp, open(output_data_path, 'w', newline='') as out:
writer = csv.writer(out)
for row in csv.reader(inp):
pickup_datetime = datetime.strptime(row[2], '%Y-%m-%d %H:%M:%S %Z')
row.append(pickup_datetime.year)
row.append(pickup_datetime.month)
row.append(pickup_datetime.day)
row.append(pickup_datetime.hour)
row.append(pickup_datetime.weekday())
writer.writerow(row)
def pre_process_test_data(input_data_path='data/test.csv', output_data_path='data/test_processed.csv'):
"""
Pre process the test data, deriving, year, month, day and hour for each row.
:param input_data_path: path containing the full data set.
:param output_data_path: path to write the pre processed set.
"""
with open(input_data_path, 'r') as inp, open(output_data_path, 'w', newline='') as out:
writer = csv.writer(out)
count = 0
for row in csv.reader(inp):
if count > 0:
pickup_datetime = datetime.strptime(row[1], '%Y-%m-%d %H:%M:%S %Z')
row.append(pickup_datetime.year)
row.append(pickup_datetime.month)
row.append(pickup_datetime.day)
row.append(pickup_datetime.hour)
row.append(pickup_datetime.weekday())
writer.writerow(row)
else:
# Only the header
writer.writerow(row)
count += 1
def split_data(input_data_path, train_data_path, validation_data_path, ratio=30):
"""
Splits the csv file (meant to generate train and validation sets).
:param input_data_path: path containing the full data set.
:param train_data_path: path to write the train set.
:param validation_data_path: path to write the validation set.
:param ratio: ration to split train and validation sets, (default: 1 of every 30 rows will be validation or 0,033%)
"""
with open(input_data_path, 'r') as inp, open(train_data_path, 'w', newline='') as out1, \
open(validation_data_path, 'w', newline='') as out2:
writer1 = csv.writer(out1)
writer2 = csv.writer(out2)
count = 0
for row in csv.reader(inp):
if count % ratio == 0:
writer2.writerow(row)
else:
writer1.writerow(row)
count += 1
```
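A possible driver tying the helpers together (the paths below are placeholders; adjust them to wherever your raw and processed files actually live):
```
clean_data('../input/train.csv', 'data/train_cleaned.csv')
pre_process_train_data('data/train_cleaned.csv', 'data/train_processed.csv')
pre_process_test_data('../input/test.csv', 'data/test_processed.csv')
split_data('data/train_processed.csv', 'data/train_final.csv', 'data/validation.csv', ratio=30)
```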
# Introduction to AlTar/Pyre applications
### 1. Introduction
An AlTar application is based on the [pyre](https://github.com/pyre/pyre) framework. Compared with traditional Python programming, the `pyre` framework provides enhanced features for developing high performance scientific applications, including
- It introduces a new programming model based on configurable components. A configurable component can be an attribute/parameter, or a method/protocol which may have different implementations. The latter will be especially helpful for users to swap between different algorithms/methods for a given procedure at runtime.
- Configurable components also offer users an easy way to configure parameters and settings in an application. Passing parameters through the command line (e.g., with `argparse`) and property setters is usually a formidable task for applications with a large parameter set; in pyre, this can be done with a single `json`-type configuration file.
- An AlTar/pyre application can deploy itself automatically to different computing platforms, such as a standalone computer, GPU workstations, computer clusters or clouds, with a simple change of the `shell` configuration, a configurable component.
- Pyre also integrates high performance scientific libraries such as [GNU Scientific Library](https://www.gnu.org/software/gsl/) (for linear algebra and statistics), and [CUDA](https://developer.nvidia.com/cuda-downloads) (for GPU accelerated computing). It also offers an easy procedure for users to develop their own applications with mixed Python/C/C++/Fortran/CUDA programming, to achieve both high performance and user-friendly interfaces.
In this tutorial, we will use a `Hello world!` application to demonstrate how an AlTar application, with configurable components, is constructed and runs slightly differently from conventional Python scripts.
### 2. The Hello application
We create below an application to say "Hello" to someone (attribute `who`) several times (attribute `times`).
```
# import the altar module
import altar
# create an application based on altar.application
class HelloApp(altar.application, family='altar.applications.hello'):
"""
A specialized AlTar application to say hello
"""
# user configurable components
who = altar.properties.str(default='world')
who.doc = "the person to say hello to"
times = altar.properties.int(default=1)
times.validators = altar.constraints.isPositive()
times.doc = "how many times you want to say hello"
# define methods
def main(self):
"""
The main method
"""
for i in range(self.times):
print(f"Hello {self.who}!")
# all done
return
```
The `HelloApp` application is derived from the `altar.application` base class in order to inherit various features offered by the pyre framework. It has two attributes, `who` and `times`, which are defined as configurable components. A component can be one of the basic Python data types, specified by altar.properties.[int, float, str, list, dict ...], or a user-defined component class.
To run the HelloApp, we create an instance with the name 'hello'. We pass the settings of `who` and `times` via a configuration file [hello.pfg](hello.pfg) (by default, the app instance searches for a `NAME.pfg` configuration file, with `NAME` the same as the instance name):
```
; application instance name
hello:
; components configuration
who = AlTar users ; no start/end quotes for strings are needed in pfg file
times = 3
```
In a `pfg` (pyre config) configuration file, indents are used to show the hierarchy of each configurable component. An alternative is to use the dot notation in Python, e.g.,
```
; an alternative way to write configurations
hello.who = AlTar users
hello.times = 3
```
```
# create a HelloApp instance with a name
helloapp = HelloApp(name='hello')
# when it is created, it searches for settings in hello.pfg to initialize configurable components
# run the instance main method
helloapp.run()
```
Once an instance is created (registered), all its components are processed into regular Python objects, which you may access and modify.
```
print(f"'{helloapp.who}' is going to be changed")
helloapp.who='pyre users'
helloapp.main()
```
You may also modify the [hello.pfg](hello.pfg) file for new configurations and re-run the program. Caveat: for jupyter/ipython, you may need to restart the kernel for new settings to be accepted.
### 3. Run HelloApp from command line
AlTar/pyre applications are designed to run as regular shell applications, which offer more options to run with command line arguments. We create a [hello.py](hello.py) script to include the `HelloApp` class definition as well as to define a `__main__` method to create an instance and call `main()`.
```
# bootstrap
if __name__ == "__main__":
# build an instance of the default app
app = HelloApp(name="hello")
# invoke the main entry point
status = app.main()
# share
raise SystemExit(status)
```
```
# run hello app from a shell with cmdLine settings
!python3 hello.py --who="World" --times=1
```
By default, the app instance searches for the configuration file named `hello.pfg`, since its name is 'hello'. It is also possible to use a different configuration file via the `--config` option.
```
; hello2.pfg
; application instance name (still need to be the same as the instance name)
hello:
; configurations
who = pyre users
times = 1
```
```
# run hello app with a specified configuration file
!python3 hello.py --config=hello2.pfg
# run hello app with both a configuration file and cmdLine settings
# pfg file settings will be overridden by the cmdLine ones
!python3 hello.py --config=hello2.pfg --times=2
```
# Hacking Into FasterRcnn in Pytorch
- toc: true
- badges: true
- comments: true
- categories: [jupyter]
- image: images/chart-preview.png
# Brief Intro
In this post I will show how to tweak some of the internals of Faster R-CNN in PyTorch. I am assuming the reader is someone who has already trained an object detection model using PyTorch; if not, there is an excellent tutorial on the [pytorch website](https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html).
## Small Insight into the model
Basically Faster Rcnn is a two stage detector
1. The first stage is the Region Proposal Network, which is responsible for predicting objectness and the corresponding bounding boxes. So essentially the RegionProposalNetwork gives proposals of whether an object is there or not
2. These proposals are used by the RoIHeads, which outputs the detections.
    * Inside the RoIHeads, RoI align is done
    * There is a box head and a box predictor
    * The losses for the predictions are computed here
3. In this post I will try to show how we can add custom parts to the torchvision FasterRcnn
```
#collapse-hide
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
import torch.nn as nn
import torch.nn.functional as F
print(f'torch version {torch.__version__}')
print(f'torchvision version {torchvision.__version__}')
```
# Custom Backone
1. The backbone can be without FeaturePyramidNetwork
2. With FeaturePyramidNetwork
## Custom Backbone without FPN
This is pretty well written in the pytorch tutorials section; I will additionally add some comments to it.
```
backbone = torchvision.models.mobilenet_v2(pretrained=True).features
#we need to specify the out_channels of this backbone explicitly because this value will be
#used as the in_channels for the RPNHead, which produces the output of the RegionProposalNetwork
#we can find the number of output channels by inspecting the backbone ("backbone??")
backbone.out_channels = 1280
#by default the anchor generator FasterRCNN assigns is meant for an FPN backbone, so
#we need to specify a different anchor generator
anchor_generator = AnchorGenerator(sizes=((128, 256, 512),),
aspect_ratios=((0.5, 1.0, 2.0),))
#here at each position in the grid there will be 3x3=9 anchors
#and if our backbone is not an FPN, the forward method will assign the name '0' to its single feature map
#so we need to specify featmap_names=['0']
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0'],
output_size=9,
sampling_ratio=2)
#the output size is the output shape of the roi pooled features which will be used by the box head
model = FasterRCNN(backbone,num_classes=2,rpn_anchor_generator=anchor_generator,box_roi_pool=roi_pooler)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 600)]
predictions = model(x)
```
## Custom Backbone with FPN
The Resnet50Fpn available in torchvision
```
# load a model pre-trained pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
# replace the classifier with a new one, that has
# num_classes which is user-defined
num_classes = 2 # 1 class (person) + background
# get number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)
```
### Adding a different resenet backbone
1. Just change to a different resnet
1. Shows how we should change the roi_pooler and anchor_generator along with the backbone changes if we are not using all the layers from the FPN
### Using all layers from FPN
```
#the returned layers are layer1,layer2,layer3,layer4 via returned_layers
backbone = torchvision.models.detection.backbone_utils.resnet_fpn_backbone('resnet101',pretrained=True)
model = FasterRCNN(backbone,num_classes=2)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)
```
### Using not all layers from FPN
The size of the last feature map of a plain ResNet-50 is shown below. Later I will show the sizes of the feature maps we use when we use an FPN.
```
#collapse-hide
#just to show what will be out of of a normal resnet without fpn
res = torchvision.models.resnet50()
pure = nn.Sequential(*list(res.children())[:-2])
temp = torch.rand(1,3,400,400)
pure(temp).shape
```
The required layers can be obtained by specifying the `returned_layers` parameter. A resnet backbone of a different depth can also be used.
```
#the returned layers are layer1,layer2,layer3,layer4 in returned_layers
backbone = torchvision.models.detection.backbone_utils.resnet_fpn_backbone('resnet101',pretrained=True,
returned_layers=[2,3,4])
```
Here we are using feature maps of the following shapes.
```
#collapse-hide
out = backbone(temp)
for i in out.keys():
print(i,' ',out[i].shape)
#from the above we can see that the feature map names are '0', '1', '2' and 'pool'
#where 'pool' comes from the default extra block
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0','1','2','pool'],
output_size=7,
sampling_ratio=2)
```
So essentially what we did was select the last three layers of the FPN by specifying them in `returned_layers`; by default, the backbone adds a pool layer on top of the last layer, so we are left with four feature maps. The RoIAlign now needs to be done on these four maps. If we don't specify the RoIAlign, torchvision will by default assume we used all the layers of the FPN, so we need to explicitly list the feature maps we actually used. Which feature maps to use is application specific: sometimes you need to detect small objects, and sometimes the objects of interest are only large ones.
```
#we will need to give an anchor_generator because the default anchor generator assumes we use all layers in the fpn
#since we have four feature maps here we need to specify anchor sizes for all four
anchor_sizes = ((32,), (64,), (128,), (256,))
aspect_ratios = ((0.5, 1.0, 1.5, 2.0),) * len(anchor_sizes)
anchor_generator = AnchorGenerator(anchor_sizes, aspect_ratios)
```
Since we have four feature maps from our FPN, we need to specify anchors for each of them. Here each feature map will have 4 anchors at each position; the first feature map, for example, uses anchor size 32 with aspect ratios (0.5, 1.0, 1.5, 2.0) at each position. Now we can pass these to the FasterRCNN class
```
model = FasterRCNN(backbone,num_classes=2,rpn_anchor_generator=anchor_generator,box_roi_pool=roi_pooler)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)
```
# Custom Predictor
The predictor is the part that outputs the classes and the corresponding bboxes. By default it has two layers, one for classes and one for bboxes, but we can add more layers before them if we want to; if you have a ton of data this might come in handy. (Remember there is already a box head before the predictor head, so you might not need this.)
```
class Custom_predictor(nn.Module):
def __init__(self,in_channels,num_classes):
super(Custom_predictor,self).__init__()
self.additional_layer = nn.Linear(in_channels,in_channels) #this is the additional layer
self.cls_score = nn.Linear(in_channels, num_classes)
self.bbox_pred = nn.Linear(in_channels, num_classes * 4)
def forward(self,x):
if x.dim() == 4:
assert list(x.shape[2:]) == [1, 1]
x = x.flatten(start_dim=1)
x = self.additional_layer(x)
scores = self.cls_score(x)
bbox_deltas = self.bbox_pred(x)
return scores, bbox_deltas
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
#we need the output feature size of the box head to pass to the custom predictor
in_features = model.roi_heads.box_head.fc7.out_features
#now we can add the custom predictor to the model
num_classes =2
model.roi_heads.box_predictor = Custom_predictor(in_features,num_classes)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)
```
# Custom BoxHead
The outputs of the RoI align are first passed through the box head before they reach the predictor; by default there are two linear layers, and we can customize them as we want. Be careful with the dimensions, since mismatches will break the pipeline.
```
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
class CustomHead(nn.Module):
def __init__(self,in_channels,roi_outshape,representation_size):
super(CustomHead,self).__init__()
        self.conv = nn.Conv2d(in_channels,in_channels,kernel_size=3,padding=1) #this is the additional layer added
        #the input will be flattened; its size is in_channels*roi_outshape**2, where roi_outshape is the RoI pool output size
self.fc6 = nn.Linear(in_channels*roi_outshape**2, representation_size)
self.fc7 = nn.Linear(representation_size, representation_size)
    def forward(self,x):
        x = self.conv(x)
        x = x.flatten(start_dim=1)
        x = F.relu(self.fc6(x))
        x = F.relu(self.fc7(x))
        return x
```
1. We need `in_channels` and the representation size; remember that the output of the box head is the input of the box predictor, so we can get the representation size from the box predictor's input features.
2. The `in_channels` can be obtained from the backbone's `out_channels`.
3. After flattening, the width and height also need to be accounted for, which we get from the roi_pool output size.
```
in_channels = model.backbone.out_channels
roi_outshape = model.roi_heads.box_roi_pool.output_size[0]
representation_size=model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_head = CustomHead(in_channels,roi_outshape,representation_size)
num_classes=2
model.roi_heads.box_predictor = FastRCNNPredictor(representation_size, num_classes)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)
```
# CustomLoss Function
This is how to modify the loss of the Faster R-CNN predictor.
1. You can modify the loss by defining your own `fastrcnn_loss` and making changes where you want.
2. Then swap it in for the `fastrcnn_loss` used by the RoI heads (one way to do this is sketched after the code below).
3. Usually we replace the `F.cross_entropy` loss with, say, focal loss or a label smoothing loss
```
import torchvision.models.detection._utils as det_utils
import torch.nn.functional as F
```
The loss function below is taken from [Aman Arora's blog](https://amaarora.github.io/2020/07/18/label-smoothing.html).
```
# Helper functions from fastai
def reduce_loss(loss, reduction='mean'):
return loss.mean() if reduction=='mean' else loss.sum() if reduction=='sum' else loss
# Implementation from fastai https://github.com/fastai/fastai2/blob/master/fastai2/layers.py#L338
class LabelSmoothingCrossEntropy(nn.Module):
def __init__(self, ε:float=0.1, reduction='mean'):
super().__init__()
self.ε,self.reduction = ε,reduction
def forward(self, output, target):
# number of classes
c = output.size()[-1]
log_preds = F.log_softmax(output, dim=-1)
loss = reduce_loss(-log_preds.sum(dim=-1), self.reduction)
nll = F.nll_loss(log_preds, target, reduction=self.reduction)
# (1-ε)* H(q,p) + ε*H(u,p)
return (1-self.ε)*nll + self.ε*(loss/c)
custom_loss = LabelSmoothingCrossEntropy()
#torchvision.models.detection.roi_heads.fastrcnn_loss??
def custom_fastrcnn_loss(class_logits, box_regression, labels, regression_targets):
# type: (Tensor, Tensor, List[Tensor], List[Tensor]) -> Tuple[Tensor, Tensor]
"""
Computes the loss for Faster R-CNN.
Arguments:
class_logits (Tensor)
box_regression (Tensor)
labels (list[BoxList])
regression_targets (Tensor)
Returns:
classification_loss (Tensor)
box_loss (Tensor)
"""
labels = torch.cat(labels, dim=0)
regression_targets = torch.cat(regression_targets, dim=0)
classification_loss = custom_loss(class_logits, labels) #ADDING THE CUSTOM LOSS HERE
# get indices that correspond to the regression targets for
# the corresponding ground truth labels, to be used with
# advanced indexing
sampled_pos_inds_subset = torch.where(labels > 0)[0]
labels_pos = labels[sampled_pos_inds_subset]
N, num_classes = class_logits.shape
box_regression = box_regression.reshape(N, -1, 4)
box_loss = det_utils.smooth_l1_loss(
box_regression[sampled_pos_inds_subset, labels_pos],
regression_targets[sampled_pos_inds_subset],
beta=1 / 9,
size_average=False,
)
box_loss = box_loss / labels.numel()
return classification_loss, box_loss
```
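How this gets wired in depends on your torchvision version; since `RoIHeads.forward` typically calls the module-level `fastrcnn_loss` function, one way (a sketch, please verify against your installed version) is to patch that module attribute:
```
import torchvision.models.detection.roi_heads as roi_heads_module

# swap the loss function that RoIHeads looks up internally for our custom one
roi_heads_module.fastrcnn_loss = custom_fastrcnn_loss
```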
# Note on how to vary the anchor generator
The way in which anchor generators are assigned when we use backbone with and without fpn is different. When we are not using FPN there will be only one feature map and for that feature map we need to specify anchors of different shapes.
```
anchor_generator = AnchorGenerator(sizes=((128, 256, 512),),
aspect_ratios=((0.5, 1.0, 2.0),))
```
In the above case, suppose we have a feature map of shape 7x7; at each cell in it there will be 9 anchors, three each of sizes 128, 256 and 512, with the corresponding aspect ratios. But when we are using an FPN we have several feature maps, so it is more effective to use different anchor sizes for different layers. Small objects are detected using the earlier feature maps, so for those we can specify a small anchor size, say 32, and for the later layers we can specify larger anchors.
```
anchor_sizes = ((32,), (64,), (128,), (256,))
aspect_ratios = ((0.5, 1.0, 1.5, 2.0),) * len(anchor_sizes)
anchor_generator = AnchorGenerator(anchor_sizes, aspect_ratios)
```
In the above I am using the same aspect ratios for all the sizes, so I just multiply by the length of `anchor_sizes`; if we want to specify different aspect ratios per level, that is totally possible. But be careful to specify one set of aspect ratios for each anchor size.
# Credits
All the above hacks are just modifications of the existing, wonderful torchvision library.
<!-- dom:TITLE: PHY321: Harmonic Oscillations, Damping, Resonances and time-dependent Forces -->
# PHY321: Harmonic Oscillations, Damping, Resonances and time-dependent Forces
<!-- dom:AUTHOR: [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/) at Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA & Department of Physics, University of Oslo, Norway -->
<!-- Author: -->
**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA and Department of Physics, University of Oslo, Norway
Date: **Mar 1, 2021**
Copyright 1999-2021, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). Released under CC Attribution-NonCommercial 4.0 license
## Aims and Overarching Motivation
### Monday
Damped oscillations. Analytical and numerical solutions
**Reading suggestion**: Taylor sections 5.4-5.5.
### Wednesday
No lecture, study day
### Friday
Driven oscillations and resonances with examples.
**Reading suggestion**: Taylor sections 5.5-5.6.
## Damped Oscillators
We consider only the case where the damping force is proportional to
the velocity. This is counter to dragging friction, where the force is
proportional in strength to the normal force and independent of
velocity, and is also inconsistent with wind resistance, where the
magnitude of the drag force is proportional to the square of the
velocity. Rolling resistance does seem to be mainly proportional to
the velocity. However, the main motivation for considering damping
forces proportional to the velocity is that the math is more
friendly. This is because the differential equation is linear,
i.e. each term is of order $x$, $\dot{x}$, $\ddot{x}\cdots$, or even
terms with no mention of $x$, and there are no terms such as $x^2$ or
$x\ddot{x}$. The equations of motion for a spring with damping force
$-b\dot{x}$ are
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
m\ddot{x}+b\dot{x}+kx=0.
\label{_auto1} \tag{1}
\end{equation}
$$
## Harmonic Oscillator, Damping
Just to make the solution a bit less messy, we rewrite this equation as
<!-- Equation labels as ordinary links -->
<div id="eq:dampeddiffyq"></div>
$$
\begin{equation}
\label{eq:dampeddiffyq} \tag{2}
\ddot{x}+2\beta\dot{x}+\omega_0^2x=0,~~~~\beta\equiv b/2m,~\omega_0\equiv\sqrt{k/m}.
\end{equation}
$$
Both $\beta$ and $\omega$ have dimensions of inverse time. To find solutions (see appendix C in the text) you must make an educated guess at the form of the solution. To do this, first realize that the solution will need an arbitrary normalization $A$ because the equation is linear. Secondly, realize that if the form is
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
x=Ae^{rt}
\label{_auto2} \tag{3}
\end{equation}
$$
that each derivative simply brings out an extra power of $r$. This
means that the $Ae^{rt}$ factors out and one can simply solve for an
equation for $r$. Plugging this form into Eq. ([2](#eq:dampeddiffyq)),
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
r^2+2\beta r+\omega_0^2=0.
\label{_auto3} \tag{4}
\end{equation}
$$
## Harmonic Oscillator, Solutions of Damped Motion
Because this is a quadratic equation there will be two solutions,
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
r=-\beta\pm\sqrt{\beta^2-\omega_0^2}.
\label{_auto4} \tag{5}
\end{equation}
$$
We refer to the two solutions as $r_1$ and $r_2$ corresponding to the
$+$ and $-$ roots. As expected, there should be two arbitrary
constants involved in the solution,
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
x=A_1e^{r_1t}+A_2e^{r_2t},
\label{_auto5} \tag{6}
\end{equation}
$$
where the coefficients $A_1$ and $A_2$ are determined by initial
conditions.
The square root appearing in the roots above, $\sqrt{\beta^2-\omega_0^2}$, will be
imaginary if the damping is small and $\beta<\omega_0$. In that case,
$r$ is complex and the factor $\exp{(rt)}$ will have some oscillatory
behavior. If the roots are real, there will only be exponentially
decaying solutions. There are three cases:
## Underdamped: $\beta<\omega_0$
$$
\begin{eqnarray}
x&=&A_1e^{-\beta t}e^{i\omega't}+A_2e^{-\beta t}e^{-i\omega't},~~\omega'\equiv\sqrt{\omega_0^2-\beta^2}\\
\nonumber
&=&(A_1+A_2)e^{-\beta t}\cos\omega't+i(A_1-A_2)e^{-\beta t}\sin\omega't.
\end{eqnarray}
$$
Here we have made use of the identity
$e^{i\omega't}=\cos\omega't+i\sin\omega't$. Because the constants are
arbitrary, and because the real and imaginary parts are both solutions
individually, we can simply consider the real part of the solution
alone:
<!-- Equation labels as ordinary links -->
<div id="eq:homogsolution"></div>
$$
\begin{eqnarray}
\label{eq:homogsolution} \tag{7}
x&=&B_1e^{-\beta t}\cos\omega't+B_2e^{-\beta t}\sin\omega't,\\
\nonumber
\omega'&\equiv&\sqrt{\omega_0^2-\beta^2}.
\end{eqnarray}
$$
## Critical damping: $\beta=\omega_0$
In this case the two terms involving $r_1$ and $r_2$ are identical
because $\omega'=0$. Because we need two arbitrary constants, there
needs to be another solution. This is found by simply guessing, or by
taking the limit of $\omega'\rightarrow 0$ from the underdamped
solution. The solution is then
<!-- Equation labels as ordinary links -->
<div id="eq:criticallydamped"></div>
$$
\begin{equation}
\label{eq:criticallydamped} \tag{8}
x=Ae^{-\beta t}+Bte^{-\beta t}.
\end{equation}
$$
The critically damped solution is interesting because the solution
approaches zero quickly, but does not oscillate. For a problem with
zero initial velocity, the solution never crosses zero. This is a good
choice for designing shock absorbers or swinging doors.
## Overdamped: $\beta>\omega_0$
$$
\begin{eqnarray}
x&=&A_1e^{-\left(\beta+\sqrt{\beta^2-\omega_0^2}\right)t}+A_2e^{-\left(\beta-\sqrt{\beta^2-\omega_0^2}\right)t}
\end{eqnarray}
$$
This solution will also never pass the origin more than once, and then
only if the initial velocity is strong and initially toward zero.
As an example, given $b$, $m$ and $\omega_0$, let us find $x(t)$ for a particle whose
initial position is $x=0$ and has initial velocity $v_0$ (assuming an
underdamped solution).
The solution is of the form,
$$
\begin{eqnarray*}
x&=&e^{-\beta t}\left[A_1\cos(\omega' t)+A_2\sin\omega't\right],\\
\dot{x}&=&-\beta x+\omega'e^{-\beta t}\left[-A_1\sin\omega't+A_2\cos\omega't\right].\\
\omega'&\equiv&\sqrt{\omega_0^2-\beta^2},~~~\beta\equiv b/2m.
\end{eqnarray*}
$$
From the initial conditions, $A_1=0$ because $x(0)=0$ and $\omega'A_2=v_0$. So
$$
x=\frac{v_0}{\omega'}e^{-\beta t}\sin\omega't.
$$
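To visualize this solution, here is a minimal sketch with arbitrarily chosen parameters ($v_0=1$, $\beta=0.2$, $\omega_0=1$, chosen only for illustration):
```
import numpy as np
import matplotlib.pyplot as plt

# illustrative parameters only
v0, beta, omega0 = 1.0, 0.2, 1.0
omegaprime = np.sqrt(omega0**2 - beta**2)

t = np.linspace(0, 40, 1000)
x = (v0/omegaprime)*np.exp(-beta*t)*np.sin(omegaprime*t)

plt.plot(t, x, label=r'$x(t)$')
plt.plot(t, (v0/omegaprime)*np.exp(-beta*t), 'k--', label='envelope')
plt.plot(t, -(v0/omegaprime)*np.exp(-beta*t), 'k--')
plt.xlabel('t'); plt.ylabel('x(t)')
plt.legend()
plt.show()
```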
## Harmonic Oscillator, Solutions
Consider a single solution with no arbitrary constants, which we will
call a **particular solution**, $x_p(t)$. It should be emphasized
that this is **A** particular solution, because there exists an
infinite number of such solutions because the general solution should
have two arbitrary constants. Now consider solutions to the same
equation without the driving term, which include two arbitrary
constants. These are called either **homogenous solutions** or
**complementary solutions**, and were given in the previous section,
e.g. Eq. ([7](#eq:homogsolution)) for the underdamped case. The
homogenous solution already incorporates the two arbitrary constants,
so any sum of a homogenous solution and a particular solution will
represent the **general solution** of the equation. The general
solution incorporates the two arbitrary constants $A$ and $B$ to
accommodate the two initial conditions. One could have picked a
different particular solution, i.e. the original particular solution
plus any homogenous solution with the arbitrary constants $A_p$ and
$B_p$ chosen at will. When one adds in the homogenous solution, which
has adjustable constants with arbitrary constants $A'$ and $B'$, to
the new particular solution, one can get the same general solution by
simply adjusting the new constants such that $A'+A_p=A$ and
$B'+B_p=B$. Thus, the choice of $A_p$ and $B_p$ are irrelevant, and
when choosing the particular solution it is best to make the simplest
choice possible.
## Harmonic Oscillator, Particular Solution
To find a particular solution, one first guesses at the form,
<!-- Equation labels as ordinary links -->
<div id="eq:partform"></div>
$$
\begin{equation}
\label{eq:partform} \tag{9}
x_p(t)=D\cos(\omega t-\delta),
\end{equation}
$$
and rewrite the differential equation as
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
D\left\{-\omega^2\cos(\omega t-\delta)-2\beta\omega\sin(\omega t-\delta)+\omega_0^2\cos(\omega t-\delta)\right\}=\frac{F_0}{m}\cos(\omega t).
\label{_auto6} \tag{10}
\end{equation}
$$
One can now use angle addition formulas to get
$$
\begin{eqnarray}
D\left\{(-\omega^2\cos\delta+2\beta\omega\sin\delta+\omega_0^2\cos\delta)\cos(\omega t)\right.&&\\
\nonumber
\left.+(-\omega^2\sin\delta-2\beta\omega\cos\delta+\omega_0^2\sin\delta)\sin(\omega t)\right\}
&=&\frac{F_0}{m}\cos(\omega t).
\end{eqnarray}
$$
Both the $\cos$ and $\sin$ terms need to equate if the expression is to hold at all times. Thus, this becomes two equations
$$
\begin{eqnarray}
D\left\{-\omega^2\cos\delta+2\beta\omega\sin\delta+\omega_0^2\cos\delta\right\}&=&\frac{F_0}{m}\\
\nonumber
-\omega^2\sin\delta-2\beta\omega\cos\delta+\omega_0^2\sin\delta&=&0.
\end{eqnarray}
$$
After dividing by $\cos\delta$, the lower expression leads to
<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>
$$
\begin{equation}
\tan\delta=\frac{2\beta\omega}{\omega_0^2-\omega^2}.
\label{_auto7} \tag{11}
\end{equation}
$$
## Solving with Driven Oscillations
Using the identities $\tan^2\delta+1=\sec^2\delta$ and $\sin^2\delta+\cos^2\delta=1$, one can also express $\sin\delta$ and $\cos\delta$,
$$
\begin{eqnarray}
\sin\delta&=&\frac{2\beta\omega}{\sqrt{(\omega_0^2-\omega^2)^2+4\omega^2\beta^2}},\\
\nonumber
\cos\delta&=&\frac{(\omega_0^2-\omega^2)}{\sqrt{(\omega_0^2-\omega^2)^2+4\omega^2\beta^2}}
\end{eqnarray}
$$
Inserting the expressions for $\cos\delta$ and $\sin\delta$ into the expression for $D$,
<!-- Equation labels as ordinary links -->
<div id="eq:Ddrive"></div>
$$
\begin{equation}
\label{eq:Ddrive} \tag{12}
D=\frac{F_0/m}{\sqrt{(\omega_0^2-\omega^2)^2+4\omega^2\beta^2}}.
\end{equation}
$$
For a given initial condition, e.g. initial displacement and velocity,
one must add the homogenous solution then solve for the two arbitrary
constants. However, because the homogenous solutions decay with time
as $e^{-\beta t}$, the particular solution is all that remains at
large times, and is therefore the steady state solution. Because the
arbitrary constants are all in the homogenous solution, all memory of
the initial conditions are lost at large times, $t\gg 1/\beta$.
The amplitude of the motion, $D$, is linearly proportional to the
driving force ($F_0/m$), but also depends on the driving frequency
$\omega$. For small $\beta$ the maximum will occur at
$\omega=\omega_0$. This is referred to as a resonance. In the limit
$\beta\rightarrow 0$ the amplitude at resonance approaches infinity.
## Alternative Derivation for Driven Oscillators
Here, we derive the same expressions as in Equations ([9](#eq:partform)) and ([12](#eq:Ddrive)) but express the driving forces as
$$
\begin{eqnarray}
F(t)&=&F_0e^{i\omega t},
\end{eqnarray}
$$
rather than as $F_0\cos\omega t$. The real part of $F$ is the same as before. For the differential equation,
<!-- Equation labels as ordinary links -->
<div id="eq:compdrive"></div>
$$
\begin{eqnarray}
\label{eq:compdrive} \tag{13}
\ddot{x}+2\beta\dot{x}+\omega_0^2x&=&\frac{F_0}{m}e^{i\omega t},
\end{eqnarray}
$$
one can treat $x(t)$ as a complex function. Because the operations
$d^2/dt^2$ and $d/dt$ are real and thus do not mix the real and
imaginary parts of $x(t)$, Eq. ([13](#eq:compdrive)) is effectively 2
equations. Because $e^{i\omega t}=\cos\omega t+i\sin\omega t$, the real
part of the solution for $x(t)$ gives the solution for a driving force
$F_0\cos\omega t$, and the imaginary part of $x$ corresponds to the
case where the driving force is $F_0\sin\omega t$. It is rather easy
to solve for the complex $x$ in this case, and by taking the real part
of the solution, one finds the answer for the $\cos\omega t$ driving
force.
We assume a simple form for the particular solution
<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>
$$
\begin{equation}
x_p=De^{i\omega t},
\label{_auto8} \tag{14}
\end{equation}
$$
where $D$ is a complex constant.
From Eq. ([13](#eq:compdrive)) one inserts the form for $x_p$ above to get
$$
\begin{eqnarray}
D\left\{-\omega^2+2i\beta\omega+\omega_0^2\right\}e^{i\omega t}=(F_0/m)e^{i\omega t},\\
\nonumber
D=\frac{F_0/m}{(\omega_0^2-\omega^2)+2i\beta\omega}.
\end{eqnarray}
$$
The norm and phase for $D=|D|e^{-i\delta}$ can be read by inspection,
<!-- Equation labels as ordinary links -->
<div id="_auto9"></div>
$$
\begin{equation}
|D|=\frac{F_0/m}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}},~~~~\tan\delta=\frac{2\beta\omega}{\omega_0^2-\omega^2}.
\label{_auto9} \tag{15}
\end{equation}
$$
This is the same expression for $\delta$ as before. One then finds $x_p(t)$,
<!-- Equation labels as ordinary links -->
<div id="eq:fastdriven1"></div>
$$
\begin{eqnarray}
\label{eq:fastdriven1} \tag{16}
x_p(t)&=&\Re\frac{(F_0/m)e^{i\omega t-i\delta}}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}}\\
\nonumber
&=&\frac{(F_0/m)\cos(\omega t-\delta)}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}}.
\end{eqnarray}
$$
This is the same answer as before.
If one wished to solve for the case where $F(t)= F_0\sin\omega t$, the imaginary part of the solution would work
<!-- Equation labels as ordinary links -->
<div id="eq:fastdriven2"></div>
$$
\begin{eqnarray}
\label{eq:fastdriven2} \tag{17}
x_p(t)&=&\Im\frac{(F_0/m)e^{i\omega t-i\delta}}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}}\\
\nonumber
&=&\frac{(F_0/m)\sin(\omega t-\delta)}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}}.
\end{eqnarray}
$$
## Damped and Driven Oscillator
Consider the damped and driven harmonic oscillator worked out above. Given $F_0, m,\beta$ and $\omega_0$, solve for the complete solution $x(t)$ for the case where $F=F_0\sin\omega t$ with initial conditions $x(t=0)=0$ and $v(t=0)=0$. Assume the underdamped case.
The general solution including the arbitrary constants includes both the homogenous and particular solutions,
$$
\begin{eqnarray*}
x(t)&=&\frac{F_0}{m}\frac{\sin(\omega t-\delta)}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}}
+A\cos\omega't e^{-\beta t}+B\sin\omega't e^{-\beta t}.
\end{eqnarray*}
$$
The quantities $\delta$ and $\omega'$ are given earlier in the
section, $\omega'=\sqrt{\omega_0^2-\beta^2},
\delta=\tan^{-1}\left(2\beta\omega/(\omega_0^2-\omega^2)\right)$. Here, solving
the problem means finding the arbitrary constants $A$ and
$B$. Satisfying the initial conditions for the initial position and
velocity:
$$
\begin{eqnarray*}
x(t=0)=0&=&-\eta\sin\delta+A,\\
v(t=0)=0&=&\omega\eta\cos\delta-\beta A+\omega'B,\\
\eta&\equiv&\frac{F_0}{m}\frac{1}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}}.
\end{eqnarray*}
$$
The problem is now reduced to 2 equations and 2 unknowns, $A$ and $B$. The solution is
$$
\begin{eqnarray}
A&=& \eta\sin\delta ,~~~B=\frac{-\omega\eta\cos\delta+\beta\eta\sin\delta}{\omega'}.
\end{eqnarray}
$$
## Resonance Widths; the $Q$ factor
From the previous two sections, the particular solution for a driving force, $F=F_0\cos\omega t$, is
$$
\begin{eqnarray}
x_p(t)&=&\frac{F_0/m}{\sqrt{(\omega_0^2-\omega^2)^2+4\omega^2\beta^2}}\cos(\omega t-\delta),\\
\nonumber
\delta&=&\tan^{-1}\left(\frac{2\beta\omega}{\omega_0^2-\omega^2}\right).
\end{eqnarray}
$$
If one fixes the driving frequency $\omega$ and adjusts the
fundamental frequency $\omega_0=\sqrt{k/m}$, the maximum amplitude
occurs when $\omega_0=\omega$ because that is when the term from the
denominator $(\omega_0^2-\omega^2)^2+4\omega^2\beta^2$ is at a
minimum. This is akin to dialing into a radio station. However, if one
fixes $\omega_0$ and adjusts the driving frequency, one must minimize the denominator with
respect to $\omega$, i.e. set
<!-- Equation labels as ordinary links -->
<div id="_auto10"></div>
$$
\begin{equation}
\frac{d}{d\omega}\left[(\omega_0^2-\omega^2)^2+4\omega^2\beta^2\right]=0,
\label{_auto10} \tag{18}
\end{equation}
$$
and one finds that the maximum amplitude occurs when
$\omega=\sqrt{\omega_0^2-2\beta^2}$. If $\beta$ is small relative to
$\omega_0$, one can simply state that the maximum amplitude is
<!-- Equation labels as ordinary links -->
<div id="_auto11"></div>
$$
\begin{equation}
x_{\rm max}\approx\frac{F_0}{2m\beta \omega_0}.
\label{_auto11} \tag{19}
\end{equation}
$$
The full width at half maximum (FWHM) of the resonance peak is found by asking where the amplitude squared falls to half of its peak value, which for small damping amounts to solving
$$
\begin{eqnarray}
\frac{4\omega^2\beta^2}{(\omega_0^2-\omega^2)^2+4\omega^2\beta^2}=\frac{1}{2}.
\end{eqnarray}
$$
For small damping this occurs when $\omega=\omega_0\pm \beta$, so the $\mathrm{FWHM}\approx 2\beta$. For the purposes of tuning to a specific frequency, one wants the width to be as small as possible. The ratio of $\omega_0$ to the FWHM is known as the **quality factor**, or $Q$ factor,
<!-- Equation labels as ordinary links -->
<div id="_auto12"></div>
$$
\begin{equation}
Q\equiv \frac{\omega_0}{2\beta}.
\label{_auto12} \tag{20}
\end{equation}
$$
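As a quick numerical illustration of the resonance width (a sketch with made-up values $F_0/m=1$ and $\omega_0=1$), we can plot the amplitude $D(\omega)$ from Eq. ([12](#eq:Ddrive)) for a few damping strengths; the narrower the peak, the larger the $Q$ factor:
```
import numpy as np
import matplotlib.pyplot as plt

# illustrative values only
F0_over_m, omega0 = 1.0, 1.0
omega = np.linspace(0.1, 2.0, 500)

for beta in [0.05, 0.1, 0.3]:
    D = F0_over_m/np.sqrt((omega0**2 - omega**2)**2 + 4*omega**2*beta**2)
    plt.plot(omega, D, label=r'$\beta=%.2f$, $Q=%.1f$' % (beta, omega0/(2*beta)))

plt.xlabel(r'$\omega$')
plt.ylabel(r'$D(\omega)$')
plt.legend()
plt.show()
```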
## Numerical Studies of Driven Oscillations
Solving the problem of driven oscillations numerically gives us much
more flexibility to study different types of driving forces. We can
reuse our earlier code by simply adding a driving force. If we stay in
the $x$-direction only this can be easily done by adding a term
$F_{\mathrm{ext}}(x,t)$. Note that we have kept it rather general
here, allowing for both a spatial and a temporal dependence.
Before we dive into the code, we need to briefly remind ourselves
about the equations we started with for the case with damping, namely
$$
m\frac{d^2x}{dt^2} + b\frac{dx}{dt}+kx(t) =0,
$$
with no external force applied to the system.
Let us now for simplicty assume that our external force is given by
$$
F_{\mathrm{ext}}(t) = F_0\cos{(\omega t)},
$$
where $F_0$ is a constant (what is its dimension?) and $\omega$ is the frequency of the applied external driving force.
**Small question:** would you expect energy to be conserved now?
Introducing the external force into our lovely differential equation
and dividing by $m$ and introducing $\omega_0^2=\sqrt{k/m}$ we have
$$
\frac{d^2x}{dt^2} + \frac{b}{m}\frac{dx}{dt}+\omega_0^2x(t) =\frac{F_0}{m}\cos{(\omega t)},
$$
Thereafter we introduce a dimensionless time $\tau = t\omega_0$
and a dimensionless frequency $\tilde{\omega}=\omega/\omega_0$. We have then
$$
\frac{d^2x}{d\tau^2} + \frac{b}{m\omega_0}\frac{dx}{d\tau}+x(\tau) =\frac{F_0}{m\omega_0^2}\cos{(\tilde{\omega}\tau)},
$$
Introducing a new amplitude $\tilde{F} =F_0/(m\omega_0^2)$ (check dimensionality again) we have
$$
\frac{d^2x}{d\tau^2} + \frac{b}{m\omega_0}\frac{dx}{d\tau}+x(\tau) =\tilde{F}\cos{(\tilde{\omega}\tau)}.
$$
Our final step, as we did in the case of various types of damping, is
to define $\gamma = b/(2m\omega_0)$ and rewrite our equations as
$$
\frac{d^2x}{d\tau^2} + 2\gamma\frac{dx}{d\tau}+x(\tau) =\tilde{F}\cos{(\tilde{\omega}\tau)}.
$$
This is the equation we will code below using the Euler-Cromer method.
```
# library imports (these may already be available from earlier cells)
import numpy as np
import matplotlib.pyplot as plt
from math import ceil, cos

DeltaT = 0.001
#set up arrays
tfinal = 20 # final time in dimensionless units
n = ceil(tfinal/DeltaT)
# set up arrays for t, v, and x
t = np.zeros(n)
v = np.zeros(n)
x = np.zeros(n)
# Initial conditions as one-dimensional arrays of time
x0 = 1.0
v0 = 0.0
x[0] = x0
v[0] = v0
gamma = 0.2
Omegatilde = 0.5
Ftilde = 1.0
# Start integrating using Euler-Cromer's method
for i in range(n-1):
# Set up the acceleration
# Here you could have defined your own function for this
a = -2*gamma*v[i]-x[i]+Ftilde*cos(t[i]*Omegatilde)
# update velocity, time and position
v[i+1] = v[i] + DeltaT*a
x[i+1] = x[i] + DeltaT*v[i+1]
t[i+1] = t[i] + DeltaT
# Plot position as function of time
fig, ax = plt.subplots()
ax.set_ylabel('x[m]')
ax.set_xlabel('t[s]')
ax.plot(t, x)
fig.tight_layout()
save_fig("ForcedBlockEulerCromer")
plt.show()
```
In the above example we have focused on the Euler-Cromer method. This
method has a local truncation error which is proportional to $\Delta t^2$
and thereby a global error which is proportional to $\Delta t$.
We can improve this by using the Runge-Kutta family of
methods. The widely popular fourth-order Runge-Kutta method, or just **RK4**,
has indeed a much better truncation error. The RK4 method has a global
error which is proportional to $\Delta t^4$.
Let us revisit this method and see how we can implement it for the above example.
## Differential Equations, Runge-Kutta methods
Runge-Kutta (RK) methods are based on Taylor expansion formulae, but yield
in general better algorithms for solutions of an ordinary differential equation.
The basic philosophy is that these methods provide an intermediate step in the computation of $y_{i+1}$.
To see this, consider first the following definitions
<!-- Equation labels as ordinary links -->
<div id="_auto13"></div>
$$
\begin{equation}
\frac{dy}{dt}=f(t,y),
\label{_auto13} \tag{21}
\end{equation}
$$
and
<!-- Equation labels as ordinary links -->
<div id="_auto14"></div>
$$
\begin{equation}
y(t)=\int f(t,y) dt,
\label{_auto14} \tag{22}
\end{equation}
$$
and
<!-- Equation labels as ordinary links -->
<div id="_auto15"></div>
$$
\begin{equation}
y_{i+1}=y_i+ \int_{t_i}^{t_{i+1}} f(t,y) dt.
\label{_auto15} \tag{23}
\end{equation}
$$
To demonstrate the philosophy behind RK methods, let us consider
the second-order RK method, RK2.
The first approximation consists in Taylor expanding $f(t,y)$
around the center of the integration interval $t_i$ to $t_{i+1}$,
that is, at $t_i+h/2$, $h$ being the step.
Using the midpoint formula for an integral,
defining $y(t_i+h/2) = y_{i+1/2}$ and
$t_i+h/2 = t_{i+1/2}$, we obtain
<!-- Equation labels as ordinary links -->
<div id="_auto16"></div>
$$
\begin{equation}
\int_{t_i}^{t_{i+1}} f(t,y) dt \approx hf(t_{i+1/2},y_{i+1/2}) +O(h^3).
\label{_auto16} \tag{24}
\end{equation}
$$
This means in turn that we have
<!-- Equation labels as ordinary links -->
<div id="_auto17"></div>
$$
\begin{equation}
y_{i+1}=y_i + hf(t_{i+1/2},y_{i+1/2}) +O(h^3).
\label{_auto17} \tag{25}
\end{equation}
$$
However, we do not know the value of $y_{i+1/2}$. This is where the next approximation comes in: we use Euler's
method to approximate $y_{i+1/2}$. We have then
<!-- Equation labels as ordinary links -->
<div id="_auto18"></div>
$$
\begin{equation}
y_{(i+1/2)}=y_i + \frac{h}{2}\frac{dy}{dt}=y(t_i) + \frac{h}{2}f(t_i,y_i).
\label{_auto18} \tag{26}
\end{equation}
$$
This means that we can define the following algorithm for
the second-order Runge-Kutta method, RK2.
<!-- Equation labels as ordinary links -->
<div id="_auto19"></div>
$$
\begin{equation}
k_1=hf(t_i,y_i),
\label{_auto19} \tag{27}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto20"></div>
$$
\begin{equation}
k_2=hf(t_{i+1/2},y_i+k_1/2),
\label{_auto20} \tag{28}
\end{equation}
$$
with the final value
<!-- Equation labels as ordinary links -->
<div id="_auto21"></div>
$$
\begin{equation}
y_{i+1}\approx y_i + k_2 +O(h^3).
\label{_auto21} \tag{29}
\end{equation}
$$
The difference from the previous one-step methods
is that we now need an intermediate step in our evaluation,
namely at $t_i+h/2 = t_{(i+1/2)}$, where we evaluate the derivative $f$.
This involves more operations, but the gain is a better stability
in the solution.
The fourth-order Runge-Kutta, RK4, has the following algorithm
$$
k_1=hf(t_i,y_i),\hspace{0.5cm} k_2=hf(t_i+h/2,y_i+k_1/2),
$$
$$
k_3=hf(t_i+h/2,y_i+k_2/2)\hspace{0.5cm} k_4=hf(t_i+h,y_i+k_3)
$$
with the final result
$$
y_{i+1}=y_i +\frac{1}{6}\left( k_1 +2k_2+2k_3+k_4\right).
$$
Thus, the algorithm consists in first calculating $k_1$
with $t_i$, $y_i$ and $f$ as inputs. Thereafter, we advance the time
by $h/2$ and calculate $k_2$, then $k_3$ and finally $k_4$. The global error goes as $O(h^4)$.
However, at this stage, if we keep adding different methods in our
main program, the code will quickly become messy and ugly. Before we
proceed, we will now introduce functions that embody the various
methods for solving differential equations. This means that we can
separate out these methods into their own functions and files (and later as classes and more
generic functions) and simply call them when needed. Similarly, we
could easily encapsulate various forces or other quantities of
interest in terms of functions. To see this, let us bring up the code
we developed above for the simple sliding block, but now only with the simple forward Euler method. We introduce
two functions, one for the simple Euler method and one for the
force.
Note that here the forward Euler method does not know the specific force function to be called;
it simply receives the force function as an input argument. We can easily change the force by passing in another function.
```
def ForwardEuler(v,x,t,n,Force):
for i in range(n-1):
v[i+1] = v[i] + DeltaT*Force(v[i],x[i],t[i])
x[i+1] = x[i] + DeltaT*v[i]
t[i+1] = t[i] + DeltaT
def SpringForce(v,x,t):
# note here that we have divided by mass and we return the acceleration
return -2*gamma*v-x+Ftilde*cos(t*Omegatilde)
```
It is easy to add a new method like the Euler-Cromer
```
def ForwardEulerCromer(v,x,t,n,Force):
for i in range(n-1):
a = Force(v[i],x[i],t[i])
v[i+1] = v[i] + DeltaT*a
x[i+1] = x[i] + DeltaT*v[i+1]
t[i+1] = t[i] + DeltaT
```
and the Velocity Verlet method (be careful with the time dependence here, it is not an ideal method for non-conservative forces)
```
def VelocityVerlet(v,x,t,n,Force):
    for i in range(n-1):
        a = Force(v[i],x[i],t[i])
        # note the factor 0.5*DeltaT**2 on the acceleration term
        x[i+1] = x[i] + DeltaT*v[i]+0.5*DeltaT*DeltaT*a
        # update the time before evaluating the force at the new position
        t[i+1] = t[i] + DeltaT
        anew = Force(v[i],x[i+1],t[i+1])
        v[i+1] = v[i] + 0.5*DeltaT*(a+anew)
```
Finally, we can now add the Runge-Kutta2 method via a new function
```
def RK2(v,x,t,n,Force):
for i in range(n-1):
# Setting up k1
k1x = DeltaT*v[i]
k1v = DeltaT*Force(v[i],x[i],t[i])
# Setting up k2
vv = v[i]+k1v*0.5
xx = x[i]+k1x*0.5
k2x = DeltaT*vv
k2v = DeltaT*Force(vv,xx,t[i]+DeltaT*0.5)
# Final result
x[i+1] = x[i]+k2x
v[i+1] = v[i]+k2v
t[i+1] = t[i]+DeltaT
```
Finally, we can add the fourth-order Runge-Kutta method, RK4, via a new function
```
def RK4(v,x,t,n,Force):
for i in range(n-1):
# Setting up k1
k1x = DeltaT*v[i]
k1v = DeltaT*Force(v[i],x[i],t[i])
# Setting up k2
vv = v[i]+k1v*0.5
xx = x[i]+k1x*0.5
k2x = DeltaT*vv
k2v = DeltaT*Force(vv,xx,t[i]+DeltaT*0.5)
# Setting up k3
vv = v[i]+k2v*0.5
xx = x[i]+k2x*0.5
k3x = DeltaT*vv
k3v = DeltaT*Force(vv,xx,t[i]+DeltaT*0.5)
# Setting up k4
vv = v[i]+k3v
xx = x[i]+k3x
k4x = DeltaT*vv
k4v = DeltaT*Force(vv,xx,t[i]+DeltaT)
# Final result
x[i+1] = x[i]+(k1x+2*k2x+2*k3x+k4x)/6.
v[i+1] = v[i]+(k1v+2*k2v+2*k3v+k4v)/6.
t[i+1] = t[i] + DeltaT
```
The Runge-Kutta family of methods is particularly useful when we have a time-dependent acceleration.
If we have forces which depend only on the spatial degrees of freedom (no velocity and/or time dependence), then energy-conserving methods like the Velocity Verlet or the Euler-Cromer method are preferred. As soon as we introduce an explicit time dependence and/or add dissipative forces like friction or air resistance, the Runge-Kutta family of methods is well suited for the problem.
The code below uses the RK4 method.
```
DeltaT = 0.001
#set up arrays
tfinal = 20 # final time in dimensionless units
n = ceil(tfinal/DeltaT)
# set up arrays for t, v, and x
t = np.zeros(n)
v = np.zeros(n)
x = np.zeros(n)
# Initial conditions (can change to more than one dim)
x0 = 1.0
v0 = 0.0
x[0] = x0
v[0] = v0
gamma = 0.2
Omegatilde = 0.5
Ftilde = 1.0
# Start integrating using the RK4 method
# Note that we pass the SpringForce function as the force
RK4(v,x,t,n,SpringForce)
# Plot position as function of time
fig, ax = plt.subplots()
ax.set_ylabel('x[m]')
ax.set_xlabel('t[s]')
ax.plot(t, x)
fig.tight_layout()
save_fig("ForcedBlockRK4")
plt.show()
```
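As a quick numerical sanity check of the error estimates quoted above, the following self-contained sketch integrates the undamped, undriven oscillator $d^2x/d\tau^2=-x$ (exact solution $x(\tau)=\cos\tau$ for $x(0)=1$, $v(0)=0$) with the Euler-Cromer and RK4 methods and prints how the final-time error shrinks when the step size is halved.
```
import numpy as np

def final_error(method, dt, tfinal=10.0):
    # integrate dx/dt = v, dv/dt = -x and return |x(tfinal)-cos(tfinal)|
    n = int(round(tfinal/dt))
    x, v, t = 1.0, 0.0, 0.0
    for i in range(n):
        if method == 'EulerCromer':
            v = v + dt*(-x)
            x = x + dt*v
        else:  # RK4 for the system dx/dt = v, dv/dt = -x
            k1x, k1v = dt*v, dt*(-x)
            k2x, k2v = dt*(v+0.5*k1v), dt*(-(x+0.5*k1x))
            k3x, k3v = dt*(v+0.5*k2v), dt*(-(x+0.5*k2x))
            k4x, k4v = dt*(v+k3v), dt*(-(x+k3x))
            x = x + (k1x+2*k2x+2*k3x+k4x)/6.0
            v = v + (k1v+2*k2v+2*k3v+k4v)/6.0
        t = t + dt
    return abs(x - np.cos(t))

for method in ['EulerCromer', 'RK4']:
    e1, e2 = final_error(method, 0.01), final_error(method, 0.005)
    # halving dt should shrink the error by ~2 for Euler-Cromer and ~16 for RK4
    print(method, e1, e2, 'ratio:', e1/e2)
```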
<!-- !split -->
## Principle of Superposition and Periodic Forces (Fourier Transforms)
If one has several driving forces, $F(t)=\sum_n F_n(t)$, one can find
the particular solution to each $F_n$, $x_{pn}(t)$, and the particular
solution for the entire driving force is
<!-- Equation labels as ordinary links -->
<div id="_auto22"></div>
$$
\begin{equation}
x_p(t)=\sum_nx_{pn}(t).
\label{_auto22} \tag{30}
\end{equation}
$$
This is known as the principle of superposition. It only applies when
the homogeneous equation is linear. If there were an anharmonic term
such as $x^3$ in the homogeneous equation, then summing various
solutions would generate cross terms, because $(\sum_n x_n)^3\ne\sum_n x_n^3$.
Superposition is especially useful when $F(t)$ can be written
as a sum of sinusoidal terms, because the solution for each
sinusoidal (sine or cosine) term is analytic, as we saw above.
Driving forces are often periodic, even when they are not
sinusoidal. Periodicity implies that for some time $\tau$
$$
\begin{eqnarray}
F(t+\tau)=F(t).
\end{eqnarray}
$$
One example of a non-sinusoidal periodic force is a square wave. Many
components in electric circuits are non-linear, e.g. diodes, which
makes many wave forms non-sinusoidal even when the circuits are being
driven by purely sinusoidal sources.
The code here shows a typical example of such a square wave generated using the functionality included in the **scipy** Python package. We have used a period of $\tau=0.2$.
```
%matplotlib inline
import numpy as np
import math
from scipy import signal
import matplotlib.pyplot as plt
# number of points
n = 500
# start and final times
t0 = 0.0
tn = 1.0
# Period
t = np.linspace(t0, tn, n, endpoint=False)
SqrSignal = np.zeros(n)
SqrSignal = 1.0+signal.square(2*np.pi*5*t)
plt.plot(t, SqrSignal)
plt.ylim(-0.5, 2.5)
plt.show()
```
For the sinusoidal example studied in the previous subsections the
period is $\tau=2\pi/\omega$. However, higher harmonics can also
satisfy the periodicity requirement. In general, any force that
satisfies the periodicity requirement can be expressed as a sum over
harmonics,
<!-- Equation labels as ordinary links -->
<div id="_auto23"></div>
$$
\begin{equation}
F(t)=\frac{f_0}{2}+\sum_{n>0} f_n\cos(2n\pi t/\tau)+g_n\sin(2n\pi t/\tau).
\label{_auto23} \tag{31}
\end{equation}
$$
From the previous subsection, one can write down the answer for
$x_{pn}(t)$, by substituting $f_n/m$ or $g_n/m$ for $F_0/m$ into Eq.s
([16](#eq:fastdriven1)) or ([17](#eq:fastdriven2)) respectively. By
writing each factor $2n\pi t/\tau$ as $n\omega t$, with $\omega\equiv
2\pi/\tau$,
<!-- Equation labels as ordinary links -->
<div id="eq:fourierdef1"></div>
$$
\begin{equation}
\label{eq:fourierdef1} \tag{32}
F(t)=\frac{f_0}{2}+\sum_{n>0}f_n\cos(n\omega t)+g_n\sin(n\omega t).
\end{equation}
$$
The solutions for $x(t)$ then come from replacing $\omega$ with
$n\omega$ for each term in the particular solution in Equations
([9](#eq:partform)) and ([12](#eq:Ddrive)),
$$
\begin{eqnarray}
x_p(t)&=&\frac{f_0}{2k}+\sum_{n>0} \alpha_n\cos(n\omega t-\delta_n)+\beta_n\sin(n\omega t-\delta_n),\\
\nonumber
\alpha_n&=&\frac{f_n/m}{\sqrt{((n\omega)^2-\omega_0^2)^2+4\beta^2n^2\omega^2}},\\
\nonumber
\beta_n&=&\frac{g_n/m}{\sqrt{((n\omega)^2-\omega_0^2)^2+4\beta^2n^2\omega^2}},\\
\nonumber
\delta_n&=&\tan^{-1}\left(\frac{2\beta n\omega}{\omega_0^2-n^2\omega^2}\right).
\end{eqnarray}
$$
Because the forces have been applied for a long time, any non-zero
damping eliminates the homogeneous parts of the solution, so one need
only consider the particular solution for each $n$.
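To make the structure of this sum concrete, here is a small sketch that assembles $x_p(t)$ from a given set of Fourier coefficients using the expressions above. The oscillator parameters and the coefficients $f_n$, $g_n$ are arbitrary placeholder values for illustration; in practice they would come from the expansion of the actual driving force.
```
import numpy as np
import matplotlib.pyplot as plt

m, k, b = 1.0, 25.0, 1.0                  # illustrative oscillator parameters
omega0, beta = np.sqrt(k/m), b/(2*m)
tau = 2.0                                 # period of the driving force
omega = 2*np.pi/tau

# placeholder Fourier coefficients of F(t); index 0 holds f_0
f_coeffs = {0: 0.5, 1: 1.0, 3: 0.3}
g_coeffs = {2: 0.7}

t = np.linspace(0, 3*tau, 1000)
xp = f_coeffs.get(0, 0.0)/(2*k)*np.ones_like(t)
for n in range(1, 10):
    denom = m*np.sqrt(((n*omega)**2-omega0**2)**2 + 4*beta**2*n**2*omega**2)
    delta_n = np.arctan2(2*beta*n*omega, omega0**2-(n*omega)**2)
    xp += f_coeffs.get(n, 0.0)/denom*np.cos(n*omega*t-delta_n)
    xp += g_coeffs.get(n, 0.0)/denom*np.sin(n*omega*t-delta_n)

plt.plot(t, xp)
plt.xlabel('t')
plt.ylabel(r'$x_p(t)$')
plt.show()
```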
The problem will be considered solved if one can find expressions for the
coefficients $f_n$ and $g_n$, even though the solutions are expressed
as an infinite sum. The coefficients can be extracted from the
function $F(t)$ by
<!-- Equation labels as ordinary links -->
<div id="eq:fourierdef2"></div>
$$
\begin{eqnarray}
\label{eq:fourierdef2} \tag{33}
f_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~F(t)\cos(2n\pi t/\tau),\\
\nonumber
g_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~F(t)\sin(2n\pi t/\tau).
\end{eqnarray}
$$
To check the consistency of these expressions and to verify
Eq. ([33](#eq:fourierdef2)), one can insert the expansion of $F(t)$ in
Eq. ([32](#eq:fourierdef1)) into the expression for the coefficients in
Eq. ([33](#eq:fourierdef2)) and see whether
$$
\begin{eqnarray}
f_n&=?&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~\left\{
\frac{f_0}{2}+\sum_{m>0}f_m\cos(m\omega t)+g_m\sin(m\omega t)
\right\}\cos(n\omega t).
\end{eqnarray}
$$
Immediately, one can throw away all the terms with $g_m$ because they
convolute an even and an odd function. The term with $f_0/2$
disappears because $\cos(n\omega t)$ is equally positive and negative
over the interval and will integrate to zero. For all the terms
$f_m\cos(m\omega t)$ appearing in the sum, one can use angle addition
formulas to see that $\cos(m\omega t)\cos(n\omega
t)=(1/2)(\cos[(m+n)\omega t]+\cos[(m-n)\omega t])$. This will integrate
to zero unless $m=n$. In that case the $m=n$ term gives
<!-- Equation labels as ordinary links -->
<div id="_auto24"></div>
$$
\begin{equation}
\int_{-\tau/2}^{\tau/2}dt~\cos^2(m\omega t)=\frac{\tau}{2},
\label{_auto24} \tag{34}
\end{equation}
$$
and
$$
\begin{eqnarray}
f_n&=?&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~f_n/2\\
\nonumber
&=&f_n~\checkmark.
\end{eqnarray}
$$
The same method can be used to check for the consistency of $g_n$.
Consider the driving force:
<!-- Equation labels as ordinary links -->
<div id="_auto25"></div>
$$
\begin{equation}
F(t)=At/\tau,~~-\tau/2<t<\tau/2,~~~F(t+\tau)=F(t).
\label{_auto25} \tag{35}
\end{equation}
$$
Find the Fourier coefficients $f_n$ and $g_n$ for all $n$ using Eq. ([33](#eq:fourierdef2)).
Since the driving force is an odd function of $t$, only the sine terms contribute, i.e. $f_n=0$. One can find $g_n$ by integrating by parts,
<!-- Equation labels as ordinary links -->
<div id="eq:fouriersolution"></div>
$$
\begin{eqnarray}
\label{eq:fouriersolution} \tag{36}
g_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2}dt~\sin(n\omega t) \frac{At}{\tau}\\
\nonumber
u&=&t,~dv=\sin(n\omega t)dt,~v=-\cos(n\omega t)/(n\omega),\\
\nonumber
g_n&=&\frac{-2A}{n\omega \tau^2}\int_{-\tau/2}^{\tau/2}dt~\cos(n\omega t)
+\left.2A\frac{-t\cos(n\omega t)}{n\omega\tau^2}\right|_{-\tau/2}^{\tau/2}.
\end{eqnarray}
$$
The first term is zero because $\cos(n\omega t)$ will be equally
positive and negative over the interval. Using the fact that
$\omega\tau=2\pi$,
$$
\begin{eqnarray}
g_n&=&-\frac{2A}{2n\pi}\cos(n\omega\tau/2)\\
\nonumber
&=&-\frac{A}{n\pi}\cos(n\pi)\\
\nonumber
&=&\frac{A}{n\pi}(-1)^{n+1}.
\end{eqnarray}
$$
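As a quick numerical sanity check of this result, the short sketch below evaluates the integral in Eq. (33) for the sawtooth force on a fine grid and compares it with $g_n=A(-1)^{n+1}/(n\pi)$. The values of $A$ and $\tau$ are arbitrary illustrative choices.
```
import numpy as np

A, tau = 2.0, 0.5                      # illustrative values
omega = 2*np.pi/tau
t = np.linspace(-tau/2, tau/2, 200001)
dt = t[1]-t[0]
F = A*t/tau

for n in range(1, 6):
    g_numeric = 2.0/tau*np.sum(F*np.sin(n*omega*t))*dt
    g_analytic = A/(n*np.pi)*(-1)**(n+1)
    print(n, round(g_numeric, 6), round(g_analytic, 6))
```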
## Fourier Series
More text will come here; chapters 5.7-5.8 of Taylor are discussed
during the lectures. The code here uses the Fourier series discussed
in chapter 5.7 for a square wave signal. The equations for the
coefficients are discussed in Taylor section 5.7, see Example
5.4. The code here visualizes the various approximations given by the
Fourier series compared with a square wave with period $T=0.2$, width
$0.1$ and max value $F=2$. We see that when we increase the number of
components in the Fourier series, the Fourier series approximation gets closer and closer to the square wave signal.
```
import numpy as np
import math
from scipy import signal
import matplotlib.pyplot as plt
# number of points
n = 500
# start and final times
t0 = 0.0
tn = 1.0
# Period
T =0.2
# Max value of square signal
Fmax= 2.0
# Width of signal
Width = 0.1
t = np.linspace(t0, tn, n, endpoint=False)
SqrSignal = np.zeros(n)
FourierSeriesSignal = np.zeros(n)
SqrSignal = 1.0+signal.square(2*np.pi*5*t+np.pi*Width/T)
a0 = Fmax*Width/T
FourierSeriesSignal = a0
Factor = 2.0*Fmax/np.pi
for i in range(1,500):
FourierSeriesSignal += Factor/(i)*np.sin(np.pi*i*Width/T)*np.cos(i*t*2*np.pi/T)
plt.plot(t, SqrSignal)
plt.plot(t, FourierSeriesSignal)
plt.ylim(-0.5, 2.5)
plt.show()
```
Parametric non Parametric inference
===================
Suppose you have a physical model of an output variable, which takes the form of a parametric model. You now want to model the random effects of the data by a non-parametric (better: infinite parametric) model, such as a Gaussian Process as described in [BayesianLinearRegression](../background/BayesianLinearRegression.ipynb). We can do inference in both worlds, the parametric and the infinite-parametric one, by extending the features to a mix of the two:
\begin{align}
p(\mathbf{y}|\boldsymbol{\Phi}, \alpha, \sigma) &= \int p(\mathbf{y}|\boldsymbol{\Phi}, \mathbf{w}, \sigma)p(\mathbf{w}|\alpha) \,\mathrm{d}\mathbf{w}\\
&= \langle\mathcal{N}(\mathbf{y}|\boldsymbol{\Phi}\mathbf{w}, \sigma^2\mathbf{I})\rangle_{\mathcal{N}(\mathbf{0}, \alpha\mathbf{I})}\\
&= \mathcal{N}(\mathbf{y}|\mathbf{0}, \alpha\boldsymbol{\Phi}\boldsymbol{\Phi}^\top + \sigma^2\mathbf{I})
\end{align}
Thus, we can maximize this marginal likelihood w.r.t. the hyperparameters $\alpha, \sigma$ by log transforming and maximizing:
\begin{align}
\hat\alpha, \hat\sigma = \mathop{\arg\max}_{\alpha, \sigma}\log p(\mathbf{y}|\boldsymbol{\Phi}, \alpha, \sigma)
\end{align}
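As a minimal sketch of what this objective looks like in code (assuming nothing beyond NumPy), the function below evaluates $\log p(\mathbf{y}|\boldsymbol{\Phi}, \alpha, \sigma)$ directly from the Gaussian above; the feature matrix and targets are random placeholders, and in practice one would hand this quantity (or its gradients) to a numerical optimizer.
```
import numpy as np

def log_marginal_likelihood(y, Phi, alpha, sigma):
    # y ~ N(0, alpha*Phi*Phi^T + sigma^2*I); returns the log density evaluated at y
    N = y.shape[0]
    K = alpha*Phi.dot(Phi.T) + sigma**2*np.eye(N)
    _, logdet = np.linalg.slogdet(K)
    quad = y.dot(np.linalg.solve(K, y))
    return -0.5*(N*np.log(2*np.pi) + logdet + quad)

# placeholder feature matrix and targets, just to show the call
rng = np.random.RandomState(0)
Phi = rng.randn(20, 5)
y = rng.randn(20)
print(log_marginal_likelihood(y, Phi, alpha=1.0, sigma=0.1))
```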
So we will define a mixed inference model mixing parametric and non-parametric models together. One part is described by a parametric feature space mapping $\boldsymbol{\Phi}\mathbf{w}$ and the other part is a non-parametric function $\mathbf{f}_\text{n}$. For this we define the underlying function $\mathbf{f}$ as
$$
\begin{align}
p(\mathbf{f}) &= p\left(
\underbrace{
\begin{bmatrix}
\delta(t-T)\\
\boldsymbol{\Phi}
\end{bmatrix}
}_{=:\mathbf{A}}
\left.
\begin{bmatrix}
\mathbf{f}_{\text{n}}\\
\mathbf{w}
\end{bmatrix}
\right|
\mathbf{0},
\mathbf{A}
\underbrace{
\begin{bmatrix}
\mathbf{K}_{\mathbf{f}} & \\
& \mathbf{K}_{\mathbf{w}}
\end{bmatrix}
}_{=:\boldsymbol{\Sigma}}
\mathbf{A}^\top
\right)\enspace,
\end{align}
$$
where $\mathbf{K}_{\mathbf{f}}$ is the covariance describing the non-parametric part $\mathbf{f}_\text{n}\sim\mathcal{N}(\mathbf{0}, \mathbf{K}_\mathbf{f})$ and $\mathbf{K}_{\mathbf{w}}$ is the covariance of the prior over $\mathbf{w}\sim\mathcal{N}(\mathbf{w}|\mathbf{0}, \mathbf{K}_{\mathbf{w}})$.
Thus, we can now predict the different parts, and even the parameters $\mathbf{w}$ themselves, using the expressions below. (Note: a full derivation of these would be a welcome addition here. Thanks to Philipp Hennig for his ideas on this.)
$$
\begin{align}
p(\mathbf{f}|\mathbf{y}) &=
\mathcal{N}(\mathbf{f} |
\boldsymbol{\Sigma}\mathbf{A}^\top
\underbrace{
(\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}^\top + \sigma^2\mathbf{I})^{-1}}_{=:\mathbf{K}^{-1}}\mathbf{y}, \boldsymbol{\Sigma}-\boldsymbol{\Sigma}\mathbf{A}^\top\mathbf{K}^{-1}\mathbf{A}\boldsymbol{\Sigma})
\\
p(\mathbf{w}|\mathbf{y}) &= \mathcal{N}(\mathbf{w} | \mathbf{K}_\mathbf{w}\boldsymbol{\Phi}^\top\mathbf{K}^{-1}\mathbf{y},
\mathbf{K}_{\mathbf{w}}-\mathbf{K}_{\mathbf{w}}\boldsymbol{\Phi}^\top\mathbf{K}^{-1}\boldsymbol{\Phi}\mathbf{K}_{\mathbf{w}})
\\
p(\mathbf{f}_\text{n}|\mathbf{y}) &= \mathcal{N}(\mathbf{f}_\text{n}| \mathbf{K}_\mathbf{f}\mathbf{K}^{-1}\mathbf{y},
\mathbf{K}_{\mathbf{f}}-\mathbf{K}_{\mathbf{f}}\mathbf{K}^{-1}\mathbf{K}_{\mathbf{f}})
\end{align}
$$
```
import GPy, numpy as np, pandas as pd
from GPy.kern import LinearSlopeBasisFuncKernel, DomainKernel, ChangePointBasisFuncKernel
%matplotlib inline
from matplotlib import pyplot as plt
```
We will create some data with a non-linear function, strongly driven by piecewise linear trends:
```
np.random.seed(12345)
x = np.random.uniform(0, 10, 40)[:,None]
x.sort(0)
starts, stops = np.arange(0, 10, 3), np.arange(1, 11, 3)
k_lin = LinearSlopeBasisFuncKernel(1, starts, stops, variance=1., ARD=1)
Phi = k_lin.phi(x)
_ = plt.plot(x, Phi)
```
We will assume the prior over the weights to be $w_i\sim\mathcal{N}(0, 3)$ and a Matern32 structure in the non-parametric part. Additionally, we add a semi-parametric part, which is a periodic effect only active between $x\in[3,8]$:
```
k = GPy.kern.Matern32(1, .3)
Kf = k.K(x)
k_per = GPy.kern.PeriodicMatern32(1, variance=100, period=1)
k_per.period.fix()
k_dom = DomainKernel(1, 1., 5.)
k_perdom = k_per * k_dom
Kpd = k_perdom.K(x)
np.random.seed(1234)
alpha = np.random.gamma(3, 1, Phi.shape[1])
w = np.random.normal(0, alpha)[:,None]
f_SE = np.random.multivariate_normal(np.zeros(x.shape[0]), Kf)[:, None]
f_perdom = np.random.multivariate_normal(np.zeros(x.shape[0]), Kpd)[:, None]
f_w = Phi.dot(w)
f = f_SE + f_w + f_perdom
y = f + np.random.normal(0, .1, f.shape)
plt.plot(x, f_w)
_ = plt.plot(x, y)
# Make sure the function is driven by the linear trend, as there can be a difficulty in identifiability.
```
With this data, we can fit a model using the basis functions as the parametric part. If you want to implement your own basis function kernel, see GPy.kern._src.basis_funcs.BasisFuncKernel and implement the necessary parts. Usually it is enough to implement the phi(X) method, returning the higher dimensional mapping of the inputs X.
```
k = (GPy.kern.Bias(1)
+ GPy.kern.Matern52(1)
+ LinearSlopeBasisFuncKernel(1, ARD=1, start=starts, stop=stops, variance=.1, name='linear_slopes')
+ k_perdom.copy()
)
k.randomize()
m = GPy.models.GPRegression(x, y, k)
m.checkgrad()
m.optimize()
m.plot()
x_pred = np.linspace(0, 10, 500)[:,None]
pred_SE, var_SE = m._raw_predict(x_pred, kern=m.kern.Mat52)
pred_per, var_per = m._raw_predict(x_pred, kern=m.kern.mul)
pred_bias, var_bias = m._raw_predict(x_pred, kern=m.kern.bias)
pred_lin, var_lin = m._raw_predict(x_pred, kern=m.kern.linear_slopes)
m.plot_f(resolution=500, predict_kw=dict(kern=m.kern.Mat52), plot_data=False)
plt.plot(x, f_SE)
m.plot_f(resolution=500, predict_kw=dict(kern=m.kern.mul), plot_data=False)
plt.plot(x, f_perdom)
m.plot_f(resolution=500, predict_kw=dict(kern=m.kern.linear_slopes), plot_data=False)
plt.plot(x, f_w)
w_pred, w_var = m.kern.linear_slopes.posterior_inf()
df = pd.DataFrame(w, columns=['truth'], index=np.arange(Phi.shape[1]))
df['mean'] = w_pred
df['std'] = np.sqrt(w_var.diagonal())
np.round(df, 2)
```
# CST PTM Data Overview
The PTM data from CST has a significant amount of missing data and requires special consideration when normalizing. The starting data is ratio-level data, where log2 ratios have been calculated from the cancerous cell lines compared to the non-cancerous 'Normal Pool' data from within the 'plex'. This data is under the lung_cellline_3_1_16 directory and each PTM type has its own '_combined_ratios.tsv' file.
This notebook gives an overview of the ratio-level data from the PTM types: phosphorylation, methylation, and acetylation. The figures in this notebook demonstrate that there is a systematic difference in the distributions of PTM measurements in the lung cancer cell lines regardless of whether PTMs with missing data are considered. The normalization procedures used to correct for this systematic bias are discussed in the [CST_PTM_Normalization_Overview](https://github.com/MaayanLab/CST_Lung_Cancer_Viz/blob/master/CST_PTM_Normalization_Overview.ipynb) notebook.
The systematic difference in average PTM ratios in the cell lines could be due to a number of factors:
* it could be biological in nature, e.g. some cell line have uniformly higher PTM levels than others
* some cell lines might have higher/lower metabolism rates which will result in differences in incorporation of heavy isotopes
* some cell lines might reproduce faster/slower during the time period where cells are exposed to heavy isotopes, which would result in differences in the population size of the different cell lines
In any case, it can be useful towards understanding the differences in cell line behavior to remove this systematic difference.
# Phosphorylation Data
I'll start by having a look at the phosphorylation data that can be found in
`lung_cellline_3_1_16/lung_cellline_phospho/lung_cellline_TMT_phospho_combined_ratios.tsv`
This file was made using the `process_latest_cst_data.py` script. First I'll make the necessary imports.
```
# imports and plotting defaults
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib
matplotlib.style.use('ggplot')
from copy import deepcopy
# use clustergrammer module to load/process (source code in clustergrammer directory)
from clustergrammer import Network
```
Next, I'll load the phosphorylation ratio data and simplify the column names (to improve readability)
```
# load data data and export as pandas dataframe: inst_df
def load_data(filename):
'''
load data using clustergrammer and export as pandas dataframe
'''
net = deepcopy(Network())
net.load_file(filename)
tmp_df = net.dat_to_df()
inst_df = tmp_df['mat']
# simplify column names (remove categories)
col_names = inst_df.columns.tolist()
# simple_col_names = []
# for inst_name in col_names:
# simple_col_names.append(inst_name[0])
inst_df.columns = col_names
print(inst_df.shape)
ini_rows = inst_df.index.tolist()
unique_rows = list(set(ini_rows))
if len(ini_rows) > len(unique_rows):
print('found duplicate PTMs')
else:
print('did not find duplicate PTMs')
return inst_df
filename = '../lung_cellline_3_1_16/lung_cellline_phospho/' + \
'lung_cellline_TMT_phospho_combined_ratios.tsv'
inst_df = load_data(filename)
```
I loaded the phosphorylation tsv file using clustergrammer and exported it as a pandas dataframe. We can see that there are 5,798 unique phosphorylation sites measured across the 45 lung cancer cell lines.
### Missing Phosphorylation Data
However, there is also a large amount of missing data, e.g. no cell line has all 5,798 phosphorylations measured. We can plot the number of measured phosphorylation sites (i.e. non-NaN values in the dataframe) below to get a sense of the amount of missing data
```
inst_df.count().sort_values().plot(kind='bar', figsize=(10,2))
print(type(inst_df))
```
In the above visualization I have ranked the cell lines based on increasing number of measurements. We can see that there is a pattern in the missing data. The 45 cell lines appear to be arranged into nine groups of 5 cell lines each. These groups correspond to the 9 'plexes', or 'batches', in which the cell lines were measured. Each plex measured one control, Normal Pool, and five cancer cell lines (note that some cell lines have been measured in more than one plex and these have their plex number appended to their name).
### Cell Line Phosphorylation Distributions
Since each cell line has a large number of measured phosphorylations (at least 1,500) we can reasonably expect that the distributions of phosphorylation levels in the cell lines will be similar. This is based on the assumption that biological variation is not systematic and should not result in consistently higher or lower measurements in the cell lines.
Below we plot the mean values (ratios) of all measured phosphorylations in each cell line and order the cell lines by their average phosphorylation levels in ascending order.
```
def plot_cl_boxplot_with_missing_data(inst_df):
'''
Make a box plot of the cell lines where the cell lines are ranked based
on their average PTM levels
'''
# get the order of the cell lines based on their mean
sorter = inst_df.mean().sort_values().index.tolist()
# reorder based on ascending mean values
sort_df = inst_df[sorter]
# box plot of PTM values ordered based on increasing mean
sort_df.plot(kind='box', figsize=(10,3), rot=90, ylim=(-8,8))
plot_cl_boxplot_with_missing_data(inst_df)
```
We can see that there is a significant difference in the mean phosphorylation level across the cell lines. These large differences in the cell line distributions lead us to believe that there is a systematic error in the measurements that needs to be corrected.
However, each cell line has a different subset of phosphorylations measured so to more fairly compare the cell lines we should only compare commonly measured phosphorylations.
Below we plot the mean values of phosphorylations that were measured in all cell lines.
```
def plot_cl_boxplot_no_missing_data(inst_df):
# get the order of the cell lines based on their mean
sorter = inst_df.mean().sort_values().index.tolist()
# reorder based on ascending mean values
sort_df = inst_df[sorter]
# transpose to get PTMs as columns
tmp_df = sort_df.transpose()
# keep only PTMs that are measured in all cell lines
ptm_num_meas = tmp_df.count()
ptm_all_meas = ptm_num_meas[ptm_num_meas == 45]
ptm_all_meas = ptm_all_meas.index.tolist()
print('There are ' + str(len(ptm_all_meas)) + ' PTMs measured in all cell lines')
# only keep ptms that are measured in all cell lines
# I will call this full_df as in no missing measurements
full_df = tmp_df[ptm_all_meas]
# transpose back to PTMs as rows
full_df = full_df.transpose()
full_df.plot(kind='box', figsize=(10,3), rot=90, ylim=(-8,8))
num_ptm_all_meas = len(ptm_all_meas)
plot_cl_boxplot_no_missing_data(inst_df)
```
From the above box plot we can see that there is a significant difference in the distributions of the cell lines even when we only consider phosphorylations that were measured in all cell lines (note that the cell lines are in the same order as in the previous box plot). This indicates that this systematic difference in average phosphorylation values is not caused by missing values.
Since we do not expect biological variation to cause this type of systematic difference between cell lines we can conclude that the large differences between cell lines are likely the result of systematic experimental error that should be corrected. Normalizing the data will be discussed [here](https://github.com/MaayanLab/CST_Lung_Cancer_Viz)
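For illustration only, one very simple way to remove such a uniform offset would be to median-center each cell line (column), using only the PTMs measured in all cell lines to compute the offsets. This is a minimal sketch and not necessarily the procedure used in the normalization notebook linked above.
```
def median_center_columns(inst_df):
    '''
    Subtract each cell line's median (computed over the PTMs measured in
    every cell line) from that cell line's column. Illustration only.
    '''
    full_rows = inst_df.dropna(axis=0)
    col_medians = full_rows.median(axis=0)
    return inst_df.sub(col_medians, axis=1)

# e.g. re-plot the distributions after this simple correction
# plot_cl_boxplot_with_missing_data(median_center_columns(inst_df))
```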
# Acetylation Data
I will perform the same overview on the acetylation data. There are 1,192 unique acetylations measured in the 45 cell lines.
```
filename = '../lung_cellline_3_1_16/lung_cellline_Ack/' + \
'lung_cellline_TMT_Ack_combined_ratios.tsv'
inst_df = load_data(filename)
```
### Missing Acetylation Data
```
inst_df.count().sort_values().plot(kind='bar', figsize=(10,2))
```
### Cell Line Acetylation Distributions
```
plot_cl_boxplot_with_missing_data(inst_df)
```
Distribution of Acetylation data that was measured in all cell lines
```
plot_cl_boxplot_no_missing_data(inst_df)
```
# Methylation Data
The methylation data has been broken up into Arginine and Lysine methylation.
## Arginine Methylation
There are 1,248 Arginine methylations measured in all 42 cell lines
```
filename = '../lung_cellline_3_1_16/lung_cellline_Rme1/' + \
'lung_cellline_TMT_Rme1_combined_ratios.tsv'
inst_df = load_data(filename)
```
### Missing Arginine Methylation Data
```
inst_df.count().sort_values().plot(kind='bar', figsize=(10,2))
```
### Cell Line Arginine Methylation Distributions
```
plot_cl_boxplot_with_missing_data(inst_df)
```
Arginine methylation that was measured in all cell lines
```
plot_cl_boxplot_no_missing_data(inst_df)
```
## Lysine Methylation Data
There are 230 lysine methylations measured across all cell lines
```
filename = '../lung_cellline_3_1_16/lung_cellline_Kme1/' + \
'lung_cellline_TMT_Kme1_combined_ratios.tsv'
inst_df = load_data(filename)
```
### Missing Lysine Methylation Data
Some cell lines have as few as 40 lysine methylations measured.
```
inst_df.count().sort_values().plot(kind='bar', figsize=(10,2))
```
### Cell Line Lysine Methylation Distributions
```
plot_cl_boxplot_with_missing_data(inst_df)
```
Lysine methylation that was measured in all cell lines
```
plot_cl_boxplot_no_missing_data(inst_df)
```
There were only 26 lysine methylations that were measured in all cell lines. We still see the bias in the average values across the cell lines.
# Conclusions
We see that the PTM measurements (phosphorylation, acetylation, and methylation) all show large differences in average behavior across the cell lines. Furthermore, the cell lines with the highest and lowest ratios are frequently the same: DMS153 is the cell line with the lowest ratios and H661 is the cell line with the highest ratios in all cases.
In other words, if we were to ask which cell line has the highest or lowest level of a particular PTM site we would almost always get the same cell line no matter which site we were interested in. Since this type of uniform and systematic difference between cell lines is not what we expect biologically, we can conclude that the ratio data should be normalized in some way. The normalization procedure and its effects on cell line clustering are discussed in the [CST_PTM_Normalization_Overview](https://github.com/MaayanLab/CST_Lung_Cancer_Viz/blob/master/CST_PTM_Normalization_Overview.ipynb) notebook.
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# LightGBM: A Highly Efficient Gradient Boosting Decision Tree
This notebook will give you an example of how to train a LightGBM model to estimate click-through rates on an e-commerce advertisement. We will train a LightGBM based model on the Criteo dataset.
[LightGBM](https://github.com/Microsoft/LightGBM) is a gradient boosting framework that uses tree-based learning algorithms. It is designed to be distributed and efficient with the following advantages:
* Fast training speed and high efficiency.
* Low memory usage.
* Great accuracy.
* Support of parallel and GPU learning.
* Capable of handling large-scale data.
## Global Settings and Imports
```
import sys, os
sys.path.append("../../")
import numpy as np
import lightgbm as lgb
import papermill as pm
import pandas as pd
import category_encoders as ce
from tempfile import TemporaryDirectory
from sklearn.metrics import roc_auc_score, log_loss
import reco_utils.recommender.lightgbm.lightgbm_utils as lgb_utils
import reco_utils.dataset.criteo as criteo
print("System version: {}".format(sys.version))
print("LightGBM version: {}".format(lgb.__version__))
```
### Parameter Setting
Let's set the main related parameters for LightGBM now. Basically, the task is binary classification (predicting click or no click), so the objective function is set to binary log loss, and 'AUC' is used as the metric because it is less affected by class imbalance in the dataset.
Generally, we can adjust the number of leaves (MAX_LEAF), the minimum number of data in each leaf (MIN_DATA), maximum number of trees (NUM_OF_TREES), the learning rate of trees (TREE_LEARNING_RATE) and EARLY_STOPPING_ROUNDS (to avoid overfitting) in the model to get better performance.
Besides, we can also adjust some other listed parameters to optimize the results. [In this link](https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst), a list of all the parameters is shown. Also, some advice on how to tune these parameters can be found [in this url](https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters-Tuning.rst).
```
MAX_LEAF = 64
MIN_DATA = 20
NUM_OF_TREES = 100
TREE_LEARNING_RATE = 0.15
EARLY_STOPPING_ROUNDS = 20
METRIC = "auc"
SIZE = "sample"
params = {
'task': 'train',
'boosting_type': 'gbdt',
'num_class': 1,
'objective': "binary",
'metric': METRIC,
'num_leaves': MAX_LEAF,
'min_data': MIN_DATA,
'boost_from_average': True,
#set it according to your cpu cores.
'num_threads': 20,
'feature_fraction': 0.8,
'learning_rate': TREE_LEARNING_RATE,
}
```
## Data Preparation
Here we use CSV format as the example data input. Our example data is a sample (about 100 thousand samples) from [Criteo dataset](https://www.kaggle.com/c/criteo-display-ad-challenge). The Criteo dataset is a well-known industry benchmarking dataset for developing CTR prediction models, and it's frequently adopted as evaluation dataset by research papers. The original dataset is too large for a lightweight demo, so we sample a small portion from it as a demo dataset.
Specifically, there are 39 columns of features in Criteo, where 13 columns are numerical features (I1-I13) and the other 26 columns are categorical features (C1-C26).
```
nume_cols = ["I" + str(i) for i in range(1, 14)]
cate_cols = ["C" + str(i) for i in range(1, 27)]
label_col = "Label"
header = [label_col] + nume_cols + cate_cols
with TemporaryDirectory() as tmp:
all_data = criteo.load_pandas_df(size=SIZE, local_cache_path=tmp, header=header)
display(all_data.head())
```
First, we cut three sets from the original data: train_data (first 80%), valid_data (middle 10%) and test_data (last 10%). <br>
Notably, considering that Criteo is a kind of time-series streaming data, which is also very common in recommendation scenarios, we split the data by its order.
```
# split data to 3 sets
length = len(all_data)
train_data = all_data.loc[:0.8*length-1]
valid_data = all_data.loc[0.8*length:0.9*length-1]
test_data = all_data.loc[0.9*length:]
```
## Basic Usage
### Ordinal Encoding
Considering that LightGBM can handle low-frequency features and missing values by itself, for basic usage we only encode the string-like categorical features with an ordinal encoder.
```
ord_encoder = ce.ordinal.OrdinalEncoder(cols=cate_cols)
def encode_csv(df, encoder, label_col, typ='fit'):
if typ == 'fit':
df = encoder.fit_transform(df)
else:
df = encoder.transform(df)
y = df[label_col].values
del df[label_col]
return df, y
train_x, train_y = encode_csv(train_data, ord_encoder, label_col)
valid_x, valid_y = encode_csv(valid_data, ord_encoder, label_col, 'transform')
test_x, test_y = encode_csv(test_data, ord_encoder, label_col, 'transform')
print('Train Data Shape: X: {trn_x_shape}; Y: {trn_y_shape}.\nValid Data Shape: X: {vld_x_shape}; Y: {vld_y_shape}.\nTest Data Shape: X: {tst_x_shape}; Y: {tst_y_shape}.\n'
.format(trn_x_shape=train_x.shape,
trn_y_shape=train_y.shape,
vld_x_shape=valid_x.shape,
vld_y_shape=valid_y.shape,
tst_x_shape=test_x.shape,
tst_y_shape=test_y.shape,))
train_x.head()
```
### Create model
When both hyper-parameters and data are ready, we can create a model:
```
lgb_train = lgb.Dataset(train_x, train_y.reshape(-1), params=params, categorical_feature=cate_cols)
lgb_valid = lgb.Dataset(valid_x, valid_y.reshape(-1), reference=lgb_train, categorical_feature=cate_cols)
lgb_test = lgb.Dataset(test_x, test_y.reshape(-1), reference=lgb_train, categorical_feature=cate_cols)
lgb_model = lgb.train(params,
lgb_train,
num_boost_round=NUM_OF_TREES,
early_stopping_rounds=EARLY_STOPPING_ROUNDS,
valid_sets=lgb_valid,
categorical_feature=cate_cols)
```
Now let's see the model's performance:
```
test_preds = lgb_model.predict(test_x)
auc = roc_auc_score(np.asarray(test_y.reshape(-1)), np.asarray(test_preds))
logloss = log_loss(np.asarray(test_y.reshape(-1)), np.asarray(test_preds), eps=1e-12)
res_basic = {"auc": auc, "logloss": logloss}
print(res_basic)
pm.record("res_basic", res_basic)
```
## Optimized Usage
### Label-encoding and Binary-encoding
Next, since LightGBM handles dense numerical features more effectively, we try to convert all the categorical features in the original data into numerical ones, by label-encoding [3] and binary-encoding [4]. Also, due to the sequential nature of Criteo, the label-encoding we adopted is executed one-by-one, which means we encode the samples in order, using only the information from the samples that precede each sample (sequential label-encoding and sequential count-encoding). Besides, we also filter the low-frequency categorical features and fill the missing values of the numerical features with the mean of the corresponding columns (see `lgb_utils.NumEncoder`).
Specifically, in `lgb_utils.NumEncoder`, the main steps are as follows.
* Firstly, we convert the low-frequency categorical features to `"LESS"` and the missing categorical features to `"UNK"`.
* Secondly, we convert the missing numerical features into the mean of corresponding columns.
* Thirdly, the string-like categorical features are ordinal encoded like the example shown in basic usage.
* And then, we target encode the categorical features one-by-one, in sample order. For each sample, we add the label and count information of its former samples into the data and produce new features (a toy sketch of this step is given after this list). Formally, for $i=1,2,...,n$, we add $\frac{\sum\nolimits_{j=1}^{i-1} I(x_j=c) \cdot y_j}{\sum\nolimits_{j=1}^{i-1} I(x_j=c)}$ as a new label feature for the current sample $x_i$, where $c$ is a category to encode in the current sample, $(i-1)$ is the number of former samples, and $I(\cdot)$ is the indicator function that checks whether a former sample contains $c$ (whether $x_j=c$) or not. At the same time, we also add the count frequency of $c$, which is $\frac{\sum\nolimits_{j=1}^{i-1} I(x_j=c)}{i-1}$, as a new count feature.
* Finally, based on the results of ordinal encoding, we add the binary encoding results as new columns into the data.
Note that the statistics used in the above process are only updated when fitting the training set, and are kept static when transforming the testing set, because the labels of the test data should be considered unknown.
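For intuition, here is a toy sketch of the sequential (expanding-window) label and count encoding described above, written with plain pandas rather than `lgb_utils.NumEncoder`; the column names, default values for samples with no history, and data are made up for the example.
```
import pandas as pd

toy = pd.DataFrame({'C1': ['a', 'b', 'a', 'a', 'b'],
                    'Label': [1, 0, 0, 1, 1]})

label_feat, count_feat = [], []
for i in range(len(toy)):
    prev = toy.iloc[:i]                                  # only samples seen before sample i
    same = prev[prev['C1'] == toy['C1'].iloc[i]]         # former samples with the same category
    label_feat.append(same['Label'].mean() if len(same) else 0.0)   # sequential label-encoding
    count_feat.append(len(same)/i if i else 0.0)                    # sequential count-encoding
toy['C1_label_enc'] = label_feat
toy['C1_count_enc'] = count_feat
print(toy)
```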
```
label_col = 'Label'
num_encoder = lgb_utils.NumEncoder(cate_cols, nume_cols, label_col)
train_x, train_y = num_encoder.fit_transform(train_data)
valid_x, valid_y = num_encoder.transform(valid_data)
test_x, test_y = num_encoder.transform(test_data)
del num_encoder
print('Train Data Shape: X: {trn_x_shape}; Y: {trn_y_shape}.\nValid Data Shape: X: {vld_x_shape}; Y: {vld_y_shape}.\nTest Data Shape: X: {tst_x_shape}; Y: {tst_y_shape}.\n'
.format(trn_x_shape=train_x.shape,
trn_y_shape=train_y.shape,
vld_x_shape=valid_x.shape,
vld_y_shape=valid_y.shape,
tst_x_shape=test_x.shape,
tst_y_shape=test_y.shape,))
```
### Training and Evaluation
```
lgb_train = lgb.Dataset(train_x, train_y.reshape(-1), params=params)
lgb_valid = lgb.Dataset(valid_x, valid_y.reshape(-1), reference=lgb_train)
lgb_model = lgb.train(params,
lgb_train,
num_boost_round=NUM_OF_TREES,
early_stopping_rounds=EARLY_STOPPING_ROUNDS,
valid_sets=lgb_valid)
test_preds = lgb_model.predict(test_x)
auc = roc_auc_score(np.asarray(test_y.reshape(-1)), np.asarray(test_preds))
logloss = log_loss(np.asarray(test_y.reshape(-1)), np.asarray(test_preds), eps=1e-12)
res_optim = {"auc": auc, "logloss": logloss}
print(res_optim)
pm.record("res_optim", res_optim)
```
## Model saving and loading
Now that we have finished the basic training and testing for LightGBM, let's try to save and reload the model, and then evaluate it again.
```
with TemporaryDirectory() as tmp:
save_file = os.path.join(tmp, r'finished.model')
lgb_model.save_model(save_file)
loaded_model = lgb.Booster(model_file=save_file)
# eval the performance again
test_preds = loaded_model.predict(test_x)
auc = roc_auc_score(np.asarray(test_y.reshape(-1)), np.asarray(test_preds))
logloss = log_loss(np.asarray(test_y.reshape(-1)), np.asarray(test_preds), eps=1e-12)
print({"auc": auc, "logloss": logloss})
```
## Additional Reading
\[1\] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. LightGBM: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems. 3146–3154.<br>
\[2\] The parameters of LightGBM: https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst <br>
\[3\] Anna Veronika Dorogush, Vasily Ershov, and Andrey Gulin. 2018. CatBoost: gradient boosting with categorical features support. arXiv preprint arXiv:1810.11363 (2018).<br>
\[4\] Scikit-learn. 2018. categorical_encoding. https://github.com/scikit-learn-contrib/categorical-encoding<br>
### Dataset
Let's load the dataset. We shall use the following files:
Features are in: "sido0_train.mat"
Labels are in: "sido0_train.targets"
```
from scipy.io import loadmat
import numpy as np
X = loadmat(r"/Users/rkiyer/Desktop/teaching/CS6301/jupyter/data/sido0_matlab/sido0_train.mat")
y = np.loadtxt(r"/Users/rkiyer/Desktop/teaching/CS6301/jupyter/data/sido0_matlab/sido0_train.targets")
# Statistics of the Dense Format of X
X = X['X'].todense()
print(X.shape)
```
### Logistic Regression Definition
Let's use the logistic regression definition we previously used
```
def LogisticLoss(w, X, y, lam):
# Computes the cost function for all the training samples
m = X.shape[0]
Xw = np.dot(X,w)
yT = y.reshape(-1,1)
yXw = np.multiply(yT,Xw)
f = np.sum(np.logaddexp(0,-yXw)) + 0.5*lam*np.sum(np.multiply(w,w))
gMul = 1/(1 + np.exp(yXw))
ymul = -1*np.multiply(yT, gMul)
g = np.dot(ymul.reshape(1,-1),X) + lam*w.reshape(1,-1)
g = g.reshape(-1,1)
return [f, g]
```
### Barzilai-Borwein step length
Let's invoke gradient descent with the Barzilai-Borwein (BB) step length
```
from numpy import linalg as LA
def gdBB(funObj,w,maxEvals,alpha,gamma,X,y,lam, verbosity, freq):
[f,g] = funObj(w,X,y,lam)
funEvals = 1
funVals = []
f_old = f
g_old = g
funVals.append(f)
numBackTrack = 0
while(1):
wp = w - alpha*g
[fp,gp] = funObj(wp,X,y,lam)
funVals.append(f)
funEvals = funEvals+1
backtrack = 0
if funEvals > 2:
g_diff = g - g_old
alpha = -alpha*np.dot(g_old.T, g_diff)[0,0]/np.dot(g_diff.T, g_diff)[0,0]
while fp > f - gamma*alpha*np.dot(g.T, g):
alpha = alpha*alpha*np.dot(g.T, g)[0,0]/(2*(fp + np.dot(g.T, g)[0,0]*alpha - f))
wp = w - alpha*g
[fp,gp] = funObj(wp,X,y,lam)
funVals.append(f)
funEvals = funEvals+1
numBackTrack = numBackTrack + 1
f_old = f
g_old = g
w = wp
f = fp
g = gp
optCond = LA.norm(g, np.inf)
if ((verbosity > 0) and (funEvals % freq == 0)):
print(funEvals,alpha,f,optCond)
if (optCond < 1e-2):
break
if (funEvals >= maxEvals):
break
return (funVals,numBackTrack)
[nSamples,nVars] = X.shape
w = np.zeros((nVars,1))
(funV1,numBackTrack) = gdBB(LogisticLoss,w,250,1,1e-4,X,y,1,1,10)
print(len(funV1))
print("Number of Backtrackings = " + str(numBackTrack))
```
### Conjugate Gradient Descent
Nonlinear Conjugate Gradient Descent
```
from numpy import linalg as LA
def gdCG(funObj,w,maxEvals,alpha,gamma,X,y,lam, verbosity, freq):
[f,g] = funObj(w,X,y,lam)
funEvals = 1
funVals = []
f_old = f
g_old = g
funVals.append(f)
numBackTrack = 0
d = g
while(1):
wp = w - alpha*d
[fp,gp] = funObj(wp,X,y,lam)
funVals.append(f)
funEvals = funEvals+1
backtrack = 0
if funEvals > 2:
alpha = min(1,2*(f_old - f)/np.dot(g.T, g)[0,0])
beta = np.dot(g.T, g)[0,0]/np.dot(g_old.T, g_old)[0,0]
d = g + beta*d
else:
d = g
while fp > f - gamma*alpha*np.dot(g.T, d)[0,0]:
alpha = alpha*alpha*np.dot(g.T, d)[0,0]/(2*(fp + np.dot(g.T, d)[0,0]*alpha - f))
wp = w - alpha*d
[fp,gp] = funObj(wp,X,y,lam)
funVals.append(f)
funEvals = funEvals+1
numBackTrack = numBackTrack + 1
f_old = f
g_old = g
w = wp
f = fp
g = gp
optCond = LA.norm(g, np.inf)
if ((verbosity > 0) and (funEvals % freq == 0)):
print(funEvals,alpha,f,optCond)
if (optCond < 1e-2):
break
if (funEvals >= maxEvals):
break
return (funVals,numBackTrack)
[nSamples,nVars] = X.shape
w = np.zeros((nVars,1))
(funV1,numBackTrack) = gdCG(LogisticLoss,w,250,1,1e-4,X,y,1,1,10)
print(len(funV1))
print("Number of Backtrackings = " + str(numBackTrack))
```
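To compare the two optimizers side by side, a small sketch like the one below can plot their objective values; it assumes you store the outputs of `gdBB` and `gdCG` under separate names (here `funV_bb` and `funV_cg`), since both cells above reuse the name `funV1`.
```
import matplotlib.pyplot as plt

w = np.zeros((nVars,1))
(funV_bb, _) = gdBB(LogisticLoss, w, 250, 1, 1e-4, X, y, 1, 0, 10)
w = np.zeros((nVars,1))
(funV_cg, _) = gdCG(LogisticLoss, w, 250, 1, 1e-4, X, y, 1, 0, 10)

plt.plot(funV_bb, label='Barzilai-Borwein')
plt.plot(funV_cg, label='Nonlinear CG')
plt.xlabel('function evaluations')
plt.ylabel('objective value')
plt.legend()
plt.show()
```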
# Singleton Networks
```
import qualreas as qr
import os
import copy
qr_path = os.path.join(os.getenv('PYPROJ'), 'qualreas')
alg_dir = os.path.join(qr_path, "Algebras")
```
## Make a Test Network
```
test1_net_dict = {
'name': 'Network Copy Test #1',
'algebra': 'Extended_Linear_Interval_Algebra',
'description': 'Testing/Developing network copy functionality',
'nodes': [
['U', ['ProperInterval', 'Point']],
['V', ['ProperInterval', 'Point']],
['W', ['ProperInterval']],
['X', ['Point']]
],
'edges': [
['U', 'V', 'B'],
['U', 'W', 'M'],
['W', 'V', 'O'],
['X', 'W', 'D']
]
}
test2_net_dict = {
'name': 'Network Copy Test #2',
'algebra': 'Extended_Linear_Interval_Algebra',
'description': 'Testing/Developing network copy functionality',
'nodes': [
['X', ['ProperInterval']],
['Y', ['ProperInterval']],
['Z', ['ProperInterval']]
],
'edges': [
['X', 'Y', 'B'],
['Y', 'Z', 'B']
]
}
test1_net = qr.Network(algebra_path=alg_dir, network_dict=test1_net_dict)
test2_net = qr.Network(algebra_path=alg_dir, network_dict=test2_net_dict)
test1_net.propagate()
test1_net.summary(show_all=False)
test2_net.propagate()
test2_net.summary(show_all=False)
```
## Test Changing Constraint on an Edge
Look at all the edge constraints
```
for eg in test1_net.edges:
print(test1_net.edges[eg[0], eg[1]]['constraint'])
```
Grab the Head (src) and Tail (tgt) of the 3rd edge, above.
```
src, tgt = list(test1_net.edges)[2]
test1_net.edges[src,tgt]['constraint']
```
Change the constraint and look at the result on the edge & its converse.
```
test1_net.set_constraint(src, tgt, test1_net.algebra.relset('D|M|FI'))
test1_net.edges[src,tgt]['constraint']
test1_net.edges[tgt,src]['constraint']
```
## Test Copy Network
```
test1_net_copy = test1_net.copy()
#test1_net_copy = qr.copy(test1_net)
test1_net_copy.summary()
test1_net_copy.propagate()
test1_net_copy.summary(show_all=False)
done = []
result = []
for eg in test1_net_copy.edges:
src = eg[0]; tgt = eg[1]
srcID = src.name; tgtID = tgt.name
if not (src, tgt) in done:
cons = test1_net_copy.edges[src, tgt]['constraint']
print(srcID, tgtID, cons)
if len(cons) > 1:
result.append((srcID, tgtID, cons))
done.append((tgt, src))
rels = []
for rel in result[0][2]:
rels.append(rel)
rels
foo = [1, 2, 3]
a = foo.pop()
a
foo
def _all_realizations_aux(in_work, result):
    if len(in_work) == 0:
        print("DONE")
        return result
    else:
        print("Get next net in work")
        next_net = in_work.pop()
        if finished(next_net):
            print("    This one's finished")
            result.append(next_net)
            # return the recursive result so the top-level call receives the full list
            return _all_realizations_aux(in_work, result)
        else:
            print("    Expanding net")
            return _all_realizations_aux(in_work + expand(next_net), result)
def expand(net):
expansion = []
for src, tgt in net.edges:
edge_constraint = net.edges[src, tgt]['constraint']
if len(edge_constraint) > 1:
print("--------")
print(f"Edge Constraint: {edge_constraint}")
for rel in edge_constraint:
print(f" Relation: {rel}")
net_copy = net.copy()
src_node, tgt_node, _ = net_copy.get_edge(src.name, tgt.name, return_names=False)
net_copy.set_constraint(src_node, tgt_node, net_copy.algebra.relset(rel))
expansion.append(net_copy)
print(f" Expansion: {expansion}")
break
return expansion
def finished(net):
"""Returns True if all constraints are singletons."""
answer = True
for src, tgt in net.edges:
edge_constraint = net.edges[src, tgt]['constraint']
if len(edge_constraint) > 1:
answer = False
break
return answer
x = _all_realizations_aux([test1_net_copy], list())
len(x)
foo = expand(test1_net)
foo
foo[0].summary(show_all=False)
foo[1].summary(show_all=False)
foo[2].summary(show_all=False)
finished(test1_net)
finished(test2_net)
```
# Two Market Makers - via Pontryagin
This notebook corresponds to section 4 (**Agent based models**) of "Market Based Mechanisms for Incentivising Exchange Liquidity Provision" available [here](https://vega.xyz/papers/liquidity.pdf). It models two market makers and solves the resulting game by an iterative scheme based on the Pontryagin optimality principle.
```
import math, sys
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from os import path
count = 0
from matplotlib.backends.backend_pdf import PdfPages
T = 0.4;
sigma0 = 3
sigma1 = 0.5
lambd = 0.1
r = 0.0
rRisk0 = 0.3
rRisk1 = 0.1
delta_a = 1e-4
fee_scaling = 0.1
# This is key: how does instantaneous trading volume react
# to market making stake
# and to fees. You could specify different beliefs for the two different agents.
def fee_volume_response(f):
f = np.maximum(f, np.zeros(np.size(f)))
f = np.minimum(f, np.ones(np.size(f)))
return 1.0/(f+0.01) - f
def stake_volume_response(S):
return 1.0 / (1+np.exp(-0.05*S+2)) - 1.0 / (1+np.exp(2))
# Check that the shape below is concave (i.e. there is a single maximum) we need
# this if we want the optimization procedure to converge
x_span = np.linspace(0,1, 1000)
y = fee_scaling * fee_volume_response(x_span) * x_span
print('Max %f' % max(y))
max_idx=np.argmax(y)
plt.xlabel('fee in %')
plt.ylabel('volume in %')
plt.title('Fee response times fee')
plt.plot(x_span,y)
# Check that the shape below is concave (i.e. there is a single maximum); we need
# this if we want the optimization procedure to converge.
# Of course you may be lucky and things will work even when it's not exactly concave...
x_span = np.linspace(0,200, 200)
y = stake_volume_response(x_span)
plt.xlabel('stake')
plt.ylabel('volume in %')
plt.title('Stake response')
plt.plot(x_span,y)
# As things are set up at the moment, the agents only differ in their belief about
# the maximum trading volume they'd expect to see
def trading_volume0(f,S):
N_max = 10000
return N_max * fee_volume_response(f) * stake_volume_response(S)
def trading_volume1(f,S):
N_max = 50000
return N_max * fee_volume_response(f) * stake_volume_response(S)
def running_gain0(t,f,S0,S1,a0):
frac = S0/(S0+S1)
stake = S0+S1
return np.exp(-r*t) * (frac * fee_scaling * f * trading_volume0(f,stake) - max(lambd * sigma0 * S0,0)) - max(np.exp(rRisk0*t)*S0, 0) \
- delta_a * a0*a0
def running_gain1(t,f,S0,S1,a1):
frac = S1/(S0+S1)
stake = S0+S1
return np.exp(-r*t) * (frac * fee_scaling * f * trading_volume1(f,stake) - max(lambd * sigma1 * S1,0)) - max(np.exp(rRisk1*t)*S1, 0) \
- delta_a * a1*a1
def running_gain_x_0(t,x,S_1, a0):
f = x[0]
S_0 = x[1]
return running_gain0(t,f,S_0,S_1, a0)
def running_gain_x_1(t,x,S_0, a1):
f = x[0]
S_1 = x[1]
return running_gain1(t,f,S_0,S_1, a1)
# Below we define the gradients (using finite difference)
# of the running gain specified above - this is just a technicality
# used in the subsequent optimization.
def grad_x_of_running_gain_0(t,x,S1,a):
delta = 1e-8
grad = np.zeros(2)
#print(x)
x_plus = x + np.array([delta, 0])
x_minus = x - np.array([delta, 0])
rg_plus = running_gain_x_0(t,x_plus,S1,a)
rg_minus = running_gain_x_0(t,x_minus,S1,a)
#print(x_plus)
grad[0] = (rg_plus - rg_minus)/(2*delta)
x_plus = x + np.array([0, delta])
x_minus = x - np.array([0, delta])
rg_plus = running_gain_x_0(t,x_plus,S1,a)
rg_minus = running_gain_x_0(t,x_minus,S1,a)
grad[1] = (rg_plus - rg_minus)/(2*delta)
return grad
def grad_x_of_running_gain_1(t,x,S0,a):
delta = 1e-8
grad = np.zeros(2)
x_plus = x + np.array([delta, 0])
x_minus = x - np.array([delta, 0])
rg_plus = running_gain_x_1(t,x_plus,S0,a)
rg_minus = running_gain_x_1(t,x_minus,S0,a)
grad[0] = (rg_plus - rg_minus)/(2*delta)
x_plus = x + np.array([0, delta])
x_minus = x - np.array([0, delta])
rg_plus = running_gain_x_1(t,x_plus,S0,a)
rg_minus = running_gain_x_1(t,x_minus,S0,a)
grad[1] = (rg_plus - rg_minus)/(2*delta)
return grad
# Initialization
L_S = 150;
L_f = 1;
N_T = 200; delta_t = T / (N_T-1);
N_S = 45;
N_f = 45;
t_span = np.linspace(0, T, N_T)
f_span = np.linspace(0, L_f, N_f)
S_span = np.linspace(0, L_S, N_S)
def grid_idx_from(S,S_span):
min_S = S_span[0]
N_S = np.size(S_span)
max_S = S_span[N_S-1]
delta_S = (max_S-min_S)/(N_S-1)
return max(min(int(round(S/delta_S)), N_S-1),0)
F_vals = np.zeros([np.size(f_span), np.size(S_span)])
f_times_V_vals = np.zeros([np.size(f_span), np.size(S_span)])
grad_F_vals = np.zeros([np.size(f_span), np.size(S_span), 2])
for f_idx in range(0, np.size(f_span)):
for S_idx in range(0, np.size(S_span)):
f = f_span[f_idx]
S = S_span[S_idx]
F_vals[f_idx,S_idx] = running_gain0(T, f, S, 10, 0)
f_times_V_vals[f_idx,S_idx] = f*trading_volume0(f,S)
grad_F_vals[f_idx,S_idx,:] = grad_x_of_running_gain_0(T, np.array([f, S]), 10, 0)
max_idx = np.unravel_index(np.argmax(F_vals, axis=None),F_vals.shape)
print(f_span[max_idx[0]])
print(S_span[max_idx[1]])
plotGridX, plotGridY = np.meshgrid(S_span, f_span)
fig = plt.figure()
#ax1 = fig.add_subplot(111,projection='3d')
ax1 = fig.gca(projection='3d')
surf = ax1.plot_surface(plotGridX, plotGridY, f_times_V_vals[:,:], cmap=cm.autumn, antialiased=True)
ax1.set_xlabel('stake')
ax1.set_ylabel('fee')
ax1.set_zlabel('V')
ax1.set_zlim(0, 40000)
ax1.view_init(30, 20)
ax1.set_title('Agent 1')
plt.savefig('response1.pdf')
gamma_f = -0.02
gamma_S = 5
m = 1
def drift_0(a0,a1):
b = np.zeros(2)
b[0] = gamma_f*(a0+a1)
b[1] = gamma_S*a0
return b
def drift_1(a0,a1):
b = np.zeros(2)
b[0] = gamma_f*(a0+a1)
b[1] = gamma_S*a1
return b
def grad_a0_H0(y,a0,a1):
val = gamma_f*y[0] + gamma_S*y[1] - 2*delta_a*a0
return val
def grad_a1_H1(y,a0,a1):
val = gamma_f*y[0] + gamma_S*y[1] - 2*delta_a*a1
return val
# Fix initial fee and stake of the two players
fee_init = 0.5 # has to be between 0 and 1
player0_stake = 250
player1_stake = 10
# Learning params:
# A higher value means faster convergence but less stability, i.e.
# if you see unstable output (explosions, negative fees, etc.) set this lower.
rho = 0.05
# Learning takes a long time; if it says "Failed" at the end, it may just mean that the actions are still updating slightly.
max_iter = 6000
#stopping criteria: once the updates are smaller than this in l-infinity then stop
max_error = 0.1
# fees are the 0th component, stake is the 1st component
# first player, index 0
actions0 = np.zeros([1,N_T+1])
x_vals0 = np.zeros([2,N_T+1])
x_vals0[:,0] = np.array([fee_init, player0_stake])
y_vals0 = np.zeros([2,N_T+1])
# second player, index 1
actions1 = np.zeros([1,N_T+1])
x_vals1 = np.zeros([2,N_T+1])
x_vals1[:,0] = np.array([fee_init, player1_stake])
y_vals1 = np.zeros([2,N_T+1])
def run_iterative_system(max_iter,max_error):
actions_old0 = np.zeros([1,N_T+1])
actions_old1 = np.zeros([1,N_T+1])
diff = 0; failed_to_converge=True
for iter_idx in range(0,max_iter):
# Run x0, x1 forwards
for i in range(0,N_T):
x_vals0[:,i+1] = x_vals0[:,i] + drift_0(actions0[0,i], actions1[0,i]) * delta_t
# second guy only updates the stake
# but the fee evolution is copied from first
x_vals1[0,i+1] = x_vals0[0,i+1]
x_vals1[1,i+1] = x_vals1[1,i] + drift_1(actions0[0,i], actions1[0,i])[1] * delta_t
# Run y0, y1 backwards
y_vals0[:,N_T] = np.zeros(2)
y_vals1[:,N_T] = np.zeros(2)
for i in reversed(range(0,N_T)):
S0 = x_vals0[1,i]
S1 = x_vals1[1,i]
grad_x_F_0 = grad_x_of_running_gain_0(t_span[i], x_vals0[:,i], S1, actions0[0,i])
grad_x_F_1 = grad_x_of_running_gain_1(t_span[i], x_vals1[:,i], S0, actions1[0,i])
y_vals0[:,i] = y_vals0[:,i+1] + grad_x_F_0 * delta_t
y_vals1[:,i] = y_vals1[:,i+1] + grad_x_F_1 * delta_t
for i in range(0,N_T):
# Do one gradient ascent step (we are maximizing)
actions0[0,i] = actions0[0,i] + rho*grad_a0_H0(y_vals0[:,i],actions0[0,i],actions1[0,i])
actions1[0,i] = actions1[0,i] + rho*grad_a1_H1(y_vals1[:,i],actions0[0,i],actions1[0,i])
diff0 = np.max(np.abs(actions0 - actions_old0))
diff1 = np.max(np.abs(actions1 - actions_old1))
if (diff0 < max_error) and (diff1 < max_error) :
print('Converged; iteration %d, diff0 is %f, diff1 is %f' % (iter_idx, diff0, diff1))
failed_to_converge = False
break
actions_old0 = np.copy(actions0)
actions_old1 = np.copy(actions1)
if failed_to_converge:
        print('Failed after %d iterations, diff0 is %f, diff1 is %f' % (max_iter, diff0, diff1))
%timeit -n1 -r1 run_iterative_system(max_iter, max_error)
plt.plot(t_span, 1000 * fee_scaling * x_vals0[0,0:N_T].T,label='f0 in 10 x %')
plt.plot(t_span, 1000 * fee_scaling * x_vals1[0,0:N_T].T,color='green',label='f1 in 10 x %')
plt.xlabel('time')
plt.plot(t_span, x_vals0[1,0:N_T].T,color='red',label='stake 0')
plt.plot(t_span, x_vals1[1,0:N_T].T,color='pink',label='stake 1')
plt.title('State evolution - fees and stake')
plt.xlabel('time')
plt.ylabel('level')
plt.legend()
plt.savefig('state.pdf')
fig = plt.figure()
plt.plot(t_span, actions0[0,0:N_T].T,label='a - 0')
plt.plot(t_span, actions1[0,0:N_T].T, color='green',label='a - 1')
plt.title('Actions evolution')
plt.xlabel('time')
plt.ylabel('actions fees')
plt.xlabel('time')
plt.ylabel('level')
plt.legend()
plt.savefig('actions.pdf')
print('Minimum fee %.2f%%. Final fee %.2f%%.' % (fee_scaling * 100*min(x_vals1[0,0:N_T]),fee_scaling * 100*x_vals1[0,N_T-1]))
print('Minimum stake %.0f. Maximum stake %.0f. Final stake %.0f.' % (min(x_vals0[1,0:N_T]+x_vals1[1,0:N_T]),max(x_vals0[1,0:N_T]+x_vals1[1,0:N_T]),x_vals0[1,N_T-1]+x_vals1[1,N_T-1]))
# Adjoint process plot: this is a 'dummy' process used in the optimization
# and you can ignore it if all goes well
fig = plt.figure()
plt.plot(t_span, 0.1*y_vals0[0,0:N_T].T, label='adj. fees 0')
plt.plot(t_span, 0.1*y_vals1[0,0:N_T].T, color='green', label='adj. fees 1')
plt.xlabel('time')
plt.plot(t_span, y_vals0[1,0:N_T].T, color = 'red', label='adj. stake 0')
plt.plot(t_span, y_vals1[1,0:N_T].T, color = 'pink', label='adj. stake 1')
plt.title('Adjoint evolution - fees and stake')
plt.xlabel('time')
plt.legend()
```
# Evaluate the Performance of MPNN models
Get all of the models, regardless of how we trained them, and evaluate their performance
```
%matplotlib inline
from matplotlib import pyplot as plt
from datetime import datetime
from sklearn import metrics
from tqdm import tqdm
from glob import glob
import pandas as pd
import numpy as np
import json
import os
```
## Find the Models and Summarize Them
Each model's subdirectory contains its trained weights (`best_model.h5`), run configuration, and outputs; we locate the models by searching for their `test_predictions.csv` files.
```
models = glob(os.path.join('**', 'test_predictions.csv'), recursive=True)
print(f'Found {len(models)} models')
def generate_summary(path):
"""Generate the summary of a model, given path to its output
Args:
        path (str): Path to the model's `test_predictions.csv` output file
Returns:
(dict) Model information
"""
# Store the directory first
dir_name = os.path.dirname(path)
output = {'path': dir_name}
# Get the host and run parameters
for f in ['host_info.json', 'run_params.json']:
with open(os.path.join(dir_name, f)) as fp:
output.update(json.load(fp))
# Compute the number of nodes
output['n_nodes'] = output['total_ranks'] // output['ranks_per_node'] \
if 'total_ranks' in output else 1
# Convert the start time to a datetime
output['start_time'] = datetime.fromisoformat(output['start_time'])
    # Get the log information
log_file = os.path.join(dir_name, 'log.csv')
log = pd.read_csv(log_file)
output['completed_epochs'] = len(log)
output['val_loss'] = log['val_loss'].min()
output['loss'] = log['loss'].min()
output['epoch_time'] = np.percentile(log['epoch_time'], 50)
output['total_train_time'] = log['epoch_time'].sum()
output['total_node_hours'] = output['total_train_time'] * output['n_nodes']
# Compute performance on hold-out set
results = pd.read_csv(os.path.join(output['path'], 'test_predictions.csv'))
for m in ['r2_score', 'mean_squared_error', 'mean_absolute_error', 'median_absolute_error']:
v = getattr(metrics, m)(results['y_true'], results['y_pred'])
output[m] = v
return output
model_info = pd.DataFrame([generate_summary(m) for m in models])
print(f'Found {len(model_info)} models')
```
## Print out Best Performer
We are going to pick the one that has the best performance on the test set
### Coarse Network
See how we did on the "node per water" network
```
model = model_info.query('network_choice=="coarse"').sort_values('mean_absolute_error').iloc[0]
print(f'Model being evaluated: {model["path"]}')
model[['path', 'network_choice', 'activation', 'message_steps', 'dropout', 'features', 'batch_size']]
model[['loss', 'val_loss', 'mean_squared_error']]
```
Plot the logs
```
log = pd.read_csv(os.path.join(model['path'], 'log.csv'))
fig, ax = plt.subplots(figsize=(3.5, 2.5))
ax.semilogy(log['epoch'], log['loss'], label='Train')
ax.semilogy(log['epoch'], log['val_loss'], label='Validation')
ax.legend()
ax.set_xlabel('Epoch')
ax.set_ylabel('Loss')
```
*Finding*: Huge variance in validation loss is indicative of overfitting
Plot the performance on the test set
```
results = pd.read_csv(os.path.join(model['path'], 'test_predictions.csv'))
for m in ['r2_score', 'mean_squared_error', 'mean_absolute_error']:
v = getattr(metrics, m)(results['y_true'], results['y_pred'])
print(f'{m}: {v: .2f}')
```
Plot the true vs predicted
```
fig, ax = plt.subplots()
ax.scatter(results['y_true'], results['y_pred'], s=0.5, alpha=0.2)
ax.plot(ax.get_xlim(), ax.get_ylim(), 'k--')
ax.set_xlabel('$E$, True')
ax.set_ylabel('$E$, ML')
fig.set_size_inches(3.5, 3.5)
```
Plot only the largest cluster size
```
subset = results.query(f'n_waters == {results["n_waters"].max()}')
print(f'Scores for the {len(subset)} largest molecules with {results["n_waters"].max()} waters')
for m in ['r2_score', 'mean_squared_error', 'mean_absolute_error']:
v = getattr(metrics, m)(subset['y_true'], subset['y_pred'])
print(f'{m}: {v: .2f}')
fig, ax = plt.subplots()
errors = subset['y_pred'] - subset['y_true']
bins = np.linspace(-10, 10, 256)
ax.hist(errors, bins=bins, density=False)
ax.set_xlabel('Error (kcal/mol)')
ax.set_ylabel('Frequency')
fig.set_size_inches(3.5, 2)
fig, ax = plt.subplots(figsize=(3.5, 3.5))
ax.scatter(subset['y_true'], subset['y_pred'], s=0.5, alpha=0.1)
ax.set_ylim(-340, -305)
ax.set_xlim(ax.get_ylim())
ax.set_ylim(ax.get_xlim())
ax.plot(ax.get_xlim(), ax.get_xlim(), 'k--')
ax.set_xlabel('$E$ (kcal/mol), True')
ax.set_ylabel('$E$ (kcal/mol), ML')
fig.tight_layout()
```
### Atomic Network
See how we did for the "node per atom" network
```
model = model_info.query('network_choice=="atomic"').sort_values('mean_absolute_error').iloc[0]
print(f'Model being evaluated: {model["path"]}')
model[['path', 'network_choice', 'activation', 'message_steps', 'dropout', 'features', 'batch_size']]
model[['loss', 'val_loss', 'mean_squared_error']]
```
Plot the logs
```
log = pd.read_csv(os.path.join(model['path'], 'log.csv'))
fig, ax = plt.subplots()
ax.semilogy(log['epoch'], log['loss'], label='Train')
ax.semilogy(log['epoch'], log['val_loss'], label='Validation')
ax.legend()
ax.set_xlabel('Epoch')
ax.set_ylabel('Loss')
```
*Finding*: Huge variance in validation loss is indicative of overfitting
Plot the performance on the test set
```
results = pd.read_csv(os.path.join(model['path'], 'test_predictions.csv'))
for m in ['r2_score', 'mean_squared_error', 'mean_absolute_error']:
v = getattr(metrics, m)(results['y_true'], results['y_pred'])
print(f'{m}: {v: .2f}')
```
Plot the true vs predicted
```
fig, ax = plt.subplots(figsize=(3.5, 3.5))
ax.set_title('Performance on hold-out set')
ax.scatter(results['y_true'], results['y_pred'], s=0.5, alpha=0.2)
ax.plot(ax.get_xlim(), ax.get_ylim(), 'k--')
ax.set_xlabel('$E$, True')
ax.set_ylabel('$E$, ML')
fig.set_size_inches(3.5, 3.5)
```
Plot only the largest cluster size
```
subset = results.query(f'n_waters == {results["n_waters"].max()}')
print(f'Scores for the {len(subset)} largest molecules with {results["n_waters"].max()} waters')
for m in ['r2_score', 'mean_squared_error', 'mean_absolute_error']:
v = getattr(metrics, m)(subset['y_true'], subset['y_pred'])
print(f'{m}: {v: .2f}')
fig, ax = plt.subplots()
errors = subset['y_pred'] - subset['y_true']
bins = np.linspace(-10, 10, 256)
ax.hist(errors, bins=bins, density=False)
ax.set_xlabel('Error (kcal/mol)')
ax.set_ylabel('Frequency')
fig.set_size_inches(3.5, 2)
fig, ax = plt.subplots(figsize=(3.5, 3.5))
ax.set_title('Clusters with 30 waters')
ax.scatter(subset['y_true'], subset['y_pred'], s=0.5, alpha=0.1)
ax.set_ylim(-340, -305)
ax.set_xlim(ax.get_ylim())
ax.set_ylim(ax.get_xlim())
ax.plot(ax.get_xlim(), ax.get_xlim(), 'k--')
ax.set_xlabel('$E$ (kcal/mol), True')
ax.set_ylabel('$E$ (kcal/mol), ML')
fig.tight_layout()
```
Make a publication-ready figure
```
fig, axs = plt.subplots(1, 3, figsize=(6.5, 2.5))
# Predicted vs actual plots
n_waters = results["n_waters"].max()
subset = results.query(f'n_waters == {n_waters}')
for d, ax, title in zip([results, subset], axs,
['Full Dataset', '30-Water Clusters']):
ax.set_title(title)
ax.scatter(d['y_true'], d['y_pred'], s=0.7, alpha=0.2, edgecolor='none')
max_ = max(ax.get_xlim()[1], ax.get_ylim()[1])
min_ = min(ax.get_xlim()[0], ax.get_ylim()[0])
ax.set_xlim([min_, max_])
ax.set_ylim(ax.get_xlim())
ax.plot(ax.get_xlim(), ax.get_xlim(), 'k--')
ax.set_xlabel('$E$ (kcal/mol), True')
ax.set_ylabel('$E$ (kcal/mol), ML')
mae = metrics.mean_absolute_error(d['y_true'], d['y_pred'])
r2 = metrics.r2_score(d['y_true'], d['y_pred'])
ax.text(0.99, 0, f'MAE: {mae:.2f}\n$R^2$: {r2:.2f}',
ha='right', va='bottom', transform=ax.transAxes,
fontsize=10)
# Box and wisker plot
ax = axs[2]
error_stats = []
for s, subset in results.groupby('n_waters'):
error = np.abs(subset['y_pred'] - subset['y_true']) / s
error_stats.append({'size': s, 'mae': error.mean()})
error_stats = pd.DataFrame(error_stats)
ax.plot(error_stats['size'], error_stats['mae'], '--o', ms=3)
ax.set_xlabel('# Waters')
ax.set_ylabel('MAE (kcal/mol/water)')
# Add figure labels
for ax, l in zip(axs[:2], ['a', 'b']):
ax.text(0.02, 0.9, f'({l})', transform=ax.transAxes)
axs[2].text(0.82, 0.9, '(c)', transform=axs[2].transAxes)
fig.tight_layout()
fig.savefig(os.path.join('figures', 'mpnn-performance.png'), dpi=320)
```
## Make the Box Plot
To match Jenna's
```
results['abs_error_per_water'] = np.abs(results['y_true'] - results['y_pred']) / results['n_waters']
def make_box_plot(df, metric='abs_error_per_water'):
boxplot = df.query('n_waters >= 10 and n_waters <= 30').boxplot(metric, 'n_waters', grid=False, fontsize=20, figsize=(12,6), return_type='both')
plt.ylim(-0.01,0.7)
plt.ylabel('Absolute Error\n(kcal/mol/water)', fontsize=22, fontweight='bold', labelpad=15)
plt.xlabel('Cluster Size', fontsize=22, fontweight='bold', labelpad=15)
plt.xticks(range(1,23,2), ['10','12','14','16','18','20','22','24','26','28','30'])
plt.xlim(0, 22)
plt.suptitle('')
plt.title('')
plt.tight_layout()
plt.savefig('figures/mpnn_boxplot-horz.png',dpi=600)
make_box_plot(results)
```
## Evaluate Hyperparameter Sweeps
We did some manual hyperparameter tuning for the atomic model
### Batch Sizes
Evaluate different batch sizes to get a tradeoff between accuracy and using the full GPU
```
base_query = ('epochs==32 and shuffle_buffer_size==2097152 and activation=="sigmoid" '
'and message_steps==4 and network_choice=="atomic" and dropout==0 and features==64')
model_info.query(base_query).sort_values('val_loss')[['batch_size', 'loss', 'val_loss', 'mean_squared_error', 'epoch_time']]
```
*Finding*: We get decent accuracy with a batch size of 1024 and still use 90% of the GPU
### Activation Function
We evaluated different activation functions for the message steps
```
base_query = ('batch_size==1024 and epochs==32 and shuffle_buffer_size==2097152 '
'and message_steps==4 and network_choice=="atomic" and dropout==0 and features==64')
model_info.query(base_query).sort_values('mean_squared_error')[['activation', 'loss', 'val_loss', 'mean_squared_error', 'epoch_time']]
```
*Finding*: We should go with the softplus. Fastest and most accurate
### Number of Message Passing Layers
We compared increasing the number of message passing layers
```
base_query = ('hostname=="lambda3" and shuffle_buffer_size==2097152 and batch_size==1024 and activation=="softplus" and epochs==32 '
'and network_choice=="atomic"')
model_info.query(base_query).sort_values('message_steps')[['network_choice', 'message_steps',
'loss', 'val_loss', 'mean_squared_error',
'epoch_time']]
fig, ax = plt.subplots()
for label, subset in model_info.query(base_query).sort_values('message_steps').groupby('network_choice'):
ax.plot(subset['message_steps'], subset['mean_absolute_error'], '-o', label=label)
ax.set_xscale('log', base=2)
ax.set_xlabel('Message Steps')
ax.set_ylabel('Mean Absolute Error')
ax.legend()
```
*Finding*: We need many message passing layers, which can get expensive
(tune-mnist-keras)=
# Using Keras & TensorFlow with Tune
```{image} /images/tf_keras_logo.jpeg
:align: center
:alt: Keras & TensorFlow Logo
:height: 120px
:target: https://www.keras.io
```
```{contents}
:backlinks: none
:local: true
```
## Example
```
import argparse
import os
from filelock import FileLock
from tensorflow.keras.datasets import mnist
import ray
from ray import tune
from ray.tune.schedulers import AsyncHyperBandScheduler
from ray.tune.integration.keras import TuneReportCallback
def train_mnist(config):
# https://github.com/tensorflow/tensorflow/issues/32159
import tensorflow as tf
batch_size = 128
num_classes = 10
epochs = 12
with FileLock(os.path.expanduser("~/.data.lock")):
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential(
[
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(config["hidden"], activation="relu"),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(num_classes, activation="softmax"),
]
)
model.compile(
loss="sparse_categorical_crossentropy",
        optimizer=tf.keras.optimizers.SGD(learning_rate=config["lr"], momentum=config["momentum"]),
metrics=["accuracy"],
)
model.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=epochs,
verbose=0,
validation_data=(x_test, y_test),
callbacks=[TuneReportCallback({"mean_accuracy": "accuracy"})],
)
def tune_mnist(num_training_iterations):
sched = AsyncHyperBandScheduler(
time_attr="training_iteration", max_t=400, grace_period=20
)
analysis = tune.run(
train_mnist,
name="exp",
scheduler=sched,
metric="mean_accuracy",
mode="max",
stop={"mean_accuracy": 0.99, "training_iteration": num_training_iterations},
num_samples=10,
resources_per_trial={"cpu": 2, "gpu": 0},
config={
"threads": 2,
"lr": tune.uniform(0.001, 0.1),
"momentum": tune.uniform(0.1, 0.9),
"hidden": tune.randint(32, 512),
},
)
print("Best hyperparameters found were: ", analysis.best_config)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--smoke-test", action="store_true", help="Finish quickly for testing"
)
parser.add_argument(
"--server-address",
type=str,
default=None,
required=False,
help="The address of server to connect to if using " "Ray Client.",
)
args, _ = parser.parse_known_args()
if args.smoke_test:
ray.init(num_cpus=4)
elif args.server_address:
ray.init(f"ray://{args.server_address}")
tune_mnist(num_training_iterations=5 if args.smoke_test else 300)
```
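If you want to try the example from a notebook or REPL instead of running it as a script, a minimal smoke-test style invocation of the functions defined above might look like the following (the `num_cpus` value is just a small-machine assumption):

```
# Minimal interactive smoke test of the example above
import ray

ray.init(num_cpus=4)                    # small local Ray instance (assumed laptop-sized machine)
tune_mnist(num_training_iterations=5)   # short run, mirroring the --smoke-test path
ray.shutdown()
```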
## More Keras and TensorFlow Examples
- {doc}`/tune/examples/includes/pbt_memnn_example`: Example of training a Memory NN on bAbI with Keras using PBT.
- {doc}`/tune/examples/includes/tf_mnist_example`: Converts the Advanced TF2.0 MNIST example to use Tune
with the Trainable. This uses `tf.function`.
Original code from tensorflow: https://www.tensorflow.org/tutorials/quickstart/advanced
- {doc}`/tune/examples/includes/pbt_tune_cifar10_with_keras`:
A contributed example of tuning a Keras model on CIFAR10 with the PopulationBasedTraining scheduler.
```
# Useful for debugging
%load_ext autoreload
%autoreload 2
# Nicer plotting
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
matplotlib.rcParams['figure.figsize'] = (8,4)
```
# Distgen example
Similar to the simple example, but generating particles with Distgen
```
from distgen import Generator
YAML="""
n_particle: 10000
random_type: hammersley
start:
type: cathode
MTE:
value: 414
units: meV
total_charge:
value: 250
units: pC
r_dist:
n_sigma_cutoff: 1.5
sigma_xy:
value: 0.4
units: mm
type: radial_gaussian
t_dist:
type: superposition
dists:
d1:
type: gaussian
avg_t:
units: ps
value: -1
sigma_t:
units: ps
value: 1
d2:
type: gaussian
avg_t:
units: ps
value: 1
sigma_t:
units: ps
value: 1
"""
G = Generator(YAML)
# Tune the two dist separation
G['t_dist:dists:d1:avg_t:value'] = -1
G['t_dist:dists:d2:avg_t:value'] = 1
G.run()
GP = G.particles
GP.plot('t')
GP.plot('pz')
from impact import Impact
import matplotlib.pyplot as plt
import os
ifile = 'templates/lcls_injector/ImpactT.in'
os.path.exists(ifile)
# Make Impact object
I = Impact(ifile, initial_particles = G.particles, verbose=True)
# This will use the initial particles
I.write_initial_particles(update_header=True)
# Change some things
I.header['Nx'] = 16
I.header['Ny'] = 16
I.header['Nz'] = 16
I.header['Dt'] = 5e-13
# Turn Space Charge off
I.header['Bcurr'] = 0
# Other switches
I.timeout = 1000
# Switches for MPI
I.use_mpi=True
I.header['Nprow'] = 1
I.header['Npcol'] = 4
# Change stop location
I.stop = 1.5
#I.ele['stop_1']['s'] = I.ele['OTR2']['s']+.001
I.run()
I.input.keys()
I.output.keys()
I.output['stats'].keys()
I.output['slice_info'].keys()
```
# Particles
```
# Particles are automatically parsed in to openpmd-beamphysics ParticleGroup objects
I.output['particles']
PI = I.output['particles']['initial_particles']
PF = I.output['particles']['final_particles']
# Original particles
GP.plot('t', 'pz')
# Readback of initial particles from Impact-T.
PI.plot('t', 'pz')
# The initial time was shifted to account for this
I.header['Tini']
# Get the final particles, calculate some statistic
P = I.output['particles']['final_particles']
P['mean_energy']
# Show the units
P.units('mean_energy')
P.plot('z', 'pz')
```
# Stats
```
# Impact's own calculated statistics can be retrieved
len(I.stat('norm_emit_x')), I.stat('norm_emit_x')[-1]
# Compare these.
key1 = 'mean_z'
key2 = 'sigma_x'
units1 = str(I.units(key1))
units2 = str(I.units(key2))
plt.xlabel(key1+f' ({units1})')
plt.ylabel(key2+f' ({units2})')
plt.plot(I.stat(key1), I.stat(key2))
plt.scatter(
[I.particles[name][key1] for name in I.particles],
[I.particles[name][key2] for name in I.particles], color='red')
```
# Archive, and restart from the middle
```
afile = I.archive()
I2 = Impact(verbose=False)
I2.load_archive(afile)
# Patch in these particles
I2.initial_particles = I2.particles['YAG02']
# Turn off cathode start
I2.header['Flagimg'] = 0
I2.configure()
# Run again
I2.use_mpi=True
I2.run()
# Compare these.
key1 = 'mean_z'
key2 = 'sigma_x'
units1 = str(I.units(key1))
units2 = str(I.units(key2))
plt.xlabel(key1+f' ({units1})')
plt.ylabel(key2+f' ({units2})')
plt.plot(I.stat(key1), I.stat(key2), color='black', label='original run')
plt.plot(I2.stat(key1), I2.stat(key2), color='red', label='restart run')
plt.scatter(
[I.particles[name][key1] for name in I.particles],
[I.particles[name][key2] for name in I.particles], color='black')
plt.scatter(
[I2.particles[name][key1] for name in I2.particles],
[I2.particles[name][key2] for name in I2.particles], color='red', marker='x')
plt.legend()
# Cleanup
os.remove(afile)
```
```
#https://pytorch.org/tutorials/beginner/pytorch_with_examples.html
```
# MNIST Dataset
### http://yann.lecun.com/exdb/mnist/
### The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.
```
import matplotlib.pyplot as plt
import h5py #pip install h5py -- https://www.h5py.org/
#load train
f = h5py.File('MNISTdata.hdf5', 'r')
train_x, train_y = f['x_train'][:], f['y_train'][:,0]
f.close()
print("train_x", train_x.shape, train_x.dtype)
#each image is stored in 784*1 numpy.ndarray, basically 28*28 image
type(train_x)
plt.imshow(train_x[0].reshape(28, 28)), train_y[0]
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.utils.data
import torch.optim as optim
import torch.backends.cudnn as cudnn
import numpy as np
import os
import os.path
import argparse
from torch.autograd import Variable
class FNN(nn.Module):#Fully connected Neural Network
"""FNN."""
def __init__(self):
"""FNN Builder."""
super(FNN, self).__init__()
self.fc_layer = nn.Sequential(
nn.Linear(784, 100),#100 is the number of hidden nodes in the hidden layer
nn.ReLU(inplace=True),
nn.Linear(100, 10)
)
#self.layer1 = nn.Linear(784, 100)
#self.layer2 = nn.ReLU(inplace=True)
#self.layer3 = nn.Linear(100, 10)
def forward(self, x):
"""Perform forward."""
x = self.fc_layer(x)
return x
#x = self.layer1(x)
#x = self.layer2(x)
#x = self.layer3(x)
#y = self.fc_layer(x)
#return y
# Parameter count of this NN: 784*100 + 100*10 weights (plus biases)
# (the input dimension is 784)
def calculate_accuracy(loader, is_gpu):
"""Calculate accuracy.
Args:
loader (torch.utils.data.DataLoader): training / test set loader
is_gpu (bool): whether to run on GPU
Returns:
tuple: (overall accuracy, class level accuracy)
"""
correct = 0
total = 0
for data in loader:
inputs, labels = data
if is_gpu:
inputs = inputs.cuda()
labels = labels.cuda()
inputs, labels = Variable(inputs), Variable(labels)
outputs = net(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
#correct += (predicted == labels).sum()
correct += (predicted == labels[:,0].T).sum()
return 100*correct.item()/float(total)
parser = argparse.ArgumentParser()
# hyperparameters settings
parser.add_argument('--lr', type=float, default=0.001, help='learning rate')
parser.add_argument('--wd', type=float, default=5e-4, help='weight decay')#lr/(c+wd)
parser.add_argument('--epochs', type=int, default=50,
help='number of epochs to train')
parser.add_argument('--batch_size_train', type=int,
default=16, help='training set input batch size')
parser.add_argument('--batch_size_test', type=int,
default=16, help='test set input batch size')
parser.add_argument('--is_gpu', type=bool, default=False,
help='whether training using GPU')
import sys
sys.argv=['']
del sys
# parse the arguments
opt = parser.parse_args()
f = h5py.File('MNISTdata.hdf5','r')
x_test_set=np.float32(f['x_test'][:])
y_test_set=np.int32(np.array(f['y_test'][:,0])).reshape(-1,1)
x_train_set=np.float32(f['x_train'][:])
y_train_set=np.int32(np.array(f['y_train'][:,0])).reshape(-1,1)
f.close()
#num_samples = y_train_set.shape[0]
#y_train_set = y_train_set.reshape(1, num_samples)
#y_train_set = np.eye(10)[y_train_set.astype('int32')]
#y_train_set = y_train_set.T.reshape(10, num_samples)
#num_samples = y_test_set.shape[0]
#y_test_set = y_test_set.reshape(1, num_samples)
#y_test_set = np.eye(10)[y_test_set.astype('int32')]
#y_test_set = y_test_set.T.reshape(10, num_samples)
trainset = torch.utils.data.TensorDataset(torch.Tensor(x_train_set), torch.Tensor(y_train_set)) # create your datset
trainloader = torch.utils.data.DataLoader(
trainset, batch_size=opt.batch_size_train, shuffle=True)
#mini-batch gradient, stochastic gradient descent - 1 sample
testset = torch.utils.data.TensorDataset(torch.Tensor(x_test_set), torch.Tensor(y_test_set)) # create your datset
testloader = torch.utils.data.DataLoader(
testset, batch_size=opt.batch_size_test, shuffle=False)
type(trainset), type(trainloader)
# create the FNN instance
net = FNN()
# For training on GPU, transfer net and data into the GPU
if opt.is_gpu:
net = net.cuda()
net = torch.nn.DataParallel(net, device_ids=range(torch.cuda.device_count()))
cudnn.benchmark = True
else:
print('Training on CPU')
# Loss function and optimizer
criterion = nn.CrossEntropyLoss()#N dim -> prob (softmax) -> CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=opt.lr, weight_decay=opt.wd)#a variant of SGD
for epoch in range(opt.epochs):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
#if training on GPU, wrap the data into the cuda
if opt.is_gpu:
inputs = inputs.cuda()
labels = labels.cuda()
# wrap them in Variable
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)#forward
loss = criterion(outputs, labels[:, 0].long())
loss.backward()#compute gradients
optimizer.step()#descent
# calculate loss
running_loss += loss.data.item()
# Normalizing the loss by the total number of train batches
running_loss /= len(trainloader)
# Calculate training/test set accuracy of the existing model
train_accuracy = calculate_accuracy(trainloader, opt.is_gpu)
test_accuracy = calculate_accuracy(testloader, opt.is_gpu)
print("Iteration: {0} | Loss: {1} | Training accuracy: {2}% | Test accuracy: {3}%".format(
epoch+1, running_loss, train_accuracy, test_accuracy))
loss, loss.requires_grad
outputs
labels[:, 0].long()
```
# Without Pytorch
```
import h5py
import numpy as np
import argparse
def sigmoid(x):
    """
    Sigmoid activation function.
    """
    return np.exp(x)/(1.0+np.exp(x))
def RELU(x):
    return np.maximum(x,0)
def reluDerivative(x):
return np.array([reluDerivativeSingleElement(xi) for xi in x])
def reluDerivativeSingleElement(xi):
if xi > 0:
return 1
elif xi <= 0:
return 0
def compute_loss(Y,V):
L_sum = np.sum(np.multiply(Y, np.log(V)))
m = Y.shape[1]
L = -(1./m) * L_sum
return L
def feed_forward(X, params):
tempt={}
tempt["Z"]=np.matmul(params["W"], X) + params["b1"]
tempt["H"]=sigmoid(tempt["Z"])
#tempt["H"]=RELU(tempt["Z"])
tempt["U"]=np.matmul(params["C"], tempt["H"]) + params["b2"]
tempt["V"]=np.exp(tempt["U"]) / np.sum(np.exp(tempt["U"]), axis=0)
return tempt
def back_propagate(X, Y, params, tempt, m_batch):
# X is m*n matrix
# Y is m*1 matrix
# tempt is the value in each neural cell
dU=tempt["V"]-Y # the loss of output layer
dC=(1. / m_batch) * np.matmul(dU, tempt["H"].T)
db2=(1. / m_batch) * np.sum(dU, axis=1, keepdims=True)
dH=np.matmul(params["C"].T, dU)
dZ = dH * sigmoid(tempt["Z"]) * (1 - sigmoid(tempt["Z"]))
#dZ=dH*reluDerivative(tempt["Z"])
dW = (1. / m_batch) * np.matmul(dZ, X.T)
db1 = (1. / m_batch) * np.sum(dZ, axis=1, keepdims=True)
grads={"dW":dW, "db1":db1, "dC":dC, "db2":db2}
return grads
#hyperparameters
epochs=10
batch_size=1
batchs=np.int32(60000/batch_size)
LR=0.01
dh=100#number of hidden nodes
#getting 60000 samples of training data and 10000 samples of testing data
f=h5py.File('MNISTdata.hdf5','r')
x_test_set=np.float32(f['x_test'][:])
y_test_set=np.int32(np.array(f['y_test'][:,0])).reshape(-1,1)
x_train_set=np.float32(f['x_train'][:])
y_train_set=np.int32(np.array(f['y_train'][:,0])).reshape(-1,1)
f.close()
X=np.vstack((x_train_set,x_test_set))
Y=np.vstack((y_train_set,y_test_set))
num_samples=Y.shape[0]
Y=Y.reshape(1,num_samples)
Y_new = np.eye(10)[Y.astype('int32')]
Y_new = Y_new.T.reshape(10, num_samples)
X_train, X_test=X[:60000].T, X[60000:].T
Y_train, Y_test=Y_new[:,:60000], Y_new[:,60000:]
#building fully connected neural network with one hidden layer
#initialization of parameters
params={"b1":np.zeros((dh,1)),
"W":np.random.randn(dh,784)*np.sqrt(1. / 784),
"b2":np.zeros((10,1)),
"C":np.random.randn(10,dh)*np.sqrt(1. / dh)}
#training the network
for num_epoches in range(epochs):
if (num_epoches > 5):
LR = 0.001
if (num_epoches > 10):
LR = 0.0001
if (num_epoches > 15):
LR = 0.00001
#shuffle the training data
shuffle_index=np.random.permutation(X_train.shape[1])
X_train= X_train[:, shuffle_index]
Y_train=Y_train[:, shuffle_index]
for num_batch in range(batchs):
left_index=num_batch*batch_size
right_index=min(left_index+batch_size,x_train_set.shape[0]-1)
m_batch=right_index-left_index
X=X_train[:,left_index:right_index]
Y=Y_train[:,left_index:right_index]
tempt=feed_forward(X, params)
grads = back_propagate(X, Y, params, tempt, 1)
#gradient descent
params["W"] = params["W"] - LR * grads["dW"]
params["b1"] = params["b1"] - LR * grads["db1"]
params["C"] = params["C"] - LR * grads["dC"]
params["b2"] = params["b2"] - LR * grads["db2"]
#compute loss on training data
tempt = feed_forward(X_train, params)
train_loss = compute_loss(Y_train, tempt["V"])
#compute loss on test set
tempt=feed_forward(X_test, params)
test_loss = compute_loss(Y_test, tempt["V"])
total_correct=0
for n in range(Y_test.shape[1]):
p = tempt["V"][:,n]
prediction = np.argmax(p)
if prediction == np.argmax(Y_test[:,n]):
total_correct+=1
accuracy = np.float32(total_correct) / (Y_test.shape[1])
#print(params)
print("Epoch {}: training loss = {}, test loss = {}, accuracy={}".format(
num_epoches + 1, train_loss, test_loss, accuracy))
```
# ML Model with JD Data
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm
from scipy import stats
#read/write data from/to local files
prefix_path = 'JD_data/'
# 'skus' table
skus = pd.read_csv(prefix_path + 'JD_sku_data.csv')
# 'users' table
users = pd.read_csv(prefix_path + 'JD_user_data.csv')
# 'clicks' table
clicks = pd.read_csv(prefix_path + 'JD_click_data.csv')
# 'orders' table
orders = pd.read_csv(prefix_path + 'JD_order_data.csv')
# 'delivery' table
delivery = pd.read_csv(prefix_path + 'JD_delivery_data.csv')
# 'inventory' table
inventory = pd.read_csv(prefix_path + 'JD_inventory_data.csv')
# 'network' table
network = pd.read_csv(prefix_path + 'JD_network_data.csv')
orders['order_date'] = pd.to_datetime(orders['order_date'])
orders['weekday'] = orders['order_date'].dt.dayofweek
df_temp = orders[['weekday','final_unit_price']]
#Add dummy variables
df_temp1 = pd.get_dummies(df_temp['weekday'], prefix='weekday')
cols_to_keep = ['final_unit_price']
df_temp = df_temp[cols_to_keep].join(df_temp1.iloc[:,0:])#not df_temp1.ix[:,0:], consider the gender case
df_temp['intercept'] = 1
train_cols_ = df_temp.columns[1:]#can write ['x1', 'x2'] manually
train_df = df_temp[train_cols_]
opt2 = parser.parse_args()
trainset_JD = torch.utils.data.TensorDataset(torch.Tensor(train_df.values), torch.Tensor(df_temp['final_unit_price'].values)) # create your datset
trainloader_JD = torch.utils.data.DataLoader(
trainset_JD, batch_size=opt2.batch_size_train, shuffle=True)
class FNN_JD(nn.Module):
"""FNN."""
def __init__(self):
"""FNN Builder."""
super(FNN_JD, self).__init__()
self.fc_layer = nn.Sequential(
nn.Linear(8, 4),
nn.ReLU(inplace=True),
nn.Linear(4, 1)
)
#self.fc_layer = nn.Sequential(
# nn.Linear(8, 4),
# nn.ReLU(inplace=True),
# nn.Linear(4, 2),
# nn.ReLU(inplace=True),
# nn.Linear(2, 1)
#)
def forward(self, x):
"""Perform forward."""
x = self.fc_layer(x)
return x
# create the FNN instance
net_JD = FNN_JD()
# For training on GPU, transfer net and data into the GPU
if opt2.is_gpu:
net_JD = net.cuda()
net_JD = torch.nn.DataParallel(net, device_ids=range(torch.cuda.device_count()))
cudnn.benchmark = True
else:
print('Training on CPU')
# Loss function and optimizer
criterion_JD = nn.MSELoss()
optimizer_JD = optim.Adam(net_JD.parameters(), lr=opt2.lr, weight_decay=opt2.wd)
train_df
for epoch in range(opt2.epochs):
running_loss = 0.0
for i, data in enumerate(trainloader_JD, 0):
# get the inputs
inputs, prices = data
#if training on GPU, wrap the data into the cuda
if opt2.is_gpu:
inputs = inputs.cuda()
prices = prices.cuda()
# wrap them in Variable
inputs, prices = Variable(inputs), Variable(prices)
# zero the parameter gradients
optimizer_JD.zero_grad()
# forward + backward + optimize
outputs = net_JD(inputs)
loss = criterion_JD(outputs[:,0], prices)
loss.backward()
optimizer_JD.step()
# calculate loss
running_loss += loss.data.item()
# Normalizing the loss by the total number of train batches
#running_loss /= len(trainloader)
# Calculate training/test set accuracy of the existing model
#train_accuracy = calculate_accuracy(trainloader, opt.is_gpu)
print("Iteration: {0} | Loss: {1}".format(
epoch+1, running_loss))
# Approximate sum of squared errors for the FNN:
# the final running_loss (a sum of per-batch mean squared errors) times the batch size
opt2.batch_size_train * 197859128
```
## Ways to improve accuracy:
### 1. hyperparameter tuning: different algorithm and learning rate - SGD, different loss function, batch size
### 2. different network structures, different activation layers (see the sketch below)
### 3. more features/inputs
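As a concrete illustration of points 1 and 2, here is a minimal sketch of an alternative setup: a slightly deeper network with a different activation, trained with plain SGD instead of Adam. The layer sizes, activation, and optimizer settings are illustrative assumptions, not tuned values.

```
# A hypothetical variant of FNN_JD: deeper, tanh activations, trained with plain SGD.
# Layer sizes and the learning rate below are illustrative assumptions, not tuned values.
class FNN_JD_v2(nn.Module):
    def __init__(self):
        super(FNN_JD_v2, self).__init__()
        self.fc_layer = nn.Sequential(
            nn.Linear(8, 16),   # 8 inputs: 7 weekday dummies + intercept
            nn.Tanh(),
            nn.Linear(16, 8),
            nn.Tanh(),
            nn.Linear(8, 1)
        )
    def forward(self, x):
        return self.fc_layer(x)

net_JD_v2 = FNN_JD_v2()
criterion_JD_v2 = nn.MSELoss()
optimizer_JD_v2 = optim.SGD(net_JD_v2.parameters(), lr=0.01, momentum=0.9)
# The training loop above can be reused with these objects in place of
# net_JD / criterion_JD / optimizer_JD.
```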
# Compare with Linear Regression
```
import statsmodels.api as sm
df_temp = orders[['weekday','final_unit_price']]
#Add dummy variables
df_temp1 = pd.get_dummies(df_temp['weekday'], prefix='weekday')
cols_to_keep = ['final_unit_price']
df_temp = df_temp[cols_to_keep].join(df_temp1.iloc[:,1:])#not df_temp1.ix[:,0:], consider the gender case
df_temp['intercept'] = 1
train_cols_ = df_temp.columns[1:]#can write ['x1', 'x2'] manually
train_df = df_temp[train_cols_]
linear_model = sm.OLS(df_temp['final_unit_price'], train_df)
res = linear_model.fit()
print(res.summary())
res.params
coef = res.params.values
x = train_df.values
y = df_temp['final_unit_price']
loss = 0
for i in range(len(y)):
predict = np.dot(coef, x[i])
loss += (predict - y[i])**2
loss
# Parameter counts: the FNN has 8*4 + 4*1 weights (plus biases),
# while this linear regression has 7 coefficients (6 weekday dummies + intercept)
```
<h1 align=center>The Cobweb Model</h1>
Presentation follows <a href="http://www.parisschoolofeconomics.eu/docs/guesnerie-roger/hommes94.pdf">Hommes, <em>JEBO 1994</em></a>. Let $p_t$ denote the <em>observed price</em> of goods and $p_t^e$ the <em>expected price</em> of goods in period $t$. Similarly, let $q_t^d$ denote the <em>quantity demanded</em> of all goods in period $t$ and $q_t^s$ the <em>quantity supplied</em> of all goods in period $t$.
\begin{align}
q_t^d =& D(p_t) \tag{1} \\
q_t^s =& S(p_t^e) \tag{2} \\
q_t^d =& q_t^s \tag{3} \\
p_t^e =& p_{t-1}^e + w\big(p_{t-1} - p_{t-1}^e\big) = (1 - w)p_{t-1}^e + w p_{t-1} \tag{4}
\end{align}
Equation 1 says that the quantity demanded of goods in period $t$ is some function of the <em>observed price</em> in period $t$. Equation 2, meanwhile, states that the quantity of goods supplied in period $t$ is a function of the <em>expected price</em> in period $t$. Equation 3 is a market clearing equilibrium condition. Finally, equation 4 is an adaptive expectation formation rule that specifies how goods producers form their expectations about the price of goods in period $t$ as a function of past prices.
Combine the equations as follows. Note that equation 3 implies that...
$$ D(p_t) = q_t^d = q_t^s = S(p_t^e) $$
...and therefore, assuming the demand function $D$ is invertible, we can write the observed price of goods in period $t$ as...
$$ p_t = D^{-1}\big(S(p_t^e)\big). \tag{5}$$
Substituting equation 5 into equation 4 we arrive at the following difference equation
$$ p_{t+1}^e = w D^{-1}\big(S(p_t^e)\big) + (1 - w)p_t^e. \tag{7}$$
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import functools
import ipywidgets
import matplotlib.pyplot as plt
import numpy as np
from scipy import optimize
import seaborn as sns
import cobweb
def observed_price(D_inverse, S, expected_price, **params):
"""The observed price of goods in a particular period."""
actual_price = D_inverse(S(expected_price, **params), **params)
return actual_price
def adaptive_expectations(D_inverse, S, expected_price, w, **params):
"""An adaptive expectations price forecasting rule."""
actual_price = observed_price(D_inverse, S, expected_price, **params)
price_forecast = w * actual_price + (1 - w) * expected_price
return price_forecast
```
<h2> Non-linear supply functions </h2>
When thinking about supply it helps to start with the following considerations...
<ol>
<li> ...when prices are low, the quantity supplied increases slowly because of fixed costs of production (think startup costs, etc).
<li> ...when prices are high, supply also increases slowly because of capacity constraints.
</ol>
These considerations motivate our focus on "S-shaped" supply functions...
$$ S_{\gamma}(p_t^e) = -\tan^{-1}(-\gamma \bar{p}) + \tan^{-1}\big(\gamma (p_t^e - \bar{p})\big). \tag{10}$$
The parameter $0 < \gamma < \infty$ controls the "steepness" of the supply function.
```
def quantity_supply(expected_price, gamma, p_bar, **params):
"""The quantity of goods supplied in period t given the epxected price."""
return -np.arctan(-gamma * p_bar) + np.arctan(gamma * (expected_price - p_bar))
```
<h3> Exploring supply shocks </h3>
Interactively change the value of $\gamma$ to see the impact on the shape of the supply function.
```
ipywidgets.interact?
interactive_quantity_supply_plot = ipywidgets.interact(cobweb.quantity_supply_plot,
S=ipywidgets.fixed(quantity_supply),
gamma=cobweb.gamma_float_slider,
p_bar=cobweb.p_bar_float_slider)
```
<h2> Special case: Linear demand functions </h2>
Suppose that the quantity demanded of goods is a simple, decreasing linear function of the observed price.
$$ q_t^d = D(p_t) = a - b p_t \implies p_t = D^{-1}(q_t^d) = \frac{a}{b} - \frac{1}{b}q_t^d \tag{11} $$
...where $-\infty < a < \infty$ and $0 < b < \infty$.
```
def quantity_demand(observed_price, a, b):
"""The quantity demand of goods in period t given the price."""
quantity = a - b * observed_price
return quantity
def inverse_demand(quantity_demand, a, b, **params):
"""The price of goods in period t given the quantity demanded."""
price = (a / b) - (1 / b) * quantity_demand
return price
```
<h3> Exploring demand shocks </h3>
Interactively change the values of $a$ and $b$ to get a feel for how they impact demand. Shocks to $a$ shift the entire demand curve; shocks to $b$ change the slope of the demand curve (higher $b$ implies greater sensitivity to price; lower $b$ implies less sensitivity to price).
```
interactive_quantity_demand_plot = ipywidgets.interact(cobweb.quantity_demand_plot,
D=ipywidgets.fixed(quantity_demand),
a=cobweb.a_float_slider,
b=cobweb.b_float_slider)
```
<h2> Supply and demand </h2>
Market clearing equilibrium price, $p^*$, satisfies...
$$ D(p_t) = S(p_t^e). $$
Really this is also an equilibrium in beliefs because we also require that $p_t = p_t^e$!
```
interactive_supply_demand_plot = ipywidgets.interact(cobweb.supply_demand_plot,
D=ipywidgets.fixed(quantity_demand),
S=ipywidgets.fixed(quantity_supply),
a=cobweb.a_float_slider,
b=cobweb.b_float_slider,
gamma=cobweb.gamma_float_slider,
p_bar=cobweb.p_bar_float_slider)
```
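The equilibrium price can also be pinned down numerically rather than read off the plot. Below is a minimal sketch using `optimize.brentq` from the `scipy.optimize` module imported above; the parameter values and the bracketing interval are assumptions chosen purely for illustration.

```
# Solve D(p) = S(p) numerically for the market clearing price p* (illustrative parameter values)
def excess_demand(p, a, b, gamma, p_bar):
    """Excess demand when observed and expected prices coincide (p = p^e)."""
    return quantity_demand(p, a, b) - quantity_supply(p, gamma, p_bar)

p_star = optimize.brentq(excess_demand, 1e-3, 20.0, args=(2.0, 0.25, 4.0, 4.0))
print("Market clearing price p* = {:.4f}".format(p_star))
```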
<h2> Analyzing dynamics of the model via simulation... </h2>
The model has no closed-form solution (i.e., we cannot solve for a function that describes $p_t^e$ as a function of time and model parameters). BUT, we can simulate equation 7 above to better understand the dynamics of the model (see the widget and the direct-iteration sketch below)...
We can simulate our model and plot time series for different parameter values. Questions for discussion...
<ol>
<li> Can you find a two-cycle? What does this mean?</li>
<li> Can you find higher cycles? Perhaps a four-cycle? Maybe even a three-cycle?</li>
<li> Do simulations with similar initial conditions converge or diverge over time? </li>
</ol>
Can we relate these things to other SFI MOOCS on non-linear dynamics and chaos? Surely yes!
```
model = functools.partial(adaptive_expectations, inverse_demand, quantity_supply)
interactive_time_series_plot = ipywidgets.interact(cobweb.time_series_plot,
F=ipywidgets.fixed(model),
X0=cobweb.initial_expected_price_slider,
T=cobweb.T_int_slider,
a=cobweb.a_float_slider,
b=cobweb.b_float_slider,
w=cobweb.w_float_slider,
gamma=cobweb.gamma_float_slider,
p_bar=cobweb.p_bar_float_slider)
```
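To look for cycles without the widgets, we can also iterate equation 7 directly using the `model` function defined above and inspect the tail of the orbit. The parameter values and initial condition below are assumptions chosen only to illustrate the mechanics; locating an actual two-cycle still requires hunting through parameter space with the sliders.

```
# Iterate the expectations map (equation 7) directly and inspect the tail of the orbit
params = {'a': 2.0, 'b': 0.25, 'w': 0.5, 'gamma': 4.0, 'p_bar': 4.0}  # illustrative assumptions

def simulate_expectations(F, X0, T, **params):
    """Iterate the price-expectation map F for T periods, starting from X0."""
    trajectory = np.empty(T)
    trajectory[0] = X0
    for t in range(1, T):
        trajectory[t] = F(trajectory[t-1], **params)
    return trajectory

expectations = simulate_expectations(model, X0=1.0, T=200, **params)
print("Last four expected prices:", np.round(expectations[-4:], 4))
# A (near) two-cycle would show up as the tail alternating between two values.
```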
<h2> Forecast errors </h2>
How do we measure forecast error? What does the distribution of forecast errors look like for different parameters? Could an agent learn to avoid chaos? Specifically, suppose an agent learned to tune the value of $w$ in order to minimize its mean forecast error. Would this eliminate chaotic dynamics?
```
interactive_forecast_error_plot = ipywidgets.interact(cobweb.forecast_error_plot,
D_inverse=ipywidgets.fixed(inverse_demand),
S=ipywidgets.fixed(quantity_supply),
F=ipywidgets.fixed(model),
X0=cobweb.initial_expected_price_slider,
T=cobweb.T_int_slider,
a=cobweb.a_float_slider,
b=cobweb.b_float_slider,
w=cobweb.w_float_slider,
gamma=cobweb.gamma_float_slider,
p_bar=cobweb.p_bar_float_slider)
```
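For a non-interactive check, the forecast error in period $t$ is simply the realized price minus the expected price, $e_t = p_t - p_t^e$. A minimal sketch, re-using the `expectations` path and `params` from the sketch above:

```
# Forecast errors along the simulated path from the previous sketch
realized = np.array([observed_price(inverse_demand, quantity_supply, pe, **params)
                     for pe in expectations])
forecast_errors = realized - expectations
print("Mean forecast error: {:.4f}".format(forecast_errors.mean()))
print("Mean absolute forecast error: {:.4f}".format(np.abs(forecast_errors).mean()))
```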
<h2> Other things of possible interest? </h2>
Impulse response functions?
Compare and contrast model predictions for rational expectations, naive expectations, and adaptive expectations. Depending on what Cars might have in mind, we could also add other expectation formation rules from his more recent work and have students analyze those...
# Band Ratios Conflations
This notebook steps through how band ratio measures are underdetermined.
By 'underdetermined', we mean that the same value, or same change in value between measures, can arise from different underlying causes.
This shows that band ratios are a non-specific measure.
As an example case, we use the theta-beta ratio.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from fooof import FOOOF
from fooof.sim import gen_power_spectrum
from fooof.plts.spectra import (plot_spectrum, plot_spectra,
plot_spectrum_shading, plot_spectra_shading)
# Import custom project code
import sys
sys.path.append('../bratios')
from ratios import calc_band_ratio
from paths import FIGS_PATHS as fp
# Settings
SAVE_FIG = False
PLOT_TITLES = True # Whether to plot titles on each axis
# Plot settings
shade_color = '#0365C0'
# Band Settings
theta_band = [4, 8]
beta_band = [20, 30]
# Set up index helpers
cf_ind = 0
pw_ind = 1
bw_ind = 2
# Simulated power spectra settings
freq_range = [1, 35]
freq_res = 0.1
nlv = 0
# Define default aperiodic values
ap_def = [0, 1]
# Define default periodic values
theta_def = [6, 0.4, 1]
alpha_def = [10, 0.5, 0.75]
beta_def = [25, 0.3, 1.5]
```
## Comparing Band Ratio Values
First, let's consider a hypothetical investigation comparing band ratio measures between two groups.
The typical interpretation of finding a difference between measured band ratios would be that there is a difference in the relative powers of the oscillation bands used in the calculation of the band ratio. That is to say, the change in ratio is assumed to come from a change in one or both of two things: the power of the low band and/or the power of the high band.
Here, we will show that there are actually many more ways in which one could measure this difference.
A numerically identical change in theta / beta ratio can be obtained from:
#### Periodic Changes
- a change in theta power
- a change in theta bandwidth
- a change in beta center frequency
- a change in beta power
- a change in beta bandwidth
#### Aperiodic Changes
- a change in aperiodic exponent
- with or without oscillations present
Note that the specific values in the simulations below have been tuned to create numerically identical changes in measured band ratio.
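Throughout, the ratio itself is just the average power in the low (theta) band divided by the average power in the high (beta) band. The project's `calc_band_ratio` (imported from the `bratios` code above) is what we actually use; the helper below is only an illustrative sketch of the idea, not the project implementation.

```
# Illustrative sketch of a band ratio calculation (calc_band_ratio from `bratios` is used in practice)
def sketch_band_ratio(freqs, powers, low_band, high_band):
    """Average power in the low band divided by average power in the high band."""
    low_mask = (freqs >= low_band[0]) & (freqs <= low_band[1])
    high_mask = (freqs >= high_band[0]) & (freqs <= high_band[1])
    return np.mean(powers[low_mask]) / np.mean(powers[high_mask])
```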
```
# Create a baseline PSD, with oscillations, to compare to
freqs, ps_base = gen_power_spectrum(freq_range, ap_def,
[theta_def, alpha_def, beta_def],
nlv, freq_res)
```
### Periodic Changes
```
## CF
# Change in center frequency - high band
beta_cf = beta_def.copy(); beta_cf[cf_ind] = 19.388
freqs, ps_be_cf = gen_power_spectrum(freq_range, ap_def,
[theta_def, alpha_def, beta_cf],
nlv, freq_res)
## PW
# Changes in oscillation power - low band
theta_pw = theta_def.copy(); theta_pw[pw_ind] = 0.5041
freqs, ps_th_pw = gen_power_spectrum(freq_range, ap_def,
[theta_pw, alpha_def, beta_def],
nlv, freq_res)
# Changes in oscillation power - high band
beta_pw = beta_def.copy(); beta_pw[pw_ind] = 0.1403
freqs, ps_be_pw = gen_power_spectrum(freq_range, ap_def,
[theta_def, alpha_def, beta_pw],
nlv, freq_res)
## BW
# Changes in oscillation bandwidth - low band
theta_bw = theta_def.copy(); theta_bw[bw_ind] = 1.61
freqs, ps_th_bw = gen_power_spectrum(freq_range, ap_def,
[theta_bw, alpha_def, beta_def],
nlv, freq_res)
# Changes in oscillation bandwidth - high band
beta_bw = beta_def.copy(); beta_bw[bw_ind] = 0.609
freqs, ps_be_bw = gen_power_spectrum(freq_range, ap_def,
[theta_def, alpha_def, beta_bw],
nlv, freq_res)
# Changes in other band - center frequency
alpha_cf = alpha_def.copy(); alpha_cf[cf_ind] = 8.212
freqs, ps_al_cf = gen_power_spectrum(freq_range, ap_def,
[theta_def, alpha_cf, beta_def],
nlv, freq_res)
# Changes in other band - bandwidth
alpha_bw = alpha_def.copy(); alpha_bw[bw_ind] = 1.8845
freqs, ps_al_bw = gen_power_spectrum(freq_range, ap_def,
[theta_def, alpha_bw, beta_def],
nlv, freq_res)
# Collect all the power spectra together
spectra_data = {'Theta Frequency' : None,
'Theta Power' : ps_th_pw,
'Theta Bandwidth' : ps_th_bw,
'Alpha Frequency' : ps_al_cf,
'Alpha Power' : None,
'Alpha Bandwidth' : ps_al_bw,
'Beta Frequency' : ps_be_cf,
'Beta Power' : ps_be_pw,
'Beta Bandwidth' : ps_be_bw}
# Calculate the theta / beta ratio of the baseline power spectrum
base_br = calc_band_ratio(freqs, ps_base, theta_band, beta_band)
# Calculate changes in theta / beta ratios
diffreqs = {}
for label, spectra in spectra_data.items():
if np.all(spectra):
comp_br = calc_band_ratio(freqs, spectra, theta_band, beta_band)
diffreqs[label] = base_br - comp_br
# Check the ratio of the base spectrum and of the last comparison spectrum computed above
print('TBR of base spectrum is: {:1.3f}'.format(base_br))
print('TBR of comp spectrum is: {:1.3f}'.format(comp_br))
# Check TBR difference measures from periodic changes
for label, diff in diffreqs.items():
print('TBR difference from {:20} is \t {:1.3f}'.format(label, diff))
# Create figure of periodic changes
title_settings = {'fontsize': 16, 'fontweight': 'bold'}
fig, ax = plt.subplots(3, 3, figsize=(15, 14))
for axis, (title, data) in zip(ax.flatten(), spectra_data.items()):
if not np.all(data): continue
plot_spectra_shading(freqs, [ps_base, data], [theta_band, beta_band],
shade_colors=shade_color,
log_freqs=False, log_powers=True, ax=axis)
if PLOT_TITLES:
axis.set_title(title, **title_settings)
axis.set_xlim([0, 35])
axis.set_ylim([-1.75, 0])
axis.xaxis.label.set_visible(False)
axis.yaxis.label.set_visible(False)
# Turn off empty axes
ax[0, 0].axis('off')
ax[1, 1].axis('off')
fig.subplots_adjust(hspace=.3)
fig.subplots_adjust(wspace=.3)
if SAVE_FIG: plt.savefig(fp.make_file_path(fp.demo, 'Underdetermined-Periodic', 'pdf'))
```
Each panel above plots two PSDs, where the blue curve is the same reference power spectrum plotted in all panels, and the orange is a unique comparison spectrum.
The difference between TBR from the blue and orange curve is the same (see cell above) across each panel.
This shows that multiple spectral parameters could change to arrive at identical differences in a ratio measure.
#### Periodic Notes
Note that for a given change (or direction of change) in theta / beta ratio (TBR), there is only one center frequency change that could do it.
This is true for the case, as simulated here, in which the 'baseline' spectrum has oscillations entirely within the band ranges. In this example, the change is a relative increase in 'theta', and there is no way to increase relative theta by changing the theta center frequency alone. This is a consequence of the choice of comparison spectrum; in another scenario, a change in theta CF could also change the measured ratio.
### Aperiodic Changes
The same change in ratio can also be driven from changes in aperiodic properties.
This can happen with or without oscillations even being present.
```
# Change in aperiodic exponent
ap_shift = [0.13, 1.1099]
freqs, ps_ap_ex = gen_power_spectrum(freq_range, ap_shift,
[theta_def, alpha_def, beta_def],
nlv, freq_res)
# Use a new base and transformation, without any oscillations
freqs, ps_new_base = gen_power_spectrum(freq_range, ap_def, [],
nlv, freq_res)
ap_shift = [0.13, 1.1417]
freqs, ps_new_apch = gen_power_spectrum(freq_range, ap_shift, [],
nlv, freq_res)
# Calculate the differences in ratio from baseline spectra
d_ap_osc = base_br - calc_band_ratio(freqs, ps_ap_ex, theta_band, beta_band)
d_ap_no_osc = calc_band_ratio(freqs, ps_new_base, theta_band, beta_band) - \
calc_band_ratio(freqs, ps_new_apch, theta_band, beta_band)
# Check TBR difference measures from aperiodic changes
base_text = 'TBR difference from the aperiodic component '
print(base_text + 'with oscillations is \t {:1.3f}'.format(d_ap_osc))
print(base_text + 'without oscillations is \t {:1.3f}'.format(d_ap_no_osc))
# Collect together components to plot
ap_bases = [ps_base, ps_new_base]
ap_diffs = [ps_ap_ex, ps_new_apch]
# Create aperiodic differences figure
fig, ax = plt.subplots(2, 1, figsize=(5, 9))
for ps_base, ps_diff, axis in zip(ap_bases, ap_diffs, ax.flatten()):
plot_spectra_shading(freqs, [ps_base, ps_diff], [theta_band, beta_band],
shade_colors=shade_color,
log_freqs=False, log_powers=True, ax=axis)
if PLOT_TITLES:
axis.set_title('Aperiodic Exponent', **title_settings)
# Plot Aesthetics
axis.set_xlim([0, 35])
axis.set_ylim([-1.75, 0])
axis.xaxis.label.set_visible(False)
axis.yaxis.label.set_visible(False)
fig.subplots_adjust(wspace=.3)
if SAVE_FIG: plt.savefig(fp.make_file_path(fp.demo, 'Underdetermined-Aperiodic', 'pdf'))
```
#### Conclusions
In this example, we have explored changes to measured band ratios by varying different spectral parameters.
Given an observed change in a band ratio measure, there is no way to tell what has actually changed.
Variations in multiple spectral parameters can lead to the exact same change in ratio measure.
There is no reason to think the change even reflects oscillatory activity, given that aperiodic shifts can drive this effect.
In this notebook, we simulated variations in one parameter at a time, but in practice, all of these changes could happen together.
In subsequent notebooks, we will further characterize these findings by simulating changes in each parameter, to estimate how impactful different parameters are to ratio measures, as well as by simulating concurrent changes in multiple parameters, to explore the interaction between changes.
## Same Ratio, Different Spectra
So far we have seen how multiple possible changes in power spectra can lead to the same measured difference in band ratio measures across power spectra.
What if we calculate band ratio measures and find that they are the same? Can we infer that the analyzed power spectra are in some ways equivalent?
Next, let's examine if and how different power spectra can have the same band ratio value.
```
# Create a collection of spectra with different properties, with the same measured ratio value
freqs, ps1 = gen_power_spectrum(freq_range, [0, 0.9059],
[theta_def, alpha_def, beta_def],
nlv, freq_res)
freqs, ps2 = gen_power_spectrum(freq_range, [0, 0.9059],
[[6, 0.5, 2], alpha_def, [25, 0.3544, 5]],
nlv, freq_res)
freqs, ps3 = gen_power_spectrum(freq_range, [0.25, 1.2029],
[[6, 0.10, 1], alpha_def, beta_def],
nlv, freq_res)
freqs, ps4 = gen_power_spectrum(freq_range, [0.25, 1.2029],
[theta_def, alpha_def, [25, 0.66444, 1.5]],
nlv, freq_res)
# Collect the generated spectra together
spectra_list = [ps1, ps2, ps3, ps4]
# Calculate the ratio value for each spectrum
for spectrum in spectra_list:
    print('Ratio value:\t {:1.3f}'.format(calc_band_ratio(freqs, spectrum, theta_band, beta_band)))
# Plot all the power spectra together
plot_spectra_shading(freqs, spectra_list, [theta_band, beta_band],
shade_colors=shade_color, linewidth=3,
log_freqs=False, log_powers=True)
if SAVE_FIG: plt.savefig(fp.make_file_path(fp.demo, 'EquivalentRatioSpectra', 'pdf'))
```
In the plot above, we can see four different power spectra.
However, each of these power spectra has the exact same measured theta / beta ratio value.
Thus we can conclude that measuring the same band ratio value for different power spectra should not be taken to imply that they are in any way equivalent.
| true |
code
| 0.645176 | null | null | null | null |
|
# "E is for Exploratory Data Analysis: Categorical Data"
> What is Exploratory Data Analysis (EDA), why is it done, and how do we do it in Python?
- toc: false
- badges: True
- comments: true
- categories: [E]
- hide: False
- image: images/e-is-for-eda-text/alphabet-close-up-communication-conceptual-278887.jpg
## _What is **Exploratory Data Analysis(EDA)**?_
While I answered these questions in the [last post](https://educatorsrlearners.github.io/an-a-z-of-machine-learning/e/2020/06/15/e-is-for-eda.html), since [all learning is repetition](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=224340), I'll do it again :grin:
EDA is an ethos for how we scrutinize data including, but not limited to:
- what we look for (i.e. shapes, trends, outliers)
- the approaches we employ (i.e. [five-number summary](https://www.statisticshowto.com/how-to-find-a-five-number-summary-in-statistics/), visualizations)
- and the decisions we reach{% fn 1 %}
## _Why is it done?_
Two main reasons:
1. If we collected the data ourselves, we need to know if our data suits our needs or if we need to collect more/different data.
2. If we didn't collect the data ourselves, we need to interrogate the data to answer the "5 W's"
- __What__ kind of data do we have (i.e. numeric, categorical)?
- __When__ was the data collected? There could be more recent data which we could collect which would better inform our model.
- __How__ much data do we have? Also, how was the data collected?
- __Why__ was the data collected? The original motivation could highlight potential areas of bias in the data.
- __Who__ collected the data?
Some of these questions can't necessarily be answered by looking at the data alone which is fine because _[nothing comes from nothing](http://parmenides.me/nothing-comes-from-nothing/)_; someone will know the answers so all we have to do is know where to look and whom to ask.
## _How do we do it in Python?_
As always, I'll follow the steps outlined in [_Hands-on Machine Learning with Scikit-Learn, Keras & TensorFlow_](https://github.com/ageron/handson-ml/blob/master/ml-project-checklist.md)
### Step 1: Frame the Problem
"Given a set of features, can we determine how old someone needs to be to read a book?"
### Step 2: Get the Data
We'll be using the same dataset as in the [previous post](https://educatorsrlearners.github.io/an-a-z-of-machine-learning/e/2020/06/15/e-is-for-eda.html).
### Step 3: Explore the Data to Gain Insights (i.e. EDA)
As always, import the essential libraries, then load the data.
```
#hide
import warnings; warnings.simplefilter('ignore')
#For data manipulation
import pandas as pd
import numpy as np
#For visualization
import seaborn as sns
import matplotlib.pyplot as plt
import missingno as msno
url = 'https://raw.githubusercontent.com/educatorsRlearners/book-maturity/master/csv/book_info_complete.csv'
df = pd.read_csv(url)
```
To review,
***How much data do we have?***
```
df.shape
```
- 23 features
- one target
- 5,816 observations
***What type of data do we have?***
```
df.info()
```
Looks like mostly categorical with some numeric.
Let's take a closer look.
```
df.head().T
```
Again, I collected the data so I know the target is `csm_rating` which is the minimum age Common Sense Media (CSM) says a reader should be for the given book.
Also, we have essentially three types of features:
- Numeric
- `par_rating` : Ratings of the book by parents
- `kids_rating` : Ratings of the book by children
- :dart:`csm_rating` : Ratings of the books by Common Sense Media
- `Number of pages` : Length of the book
- `Publisher's recommended age(s)`: Self explanatory
- Date
- `Publication date` : When the book was published
- `Last updated`: When the book's information was updated on the website
with the rest of the features being categorical and text; these features will be our focus for today.
#### Step 3.1 Housekeeping
Clean the feature names to make inspection easier. {% fn 4 %}
```
df.columns
df.columns = df.columns.str.strip().str.lower().str.replace(' ', '_').str.replace('(', '').str.replace(')', '')
df.columns
```
Much better.
Now let's subset the data frame so we only have the features of interest.
Given there are twice as many text features compared to non-text features, and the fact that I'm ~~lazy~~ efficient, I'll create a list of the features I ***don't*** want
```
numeric = ['par_rating', 'kids_rating', 'csm_rating', 'number_of_pages', "publisher's_recommended_ages", "publication_date", "last_updated"]
```
and use it to keep the features I ***do*** want.
```
df_strings = df.drop(df[numeric], axis=1)
```
_Voila!_
```
df_strings.head().T
```
Clearly, the non-numeric data falls into two groups:
- text
- `description`
- `plot`
- `csm_review`
- `need_to_know`
- categories
- `author`/`authors`
- `genre`
- `award`/`awards`
- etc.
Looking at the output above, so many questions come to mind:
1. How many missing values do we have?
2. How long are the descriptions?
3. What's the difference between `csm_review` and `need_to_know`?
4. Similarly, what's the difference between `description` and `plot`?
5. How many different authors do we have in the dataset?
6. How many types of books do we have?
and I'm sure more will arise once we start.
Where to start? Let's answer the easiest questions first :grin:
## Categories
#### ***How many missing values do we have?***
A cursory glance at the output above indicates there are potentially a ton of missing values; let's inspect this hunch visually.
```
msno.bar(df_strings, sort='descending');
```
Hunch confirmed: 10 of the 17 columns are missing values, with some being practically empty.
To get a precise count, we can use `sidetable`.{% fn 2 %}
```
import sidetable
df_strings.stb.missing(clip_0=True, style=True)
```
OK, we have lots of missing values and several columns which appear to be measuring similar features (e.g., authors, illustrators, publishers, awards), so let's inspect these features in pairs.
### `author` and `authors`
Every book has an author, even if the author is "[Anonymous](https://bookshop.org/a/9791/9781538718469)," so then why do we essentially have two columns for the same thing?
:thinking: `author` is for books with a single writer whereas `authors` is for books with multiple authors like [_Good Omens_](https://bookshop.org/a/9791/9780060853983).
Let's test that theory.
```
msno.matrix(df_strings.loc[:, ['author', 'authors']]);
```
*Bazinga!*
We have a perfect correlation between missing data for `author` and `authors`, but let's have a look just in case.
```
df_strings.loc[df_strings['author'].isna() & df_strings["authors"].notna(), ['title', 'author', 'authors']].head()
df_strings.loc[df_strings['author'].notna() & df_strings["authors"].isna(), ['title', 'author', 'authors']].head()
df_strings.loc[df_strings['author'].notna() & df_strings["authors"].notna(), ['title', 'author', 'authors']].head()
```
My curiosity is satiated.
Now the question is how to successfully merge the two columns?
We could replace the `NaN` in `author` with the:
- values in `authors`
- word `multiple`
- first author in `authors`
- more/most popular of the authors in `authors`
and I'm sure I could come up with even more if I thought about/Googled it but the key is to understand that no matter what we choose, it will have consequences when we build our model{% fn 3 %}.
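To make a couple of those options concrete, here is a minimal sketch (the column names `author_merged` and `is_multi_author` are my own, not the post's final choice):
```
# Option 1: keep `author` where present, otherwise fall back to the `authors` string
df_strings['author_merged'] = df_strings['author'].combine_first(df_strings['authors'])
# Option 2: simply flag books that list multiple authors
df_strings['is_multi_author'] = df_strings['authors'].notna()
```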
Next question which comes to mind is:
:thinking: ***How many different authors are there?***
```
df_strings.loc[:, 'author'].nunique()
```
Wow! Nearly half of our observations contain a unique name, meaning this feature has [high cardinality](https://www.kdnuggets.com/2016/08/include-high-cardinality-attributes-predictive-model.html).
:thinking: ***Which authors are most represented in the data set?***
Let's create a [frequency table](https://www.mathsteacher.com.au/year8/ch17_stat/03_freq/freq.htm) to find out.
```
author_counts = df_strings.loc[:, ["title", 'author']].groupby('author').count().reset_index()
author_counts.sort_values('title', ascending=False).head(10)
```
Given that I've scraped the data from a website focusing on children, teens, and young adults, the results above make sense; authors like [Dr. Seuss](https://bookshop.org/contributors/dr-seuss), [Eoin Colfer](https://bookshop.org/contributors/eoin-colfer-20dba4fd-138e-477e-bca5-75b9fa9bfe2f), and [Lemony Snicket](https://bookshop.org/books?keywords=lemony+snicket) are famous children's authors, whereas [Rick Riordan](https://bookshop.org/books?keywords=percy+jackson) and [Walter Dean Myers](https://bookshop.org/books?keywords=Walter+Dean+Myers) occupy the teen/young adult space and [Neil Gaiman](https://bookshop.org/contributors/neil-gaiman) writes across ages.
:thinking: ***How many authors are only represented once?***
That's easy to check.
```
from matplotlib.ticker import FuncFormatter
ax = author_counts['title'].value_counts(normalize=True).nlargest(5).plot.barh()
ax.invert_yaxis();
#Set the x-axis to a percentage
ax.xaxis.set_major_formatter(FuncFormatter(lambda x, _: '{:.0%}'.format(x)))
```
Wow! So approximately 60% of the authors have one title in our data set.
**Why does that matter?**
When it comes time to build our model, we'll need to either [label encode](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html), [one-hot encode](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html), or [hash](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.FeatureHasher.html) this feature, and whichever we decide to do will end up affecting the model profoundly due to the high [cardinality](https://pkghosh.wordpress.com/2017/10/09/combating-high-cardinality-features-in-supervised-machine-learning/) of this feature; however, we'll deal with all this another time :grin:.
### `illustrator` and `illustrators`
Missing values can be quite informative.
:thinking: What types of books typically have illustrators?
:bulb: Children's books!
Therefore, if a book's entries for both `illustrator` and `illustrators` are blank, that *probably* means the book doesn't have illustrations, which would mean it is *more likely* to be for older children.
Let's test this theory in the simplest way I can think of :smile:
```
#Has an illustrator
df.loc[df['illustrator'].notna() | df['illustrators'].notna(), ['csm_rating']].hist();
#Doesn't have an illustrator
df.loc[df['illustrators'].isna() & df["illustrator"].isna(), ['csm_rating']].hist();
```
:bulb: *Who* the illustrator is doesn't matter as much as *whether* there is an illustrator.
Looks like when I do some feature engineering I'll need to create a `has_illustrator` feature.
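A minimal sketch of that feature (the name `has_illustrator` is just a placeholder):
```
# True if either illustrator column has a value, False otherwise
df['has_illustrator'] = df['illustrator'].notna() | df['illustrators'].notna()
df['has_illustrator'].value_counts()
```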
### `book_type` and `genre`
These two features should be relatively straightforward but we'll have a quick look anyway.
`book_type` should be easy because, after a cursory inspection using `head` above, I'd expect to only see 'fiction' or 'non-fiction' but I'll double check.
```
ax_book_type = df_strings['book_type'].value_counts().plot.barh();
ax_book_type.invert_yaxis()
```
Good! The only values I have are the ones I expected but the ratio is highly skewed.
:thinking: What impact will this have on our model?
`genre` (e.g. fantasy, romance, sci-fi) is a *far* broader topic than `book_type`, but how many different genres are represented in the data set?
```
df_strings['genre'].nunique()
```
:roll_eyes: Great
What's the breakdown?
```
ax_genre = df_strings['genre'].value_counts().plot.barh();
ax_genre.invert_yaxis()
```
That's not super useful, but what if I took the 10 most common genres?
```
ax_genre_10 = df_strings['genre'].value_counts(normalize=True).nlargest(10).plot.barh();
ax_genre_10.invert_yaxis()
#Set the x axis to percentage
ax_genre_10.xaxis.set_major_formatter(FuncFormatter(lambda x, _: '{:.0%}'.format(x)))
```
Hmmm. Looks like approximately half the books fall into one of three genres.
:bulb: To reduce dimensionality, recode any genre outside of the top 10 as 'other'.
I'll save that idea for the feature engineering stage.
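As a sketch of what that recode might look like (the column name `genre_recoded` is my own):
```
# Keep the 10 most common genres, lump everything else into 'other'
top_10 = df_strings['genre'].value_counts().nlargest(10).index
df_strings['genre_recoded'] = df_strings['genre'].where(df_strings['genre'].isin(top_10), 'other')
df_strings['genre_recoded'].nunique()  # at most 11 distinct values
```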
### `award` and `awards`
Certain awards (e.g. [The Caldecott Medal](https://cloviscenter.libguides.com/children/caldecott#:~:text=The%20Medal%20shall%20be%20awarded,the%20illustrations%20be%20original%20work.)) are only awarded to children's books, whereas others, namely [The RITA Award](https://en.wikipedia.org/wiki/RITA_Award#Winners), are only for "mature" readers.
:thinking: Will knowing if a work is an award winner provide insight?
:thinking: Which awards are represented?
```
award_ax = df_strings['award'].value_counts().plot.barh()
award_ax.invert_yaxis();
awards_ax = df_strings['awards'].str.split(",").explode().str.strip().value_counts().plot.barh()
awards_ax.invert_yaxis()
```
Hmmmmm. The Caldecott Medal is for picture books, so that should mean the target readers are very young; however, we've already seen that "picture books" is the second most common value in `genre`, so being a Caldecott Medal winner won't add much. Also, to be eligible for the other awards, a book needs to be aimed at children 14 or below, so that doesn't really tell us much either.
Conclusion: drop this feature.
While I could keep going and analyze `publisher`, `publishers`, and `available_on`, I'd be using the exact same techniques as above so, instead, time to move on to...
## Text
### `description`, `plot`, `csm_review`, `need_to_know`
Now for some REALLY fun stuff!
:thinking: How long are each of these observations?
Trying to be as efficient as possible, I'll:
- make a list of the features I want
```
variables = ['description', 'plot', 'csm_review', 'need_to_know']
```
- write a function to:
- convert the text to lowercase
- tokenize the text and remove [stop words](https://en.wikipedia.org/wiki/Stop_words)
- identify the length of each feature
```
from nltk import word_tokenize
from nltk.corpus import stopwords
stop = stopwords.words('english')
def text_process(df, feature):
df.loc[:, feature+'_tokens'] = df.loc[:, feature].apply(str.lower)
df.loc[:, feature+'_tokens'] = df.loc[:, feature+'_tokens'].apply(lambda x: [item for item in x.split() if item not in stop])
df.loc[:, feature+'_len'] = df.loc[:, feature+'_tokens'].apply(len)
return df
```
- loop through the list of variables saving it to the data frame
```
for var in variables:
df_text = text_process(df_strings, var)
df_text.iloc[:, -8:].head()
```
:thinking: `description` seems to be significantly shorter than the other three.
Let's plot them to investigate.
```
len_columns = df_text.columns.str.endswith('len')
df_text.loc[:,len_columns].hist();
plt.tight_layout()
```
Yep - `description` is significantly shorter, but how do the other three compare?
```
columns = ['plot_len', 'need_to_know_len', 'csm_review_len']
df_text[columns].plot.box()
plt.xticks(rotation='vertical');
```
Hmmm. Lots of outliers for `csm_review` but, in general, the three features are of similar lengths.
### Next Steps
While I could create [word clouds](https://www.datacamp.com/community/tutorials/wordcloud-python) to visualize the most frequent words for each feature, or calculate the [sentiment](https://towardsdatascience.com/a-complete-exploratory-data-analysis-and-visualization-for-text-data-29fb1b96fb6a) of each feature, my stated goal is to identify how old someone should be to read a book and not whether a review is good or bad.
To that end, my curiosity about these features is satiated so I'm ready to move on to another chapter.
## Summary
- :ballot_box_with_check: numeric data
- :ballot_box_with_check: categorical data
- :black_square_button: images (book covers)
Two down; one to go!
Going forward, my key points to remember are:
### What type of categorical data do I have?
There is a huge difference between ordered (i.e. "bad", "good", "great") and truly nominal data that has no order/ranking like different genres; just because ***I*** prefer science fiction to fantasy, it doesn't mean it actually ***is*** superior.
### Are missing values really missing?
Several of the features had missing values which were, in fact, not truly missing; for example, the `award` and `awards` features were mostly blank for a very good reason: the book didn't win one of the four awards recognized by Common Sense Media.
In conclusion, both of the points above can be summarized simply as "be sure to get to know your data."
Happy coding!
#### Footnotes
{{ 'Adapted from [_Engineering Statistics Handbook_](https://www.itl.nist.gov/div898/handbook/eda/section1/eda11.htm)' | fndetail: 1 }}
{{ 'Be sure to check out this excellent [post](https://beta.deepnote.com/article/sidetable-pandas-methods-you-didnt-know-you-needed) by Jeff Hale for more examples on how to use this package' | fndetail: 2 }}
{{ 'See this post on [Smarter Ways to Encode Categorical Data](https://towardsdatascience.com/smarter-ways-to-encode-categorical-data-for-machine-learning-part-1-of-3-6dca2f71b159)' | fndetail: 3 }}
{{ 'Big *Thank You* to [Chaim Gluck](https://medium.com/@chaimgluck1/working-with-pandas-fixing-messy-column-names-42a54a6659cd) for providing this tip' | fndetail: 4 }}
```
from IPython.display import Image
import torch
import torch.nn as nn
import torch.nn.functional as F
import math, random
from scipy.optimize import linear_sum_assignment
from utils import NestedTensor, nested_tensor_from_tensor_list, MLP
Image(filename="figs/model.png", retina=True)
```
This notebook provides a Pytorch implementation for the sequential variant of PRTR (Pose Regression TRansformers) in [Pose Recognition with Cascade Transformers](https://arxiv.org/abs/2104.06976).
It is intended to provide researchers interested in sequential PRTR with a concrete understanding that only code can deliver. It can also be used as a starting point for end-to-end top-down pose estimation research.
```
class PRTR_sequential(nn.Module):
def __init__(self, backbone, transformer, transformer_kpt, level, x_res=10, y_res=10):
super().__init__()
self.backbone = backbone
self.transformer = transformer
hidden_dim = transformer.d_model
self.class_embed = nn.Linear(hidden_dim, 2)
self.bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3)
self.query_embed = nn.Embedding(100, hidden_dim)
self.input_proj = nn.Conv2d(backbone.num_channels, hidden_dim, kernel_size=1)
self.transformer_kpt = transformer_kpt
        x_interpolate = torch.linspace(-1.25, 1.25, x_res, requires_grad=False).unsqueeze(0) # [1, x_res], ANNOT (1)
y_interpolate = torch.linspace(-1.25, 1.25, y_res, requires_grad=False).unsqueeze(0) # [1, y_res]
self.register_buffer("x_interpolate", x_interpolate)
self.register_buffer("y_interpolate", y_interpolate)
self.x_res = x_res
self.y_res = y_res
self.level = level
mask = torch.zeros(1, y_res, x_res, requires_grad=False) # [1, y_res, x_res]
self.register_buffer("mask", mask)
self.build_pe()
```
Class `PRTR_sequential` needs the following arguments:
+ backbone: a customizable CNN backbone which returns a pyramid of feature maps with different spatial size
+ transformer: a customizable Transformer for person detection (1st Transformer)
+ transformer_kpt: a customizable Transformer for keypoint detection (2nd Transformer)
+ level: from which layers of the pyramid we will extract features
+ x_res: the width of the STN-cropped feature map fed to the 2nd Transformer
+ y_res: the height of the STN-cropped feature map fed to the 2nd Transformer
Some annotations:
1. For `x_interpolate` and `y_interpolate`, we use an extended field of view of 125% of the original bounding box to provide more information from the backbone to the 2nd Transformer.
```
def build_pe(self):
# fixed sine pe
not_mask = 1 - self.mask
y_embed = not_mask.cumsum(1, dtype=torch.float32)
x_embed = not_mask.cumsum(2, dtype=torch.float32)
eps = 1e-6; scale = 2 * math.pi # normalize?
y_embed = y_embed / (y_embed[:, -1:, :] + eps) * scale
x_embed = x_embed / (x_embed[:, :, -1:] + eps) * scale
num_pos_feats = 128; temperature = 10000
dim_t = torch.arange(num_pos_feats, dtype=torch.float32, device=self.mask.device)
dim_t = temperature ** (2 * (dim_t // 2) / num_pos_feats)
pos_x = x_embed[:, :, :, None] / dim_t
pos_y = y_embed[:, :, :, None] / dim_t
pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3)
pos_y = torch.stack((pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3)
pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
self.register_buffer("pe", pos)
# learnable pe
self.row_embed = nn.Embedding(num_pos_feats, self.x_res)
self.col_embed = nn.Embedding(num_pos_feats, self.y_res)
nn.init.uniform_(self.row_embed.weight)
nn.init.uniform_(self.col_embed.weight)
def get_leant_pe(self):
y_embed = self.col_embed.weight.unsqueeze(-1).expand(-1, -1, self.x_res)
x_embed = self.row_embed.weight.unsqueeze(1).expand(-1, self.y_res, -1)
embed = torch.cat([y_embed, x_embed], dim=0).unsqueeze(0)
return embed
PRTR_sequential.build_pe = build_pe
PRTR_sequential.get_leant_pe = get_leant_pe
```
Then we build the positional embedding for the 2nd Transformer, which combines both a fixed sinusoidal embedding and a learnt embedding.
For each person box cropped from the original image, we use the same positional embedding, irrespective of where the box is.
```
def forward(self, samples):
# the 1st Transformer, to detect person
features, pos = self.backbone(samples)
hs = self.transformer(self.input_proj(features[-1].tensors), features[-1].mask, self.query_embed.weight, pos[-1])[0][-1] # [B, person per image, f]
logits = self.class_embed(hs) # [B, person per image, 2]
bboxes = self.bbox_embed(hs).sigmoid() # [B, person per image, 4]
outputs = {'pred_logits': logits, 'pred_boxes': bboxes}
# some preperation for STN feature cropping
person_per_image = hs.size(1)
num_person = person_per_image * hs.size(0)
heights, widths = samples.get_shape().unbind(-1) # [B] * 2
rh = heights.repeat_interleave(person_per_image) # [person per image * B]
rw = widths.repeat_interleave(person_per_image) # [person per image * B]
srcs = [features[_].decompose()[0] for _ in self.level]
cx, cy, w, h = bboxes.flatten(end_dim=1).unbind(-1) # [person per image * B] * 4
cx, cy, w, h = cx * rw, cy * rh, w * rw, h * rh # ANNOT (1)
# STN cropping
y_grid = (h.unsqueeze(-1) @ self.y_interpolate + cy.unsqueeze(-1) * 2 - 1).unsqueeze(-1).unsqueeze(-1) # [person per image * B, y_res, 1, 1]
x_grid = (w.unsqueeze(-1) @ self.x_interpolate + cx.unsqueeze(-1) * 2 - 1).unsqueeze(-1).unsqueeze(1) # [person per image * B, 1, x_res, 1]
grid = torch.cat([x_grid.expand(-1, self.y_res, -1, -1), y_grid.expand(-1, -1, self.x_res, -1)], dim=-1)
cropped_feature = []
cropped_pos = []
for j, l in enumerate(self.level):
cropped_feature.append(F.grid_sample(srcs[j].expand(num_person, -1, -1, -1), grid, padding_mode="border")) # [person per image * B, C, y_res, x_res]
cropped_feature = torch.cat(cropped_feature, dim=1)
cropped_pos.append(self.pe.expand(num_person, -1, -1, -1))
cropped_pos.append(self.get_leant_pe().expand(num_person, -1, -1, -1))
cropped_pos = torch.cat(cropped_pos, dim=1)
mask = self.mask.bool().expand(num_person, -1, -1) # ANNOT (2)
# 2nd Transformer
    coord, logits = self.transformer_kpt(bboxes, cropped_feature, cropped_pos, mask) # [person per image * B, 17, 2]
    outputs["pred_kpt_coord"] = coord.reshape(hs.size(0), -1, self.transformer_kpt.num_queries, 2)
    outputs["pred_kpt_logits"] = logits.reshape(hs.size(0), -1, self.transformer_kpt.num_queries, self.transformer_kpt.num_kpts + 1)
return outputs
PRTR_sequential.forward = forward
```
`forward` method takes in a `NestedTensor` and returns a dictionary of predictions, some annotations:
1. Input `samples` and `features` are `NestedTensor`, which basically stacks a list of tensors of different shapes by their top-left corner and uses masks to denote valid positions. Thus when we need to crop person bounding boxes from the whole feature map, we need to scale boxes according to image size
2. we always gives unmasked image to the 2nd Transformer, becasue all the persons are cropped to the same resolution
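As a minimal illustration of annotation 1, assuming the local `utils` mirror DETR's reference `NestedTensor` implementation (where `True` in the mask marks padded pixels):
```
# torch and nested_tensor_from_tensor_list are already imported at the top of this notebook
imgs = [torch.rand(3, 200, 300), torch.rand(3, 240, 260)]  # two images of different sizes
samples = nested_tensor_from_tensor_list(imgs)             # batched by padding to the largest H and W
padded, mask = samples.decompose()
print(padded.shape)  # torch.Size([2, 3, 240, 300]) -- padded to the max size
print(mask.shape)    # torch.Size([2, 240, 300]); True marks padded (invalid) positions
```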
```
def infer(self, samples):
self.eval()
outputs = self(samples)
out_logits, out_coord = outputs['pred_kpt_logits'], outputs['pred_kpt_coord']
C_stacked = out_logits[..., 1:].transpose(2, 3).flatten(0, 1).detach().cpu().numpy() # [person per image * B, 17, num queries (for keypoint)]
out_coord = out_coord.flatten(0, 1)
coord_holder = []
for b, C in enumerate(C_stacked):
_, query_ind = linear_sum_assignment(-C)
coord_holder.append(out_coord[b, query_ind.tolist()])
matched_coord = torch.stack(coord_holder, dim=0).reshape(out_logits.size(0), out_logits.size(1), 17, -1)
return matched_coord # [B, num queries, num kpts, 2]
PRTR_sequential.infer = infer
```
`infer` takes the same input as `forward`, but instead of returning all keypoint queries for loss calculation, it leverages the Hungarian algorithm to select the 17 keypoints as the prediction.
The selection process can be thought of as a bipartite graph matching problem, graph constructed as below:
+ for each query in the 2nd Transformer, a node is made, creating set Q
+ for each keypoint type, a node is made, creating set K
+ sets Q and K are fully inter-connected; the edge weight between $Q_i$ and $K_j$ is the _unnormalized logit_ of query $i$ classified as keypoint type $j$
+ Q and K have no intra-connections
The Hungarian algorithm will find the matching between Q and K with the highest total edge weight, and the selected queries are returned as the prediction. A minimal example with only 3 queries and 2 keypoint types is shown below, followed by a small code sketch:

```
class DETR_kpts(nn.Module):
def __init__(self, transformer, num_kpts, num_queries, input_dim):
super().__init__()
self.num_kpts = num_kpts
self.num_queries = num_queries
hidden_dim = transformer.d_model
self.query_embed = nn.Embedding(num_queries, hidden_dim)
self.input_proj = nn.Conv2d(input_dim, hidden_dim, kernel_size=1)
self.transformer = transformer
self.coord_predictor = MLP(hidden_dim, hidden_dim, 2, num_layers=3)
self.class_predictor = nn.Linear(hidden_dim, num_kpts + 1)
def forward(self, bboxes, features, pos, mask):
src_proj = self.input_proj(features)
j_embed = self.transformer(src_proj, mask, self.query_embed.weight, pos)[0][-1] # [B, num queries, hidden dim]
j_coord_ = self.coord_predictor(j_embed).sigmoid()
x, y = j_coord_.unbind(-1) # [B, Q] * 2
x = (x * 1.25 - 0.625) * bboxes[:, 2].unsqueeze(-1) + bboxes[:, 0].unsqueeze(-1)
y = (y * 1.25 - 0.625) * bboxes[:, 3].unsqueeze(-1) + bboxes[:, 1].unsqueeze(-1)
x = x.clamp(0, 1)
y = y.clamp(0, 1)
j_coord = torch.stack([x, y], dim=-1)
        j_class = self.class_predictor(j_embed) # [B, J, c+1], logits
return j_coord, j_class
```
Class `DETR_kpts` is the 2nd Transformer in PRTR and needs the following arguments:
+ transformer: a customizable Transformer for keypoint detection (2nd Transformer)
+ num_kpts: number of keypoint annotations per person of this dataset, e.g., COCO has 17 keypoints
+ num_queries: query number, similar to DETR
+ input_dim: image feature dimension from 1st Transformer
Its `forward` takes in `bboxes`, because we need to recover per-person predictions to whole-image coordinates, plus `features`, `pos` and `mask` as Transformer inputs.
`forward` returns the predicted keypoint coordinates in [0, 1], relative to the whole image, and their probabilities of belonging to each keypoint class, e.g. nose, left shoulder.
# Simulate and Generate Empirical Distributions in Python
## Mini-Lab: Simulations, Empirical Distributions, Sampling
Welcome to your next mini-lab! Go ahead and run the following cell to get started. You can do that by clicking on the cell and then clicking `Run` on the top bar. You can also just press `Shift` + `Enter` to run the cell.
```
from datascience import *
import numpy as np
import random
import otter
grader = otter.Notebook("m6_l1_tests")
```
Let's continue our analysis of COVID-19 data with the same false negative and false positive values of 10% and 5%. For the first task, let's try and create a sample population with 10,000 people. Let's say that 20% of this population has COVID-19. Replace the `...` in the function below to create this sample population. The `create_population` function takes in an input `n` and returns a table with `n` rows. These rows can either have `positive` or `negative` as their value. These values indicate whether or not an individual has COVID-19.
For random number generation, feel free to look up the [NumPy documentation](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.random.html) or the [Python random documentation](https://docs.python.org/3.8/library/random.html).
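As a quick, standalone illustration of the hint (not the full solution): a draw from `random.random()` falls below 0.2 about 20% of the time, which is the kind of check you can use to decide whether a simulated person is positive.
```
# `random` is already imported above; draws below 0.2 happen roughly 20% of the time
draws = [random.random() < 0.2 for _ in range(100000)]
print(sum(draws) / len(draws))  # approximately 0.2
```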
```
def create_population(n):
test_results = ...
for ...:
random_num = ...
if ...:
disease_result = ...
else:
disease_result = ...
test_results = np.append(test_results, disease_result)
return Table().with_column("COVID-19", test_results)
covid_population = create_population(...)
covid_population.show(5)
# There is a chance that this test may fail even with a correct solution due to randomness!
# Run the above cell again and run the grader again if you think this is the case.
grader.check("q1")
```
Given this population, let's go ahead and randomly test 1000 members. Complete `test_population` below by replacing the `...` with functional code. This function takes in a `population`, which is a `datascience` table, and a number `n`, where `n` is the number of people that we are testing. Inside the function, we add a column to this table called `Test Results` which contains the test result for each person in the sample based on the false negative and false positive rates given earlier. There is another function called `test_individuals` that simplifies `test_population`. You will use `test_individuals` within `test_population`.
```
def test_population(population, n):
population = ...
test_results = population.apply(test_individuals, "COVID-19")
population = population.with_column(...)
return population
def test_individuals(individual):
random_num = ...
if individual == "positive":
if ...:
return ...
else:
return ...
else:
if ...:
return ...
else:
return ...
covid_sample = ...
covid_sample.show(5)
# There is a chance that this test may fail even with a correct solution due to randomness!
# Run the above cell again and run the grader again if you think this is the case.
grader.check("q2")
```
Now that we've simulated a population and sampled this population, let's take a look at our results. We'll pivot first by the `COVID-19` column and then by the `Test Results` column to look at how well our COVID-19 test does using "real-life" figures.
```
covid_sample.pivot("COVID-19", "Test Results")
```
You'll see that though our test correctly identifies the disease most of the time, there are still some instances where it gets it wrong. In practice, it is impossible for a test to have both a 0% false negative rate and a 0% false positive rate. In the case of this disease and testing, which should we prioritize: driving down the false positive rate or driving down the false negative rate? Is there a reason why one should be prioritized over the other? There is no simple answer to these questions, and as data scientists, we'll have to grapple with these issues ourselves and navigate the complex web we call life.
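As a hedged sketch of how you might quantify that trade-off once `covid_sample` is built (the false negative rate here is just the observed share of truly positive people whose test came back negative):
```
# `np` is already imported above; covid_sample comes from the exercise cells
positives = covid_sample.where("COVID-19", "positive")
fnr = np.count_nonzero(positives.column("Test Results") == "negative") / positives.num_rows
print("Estimated false negative rate: {:.2%}".format(fnr))
```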
Congratulations on finishing! Run the next cell to make sure that you passed all of the test cases.
```
grader.check_all()
```
Cognizant Data Science Summit 2020 : July 1, 2020
Yogesh Deshpande [157456]
# Week 1 challenge - Python
Description
The eight queens puzzle is the problem of placing eight chess queens on an 8×8 chessboard so that no two queens threaten each other; thus, a solution requires that no two queens share the same row, column, or diagonal. The eight queens puzzle is an example of the more general n queens problem of placing n non-attacking queens on an n×n chessboard. (Source : https://en.wikipedia.org/wiki/Eight_queens_puzzle )
Challenge
The challenge is to generate one right sequence through Genetic Programming. The sequence has to be 8 numbers between 0 and 7. Each number represents the position where a Queen can be placed, i.e. the row number in the corresponding column.
0 3 4 5 6 1 2 4
• 0 is the row number in the column 0 where the Queen can be placed
• 3 is the row number in the column 1 where the Queen can be placed
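To make the encoding concrete, here is a hypothetical helper (not part of the challenge code) that draws the board for a given sequence:
```
# Each number in the sequence is the row of the queen in that column
def print_board(sequence):
    n = len(sequence)
    for row in range(n):
        print(' '.join('Q' if sequence[col] == row else '.' for col in range(n)))

print_board([0, 3, 4, 5, 6, 1, 2, 4])  # the example sequence above (not itself a conflict-free solution)
```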
# Initialize variables and function definitions
```
import random
# Set the variables as per the problem statement
NumberofQueens = 8
InitialPopulation = 1000000 # Initial population has number of chromozones out of which one or more are possible solutions
NumberofIterations = 1000 # Number of generations to check for possible solution
def create_chromozone(NumberofQueens):
chromozone = []
for gene in range(NumberofQueens):
chromozone.append(random.randint(0, NumberofQueens-1))
return chromozone
#print(chromozone)
# Unit testing
# create_chromozone(NumberofQueens)
def create_population(NumberofQueens, InitialPopulation):
Population = []
for chromozone in range(InitialPopulation):
Population.append(create_chromozone(NumberofQueens))
#print(Population)
return Population
# Unit testing
#create_population(NumberofQueens, InitialPopulation)
def fitness_calculation(chromosome, maxFitness):
horizontal_collisions = sum([chromosome.count(i) - 1 for i in chromosome])/2
diagonal_collisions = 0
for record in range(1,len(chromosome)+1):
column1 = record-1
row1 = chromosome[column1]
for i in range (column1+1, len(chromosome)):
column2 = i
row2 = chromosome[i]
deltaRow = abs(row1 - row2)
deltaCol = abs(column1 - column2)
if (deltaRow == deltaCol):
#print("######## Collision detected ##############")
diagonal_collisions = diagonal_collisions + 1
#print("Horizontal Collisions are {} and Diagonal are {} ".format(horizontal_collisions, diagonal_collisions))
fitness_score = maxFitness - (horizontal_collisions + diagonal_collisions)
#print("The fitness score is {}".format(fitness_score))
return fitness_score
#Unit Test
#fitness_calculation([4, 1, 5, 8, 2, 7, 3, 6], 28)
def strength_of_chromosome(chromosome, maxFitness):
return fitness_calculation(chromosome, maxFitness) / maxFitness
#Unit Test
#strength_of_chromosome([1, 1, 1, 1, 1, 1, 1, 1], 28)
#strength_of_chromosome([4, 1, 5, 8, 2, 7, 3, 6], 28)
```
# Main Program for solution to get a 8-Queen sequence
```
# Main Program
if __name__ == "__main__":
    # Calculate the target fitness
    TargetFitness = (NumberofQueens * (NumberofQueens - 1)) /2
    print("Maximum score to achieve is = {}".format(TargetFitness))
# Inital population
Population = create_population(NumberofQueens, InitialPopulation)
generation_counter = 0
for iteration in range(NumberofIterations):
MaxPopulationScore = max([fitness_calculation(chromozone, TargetFitness) for chromozone in Population])
print("generation counter = {}, MaxPopulationScore = {}".format(generation_counter, MaxPopulationScore))
if (MaxPopulationScore != TargetFitness):
# If the current population has no score matching target score, continue with next generation
generation_counter = generation_counter + 1
else:
# Target score is achieved at this stage
break
print("Solved in generation {}".format(generation_counter+1))
for chromosome in Population:
if (fitness_calculation(chromosome, TargetFitness) == TargetFitness):
print("Solution =======> {}".format(chromosome))
create_chromozone(8)
create_chromozone(8)
```
# 🔬 Sequence Comparison of DNA using `BioPython`
### 🦠 `Covid-19`, `SARS`, `MERS`, and `Ebola`
#### Analysis Techniques:
* Compare their DNA sequence and Protein (Amino Acid) sequence
* GC Content
* Freq of Each Amino Acids
* Find similarity between them
* Alignment
* hamming distance
* 3D structure of each
| DNA Sequence | Datasource |
|:-----------------|:--------------------------------------------------------------|
| Latest Sequence | https://www.ncbi.nlm.nih.gov/genbank/sars-cov-2-seqs/ |
| Wuhan-Hu-1 | https://www.ncbi.nlm.nih.gov/nuccore/MN908947.3?report=fasta |
| Covid19 | https://www.ncbi.nlm.nih.gov/nuccore/NC_045512.2?report=fasta |
| SARS | https://www.ncbi.nlm.nih.gov/nuccore/NC_004718.3?report=fasta |
| MERS | https://www.ncbi.nlm.nih.gov/nuccore/NC_019843.3?report=fasta |
| EBOLA | https://www.ncbi.nlm.nih.gov/nuccore/NC_002549.1?report=fasta |
### 1. Analysis Techniques
```
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from Bio.Seq import Seq
# Create our sequence
seq1 = Seq('ACTCGA')
seq2 = Seq('AC')
```
#### GC Contents In DNA
* `GC-content` (or guanine-cytosine content) is the **percentage of nitrogenous bases** in a DNA or RNA molecule that are either guanine (`G`) or cytosine (`C`)
#### Usefulness
* In polymerase chain reaction (PCR) experiments, the GC-content of short oligonucleotides known as primers is often used to predict their **annealing temperature** to the template DNA.
* A `high` GC-content level indicates a relatively higher melting temperature.
* DNA with `low` GC-content is less stable than DNA with high GC-content
> Question: which sequence is more stable when heat is applied?
```
from Bio.SeqUtils import GC
# Check GC (guanine-cytosine) percentage in sequence
print(f"{GC(seq1)}% \t({seq1})")
print(f"{GC(seq2)}% \t({seq2})")
```
### Sequence Alignment
* `Global alignment` finds the best concordance/agreement between all characters in two sequences
* `Local Alignment` finds just the subsequences that align the best
```
from Bio import pairwise2
from Bio.pairwise2 import format_alignment
print('seq1 =', seq1, '\nseq2 =', seq2, '\n\n')
# Global alignment
alignments = pairwise2.align.globalxx(seq1, seq2)
print(f'Alignments found: {len(alignments)}')
print(*alignments)
# Print nicely
print(format_alignment(*alignments[0]))
# 2nd alignment
print(format_alignment(*alignments[1]))
# To see all possible alignments
for a in alignments:
print(format_alignment(*a), '\n')
# Get the number of possible sequence alignments
alignment_score = pairwise2.align.globalxx(seq1,seq2,one_alignment_only=True,score_only=True)
alignment_score
```
#### Sequence Similarity
* Fraction of nucleotides that is the same/ total number of nucleotides * 100%
```
alignment_score/len(seq1)*100
```
### Hamming Distance: `How Many Subsitutions are Required to Match Two Sequences?`
* Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different.
* In other words, it measures the minimum number of substitutions required to change one string into the other, or the minimum number of errors that could have transformed one string into the other
* It is used for error detection or error correction
* It is used to quantify the similarity of DNA sequences
#### Edit Distance
* Is a way of quantifying how dissimilar two strings (e.g., words) are to one another by counting the minimum number of operations required to transform one string into the other (e.g. Levenshtein distance)
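A minimal dynamic-programming sketch of the Levenshtein distance (the helper name `edit_distance` is my own; the notebook itself only implements the Hamming distance below):
```
def edit_distance(s1, s2):
    """Minimum number of insertions, deletions and substitutions to turn s1 into s2."""
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

edit_distance('ACTCGA', 'AC')  # 4
```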
```
def hamming_distance(lhs, rhs):
return len([(x,y) for x,y in zip(lhs,rhs) if x != y])
hamming_distance('TT', 'ACCTA')
def hammer_time(s1, s2, verbose=True):
"""Take two nucleotide sequences s1 and s2, and display
the possible alignments and hamming distance.
"""
if verbose:
print('s1 =', s1, '\ns2 =', s2, '\n\n')
print('Hamming Distance:', hamming_distance(s1, s2), '\n(min substitutions for sequences to match)')
print('\nAlignment Options:\n\n')
alignments = pairwise2.align.globalxx(s1, s2)
for a in alignments:
print(format_alignment(*a), '\n')
s1 = 'ACTCGAA'
s2 = 'ACGA'
hammer_time(s1, s2)
```
### Dot Plot
* A dot plot is a graphical method that allows the **comparison of two biological sequences** and identifies regions of **close similarity** between them.
* Simplest explanation: put a dot wherever sequences are identical
#### Usefulness
Dot plots can also be used to visually inspect sequences for
- Direct or inverted repeats
- Regions with low sequence complexity
- Similar regions
- Repeated sequences
- Sequence rearrangements
- RNA structures
- Gene order
Acknowledgement: https://stackoverflow.com/questions/40822400/how-to-create-a-dotplot-of-two-dna-sequence-in-python
```
def delta(x,y):
return 0 if x == y else 1
def M(seq1,seq2,i,j,k):
return sum(delta(x,y) for x,y in zip(seq1[i:i+k],seq2[j:j+k]))
def makeMatrix(seq1,seq2,k):
n = len(seq1)
m = len(seq2)
return [[M(seq1,seq2,i,j,k) for j in range(m-k+1)] for i in range(n-k+1)]
def plotMatrix(M,t, seq1, seq2, nonblank = chr(0x25A0), blank = ' '):
print(' |' + seq2)
print('-'*(2 + len(seq2)))
for label,row in zip(seq1,M):
line = ''.join(nonblank if s < t else blank for s in row)
print(label + '|' + line)
def dotplot(seq1,seq2,k = 1,t = 1):
M = makeMatrix(seq1,seq2,k)
plotMatrix(M, t, seq1,seq2) #experiment with character choice
# The dot plot: put a dot where the two sequences are identical
s1 = 'ACTCGA'
s2 = 'AC'
dotplot(s1, s2)
# Identical proteins will show a diagonal line.
s1 = 'ACCTAG'
s2 = 'ACCTAG'
dotplot(s1, s2)
print('\n\n')
hammer_time(s1, s2, verbose=False)
```
# 🔬 2. Comparative Analysis of Virus DNA
### 🦠 `Covid-19`, `SARS`, `MERS`, `Ebola`
* Covid19(`SARS-CoV2`) is a novel coronavirus identified as the cause of coronavirus disease 2019 (COVID-19) that began in Wuhan, China in late 2019 and spread worldwide.
* MERS(`MERS-CoV`) was identified in 2012 as the cause of Middle East respiratory syndrome (MERS).
* SARS(`SARS-CoV`) was identified in 2002 as the cause of an outbreak of severe acute respiratory syndrome (SARS).
#### `fasta` DNA Sequence Files
* Covid19 : https://www.rcsb.org/3d-view/6LU7
* SARS: https://www.ncbi.nlm.nih.gov/nuccore/NC_004718.3?report=fasta
* MERS: https://www.ncbi.nlm.nih.gov/nuccore/NC_019843.3?report=fasta
* EBOLA:https://www.rcsb.org/structure/6HS4
```
import pandas as pd
import numpy as np
from Bio import SeqIO
covid = SeqIO.read("../data/01_COVID_MN908947.3.fasta","fasta")
mers = SeqIO.read("../data/02_MERS_NC_019843.3.fasta","fasta")
sars = SeqIO.read("../data/03_SARS_rcsb_pdb_5XES.fasta","fasta")
ebola = SeqIO.read("../data/04_EBOLA_rcsb_pdb_6HS4.fasta","fasta")
# Convert imports to BioPython sequences
covid_seq = covid.seq
mers_seq = mers.seq
sars_seq = sars.seq
ebola_seq = ebola.seq
# Create dataframe
df = pd.DataFrame({'name': ['COVID19', 'MERS', 'SARS', 'EBOLA'],
'seq': [covid_seq, mers_seq, sars_seq, ebola_seq]})
df
```
#### Length of Each Genome
```
df['len'] = df.seq.apply(lambda x: len(x))
df[['name', 'len']].sort_values('len', ascending=False) \
.style.bar(color='#cde8F6', vmin=0, width=100, align='left')
```
* `MERS`, `COVID` and `SARS` all have about the same genome length (30,000 base pairs)
#### Which of them is more heat stable?
```
# Check the GC content
df['gc_content'] = df.seq.apply(lambda x: GC(x))
df[['name', 'gc_content']].sort_values('gc_content', ascending=False) \
.style.bar(color='#cde8F6', vmin=0)
```
* `MERS` is the most stable with a GC of `41.24` followed by Ebola
#### Translate RNA into proteins
How long is the translated protein sequence for each DNA sequence?
```
# Translate the RNA into Proteins
df['proteins'] = df.seq.apply(lambda s: len(s.translate()))
df[['name', 'proteins']].sort_values('proteins', ascending=False) \
.style.bar(color='#cde8F6', vmin=0)
```
#### How Many Amino Acids are Created?
```
from Bio.SeqUtils.ProtParam import ProteinAnalysis
from collections import Counter
# Method 1
# Translate each genome into its protein (amino acid) sequence first
covid_protein = covid_seq.translate()
mers_protein = mers_seq.translate()
sars_protein = sars_seq.translate()
ebola_protein = ebola_seq.translate()
covid_analysed = ProteinAnalysis(str(covid_protein))
mers_analysed = ProteinAnalysis(str(mers_protein))
sars_analysed = ProteinAnalysis(str(sars_protein))
ebola_analysed = ProteinAnalysis(str(ebola_protein))
# Check for the Frequence of AA
covid_analysed.count_amino_acids()
# Method 2
from collections import Counter
# Find the Amino Acid Frequency
df['aa_freq'] = df.seq.apply(lambda s: Counter(s.translate()))
df
```
#### Most Common Amino Acid
```
# For Covid
df[df.name=='COVID19'].aa_freq.values[0].most_common(10)
# Plot the Amino Acids of COVID-19
aa = df[df.name=='COVID19'].aa_freq.values[0]
plt.bar(aa.keys(), aa.values())
# All viruses -- same chart (not stacked)
for virus in df.name:
aa = df[df.name==virus].aa_freq.values[0]
plt.bar(aa.keys(), aa.values())
plt.show()
```
### Dot Plots of Opening Sequences
```
# COVID and MERS
dotplot(covid_seq[0:10],mers_seq[0:10])
# COVID and SARS
n = 10
dotplot(covid_seq[0:n],sars_seq[0:n])
# Plotting function to illustrate deeper matches
def dotplotx(seq1, seq2, n):
seq1=seq1[0:n]
seq2=seq2[0:n]
plt.imshow(np.array(makeMatrix(seq1,seq2,1)))
# on x-axis list all sequences of seq 2
xt=plt.xticks(np.arange(len(list(seq2))),list(seq2))
# on y-axis list all sequences of seq 1
yt=plt.yticks(np.arange(len(list(seq1))),list(seq1))
plt.show()
dotplotx(covid_seq, sars_seq, n=100)
```
Notice the large diagonal line for the second half of the first 100 nucleotides - indicating these are the same for `COVID19` and `SARS`
```
dotplotx(covid_seq, ebola_seq, n=100)
```
No corresponding matches for `EBOLA` and `COVID`
#### Calculate Pairwise Alignment for the First 100 Nucleotides
```
def pairwise_alignment(s1, s2, n):
if n == 'full': n = min(len(s1), len(s2))
alignment = pairwise2.align.globalxx(s1[0:n], s2[0:n], one_alignment_only=True, score_only=True)
print(f'Pairwise alignment: {alignment:.0f}/{n} ({(alignment/n)*100:0.1f}%)')
# SARS and COVID
pairwise_alignment(covid_seq, sars_seq, n=100)
pairwise_alignment(covid_seq, sars_seq, n=10000)
pairwise_alignment(covid_seq, sars_seq, n=len(sars_seq))
```
* `82.9`% of the COVID19 genome is exactly the same as SARS
```
pairwise_alignment(covid_seq, mers_seq, n='full')
pairwise_alignment(covid_seq, ebola_seq, n='full')
```
* `COVID19` and `SARS` have a `82.9`% similarity. Both are of the same genus and belong to `Sars_Cov`.
* `COVID19` and `EBOLA` have a `65.3`% similarity since they are from a different family of virus
### Example of the Opening Sequence of `COVID19` and `SARS`
Sequencing found similar structure from `40:100` so lets use our functions to visualise it.
```
s1 = covid_seq[40:100]
s2 = sars_seq[40:100]
print('Similarity matrix (look for diagonal)')
dotplotx(s1, s2, n=100)
print('Possible alignment pathways: \n\n')
hammer_time(s1, s2, verbose=False)
```
# Monetary Economics: Chapter 5
### Preliminaries
```
# This line configures matplotlib to show figures embedded in the notebook,
# instead of opening a new window for each figure. More about that later.
# If you are using an old version of IPython, try using '%pylab inline' instead.
%matplotlib inline
import matplotlib.pyplot as plt
from pysolve.model import Model
from pysolve.utils import is_close,round_solution
```
### Model LP1
```
def create_lp1_model():
model = Model()
model.set_var_default(0)
model.var('Bcb', desc='Government bills held by the Central Bank')
model.var('Bd', desc='Demand for government bills')
model.var('Bh', desc='Government bills held by households')
model.var('Bs', desc='Government bills supplied by government')
model.var('BLd', desc='Demand for government bonds')
model.var('BLh', desc='Government bonds held by households')
model.var('BLs', desc='Supply of government bonds')
model.var('CG', desc='Capital gains on bonds')
model.var('CGe', desc='Expected capital gains on bonds')
model.var('C', desc='Consumption')
model.var('ERrbl', desc='Expected rate of return on bonds')
model.var('Hd', desc='Demand for cash')
model.var('Hh', desc='Cash held by households')
model.var('Hs', desc='Cash supplied by the central bank')
model.var('Pbl', desc='Price of bonds')
model.var('Pble', desc='Expected price of bonds')
model.var('Rb', desc='Interest rate on government bills')
model.var('Rbl', desc='Interest rate on government bonds')
model.var('T', desc='Taxes')
model.var('V', desc='Household wealth')
model.var('Ve', desc='Expected household wealth')
model.var('Y', desc='Income = GDP')
model.var('YDr', desc='Regular disposable income of households')
model.var('YDre', desc='Expected regular disposable income of households')
model.set_param_default(0)
model.param('alpha1', desc='Propensity to consume out of income')
model.param('alpha2', desc='Propensity to consume out of wealth')
model.param('chi', desc='Weight of conviction in expected bond price')
model.param('lambda10', desc='Parameter in asset demand function')
model.param('lambda12', desc='Parameter in asset demand function')
model.param('lambda13', desc='Parameter in asset demand function')
model.param('lambda14', desc='Parameter in asset demand function')
model.param('lambda20', desc='Parameter in asset demand function')
model.param('lambda22', desc='Parameter in asset demand function')
model.param('lambda23', desc='Parameter in asset demand function')
model.param('lambda24', desc='Parameter in asset demand function')
model.param('lambda30', desc='Parameter in asset demand function')
model.param('lambda32', desc='Parameter in asset demand function')
model.param('lambda33', desc='Parameter in asset demand function')
model.param('lambda34', desc='Parameter in asset demand function')
model.param('theta', desc='Tax rate')
model.param('G', desc='Government goods')
model.param('Rbar', desc='Exogenously set interest rate on govt bills')
model.param('Pblbar', desc='Exogenously set price of bonds')
model.add('Y = C + G') # 5.1
model.add('YDr = Y - T + Rb(-1)*Bh(-1) + BLh(-1)') # 5.2
model.add('T = theta *(Y + Rb(-1)*Bh(-1) + BLh(-1))') # 5.3
model.add('V - V(-1) = (YDr - C) + CG') # 5.4
model.add('CG = (Pbl - Pbl(-1))*BLh(-1)')
model.add('C = alpha1*YDre + alpha2*V(-1)')
model.add('Ve = V(-1) + (YDre - C) + CG')
model.add('Hh = V - Bh - Pbl*BLh')
model.add('Hd = Ve - Bd - Pbl*BLd')
model.add('Bd = Ve*lambda20 + Ve*lambda22*Rb' +
'- Ve*lambda23*ERrbl - lambda24*YDre')
model.add('BLd = (Ve*lambda30 - Ve*lambda32*Rb ' +
'+ Ve*lambda33*ERrbl - lambda34*YDre)/Pbl')
model.add('Bh = Bd')
model.add('BLh = BLd')
model.add('Bs - Bs(-1) = (G + Rb(-1)*Bs(-1) + ' +
'BLs(-1)) - (T + Rb(-1)*Bcb(-1)) - (BLs - BLs(-1))*Pbl')
model.add('Hs - Hs(-1) = Bcb - Bcb(-1)')
model.add('Bcb = Bs - Bh')
model.add('BLs = BLh')
model.add('ERrbl = Rbl + chi * (Pble - Pbl) / Pbl')
model.add('Rbl = 1./Pbl')
model.add('Pble = Pbl')
model.add('CGe = chi * (Pble - Pbl)*BLh')
model.add('YDre = YDr(-1)')
model.add('Rb = Rbar')
model.add('Pbl = Pblbar')
return model
lp1_parameters = {'alpha1': 0.8,
'alpha2': 0.2,
'chi': 0.1,
'lambda20': 0.44196,
'lambda22': 1.1,
'lambda23': 1,
'lambda24': 0.03,
'lambda30': 0.3997,
'lambda32': 1,
'lambda33': 1.1,
'lambda34': 0.03,
'theta': 0.1938}
lp1_exogenous = {'G': 20,
'Rbar': 0.03,
'Pblbar': 20}
lp1_variables = {'V': 95.803,
'Bh': 37.839,
'Bs': 57.964,
'Bcb': 57.964 - 37.839,
'BLh': 1.892,
'BLs': 1.892,
'Hs': 20.125,
'YDr': 95.803,
'Rb': 0.03,
'Pbl': 20}
```
### Scenario: Interest rate shock
```
lp1 = create_lp1_model()
lp1.set_values(lp1_parameters)
lp1.set_values(lp1_exogenous)
lp1.set_values(lp1_variables)
for _ in range(15):
lp1.solve(iterations=100, threshold=1e-6)
# shock the system
lp1.set_values({'Rbar': 0.04,
'Pblbar': 15})
for _ in range(45):
lp1.solve(iterations=100, threshold=1e-6)
```
###### Figure 5.2
```
caption = '''
Figure 5.2 Evolution of the wealth to disposable income ratio, following an increase
in both the short-term and long-term interest rates, with model LP1'''
data = [s['V']/s['YDr'] for s in lp1.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(0.89, 1.01)
axes.plot(data, 'k')
# add labels
plt.text(20, 0.98, 'Wealth to disposable income ratio')
fig.text(0.1, -.05, caption);
```
###### Figure 5.3
```
caption = '''
Figure 5.3 Evolution of household disposable income and consumption, following an increase
in both the short-term and long-term interest rates, with model LP1'''
ydrdata = [s['YDr'] for s in lp1.solutions[5:]]
cdata = [s['C'] for s in lp1.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(92.5, 101.5)
axes.plot(ydrdata, 'k')
axes.plot(cdata, linestyle='--', color='r')
# add labels
plt.text(16, 98, 'Disposable')
plt.text(16, 97.6, 'income')
plt.text(22, 95, 'Consumption')
fig.text(0.1, -.05, caption);
```
###### Figure 5.4
```
caption = '''
Figure 5.4 Evolution of the bonds to wealth ratio and the bills to wealth ratio,
following an increase from 3% to 4% in the short-term interest rate, while the
long-term interest rates moves from 5% to 6.67%, with model LP1'''
bhdata = [s['Bh']/s['V'] for s in lp1.solutions[5:]]
pdata = [s['Pbl']*s['BLh']/s['V'] for s in lp1.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(0.382, 0.408)
axes.plot(bhdata, 'k')
axes.plot(pdata, linestyle='--', color='r')
# add labels
plt.text(14, 0.3978, 'Bonds to wealth ratio')
plt.text(17, 0.39, 'Bills to wealth ratio')
fig.text(0.1, -.05, caption);
```
### Model LP2
```
def create_lp2_model():
model = Model()
model.set_var_default(0)
model.var('Bcb', desc='Government bills held by the Central Bank')
model.var('Bd', desc='Demand for government bills')
model.var('Bh', desc='Government bills held by households')
model.var('Bs', desc='Government bills supplied by government')
model.var('BLd', desc='Demand for government bonds')
model.var('BLh', desc='Government bonds held by households')
model.var('BLs', desc='Supply of government bonds')
model.var('CG', desc='Capital gains on bonds')
model.var('CGe', desc='Expected capital gains on bonds')
model.var('C', desc='Consumption')
model.var('ERrbl', desc='Expected rate of return on bonds')
model.var('Hd', desc='Demand for cash')
model.var('Hh', desc='Cash held by households')
model.var('Hs', desc='Cash supplied by the central bank')
model.var('Pbl', desc='Price of bonds')
model.var('Pble', desc='Expected price of bonds')
model.var('Rb', desc='Interest rate on government bills')
model.var('Rbl', desc='Interest rate on government bonds')
model.var('T', desc='Taxes')
model.var('TP', desc='Target proportion in households portfolio')
model.var('V', desc='Household wealth')
model.var('Ve', desc='Expected household wealth')
model.var('Y', desc='Income = GDP')
model.var('YDr', desc='Regular disposable income of households')
model.var('YDre', desc='Expected regular disposable income of households')
model.var('z1', desc='Switch parameter')
model.var('z2', desc='Switch parameter')
model.set_param_default(0)
model.param('add', desc='Random shock to expectations')
model.param('alpha1', desc='Propensity to consume out of income')
model.param('alpha2', desc='Propensity to consume out of wealth')
model.param('beta', desc='Adjustment parameter in price of bills')
model.param('betae', desc='Adjustment parameter in expectations')
model.param('bot', desc='Bottom value for TP')
model.param('chi', desc='Weight of conviction in expected bond price')
model.param('lambda10', desc='Parameter in asset demand function')
model.param('lambda12', desc='Parameter in asset demand function')
model.param('lambda13', desc='Parameter in asset demand function')
model.param('lambda14', desc='Parameter in asset demand function')
model.param('lambda20', desc='Parameter in asset demand function')
model.param('lambda22', desc='Parameter in asset demand function')
model.param('lambda23', desc='Parameter in asset demand function')
model.param('lambda24', desc='Parameter in asset demand function')
model.param('lambda30', desc='Parameter in asset demand function')
model.param('lambda32', desc='Parameter in asset demand function')
model.param('lambda33', desc='Parameter in asset demand function')
model.param('lambda34', desc='Parameter in asset demand function')
model.param('theta', desc='Tax rate')
model.param('top', desc='Top value for TP')
model.param('G', desc='Government goods')
model.param('Pblbar', desc='Exogenously set price of bonds')
model.param('Rbar', desc='Exogenously set interest rate on govt bills')
model.add('Y = C + G') # 5.1
model.add('YDr = Y - T + Rb(-1)*Bh(-1) + BLh(-1)') # 5.2
model.add('T = theta *(Y + Rb(-1)*Bh(-1) + BLh(-1))') # 5.3
model.add('V - V(-1) = (YDr - C) + CG') # 5.4
model.add('CG = (Pbl - Pbl(-1))*BLh(-1)')
model.add('C = alpha1*YDre + alpha2*V(-1)')
model.add('Ve = V(-1) + (YDre - C) + CG')
model.add('Hh = V - Bh - Pbl*BLh')
model.add('Hd = Ve - Bd - Pbl*BLd')
model.add('Bd = Ve*lambda20 + Ve*lambda22*Rb' +
'- Ve*lambda23*ERrbl - lambda24*YDre')
model.add('BLd = (Ve*lambda30 - Ve*lambda32*Rb ' +
'+ Ve*lambda33*ERrbl - lambda34*YDre)/Pbl')
model.add('Bh = Bd')
model.add('BLh = BLd')
model.add('Bs - Bs(-1) = (G + Rb(-1)*Bs(-1) + BLs(-1))' +
' - (T + Rb(-1)*Bcb(-1)) - Pbl*(BLs - BLs(-1))')
model.add('Hs - Hs(-1) = Bcb - Bcb(-1)')
model.add('Bcb = Bs - Bh')
model.add('BLs = BLh')
model.add('ERrbl = Rbl + ((chi * (Pble - Pbl))/ Pbl)')
model.add('Rbl = 1./Pbl')
model.add('Pble = Pble(-1) - betae*(Pble(-1) - Pbl) + add')
model.add('CGe = chi * (Pble - Pbl)*BLh')
model.add('YDre = YDr(-1)')
model.add('Rb = Rbar')
model.add('Pbl = (1 + z1*beta - z2*beta)*Pbl(-1)')
model.add('z1 = if_true(TP > top)')
model.add('z2 = if_true(TP < bot)')
model.add('TP = (BLh(-1)*Pbl(-1))/(BLh(-1)*Pbl(-1) + Bh(-1))')
return model
lp2_parameters = {'alpha1': 0.8,
'alpha2': 0.2,
'beta': 0.01,
'betae': 0.5,
'chi': 0.1,
'lambda20': 0.44196,
'lambda22': 1.1,
'lambda23': 1,
'lambda24': 0.03,
'lambda30': 0.3997,
'lambda32': 1,
'lambda33': 1.1,
'lambda34': 0.03,
'theta': 0.1938,
'bot': 0.495,
'top': 0.505 }
lp2_exogenous = {'G': 20,
'Rbar': 0.03,
'Pblbar': 20,
'add': 0}
lp2_variables = {'V': 95.803,
'Bh': 37.839,
'Bs': 57.964,
'Bcb': 57.964 - 37.839,
'BLh': 1.892,
'BLs': 1.892,
'Hs': 20.125,
'YDr': 95.803,
'Rb': 0.03,
'Pbl': 20,
'Pble': 20,
'TP': 1.892*20/(1.892*20+37.839), # BLh*Pbl/(BLh*Pbl+Bh)
'z1': 0,
'z2': 0}
```
### Scenario: interest rate shock
```
lp2_bill = create_lp2_model()
lp2_bill.set_values(lp2_parameters)
lp2_bill.set_values(lp2_exogenous)
lp2_bill.set_values(lp2_variables)
lp2_bill.set_values({'z1': lp2_bill.evaluate('if_true(TP > top)'),
'z2': lp2_bill.evaluate('if_true(TP < bot)')})
for _ in range(10):
lp2_bill.solve(iterations=100, threshold=1e-4)
# shock the system
lp2_bill.set_values({'Rbar': 0.035})
for _ in range(45):
lp2_bill.solve(iterations=100, threshold=1e-4)
```
###### Figure 5.5
```
caption = '''
Figure 5.5 Evolution of the long-term interest rate (the bond yield), following an
increase in the short-term interest rate (the bill rate), as a result of the response of
the central bank and the Treasury, with Model LP2.'''
rbdata = [s['Rb'] for s in lp2_bill.solutions[5:]]
pbldata = [1./s['Pbl'] for s in lp2_bill.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 0.9])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.set_ylim(0.029, 0.036)
axes.plot(rbdata, linestyle='--', color='r')
axes2 = axes.twinx()
axes2.spines['top'].set_visible(False)
axes2.set_ylim(0.0495, 0.052)
axes2.plot(pbldata, 'k')
# add labels
plt.text(12, 0.0518, 'Short-term interest rate')
plt.text(15, 0.0513, 'Long-term interest rate')
fig.text(0.05, 1.05, 'Bill rate')
fig.text(1.15, 1.05, 'Bond yield')
fig.text(0.1, -.1, caption);
```
###### Figure 5.6
```
caption = '''
Figure 5.6 Evolution of the target proportion (TP), that is the share of bonds in the
government debt held by households, following an increase in the short-term interest
rate (the bill rate) and the response of the central bank and of the Treasury,
with Model LP2'''
tpdata = [s['TP'] for s in lp2_bill.solutions[5:]]
topdata = [s['top'] for s in lp2_bill.solutions[5:]]
botdata = [s['bot'] for s in lp2_bill.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.set_ylim(0.490, 0.506)
axes.plot(topdata, color='k')
axes.plot(botdata, color='k')
axes.plot(tpdata, linestyle='--', color='r')
# add labels
plt.text(30, 0.5055, 'Ceiling of target range')
plt.text(30, 0.494, 'Floor of target range')
plt.text(10, 0.493, 'Share of bonds')
plt.text(10, 0.4922, 'in government debt')
plt.text(10, 0.4914, 'held by households')
fig.text(0.1, -.15, caption);
```
### Scenario: Shock to the bond price expectations
```
lp2_bond = create_lp2_model()
lp2_bond.set_values(lp2_parameters)
lp2_bond.set_values(lp2_exogenous)
lp2_bond.set_values(lp2_variables)
lp2_bond.set_values({'z1': 'if_true(TP > top)',
'z2': 'if_true(TP < bot)'})
for _ in xrange(10):
lp2_bond.solve(iterations=100, threshold=1e-5)
# shock the system
lp2_bond.set_values({'add': -3})
lp2_bond.solve(iterations=100, threshold=1e-5)
lp2_bond.set_values({'add': 0})
for _ in xrange(43):
lp2_bond.solve(iterations=100, threshold=1e-4)
```
###### Figure 5.7
```
caption = '''
Figure 5.7 Evolution of the long-term interest rate, following an anticipated fall in
the price of bonds, as a consequence of the response of the central bank and of the
Treasury, with Model LP2'''
pbldata = [1./s['Pbl'] for s in lp2_bond.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(0.0497, 0.0512)
axes.plot(pbldata, linestyle='--', color='k')
# add labels
plt.text(15, 0.0509, 'Long-term interest rate')
fig.text(0.1, -.1, caption);
```
###### Figure 5.8
```
caption = '''
Figure 5.8 Evolution of the expected and actual bond prices, following an anticipated
fall in the price of bonds, as a consequence of the response of the central bank and of
the Treasury, with Model LP2'''
pbldata = [s['Pbl'] for s in lp2_bond.solutions[5:]]
pbledata = [s['Pble'] for s in lp2_bond.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(16.5, 21)
axes.plot(pbldata, linestyle='--', color='k')
axes.plot(pbledata, linestyle='-', color='r')
# add labels
plt.text(8, 20, 'Actual price of bonds')
plt.text(10, 19, 'Expected price of bonds')
fig.text(0.1, -.1, caption);
```
###### Figure 5.9
```
caption = '''
Figure 5.9 Evolution of the target proportion (TP), that is the share of bonds in the
government debt held by households, following an anticipated fall in the price of
bonds, as a consequence of the response of the central bank and of the Treasury, with
Model LP2'''
tpdata = [s['TP'] for s in lp2_bond.solutions[5:]]
topdata = [s['top'] for s in lp2_bond.solutions[5:]]
botdata = [s['bot'] for s in lp2_bond.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(0.47, 0.52)
axes.plot(tpdata, linestyle='--', color='r')
axes.plot(botdata, linestyle='-', color='k')
axes.plot(topdata, linestyle='-', color='k')
# add labels
plt.text(30, 0.508, 'Ceiling of target range')
plt.text(30, 0.491, 'Floor of target range')
plt.text(10, 0.49, 'Share of bonds in')
plt.text(10, 0.487, 'government debt')
plt.text(10, 0.484, 'held by households')
fig.text(0.1, -.15, caption);
```
### Scenario: Model LP1, propensity to consume shock
```
lp1_alpha = create_lp1_model()
lp1_alpha.set_values(lp1_parameters)
lp1_alpha.set_values(lp1_exogenous)
lp1_alpha.set_values(lp1_variables)
for _ in xrange(10):
lp1_alpha.solve(iterations=100, threshold=1e-6)
# shock the system
lp1_alpha.set_values({'alpha1': 0.7})
for _ in xrange(45):
lp1_alpha.solve(iterations=100, threshold=1e-6)
```
### Model LP3
```
def create_lp3_model():
model = Model()
model.set_var_default(0)
model.var('Bcb', desc='Government bills held by the Central Bank')
model.var('Bd', desc='Demand for government bills')
model.var('Bh', desc='Government bills held by households')
model.var('Bs', desc='Government bills supplied by government')
model.var('BLd', desc='Demand for government bonds')
model.var('BLh', desc='Government bonds held by households')
model.var('BLs', desc='Supply of government bonds')
model.var('CG', desc='Capital gains on bonds')
model.var('CGe', desc='Expected capital gains on bonds')
model.var('C', desc='Consumption')
model.var('ERrbl', desc='Expected rate of return on bonds')
model.var('Hd', desc='Demand for cash')
model.var('Hh', desc='Cash held by households')
model.var('Hs', desc='Cash supplied by the central bank')
model.var('Pbl', desc='Price of bonds')
model.var('Pble', desc='Expected price of bonds')
model.var('PSBR', desc='Public sector borrowing requirement (PSBR)')
model.var('Rb', desc='Interest rate on government bills')
model.var('Rbl', desc='Interest rate on government bonds')
model.var('T', desc='Taxes')
model.var('TP', desc='Target proportion in households portfolio')
model.var('V', desc='Household wealth')
model.var('Ve', desc='Expected household wealth')
model.var('Y', desc='Income = GDP')
model.var('YDr', desc='Regular disposable income of households')
model.var('YDre', desc='Expected regular disposable income of households')
model.var('z1', desc='Switch parameter')
model.var('z2', desc='Switch parameter')
model.var('z3', desc='Switch parameter')
model.var('z4', desc='Switch parameter')
# no longer exogenous
model.var('G', desc='Government goods')
model.set_param_default(0)
model.param('add', desc='Random shock to expectations')
model.param('add2', desc='Addition to the government expenditure setting rule')
model.param('alpha1', desc='Propensity to consume out of income')
model.param('alpha2', desc='Propensity to consume out of wealth')
model.param('beta', desc='Adjustment parameter in price of bills')
model.param('betae', desc='Adjustment parameter in expectations')
model.param('bot', desc='Bottom value for TP')
model.param('chi', desc='Weight of conviction in expected bond price')
model.param('lambda10', desc='Parameter in asset demand function')
model.param('lambda12', desc='Parameter in asset demand function')
model.param('lambda13', desc='Parameter in asset demand function')
model.param('lambda14', desc='Parameter in asset demand function')
model.param('lambda20', desc='Parameter in asset demand function')
model.param('lambda22', desc='Parameter in asset demand function')
model.param('lambda23', desc='Parameter in asset demand function')
model.param('lambda24', desc='Parameter in asset demand function')
model.param('lambda30', desc='Parameter in asset demand function')
model.param('lambda32', desc='Parameter in asset demand function')
model.param('lambda33', desc='Parameter in asset demand function')
model.param('lambda34', desc='Parameter in asset demand function')
model.param('theta', desc='Tax rate')
model.param('top', desc='Top value for TP')
model.param('Pblbar', desc='Exogenously set price of bonds')
model.param('Rbar', desc='Exogenously set interest rate on govt bills')
model.add('Y = C + G') # 5.1
model.add('YDr = Y - T + Rb(-1)*Bh(-1) + BLh(-1)') # 5.2
model.add('T = theta *(Y + Rb(-1)*Bh(-1) + BLh(-1))') # 5.3
model.add('V - V(-1) = (YDr - C) + CG') # 5.4
model.add('CG = (Pbl - Pbl(-1))*BLh(-1)')
model.add('C = alpha1*YDre + alpha2*V(-1)')
model.add('Ve = V(-1) + (YDre - C) + CG')
model.add('Hh = V - Bh - Pbl*BLh')
model.add('Hd = Ve - Bd - Pbl*BLd')
model.add('Bd = Ve*lambda20 + Ve*lambda22*Rb' +
'- Ve*lambda23*ERrbl - lambda24*YDre')
model.add('BLd = (Ve*lambda30 - Ve*lambda32*Rb ' +
'+ Ve*lambda33*ERrbl - lambda34*YDre)/Pbl')
model.add('Bh = Bd')
model.add('BLh = BLd')
model.add('Bs - Bs(-1) = (G + Rb(-1)*Bs(-1) + BLs(-1))' +
' - (T + Rb(-1)*Bcb(-1)) - Pbl*(BLs - BLs(-1))')
model.add('Hs - Hs(-1) = Bcb - Bcb(-1)')
model.add('Bcb = Bs - Bh')
model.add('BLs = BLh')
model.add('ERrbl = Rbl + ((chi * (Pble - Pbl))/ Pbl)')
model.add('Rbl = 1./Pbl')
model.add('Pble = Pble(-1) - betae*(Pble(-1) - Pbl) + add')
model.add('CGe = chi * (Pble - Pbl)*BLh')
model.add('YDre = YDr(-1)')
model.add('Rb = Rbar')
model.add('Pbl = (1 + z1*beta - z2*beta)*Pbl(-1)')
model.add('z1 = if_true(TP > top)')
model.add('z2 = if_true(TP < bot)')
model.add('TP = (BLh(-1)*Pbl(-1))/(BLh(-1)*Pbl(-1) + Bh(-1))')
model.add('PSBR = (G + Rb*Bs(-1) + BLs(-1)) - (T + Rb*Bcb(-1))')
model.add('z3 = if_true((PSBR(-1)/Y(-1)) > 0.03)')
model.add('z4 = if_true((PSBR(-1)/Y(-1)) < -0.03)')
model.add('G = G(-1) - (z3 + z4)*PSBR(-1) + add2')
return model
lp3_parameters = {'alpha1': 0.8,
'alpha2': 0.2,
'beta': 0.01,
'betae': 0.5,
'chi': 0.1,
'lambda20': 0.44196,
'lambda22': 1.1,
'lambda23': 1,
'lambda24': 0.03,
'lambda30': 0.3997,
'lambda32': 1,
'lambda33': 1.1,
'lambda34': 0.03,
'theta': 0.1938,
'bot': 0.495,
'top': 0.505 }
lp3_exogenous = {'Rbar': 0.03,
'Pblbar': 20,
'add': 0,
'add2': 0}
lp3_variables = {'G': 20,
'V': 95.803,
'Bh': 37.839,
'Bs': 57.964,
'Bcb': 57.964 - 37.839,
'BLh': 1.892,
'BLs': 1.892,
'Hs': 20.125,
'YDr': 95.803,
'Rb': 0.03,
'Pbl': 20,
'Pble': 20,
'PSBR': 0,
'Y': 115.8,
'TP': 1.892*20/(1.892*20+37.839), # BLh*Pbl/(BLh*Pbl+Bh)
'z1': 0,
'z2': 0,
'z3': 0,
'z4': 0}
```
### Scenario: LP3, decrease in propensity to consume
```
lp3_alpha = create_lp3_model()
lp3_alpha.set_values(lp3_parameters)
lp3_alpha.set_values(lp3_exogenous)
lp3_alpha.set_values(lp3_variables)
for _ in xrange(10):
lp3_alpha.solve(iterations=100, threshold=1e-6)
# shock the system
lp3_alpha.set_values({'alpha1': 0.7})
for _ in xrange(45):
lp3_alpha.solve(iterations=100, threshold=1e-6)
```
###### Figure 5.10
```
caption = '''
Figure 5.10 Evolution of national income (GDP), following a sharp decrease in the
propensity to consume out of current income, with Model LP1'''
ydata = [s['Y'] for s in lp1_alpha.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(90, 128)
axes.plot(ydata, linestyle='--', color='k')
# add labels
plt.text(20, 110, 'Gross Domestic Product')
fig.text(0.1, -.05, caption);
```
###### Figure 5.11
```
caption = '''
Figure 5.11 Evolution of national income (GDP), following a sharp decrease in the
propensity to consume out of current income, with Model LP3'''
ydata = [s['Y'] for s in lp3_alpha.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top='off', right='off')
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(90, 128)
axes.plot(ydata, linestyle='--', color='k')
# add labels
plt.text(20, 110, 'Gross Domestic Product')
fig.text(0.1, -.05, caption);
```
###### Figure 5.12
```
caption = '''
Figure 5.12 Evolution of pure government expenditures and of the government deficit
to national income ratio (the PSBR to GDP ratio), following a sharp decrease in the
propensity to consume out of current income, with Model LP3'''
gdata = [s['G'] for s in lp3_alpha.solutions[5:]]
ratiodata = [s['PSBR']/s['Y'] for s in lp3_alpha.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top='off')
axes.spines['top'].set_visible(False)
axes.set_ylim(16, 20.5)
axes.plot(gdata, linestyle='--', color='r')
plt.text(5, 20.4, 'Pure government')
plt.text(5, 20.15, 'expenditures (LHS)')
plt.text(30, 18, 'Deficit to national')
plt.text(30, 17.75, 'income ratio (RHS)')
axes2 = axes.twinx()
axes2.tick_params(top='off')
axes2.spines['top'].set_visible(False)
axes2.set_ylim(-.01, 0.04)
axes2.plot(ratiodata, linestyle='-', color='b')
# add labels
fig.text(0.1, 1.05, 'G')
fig.text(0.9, 1.05, 'PSBR to Y ratio')
fig.text(0.1, -.1, caption);
```
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline
sns.set_style("whitegrid")
plt.style.use("fivethirtyeight")
df = pd.read_csv('diabetes.csv')
df[0:10]
pd.set_option("display.float", "{:.2f}".format)
df.describe()
df.info()
missing_values_count = df.isnull().sum()
total_cells = np.product(df.shape)
total_missing = missing_values_count.sum()
percentage_missing = (total_missing/total_cells)*100
print(percentage_missing)
from sklearn.ensemble import RandomForestRegressor
x = df.copy()
y = x.pop('Outcome')
from sklearn.model_selection import train_test_split
X_train,X_test,Y_train,Y_test = train_test_split(x,y,test_size=0.20,random_state=0)
from sklearn.metrics import accuracy_score
```
## Logistic Regression
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train,Y_train)
Y_pred_lr = lr.predict(X_test)
score_lr = round(accuracy_score(Y_pred_lr,Y_test)*100,2)
print("The accuracy score achieved using Logistic Regression is: "+str(score_lr)+" %")
```
## Naive Bayes
```
from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
nb.fit(X_train,Y_train)
Y_pred_nb = nb.predict(X_test)
score_nb = round(accuracy_score(Y_pred_nb,Y_test)*100,2)
print("The accuracy score achieved using Naive Bayes is: "+str(score_nb)+" %")
```
## Support Vector Machine
```
from sklearn import svm
sv = svm.SVC(kernel='linear')
sv.fit(X_train, Y_train)
Y_pred_svm = sv.predict(X_test)
score_svm = round(accuracy_score(Y_pred_svm,Y_test)*100,2)
print("The accuracy score achieved using Linear SVM is: "+str(score_svm)+" %")
```
## KNN
```
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(X_train,Y_train)
Y_pred_knn=knn.predict(X_test)
score_knn = round(accuracy_score(Y_pred_knn,Y_test)*100,2)
print("The accuracy score achieved using KNN is: "+str(score_knn)+" %")
```
## XG Boost
```
import xgboost as xgb
xgb_model = xgb.XGBClassifier(objective="binary:logistic", random_state=42)
xgb_model.fit(X_train, Y_train)
Y_pred_xgb = xgb_model.predict(X_test)
score_xgb = round(accuracy_score(Y_pred_xgb,Y_test)*100,2)
print("The accuracy score achieved using XGBoost is: "+str(score_xgb)+" %")
```
## Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
# Neural Network
```
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import Dense
model = Sequential()
model.add(Dense(11,activation='relu',input_dim=8))
model.add(Dense(1,activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
history = model.fit(X_train,Y_train, validation_data=(X_test, Y_test),epochs=200, batch_size=10)
import matplotlib.pyplot as plt
%matplotlib inline
# Model accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
# Model Loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
Y_pred_nn = model.predict(X_test)
rounded = [round(x[0]) for x in Y_pred_nn]
Y_pred_nn = rounded
score_nn = round(accuracy_score(Y_pred_nn,Y_test)*100,2)
print("The accuracy score achieved using Neural Network is: "+str(score_nn)+" %")
```
# Convolutional Neural Network
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten,BatchNormalization
from tensorflow.keras.layers import Conv1D, MaxPool1D
from tensorflow.keras.optimizers import Adam
print(tf.__version__)
X_train.shape
X_test.shape
x_train = X_train.reshape(614,8,1)
x_test = X_test.reshape(154,8,1)
epochs = 100
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=2, activation='relu', input_shape=(8,1)))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Conv1D(filters=32, kernel_size=2, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Conv1D(filters=32, kernel_size=2, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(64,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer=Adam(lr=0.00005),metrics=['accuracy'])
hists = model.fit(x_train, Y_train,validation_data=(x_test, Y_test), epochs=200, verbose=1)
# Model accuracy
plt.plot(hists.history['accuracy'])
plt.plot(hists.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
# Model Loss
plt.plot(hists.history['loss'])
plt.plot(hists.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
# Predicting the Test set results
y_pred_cnn = model.predict(x_test)
rounded = [round(x[0]) for x in y_pred_cnn]
Y_pred_cnn = rounded
score_cnn = round(accuracy_score(Y_pred_cnn,Y_test)*100,2)
print("The accuracy score achieved using Convolutional Neural Network is: "+str(score_cnn)+" %")
```
# Artificial Neural Network
```
import keras
from keras.models import Sequential
from keras.layers import Dense
# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first Hidden layer
classifier.add(Dense(activation="relu", input_dim=8, units=7, kernel_initializer="uniform"))
# Adding the output layer
classifier.add(Dense(activation="sigmoid", input_dim=8, units=1, kernel_initializer="uniform"))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the training set
hist = classifier.fit(X_train, Y_train,validation_data=(X_test, Y_test), batch_size=10, epochs=500)
# Model accuracy
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
# Model Loss
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
# Predicting the Test set results
y_pred_ann = classifier.predict(X_test)
rounded = [round(x[0]) for x in y_pred_ann]
Y_pred_ann = rounded
score_ann = round(accuracy_score(Y_pred_ann,Y_test)*100,2)
print("The accuracy score achieved using artificial Neural Network is: "+str(score_ann)+" %")
```
## model with best score
```
scores = [score_lr,score_nb,score_svm,score_knn,score_xgb,score_nn,score_ann,score_cnn]
algorithms = ["Logistic Regression","Naive Bayes","Support Vector Machine","K-Nearest Neighbors","XGBoost","Neural Network","Art. Neural Network","Conv. Neural Network"]
for i in range(len(algorithms)):
print("The accuracy score achieved using "+algorithms[i]+" is: "+str(scores[i])+" %")
sns.set(rc={'figure.figsize':(15,7)})
plt.xlabel("Algorithms")
plt.ylabel("Accuracy score")
sns.barplot(x=algorithms, y=scores)
```
```
# default_exp dl_101
```
# Deep learning 101 with Pytorch and fastai
> Some code and text snippets have been extracted from the book [\"Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD\"](https://course.fast.ai/), and from these blog posts [[ref1](https://muellerzr.github.io/fastblog/2021/02/14/Pytorchtofastai.html)].
```
#hide
from nbdev.showdoc import *
from fastcore.all import *
# export
import torch
from torch.utils.data import TensorDataset
import matplotlib.pyplot as plt
import torch.nn as nn
from wwdl.utils import *
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
device
```
## Linear regression model in Pytorch
### Datasets and Dataloaders
We'll create a dataset that contains $(x,y)$ pairs sampled from the linear function $y = ax + b + \epsilon$. To do this, we'll create a PyTorch `TensorDataset`.
A PyTorch tensor is nearly the same thing as a NumPy array. The vast majority of methods and operators supported by NumPy on these structures are also supported by PyTorch, but PyTorch tensors have additional capabilities. One major capability is that these structures can live on the GPU, in which case their computation will be optimized for the GPU and can run much faster (given lots of values to work on). In addition, PyTorch can automatically calculate derivatives of these operations, including combinations of operations. These two things are critical for deep learning
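For instance (a small illustrative sketch, not from the original text, reusing the `device` defined above), a tensor can be created from a NumPy array and moved to the GPU:
```
import numpy as np
import torch

a = np.array([1.0, 2.0, 3.0])
t = torch.from_numpy(a)   # tensor sharing memory with the NumPy array
t_gpu = t.to(device)      # copies the tensor to the GPU when one is available
print(t_gpu.device, t_gpu * 2)
```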
```
# export
def linear_function_dataset(a, b, n=100, show_plot=False):
r"""
Creates a Pytorch's `TensorDataset` with `n` random samples of the
    linear function y = `a`*x + `b`. `show_plot` decides whether or not to
plot the dataset
"""
x = torch.randn(n, 1)
y = a*x + b + 0.1*torch.randn(n, 1)
if show_plot:
show_TensorFunction1D(x, y, marker='.')
return TensorDataset(x, y)
a = 2
b = 3
n = 100
data = linear_function_dataset(a, b, n, show_plot=True)
test_eq(type(data), TensorDataset)
```
In every machine/deep learning experiment, we need to have at least two datasets:
- training: used to train the model
- validation: used to validate the model after each training step. It allows to detect overfitting and adjust the hyperparameters of the model properly
```
train_ds = linear_function_dataset(a, b, n=100, show_plot=True)
valid_ds = linear_function_dataset(a, b, n=20, show_plot=True)
```
A dataloader combines a dataset and a sampler that samples data into **batches**, and provides an iterable over the given dataset.
```
bs = 10
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=bs, shuffle=True)
valid_dl = torch.utils.data.DataLoader(valid_ds, batch_size=bs, shuffle=False)
for i, data in enumerate(train_dl, 1):
x, y = data
print(f'batch {i}: x={x.shape} ({x.device}), y={y.shape} ({y.device})')
```
### Defining a linear regression model in Pytorch
The class `torch.nn.Module` is the base structure for all models in Pytorch. It mostly helps to register all the trainable parameters. A module is an object of a class that inherits from the PyTorch `nn.Module` class.
To implement an `nn.Module` you just need to:
- Make sure the superclass `__init__` is called first when you initialize it.
- Define any parameters of the model as attributes with `nn.Parameter`. To tell `Module` that we want to treat a tensor as a parameter, we have to wrap it in the `nn.Parameter` class. All PyTorch modules use `nn.Parameter` for any trainable parameters. This class doesn't actually add any functionality (other than automatically setting `requires_grad=True`). It's only used as a "marker" to show what to include in parameters.
- Define a `forward` method that returns the output of your model.
```
#export
class LinRegModel(nn.Module):
def __init__(self):
super().__init__()
self.a = nn.Parameter(torch.randn(1))
self.b = nn.Parameter(torch.randn(1))
def forward(self, x): return self.a*x + self.b
model = LinRegModel()
pa, pb = model.parameters()
pa, pa.shape, pb, pb.shape
```
Objects of this class behave identically to standard Python functions, in that you can call them using parentheses and they will return the activations of a model.
```
x = torch.randn(10, 1)
out = model(x)
x, x.shape, out, out.shape
```
### Loss function and optimizer
The loss is the thing the machine is using as the measure of performance to decide how to update model parameters. For a regression problem the loss function is simple enough: we'll just use the Mean Squared Error (MSE).
```
loss_func = nn.MSELoss()
loss_func(x, out)
```
We have data, a model, and a loss function; we only need one more thing before we can fit a model, and that's an optimizer.
```
opt_func = torch.optim.SGD(model.parameters(), lr = 1e-3)
```
### Training loop
During training, we need to push our model and our batches to the GPU. Calling `cuda()` on a model or a tensor (or `.to(device)`, as below) puts all of its parameters on the GPU:
```
model = model.to(device)
```
To train a model, we will need to compute all the gradients of a given loss with respect to its parameters, which is known as the *backward pass*. The *forward pass* is where we compute the output of the model on a given input, based on the matrix products. PyTorch computes all the gradients we need with a magic call to `loss.backward`. The backward pass is the chain rule applied multiple times, computing the gradients from the output of our model and going back, one layer at a time.
In PyTorch, each basic function we need to differentiate is written as a `torch.autograd.Function` object that has a `forward` and a `backward` method. PyTorch will then keep track of any computation we do to be able to properly run the backward pass, unless we set the `requires_grad` attribute of our tensors to `False`.
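As a minimal illustration of this machinery (an added sketch, not part of the original notebook), we can differentiate a simple expression by hand and let autograd confirm the result:
```
import torch

# y = x**2 + 3*x, so dy/dx = 2*x + 3; at x = 2 the gradient should be 7
x = torch.tensor(2.0, requires_grad=True)
y = x**2 + 3*x
y.backward()      # computes dy/dx and stores it in x.grad
print(x.grad)     # tensor(7.)
```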
For minibatch gradient descent (the usual way of training in deep learning), we calculate gradients on batches. Before moving onto the next batch, we modify our model's parameters based on the gradients. For each iteration through our dataset (which would be called an **epoch**), the optimizer would perform as many updates as we have batches.
There are two important methods in a Pytorch optimizer:
- `zero_grad`: In PyTorch, we need to set the gradients to zero before starting to do backpropagation because PyTorch accumulates the gradients on subsequent backward passes. `zero_grad` just loops through the parameters of the model and sets the gradients to zero. It also calls `detach_`, which removes any history of gradient computation, since it won't be needed after `zero_grad`.
- `step`: updates each parameter using its stored gradient, following the optimizer's update rule (for plain SGD, `p -= lr * p.grad`).
```
n_epochs = 10
# export
def train(model, device, train_dl, loss_func, opt_func, epoch_idx):
r"""
Train `model` for one epoch, whose index is given in `epoch_idx`. The
training loop will iterate through all the batches of `train_dl`, using
the the loss function given in `loss_func` and the optimizer given in `opt_func`
"""
running_loss = 0.0
batches_processed = 0
for batch_idx, (x, y) in enumerate(train_dl, 1):
x, y = x.to(device), y.to(device) # Push data to GPU
opt_func.zero_grad() # Reset gradients
# Forward pass
output = model(x)
loss = loss_func(output, y)
# Backward pass
loss.backward()
# Optimizer step
opt_func.step()
# print statistics
running_loss += loss.item()
batches_processed += 1
    print(f'Train loss [Epoch {epoch_idx}]: {running_loss/batches_processed : .2f}')
for epoch in range(1, n_epochs+1):
train(model, device, train_dl, loss_func, opt_func, epoch)
```
We can see how the parameters of the regression model are getting closer to the truth values `a` and `b` from the linear function.
```
L(model.named_parameters())
```
### Validating the model
Validating the model requires only a forward pass; it's just inference. Disabling gradient calculation with `torch.no_grad()` is useful for inference, when you are sure that you will not call `Tensor.backward()`.
```
#export
def validate(model, device, dl):
running_loss = 0.
total_batches = 0
with torch.no_grad():
        for x, y in dl:
x, y = x.to(device), y.to(device)
output = model(x)
loss = loss_func(output, y)
running_loss += loss.item()
total_batches += 1
print(f'Valid loss: {running_loss/total_batches : .2f}')
validate(model, device, valid_dl)
```
In order to spot overfitting, it is useful to validate the model after each training epoch.
```
for epoch in range(1, n_epochs +1):
train(model, device, train_dl, loss_func, opt_func, epoch)
validate(model, device, valid_dl)
```
## Abstracting the manual training loop: moving from Pytorch to fastai
```
from fastai.basics import *
from fastai.callback.progress import ProgressCallback
```
We can entirely replace the custom training loop with fastai's. That means you can get rid of `train()`, `validate()`, and the epoch loop in the original code, and replace it all with a couple of lines.
fastai's training loop lives in a `Learner`. The Learner is the glue that merges everything together (Datasets, Dataloaders, model and optimizer) and enables to train by just calling a `fit` function.
fastai's `Learner` expects a `DataLoaders` object to be used, rather than simply one DataLoader, so let's make that. We could just do `dls = DataLoaders(train_dl, valid_dl)` to keep the PyTorch DataLoaders. However, by using fastai `DataLoader`s instead, created directly from the `TensorDataset` objects, we get some automation, such as automatically pushing the data to the GPU.
```
dls = DataLoaders.from_dsets(train_ds, valid_ds, bs=10)
learn = Learner(dls, model=LinRegModel(), loss_func=nn.MSELoss(), opt_func=SGD)
```
Now we have everything needed to do a basic `fit`:
```
learn.fit(10, lr=1e-3)
```
Having a Learner allows us to easily gather the model predictions for the validation set, which we can use for visualisation and analysis
```
inputs, preds, outputs = learn.get_preds(with_input=True)
inputs.shape, preds.shape, outputs.shape
show_TensorFunction1D(inputs, outputs, y_hat=preds, marker='.')
```
## Building a simple neural network
For the next example, we will create the dataset by sampling values from the nonlinear function $y(x) = -\frac{1}{100}x^7 - x^4 - 2x^2 - 4x + 1$
```
# export
def nonlinear_function_dataset(n=100, show_plot=False):
r"""
Creates a Pytorch's `TensorDataset` with `n` random samples of the
nonlinear function y = (-1/100)*x**7 -x**4 -2*x**2 -4*x + 1 with a bit
of noise. `show_plot` decides whether or not to plot the dataset
"""
x = torch.rand(n, 1)*20 - 10 # Random values between [-10 and 10]
y = (-1/100)*x**7 -x**4 -2*x**2 -4*x + 1 + 0.1*torch.randn(n, 1)
if show_plot:
show_TensorFunction1D(x, y, marker='.')
return TensorDataset(x, y)
n = 100
ds = nonlinear_function_dataset(n, show_plot=True)
x, y = ds.tensors
test_eq(x.shape, y.shape)
```
We will create the training and validation datasets, and build the DataLoaders with them, this time directly in fastai, using the `DataLoaders.from_dsets` method.
```
train_ds = nonlinear_function_dataset(n=1000)
valid_ds = nonlinear_function_dataset(n=200)
```
Normalization in deep learning is used to make optimization easier by smoothing the loss surface of the network. We will normalize the data based on the mean and std of the train dataset.
```
norm_mean = train_ds.tensors[1].mean()
norm_std = train_ds.tensors[1].std()
train_ds_norm = TensorDataset(train_ds.tensors[0],
(train_ds.tensors[1] - norm_mean)/norm_std)
valid_ds_norm = TensorDataset(valid_ds.tensors[0],
(valid_ds.tensors[1] - norm_mean)/norm_std)
dls = DataLoaders.from_dsets(train_ds_norm, valid_ds_norm, bs = 32)
```
We will build a Multi-Layer Perceptron with 3 hidden layers. These networks are also known as Feed-Forward Neural Networks. The layers of this type of network are known as Fully Connected Layers because, between every pair of subsequent layers, all the neurons are connected to each other.
<img alt="Neural network architecture" caption="Neural network" src="https://i.imgur.com/5ZWPtRS.png">
The easiest way of wrapping several layers in Pytorch is using the `nn.Sequential` module. It creates a module with a `forward` method that will call each of the listed layers or functions in turn, without us having to do the loop manually in the forward pass.
```
#export
class MLP3(nn.Module):
r"""
Multilayer perceptron with 3 hidden layers, with sizes `nh1`, `nh2` and
`nh3` respectively.
"""
def __init__(self, n_in=1, nh1=200, nh2=100, nh3=50, n_out=1):
super().__init__()
self.layers = nn.Sequential(
nn.Linear(n_in, nh1),
nn.ReLU(),
nn.Linear(nh1, nh2),
nn.ReLU(),
nn.Linear(nh2, nh3),
nn.ReLU(),
nn.Linear(nh3, n_out)
)
def forward(self, x): return self.layers(x)
x, y = dls.one_batch()
model = MLP3()
output = model(x)
output.shape
learn = Learner(dls, MLP3(), loss_func=nn.MSELoss(), opt_func=Adam)
learn.fit(10, lr=1e-3)
inputs, preds, outputs = learn.get_preds(with_input = True)
show_TensorFunction1D(inputs, outputs, y_hat=preds, marker='.')
```
Let's compare these results with the ones by our previous linear regression model
```
learn_lin = Learner(dls, LinRegModel(), loss_func=nn.MSELoss(), opt_func=Adam)
learn_lin.fit(20, lr=1e-3)
inputs, preds, outputs = learn_lin.get_preds(with_input = True)
show_TensorFunction1D(inputs, outputs, y_hat=preds, marker='.')
```
## Export
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
# CNN - Example 01
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
### Load Keras Dataset
```
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
```
#### Visualize data
```
print(x_train.shape)
single_image = x_train[0]
print(single_image.shape)
plt.imshow(single_image)
```
### Pre-Process data
#### One Hot encode
```
# Make it one hot encoded otherwise it will think as a regression problem on a continuous axis
from tensorflow.keras.utils import to_categorical
print("Shape before one hot encoding: " + str(y_train.shape))
y_example = to_categorical(y_train)
print(y_example)
print("Shape after one hot encoding: " + str(y_example.shape))
y_example[0]
y_cat_test = to_categorical(y_test,10)
y_cat_train = to_categorical(y_train,10)
```
#### Normalize the images
```
x_train = x_train/255
x_test = x_test/255
scaled_single = x_train[0]
plt.imshow(scaled_single)
```
#### Reshape the images
```
# Reshape to include channel dimension (in this case, 1 channel)
# x_train.shape
x_train = x_train.reshape(60000, 28, 28, 1)
x_test = x_test.reshape(10000,28,28,1)
```
### Image data augmentation
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
help(ImageDataGenerator)
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True)
datagen.fit(x_train)
it = datagen.flow(x_train, y_cat_train, batch_size=32)
# Preparing the Samples and Plot for displaying output
for i in range(9):
# preparing the subplot
plt.subplot(330 + 1 + i)
# generating images in batches
batch = it.next()
# Remember to convert these images to unsigned integers for viewing
image = batch[0][0].astype('uint8')
# Plotting the data
plt.imshow(image)
# Displaying the figure
plt.show()
```
### Model # 1
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPool2D, Flatten
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(4,4), input_shape=(28, 28, 1), activation='relu',))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
```
Note: if `y` is not one-hot encoded, use `loss='sparse_categorical_crossentropy'` instead.
```
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy', 'categorical_accuracy'])
# we can add in additional metrics https://keras.io/metrics/
model.summary()
```
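As noted above, here is a hedged sketch of the sparse-loss alternative (kept commented out so it does not overwrite the model compiled above): with integer labels you would switch the loss and pass `y_train`/`y_test` directly instead of the one-hot versions.
```
# Sketch only -- not executed in this notebook.
# model.compile(loss='sparse_categorical_crossentropy',
#               optimizer='adam',
#               metrics=['accuracy'])
# history = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10)
```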
#### Add Early Stopping
```
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', patience=2)
```
##### Training using one hot encoding
```
# fits the model on batches with real-time data augmentation:
history = model.fit(datagen.flow(x_train, y_cat_train, batch_size=32),
epochs=10,
steps_per_epoch=len(x_train) / 32,
validation_data=(x_test,y_cat_test),
callbacks=[early_stop])
```
#### Save model
```
from tensorflow.keras.models import load_model
model_file = 'D:\\Sandbox\\Github\\MODELS\\' + '01_mnist.h5'
model.save(model_file)
```
#### Retrieve model
```
model = load_model(model_file)
```
#### Evaluate
Rule of thumb
1. High Bias accuracy = 80% val-accuracy = 78% (2% gap)
2. High Variance accuracy = 98% val-accuracy = 80% (18% gap)
3. High Bias and High Variance accuracy = 80% val-accuracy = 60% (20% gap)
4. Low Bias and Low Variance accuracy = 98% val-accuracy = 96% (2% gap)
#### Eval - Train
```
model.metrics_names
pd.DataFrame(history.history).head()
#pd.DataFrame(model.history.history).head()
```
```
pd.DataFrame(history.history).plot()
losses = pd.DataFrame(history.history)
losses[['loss','val_loss']].plot()
losses[['accuracy','val_accuracy']].plot()
# Plot loss per iteration
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend()
# Plot accuracy per iteration
plt.plot(history.history['accuracy'], label='acc')
plt.plot(history.history['val_accuracy'], label='val_acc')
plt.legend()
```
#### Eval - Test
```
test_metrics = model.evaluate(x_test,y_cat_test,verbose=1)
print('Loss on test dataset:', test_metrics[0])
print('Accuracy on test dataset:', test_metrics[1])
print("Loss and Accuracy on Train dataset:")
pd.DataFrame(history.history).tail(1)
```
As it turns out, the accuracy on the test dataset is smaller than the accuracy on the training dataset.
This is completely normal, since the model was trained on the training data.
When the model sees images it has never seen during training (that is, images from the test set),
we can expect performance to go down.
#### Prediction
```
y_prediction = np.argmax(model.predict(x_test), axis=-1)
```
#### Reports
```
from sklearn.metrics import classification_report,confusion_matrix
print(classification_report(y_test, y_prediction))
print(confusion_matrix(y_test, y_prediction))
```
Recall (sensitivity): matters when false negatives are costly, e.g. fraud detection, where you want to catch every real fraud case (minimise FN).
Precision: matters when false positives are costly, e.g. sentiment analysis, where you want the items flagged as positive to really be positive (minimise FP).
F1 score: harmonic mean of precision and recall; higher is better when comparing two or more models.
Accuracy: higher is better.
Error: 1 - accuracy.
Ideally we want both precision and recall to be 1, but in practice there is a trade-off between them.
```
import seaborn as sns
plt.figure(figsize=(10,6))
sns.heatmap(confusion_matrix(y_test,y_prediction),annot=True)
```
#### Predictions go wrong!
```
# Show some misclassified examples
misclassified_idx = np.where(y_prediction != y_test)[0]
i = np.random.choice(misclassified_idx)
plt.imshow(x_test[i].reshape(28,28), cmap='gray')
plt.title("True label: %s Predicted: %s" % (y_test[i], y_prediction[i]));
```
#### Final thoughts
Rule of thumb
1. High Bias accuracy = 80% val-accuracy = 78% (2% gap)
2. High Variance accuracy = 98% val-accuracy = 80% (18% gap)
3. High Bias and High Variance accuracy = 80% val-accuracy = 60% (20% gap)
4. Low Bias and Low Variance accuracy = 98% val-accuracy = 96% (2% gap)
```
print("Percentage of wrong predictions : " + str(len(misclassified_idx)/len(y_prediction)*100) + " %")
print("Models maximum accuracy : " + str(np.max(history.history['accuracy'])*100) + " %")
print("Models maximum validation accuracy : " + str(np.max(history.history['val_accuracy'])*100) + " %")
```
The model has low bias and high variance, with more than a 29% gap between training and validation accuracy. The recall is also poor. Image augmentation
doesn't help here: augmentation with rotation and tilting doesn't help because each digit is a unique shape whose orientation matters.
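A quick way to quantify the gap referred to above (an added snippet, reusing `history` and `np` from this notebook):
```
train_best = np.max(history.history['accuracy'])
val_best = np.max(history.history['val_accuracy'])
print("Train/validation accuracy gap: %.2f %%" % ((train_best - val_best) * 100))
```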
# Convolutional Neural Network Example with Per-Layer Visualization
```
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
%matplotlib inline
print ("Current TensorFlow version: [%s]" % (tf.__version__))
print ("All packages loaded")
```
## Load MNIST
```
mnist = input_data.read_data_sets('data/', one_hot=True)
trainimg = mnist.train.images
trainlabel = mnist.train.labels
testimg = mnist.test.images
testlabel = mnist.test.labels
print ("MNIST ready")
```
## Define the Model
```
# NETWORK TOPOLOGIES
n_input = 784
n_channel = 64
n_classes = 10
# INPUTS AND OUTPUTS
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# NETWORK PARAMETERS
stddev = 0.1
weights = {
'c1': tf.Variable(tf.random_normal([7, 7, 1, n_channel], stddev=stddev)),
'd1': tf.Variable(tf.random_normal([14*14*64, n_classes], stddev=stddev))
}
biases = {
'c1': tf.Variable(tf.random_normal([n_channel], stddev=stddev)),
'd1': tf.Variable(tf.random_normal([n_classes], stddev=stddev))
}
print ("NETWORK READY")
```
## Define the Graph
```
# MODEL
def CNN(_x, _w, _b):
# RESHAPE
_x_r = tf.reshape(_x, shape=[-1, 28, 28, 1])
# CONVOLUTION
_conv1 = tf.nn.conv2d(_x_r, _w['c1'], strides=[1, 1, 1, 1], padding='SAME')
# ADD BIAS
_conv2 = tf.nn.bias_add(_conv1, _b['c1'])
# RELU
_conv3 = tf.nn.relu(_conv2)
# MAX-POOL
_pool = tf.nn.max_pool(_conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# VECTORIZE
_dense = tf.reshape(_pool, [-1, _w['d1'].get_shape().as_list()[0]])
# DENSE
_logit = tf.add(tf.matmul(_dense, _w['d1']), _b['d1'])
_out = {
'x_r': _x_r, 'conv1': _conv1, 'conv2': _conv2, 'conv3': _conv3
, 'pool': _pool, 'dense': _dense, 'logit': _logit
}
return _out
# PREDICTION
cnnout = CNN(x, weights, biases)
# LOSS AND OPTIMIZER
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels=y, logits=cnnout['logit']))
optm = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
corr = tf.equal(tf.argmax(cnnout['logit'], 1), tf.argmax(y, 1))
accr = tf.reduce_mean(tf.cast(corr, "float"))
# INITIALIZER
init = tf.global_variables_initializer()
print ("FUNCTIONS READY")
```
## Saving
```
savedir = "nets/cnn_mnist_simple/"
saver = tf.train.Saver(max_to_keep=3)
save_step = 4
if not os.path.exists(savedir):
os.makedirs(savedir)
print ("SAVER READY")
```
## Run
```
# PARAMETERS
training_epochs = 20
batch_size = 100
display_step = 4
# LAUNCH THE GRAPH
sess = tf.Session()
sess.run(init)
# OPTIMIZE
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# ITERATION
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
feeds = {x: batch_xs, y: batch_ys}
sess.run(optm, feed_dict=feeds)
avg_cost += sess.run(cost, feed_dict=feeds)
avg_cost = avg_cost / total_batch
# DISPLAY
if (epoch+1) % display_step == 0:
print ("Epoch: %03d/%03d cost: %.9f" % (epoch+1, training_epochs, avg_cost))
feeds = {x: batch_xs, y: batch_ys}
train_acc = sess.run(accr, feed_dict=feeds)
print ("TRAIN ACCURACY: %.3f" % (train_acc))
feeds = {x: mnist.test.images, y: mnist.test.labels}
test_acc = sess.run(accr, feed_dict=feeds)
print ("TEST ACCURACY: %.3f" % (test_acc))
# SAVE
if (epoch+1) % save_step == 0:
savename = savedir+"net-"+str(epoch+1)+".ckpt"
saver.save(sess, savename)
print ("[%s] SAVED." % (savename))
print ("OPTIMIZATION FINISHED")
```
## Restore
```
do_restore = 0
if do_restore == 1:
sess = tf.Session()
epoch = 20
savename = savedir+"net-"+str(epoch)+".ckpt"
saver.restore(sess, savename)
print ("NETWORK RESTORED")
else:
print ("DO NOTHING")
```
## How the CNN Works
```
input_r = sess.run(cnnout['x_r'], feed_dict={x: trainimg[0:1, :]})
conv1 = sess.run(cnnout['conv1'], feed_dict={x: trainimg[0:1, :]})
conv2 = sess.run(cnnout['conv2'], feed_dict={x: trainimg[0:1, :]})
conv3 = sess.run(cnnout['conv3'], feed_dict={x: trainimg[0:1, :]})
pool = sess.run(cnnout['pool'], feed_dict={x: trainimg[0:1, :]})
dense = sess.run(cnnout['dense'], feed_dict={x: trainimg[0:1, :]})
out = sess.run(cnnout['logit'], feed_dict={x: trainimg[0:1, :]})
```
## Input
```
print ("Size of 'input_r' is %s" % (input_r.shape,))
label = np.argmax(trainlabel[0, :])
print ("Label is %d" % (label))
# PLOT
plt.matshow(input_r[0, :, :, 0], cmap=plt.get_cmap('gray'))
plt.title("Label of this image is " + str(label) + "")
plt.colorbar()
plt.show()
```
# CONV (Convolution Layer)
```
print ("SIZE OF 'CONV1' IS %s" % (conv1.shape,))
for i in range(3):
plt.matshow(conv1[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv1")
plt.colorbar()
plt.show()
```
## CONV + BIAS
```
print ("SIZE OF 'CONV2' IS %s" % (conv2.shape,))
for i in range(3):
plt.matshow(conv2[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv2")
plt.colorbar()
plt.show()
```
## CONV + BIAS + RELU
```
print ("SIZE OF 'CONV3' IS %s" % (conv3.shape,))
for i in range(3):
plt.matshow(conv3[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv3")
plt.colorbar()
plt.show()
```
## POOL
```
print ("SIZE OF 'POOL' IS %s" % (pool.shape,))
for i in range(3):
plt.matshow(pool[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th pool")
plt.colorbar()
plt.show()
```
## DENSE
```
print ("SIZE OF 'DENSE' IS %s" % (dense.shape,))
print ("SIZE OF 'OUT' IS %s" % (out.shape,))
plt.matshow(out, cmap=plt.get_cmap('gray'))
plt.title("OUT")
plt.colorbar()
plt.show()
```
## CONVOLUTION FILTER (Convolution Kernels)
```
wc1 = sess.run(weights['c1'])
print ("SIZE OF 'WC1' IS %s" % (wc1.shape,))
for i in range(3):
plt.matshow(wc1[:, :, 0, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv filter")
plt.colorbar()
plt.show()
```
```
%run startup.py
%%javascript
$.getScript('./assets/js/ipython_notebook_toc.js')
```
# A Decision Tree of Observable Operators
## Part 1: NEW Observables.
> source: http://reactivex.io/documentation/operators.html#tree.
> (transcribed to RxPY 1.5.7, Py2.7 / 2016-12, Gunther Klessinger, [axiros](http://www.axiros.com))
**This tree can help you find the ReactiveX Observable operator you’re looking for.**
<h2 id="tocheading">Table of Contents</h2>
<div id="toc"></div>
## Usage
There are no configured behind the scenes imports or code except [`startup.py`](./edit/startup.py), which defines output helper functions, mainly:
- `rst, reset_start_time`: resets a global timer, in order to have use cases starting from 0.
- `subs(observable)`: subscribes to an observable, printing notifications with time, thread, value
All other code is explicitly given in the notebook.
Since all initialisation of tools is in the first cell, you always have to run the first cell after an IPython kernel restart.
**All other cells are autonomous.**
In the use case functions, in contrast to the official examples we simply use **`rand`** quite often (mapped to `randint(0, 100)`), to demonstrate when/how often observable sequences are generated and when their result is buffered for various subscribers.
*When in doubt then run the cell again, you might have been "lucky" and got the same random.*
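For orientation, a rough, illustrative sketch of what such helpers could look like (the real definitions live in `startup.py` and differ in detail; `O` is the RxPY 1.x `Observable` class used throughout):
```
# Illustrative sketch only -- not the actual startup.py.
import time, random, threading
from rx import Observable as O

_t0 = [time.time()]

def reset_start_time(*ops):          # alias: rst
    """Reset the global timer (optionally after printing operator docs)."""
    _t0[0] = time.time()

rst = reset_start_time

def rand():
    return random.randint(0, 100)

def log(*msg):
    dt = (time.time() - _t0[0]) * 1000
    print('%.1f %s %s' % (dt, threading.current_thread().name,
                          ' '.join(str(m) for m in msg)))

def subs(stream, name='observer'):
    """Subscribe to `stream`, printing each notification with time and thread."""
    return stream.subscribe(
        on_next=lambda v: log('[next]', name, v),
        on_error=lambda e: log('[err ]', name, e),
        on_completed=lambda: log('[cmpl]', name, 'fin'))
```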
### RxJS
The (bold printed) operator functions are linked to the [official documentation](http://reactivex.io/documentation/operators.html#tree) and created roughly analogous to the **RxJS** examples. The rest of the TOC lines links to anchors within the notebooks.
### Output
When the output is not in marble format we display it like so:
```
new subscription on stream 276507289
3.4 M [next] 1.4: {'answer': 42}
3.5 T1 [cmpl] 1.6: fin
```
where the lines are synchronously `print`ed as they happen. "M" and "T1" would be thread names ("M" is the main thread).
For each use case, `reset_start_time()` (alias `rst`) resets a global timer to 0, and we show the offset to it in *milliseconds* (one decimal), together with the offset to the start of the stream subscription. In the example, 3.4 and 3.5 are millis since the global counter reset, while 1.4 and 1.6 are offsets from the start of the subscription.
# I want to create a **NEW** Observable...
## ... that emits a particular item: **[just](http://reactivex.io/documentation/operators/just.html) **
```
reset_start_time(O.just)
stream = O.just({'answer': rand()})
disposable = subs(stream)
sleep(0.5)
disposable = subs(stream) # same answer
# all stream ops work, its a real stream:
disposable = subs(stream.map(lambda x: x.get('answer', 0) * 2))
```
## ..that was returned from a function *called at subscribe-time*: **[start](http://reactivex.io/documentation/operators/start.html)**
```
print('There is a little API difference to RxJS, see Remarks:\n')
rst(O.start)
def f():
log('function called')
return rand()
stream = O.start(func=f)
d = subs(stream)
d = subs(stream)
header("Exceptions are handled correctly (an observable should never except):")
def breaking_f():
return 1 / 0
stream = O.start(func=breaking_f)
d = subs(stream)
d = subs(stream)
# startasync: only in python3 and possibly here(?) http://www.tornadoweb.org/en/stable/concurrent.html#tornado.concurrent.Future
#stream = O.start_async(f)
#d = subs(stream)
```
## ..that was returned from an Action, Callable, Runnable, or something of that sort, called at subscribe-time: **[from](http://reactivex.io/documentation/operators/from.html)**
```
rst(O.from_iterable)
def f():
log('function called')
return rand()
# aliases: O.from_, O.from_list
# 1.: From a tuple:
stream = O.from_iterable((1,2,rand()))
d = subs(stream)
# d = subs(stream) # same result
# 2. from a generator
gen = (rand() for j in range(3))
stream = O.from_iterable(gen)
d = subs(stream)
rst(O.from_callback)
# in my words: In the on_next of the subscriber you'll have the original arguments,
# potentially objects, e.g. user original http requests.
# i.e. you could merge those with the result stream of a backend call to
# a webservice or db and send the request.response back to the user then.
def g(f, a, b):
f(a, b)
log('called f')
stream = O.from_callback(lambda a, b, f: g(f, a, b))('fu', 'bar')
d = subs(stream.delay(200))
# d = subs(stream.delay(200)) # does NOT work
```
## ...after a specified delay: **[timer](http://reactivex.io/documentation/operators/timer.html)**
```
rst()
# start a stream of 0, 1, 2, .. after 200 ms, with a delay of 100 ms:
stream = O.timer(200, 100).time_interval()\
.map(lambda x: 'val:%s dt:%s' % (x.value, x.interval))\
.take(3)
d = subs(stream, name='observer1')
# intermix directly with another one
d = subs(stream, name='observer2')
```
## ...that emits a sequence of items repeatedly: **[repeat](http://reactivex.io/documentation/operators/repeat.html) **
```
rst(O.repeat)
# repeat is over *values*, not function calls. Use generate or create for function calls!
subs(O.repeat({'rand': time.time()}, 3))
header('do while:')
l = []
def condition(x):
l.append(1)
return True if len(l) < 2 else False
stream = O.just(42).do_while(condition)
d = subs(stream)
```
## ...from scratch, with custom logic and cleanup (calling a function again and again): **[create](http://reactivex.io/documentation/operators/create.html) **
```
rx = O.create
rst(rx)
def f(obs):
# this function is called for every observer
obs.on_next(rand())
obs.on_next(rand())
obs.on_completed()
def cleanup():
log('cleaning up...')
return cleanup
stream = O.create(f).delay(200) # the delay causes the cleanup called before the subs gets the vals
d = subs(stream)
d = subs(stream)
sleep(0.5)
rst(title='Exceptions are handled nicely')
l = []
def excepting_f(obs):
for i in range(3):
l.append(1)
obs.on_next('%s %s (observer hash: %s)' % (i, 1. / (3 - len(l)), hash(obs) ))
obs.on_completed()
stream = O.create(excepting_f)
d = subs(stream)
d = subs(stream)
rst(title='Feature or Bug?')
print('(where are the first two values?)')
l = []
def excepting_f(obs):
for i in range(3):
l.append(1)
obs.on_next('%s %s (observer hash: %s)' % (i, 1. / (3 - len(l)), hash(obs) ))
obs.on_completed()
stream = O.create(excepting_f).delay(100)
d = subs(stream)
d = subs(stream)
# I think its an (amazing) feature, preventing to process functions results of later(!) failing functions
rx = O.generate
rst(rx)
"""The basic form of generate takes four parameters:
the first item to emit
a function to test an item to determine whether to emit it (true) or terminate the Observable (false)
a function to generate the next item to test and emit based on the value of the previous item
a function to transform items before emitting them
"""
def generator_based_on_previous(x): return x + 1.1
def doubler(x): return 2 * x
d = subs(rx(0, lambda x: x < 4, generator_based_on_previous, doubler))
rx = O.generate_with_relative_time
rst(rx)
stream = rx(1, lambda x: x < 4, lambda x: x + 1, lambda x: x, lambda t: 100)
d = subs(stream)
```
## ...for each observer that subscribes OR according to a condition at subscription time: **[defer / if_then](http://reactivex.io/documentation/operators/defer.html) **
```
rst(O.defer)
# plural! (unique per subscription)
streams = O.defer(lambda: O.just(rand()))
d = subs(streams)
d = subs(streams) # gets other values - created by subscription!
# evaluating a condition at subscription time in order to decide which of two streams to take.
rst(O.if_then)
cond = True
def should_run():
return cond
streams = O.if_then(should_run, O.return_value(43), O.return_value(56))
d = subs(streams)
log('condition will now evaluate falsy:')
cond = False
streams = O.if_then(should_run, O.return_value(43), O.return_value(rand()))
d = subs(streams)
d = subs(streams)
```
## ...that emits a sequence of integers: **[range](http://reactivex.io/documentation/operators/range.html) **
```
rst(O.range)
d = subs(O.range(0, 3))
```
### ...at particular intervals of time: **[interval](http://reactivex.io/documentation/operators/interval.html) **
(you can `.publish()` it to get an easy "hot" observable)
```
rst(O.interval)
d = subs(O.interval(100).time_interval()\
.map(lambda x, v: '%(interval)s %(value)s' \
% ItemGetter(x)).take(3))
```
### ...after a specified delay (see timer)
## ...that completes without emitting items: **[empty](http://reactivex.io/documentation/operators/empty-never-throw.html) **
```
rst(O.empty)
d = subs(O.empty())
```
## ...that does nothing at all: **[never](http://reactivex.io/documentation/operators/empty-never-throw.html) **
```
rst(O.never)
d = subs(O.never())
```
## ...that excepts: **[throw](http://reactivex.io/documentation/operators/empty-never-throw.html) **
```
rst(O.on_error)
d = subs(O.on_error(ZeroDivisionError))
```
## Programming Exercise 1 - Linear Regression
- [warmUpExercise](#warmUpExercise)
- [Linear regression with one variable](#Linear-regression-with-one-variable)
- [Gradient Descent](#Gradient-Descent)
```
# %load ../../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from mpl_toolkits.mplot3d import axes3d
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 150)
pd.set_option('display.max_seq_items', None)
#%config InlineBackend.figure_formats = {'pdf',}
%matplotlib inline
import seaborn as sns
sns.set_context('notebook')
sns.set_style('white')
```
#### warmUpExercise
```
def warmUpExercise():
return(np.identity(5))
warmUpExercise()
```
### Linear regression with one variable
```
data = np.loadtxt('data/ex1data1.txt', delimiter=',')
X = np.c_[np.ones(data.shape[0]),data[:,0]]
y = np.c_[data[:,1]]
plt.scatter(X[:,1], y, s=30, c='r', marker='x', linewidths=1)
plt.xlim(4,24)
plt.xlabel('Population of City in 10,000s')
plt.ylabel('Profit in $10,000s');
```
#### Gradient Descent
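For reference, the cost and update rule implemented in the code below are the standard ones for univariate linear regression with hypothesis $h_\theta(x) = \theta_0 + \theta_1 x$:

$$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$$

$$\theta_j := \theta_j - \alpha \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} \quad \text{(updated simultaneously for } j = 0, 1\text{)}$$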
```
def computeCost(X, y, theta=[[0],[0]]):
m = y.size
J = 0
h = X.dot(theta)
J = 1/(2*m)*np.sum(np.square(h-y))
return(J)
computeCost(X,y)
def gradientDescent(X, y, theta=[[0],[0]], alpha=0.01, num_iters=1500):
m = y.size
J_history = np.zeros(num_iters)
for iter in np.arange(num_iters):
h = X.dot(theta)
theta = theta - alpha*(1/m)*(X.T.dot(h-y))
J_history[iter] = computeCost(X, y, theta)
return(theta, J_history)
# theta for minimized cost J
theta , Cost_J = gradientDescent(X, y)
print('theta: ',theta.ravel())
plt.plot(Cost_J)
plt.ylabel('Cost J')
plt.xlabel('Iterations');
xx = np.arange(5,23)
yy = theta[0]+theta[1]*xx
# Plot gradient descent
plt.scatter(X[:,1], y, s=30, c='r', marker='x', linewidths=1)
plt.plot(xx,yy, label='Linear regression (Gradient descent)')
# Compare with Scikit-learn Linear regression
regr = LinearRegression()
regr.fit(X[:,1].reshape(-1,1), y.ravel())
plt.plot(xx, regr.intercept_+regr.coef_*xx, label='Linear regression (Scikit-learn GLM)')
plt.xlim(4,24)
plt.xlabel('Population of City in 10,000s')
plt.ylabel('Profit in $10,000s')
plt.legend(loc=4);
# Predict profit for a city with population of 35000 and 70000
print(theta.T.dot([1, 3.5])*10000)
print(theta.T.dot([1, 7])*10000)
# Create grid coordinates for plotting
B0 = np.linspace(-10, 10, 50)
B1 = np.linspace(-1, 4, 50)
xx, yy = np.meshgrid(B0, B1, indexing='xy')
Z = np.zeros((B0.size,B1.size))
# Calculate Z-values (Cost) based on grid of coefficients
for (i,j),v in np.ndenumerate(Z):
Z[i,j] = computeCost(X,y, theta=[[xx[i,j]], [yy[i,j]]])
fig = plt.figure(figsize=(15,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122, projection='3d')
# Left plot
CS = ax1.contour(xx, yy, Z, np.logspace(-2, 3, 20), cmap=plt.cm.jet)
ax1.scatter(theta[0],theta[1], c='r')
# Right plot
ax2.plot_surface(xx, yy, Z, rstride=1, cstride=1, alpha=0.6, cmap=plt.cm.jet)
ax2.set_zlabel('Cost')
ax2.set_zlim(Z.min(),Z.max())
ax2.view_init(elev=15, azim=230)
# settings common to both plots
for ax in fig.axes:
ax.set_xlabel(r'$\theta_0$', fontsize=17)
ax.set_ylabel(r'$\theta_1$', fontsize=17)
```
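As a quick cross-check (a small sketch using the `X`, `y`, and `theta` defined above; it is not part of the original exercise), the closed-form normal-equation solution should land very close to the gradient-descent estimate:
```
# Normal equation: theta = (X^T X)^+ X^T y, compared against the gradient-descent result
theta_closed_form = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(y)
print('normal equation :', theta_closed_form.ravel())
print('gradient descent:', theta.ravel())
```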
# Basic Examples with Different Protocols
## Prerequisites
* A kubernetes cluster with kubectl configured
* curl
* grpcurl
* pygmentize
## Examples
* [Seldon Protocol](#Seldon-Protocol-Model)
* [Tensorflow Protocol](#Tensorflow-Protocol-Model)
* [KFServing V2 Protocol](#KFServing-V2-Protocol-Model)
## Setup Seldon Core
Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html) to setup Seldon Core with an ingress - either Ambassador or Istio.
Then port-forward to that ingress on localhost:8003 in a separate terminal either with:
* Ambassador: `kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080`
* Istio: `kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80`
```
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
import json
import time
from IPython.core.magic import register_line_cell_magic
@register_line_cell_magic
def writetemplate(line, cell):
with open(line, 'w') as f:
f.write(cell.format(**globals()))
VERSION=!cat ../version.txt
VERSION=VERSION[0]
VERSION
```
## Seldon Protocol Model
We will deploy a REST model that uses the SELDON Protocol namely by specifying the attribute `protocol: seldon`
```
%%writetemplate resources/model_seldon.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: example-seldon
spec:
protocol: seldon
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier:{VERSION}
name: classifier
graph:
name: classifier
type: MODEL
name: model
replicas: 1
!kubectl apply -f resources/model_seldon.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example-seldon -o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep example-seldon -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
X=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/example-seldon/api/v1.0/predictions \
-H "Content-Type: application/json"
d=json.loads(X[0])
print(d)
assert(d["data"]["ndarray"][0][0] > 0.4)
X=!cd ../executor/proto && grpcurl -d '{"data":{"ndarray":[[1.0,2.0,5.0]]}}' \
-rpc-header seldon:example-seldon -rpc-header namespace:seldon \
-plaintext \
-proto ./prediction.proto 0.0.0.0:8003 seldon.protos.Seldon/Predict
d=json.loads("".join(X))
print(d)
assert(d["data"]["ndarray"][0][0] > 0.4)
!kubectl delete -f resources/model_seldon.yaml
```
## Tensorflow Protocol Model
We will deploy a model that uses the TENSORFLOW Protocol namely by specifying the attribute `protocol: tensorflow`
```
%%writefile resources/model_tfserving.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: example-tfserving
spec:
protocol: tensorflow
predictors:
- componentSpecs:
- spec:
containers:
- args:
- --port=8500
- --rest_api_port=8501
- --model_name=halfplustwo
- --model_base_path=gs://seldon-models/tfserving/half_plus_two
image: tensorflow/serving
name: halfplustwo
ports:
- containerPort: 8501
name: http
protocol: TCP
- containerPort: 8500
name: grpc
protocol: TCP
graph:
name: halfplustwo
type: MODEL
endpoint:
httpPort: 8501
grpcPort: 8500
name: model
replicas: 1
!kubectl apply -f resources/model_tfserving.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example-tfserving \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep example-tfserving -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
X=!curl -s -d '{"instances": [1.0, 2.0, 5.0]}' \
-X POST http://localhost:8003/seldon/seldon/example-tfserving/v1/models/halfplustwo/:predict \
-H "Content-Type: application/json"
d=json.loads("".join(X))
print(d)
assert(d["predictions"][0] == 2.5)
X=!cd ../executor/proto && grpcurl \
-d '{"model_spec":{"name":"halfplustwo"},"inputs":{"x":{"dtype": 1, "tensor_shape": {"dim":[{"size": 3}]}, "floatVal" : [1.0, 2.0, 3.0]}}}' \
-rpc-header seldon:example-tfserving -rpc-header namespace:seldon \
-plaintext -proto ./prediction_service.proto \
0.0.0.0:8003 tensorflow.serving.PredictionService/Predict
d=json.loads("".join(X))
print(d)
assert(d["outputs"]["x"]["floatVal"][0] == 2.5)
!kubectl delete -f resources/model_tfserving.yaml
```
## KFServing V2 Protocol Model
We will deploy a REST model that uses the KFServing V2 Protocol namely by specifying the attribute `protocol: kfserving`
```
%%writefile resources/model_v2.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
name: triton
spec:
protocol: kfserving
predictors:
- graph:
children: []
implementation: TRITON_SERVER
modelUri: gs://seldon-models/trtis/simple-model
name: simple
name: simple
replicas: 1
!kubectl apply -f resources/model_v2.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=triton -o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep triton -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
X=!curl -s -d '{"inputs":[{"name":"INPUT0","data":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"datatype":"INT32","shape":[1,16]},{"name":"INPUT1","data":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"datatype":"INT32","shape":[1,16]}]}' \
-X POST http://0.0.0.0:8003/seldon/seldon/triton/v2/models/simple/infer \
-H "Content-Type: application/json"
d=json.loads(X[0])
print(d)
assert(d["outputs"][0]["data"][0]==2)
X=!cd ../executor/api/grpc/kfserving/inference && \
grpcurl -d '{"model_name":"simple","inputs":[{"name":"INPUT0","contents":{"int_contents":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]},"datatype":"INT32","shape":[1,16]},{"name":"INPUT1","contents":{"int_contents":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]},"datatype":"INT32","shape":[1,16]}]}' \
-plaintext -proto ./grpc_service.proto \
-rpc-header seldon:triton -rpc-header namespace:seldon \
0.0.0.0:8003 inference.GRPCInferenceService/ModelInfer
X="".join(X)
print(X)
!kubectl delete -f resources/model_v2.yaml
```
# Predicting Heart Disease using Machine Learning
This notebook uses various Python based machine learning and data science libraries in an attempt to build a machine learning model capable of predicting whether or not someone has a Heart Disease based on their medical attributes.
We're going to take the following approach:
1. [Problem Definition](#definition)
2. [Data](#data)
3. [Evaluation](#evaluation)
4. [Features](#features)
5. [Modelling](#modelling)
6. [Experimentation](#experimentation)
## <a name="definition">1. Problem Definition</a>
In a statement,
> Given clinical parameters about a patient, can we predict whether or not they have heart disease?
## <a name="data">2. Data</a>
[Heart Disease UCI - Original Version](https://archive.ics.uci.edu/ml/datasets/heart+disease)
[Heart Disease UCI - Kaggle Version](https://www.kaggle.com/ronitf/heart-disease-uci)
## <a name="evaluation">3.Evaluation</a>
> If we can reach 95% of accuracy at predicting whether or not a patient has heart disease during the proof of concept, we'll pursue the project.
## <a name="features">4.Features</a>
The following are the features we'll use to predict our target variable (heart disease or no heart disease).
1. age - age in years
2. sex - (1 = male; 0 = female)
3. cp - chest pain type
* 0: Typical angina: chest pain related to decreased blood supply to the heart
* 1: Atypical angina: chest pain not related to heart
* 2: Non-anginal pain: typically esophageal spasms (non heart related)
* 3: Asymptomatic: chest pain not showing signs of disease
4. trestbps - resting blood pressure (in mm Hg on admission to the hospital)
* anything above 130-140 is typically cause for concern
5. chol - serum cholestoral in mg/dl
* serum = LDL + HDL + .2 * triglycerides
* above 200 is cause for concern
6. fbs - (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)
* '>126' mg/dL signals diabetes
7. restecg - resting electrocardiographic results
* 0: Nothing to note
* 1: ST-T Wave abnormality
- can range from mild symptoms to severe problems
- signals non-normal heart beat
* 2: Possible or definite left ventricular hypertrophy
- Enlarged heart's main pumping chamber
8. thalach - maximum heart rate achieved
9. exang - exercise induced angina (1 = yes; 0 = no)
10. oldpeak - ST depression induced by exercise relative to rest
* looks at stress of heart during exercise
* unhealthy heart will stress more
11. slope - the slope of the peak exercise ST segment
* 0: Upsloping: better heart rate with exercise (uncommon)
* 1: Flatsloping: minimal change (typical healthy heart)
* 2: Downsloping: signs of unhealthy heart
12. ca - number of major vessels (0-3) colored by flourosopy
* colored vessel means the doctor can see the blood passing through
* the more blood movement the better (no clots)
13. thal - thalium stress result
* 1,3: normal
* 6: fixed defect: used to be defect but ok now
* 7: reversible defect: no proper blood movement when exercising
14. target - have disease or not (1=yes, 0=no) (= the predicted attribute)
**Note:** No personally identifiable information (PII) can be found in the dataset.
```
# Regular EDA and plotting libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Models from scikit-learn
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
# Model Evaluations
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import precision_score, recall_score, f1_score
from sklearn.metrics import plot_roc_curve, plot_confusion_matrix
```
---------
## Load data
```
df = pd.read_csv('data/heart-disease.csv')
df.head()
```
--------
## Exploratory Data Analysis (EDA)
1. What question(s) are we trying to solve?
2. What kind of data do we have and how do we treat different types?
3. What are the missing data and how are we going to handle them?
4. What are the outliers, why we care about them and how are we going to handle them?
5. How can we add, change or remove features to get more out of the data?
```
df.tail()
df.info()
# check if there is any missing data
df.isnull().sum()
# how many classes are in target variable?
df['target'].value_counts()
# visualization of classes
# sns.countplot(x=df['target']);
df['target'].value_counts().plot.bar(color=['salmon', 'lightblue']);
plt.xlabel('0: No Disease, 1: Heart Disease')
plt.ylabel('Count');
```
It seems like there are 2 classes and the dataset is fairly balanced.
-------
## Finding Patterns in data
```
df.describe().transpose()
```
### Heart disease frequency according to Sex
```
df['sex'].value_counts()
```
There are 207 males and 96 females, so the dataset contains noticeably more males than females; we should keep that in mind when interpreting the results.
(1 = male; 0 = female)
```
pd.crosstab(df['sex'], df['target'])
72/(24+72), 93/(114+93)
```
Based on the existing data, about 75% of the females have heart disease, compared with about 45% of the males.
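The same proportions can be read off directly by normalizing the crosstab rows (a small pandas sketch, equivalent to the manual division above):
```
# proportion of each target class within each sex (rows sum to 1)
pd.crosstab(df['sex'], df['target'], normalize='index')
```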
```
# visualize the data
# pd.crosstab(df['sex'], df['target']).plot(kind='bar', color=['salmon', 'lightblue']);
pd.crosstab(df['sex'], df['target']).plot(kind='bar');
plt.title('Heart disease frequency by Sex')
plt.xlabel('0: No Disease, 1: Heart Disease ')
plt.ylabel('Count')
plt.legend(['Female', 'Male']);
plt.xticks(rotation=0);
```
### Age Vs Max. Heart Rate for people who have Heart Disease
```
df.columns
plt.figure(figsize=(10, 7))
# positive cases
sns.scatterplot(data=df, x=df.age[df.target==1], y=df.thalach[df.target==1], color='salmon', s=50, alpha=0.8);
# negative cases
sns.scatterplot(data=df, x=df.age[df.target==0], y=df.thalach[df.target==0], color='lightblue', s=50, alpha=0.8)
plt.title('Heart Disease in function of Age and Max Heart Rate')
plt.xlabel('Age')
plt.ylabel('Max Heart Rate');
plt.legend(['Heart Disease', 'No Disease']);
```
### Distribution of Age
```
sns.histplot(data=df, x=df['age'], bins=30);
```
### Heart Disease Frequency per Chest Pain level
cp - chest pain type
* 0: Typical angina: chest pain related decrease blood supply to the heart
* 1: Atypical angina: chest pain not related to heart
* 2: Non-anginal pain: typically esophageal spasms (non heart related)
* 3: Asymptomatic: chest pain not showing signs of disease
```
pd.crosstab(df['target'], df['cp'])
pd.crosstab(df['cp'], df['target']).plot(kind='bar', color=['lightblue', 'salmon']);
plt.title('Heart Disease Frequency per Chest Pain level')
plt.xlabel('Chest Pain Level')
plt.ylabel('Count')
plt.legend(['No Disease', 'Heart Disease'])
plt.xticks(rotation=0);
```
### Correlation between indepedent variables
```
df.corr()['target'][:-1]
# visualization
corr_matrix = df.corr()
plt.figure(figsize=(12, 8))
sns.heatmap(corr_matrix, annot=True, linewidth=0.5, fmt='.2f', cmap='viridis_r');
```
As per the heatmap above, `Chest pain (cp)` has the highest positive correlation with Target variable among the features, followed by `thalach (Maximum Heart Rate)` variable.
On the other hand, `exang - exercise induced angina` and `oldpeak - ST depression induced by exercise relative to rest` have the strongest negative correlation with the target variable.
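To rank these relationships directly instead of reading them off the heatmap, the correlations with the target can be sorted (a small pandas sketch):
```
# correlation of each feature with the target, most negative first, most positive last
df.corr()['target'].drop('target').sort_values()
```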
--------
## <a name="modelling">5. Modelling</a>
```
df.head(2)
# split features and labels
X = df.drop('target', axis=1)
y = df['target']
X.head(2)
y.head(2)
# split into training, testing datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
```
As there is no missing data and no categorical values to convert to numerical ones, we will continue to build the models and train them.
### Model Training
We will try 3 different models.
1. Logistic Regression
2. K-Nearest Neighbours Classifier
3. Random Forest Classifier
```
# put models in dictionary
models = {
'LogisticRegression': LogisticRegression(max_iter=1000),
'KNN': KNeighborsClassifier(),
'RandomForestClassifer': RandomForestClassifier()
}
# create function to fit and score model
def fit_and_score(models, X_train, X_test, y_train, y_test):
"""
Fits and evaluates the given machine learning models.
models: a dictionary of different scikit learn machine learning models
X_train: training data (no labels)
X_test: testing data (no labels)
y_train: training labels
y_test : testing labels
returns model scores dictionary.
"""
# set random seed
np.random.seed(42)
# make dictionary to keep scores
model_scores = {}
# loop through models to fit and score
for model_name, model in models.items():
model.fit(X_train, y_train) # fit model
score = model.score(X_test, y_test) # get score
model_scores[model_name] = score # put score for each model
return model_scores
# fit and score
model_scores = fit_and_score(models, X_train, X_test, y_train, y_test)
model_scores
```
### Model Comparison
```
model_compare = pd.DataFrame(model_scores, index=['accuracy'])
model_compare.head()
model_compare.T.plot(kind='bar');
```
---------
## <a name="experimentation">6.Experimentation</a>
### Tuning or Improving our models
Now we've got baseline models, and we might want to experiment to improve the results.
We will be doing:
* Hyperparameter tuning
* Feature Importance
* Confusion Matrix
* Cross Validation
* Precision
* Recall
* F1 Score
* Classification Report
* ROC curve
* Area Under the Curve (AUC)
### Hyperparameter Tuning
1. [Hyperparameter Tuning - Manually](#manually)
2. [Hyperparameter Tuning - using RandomizedSearchCV](#randomized)
3. [Hyperparameter Tuning - using GridSearchCV](#grid)
### <a name='manually'>Hyperparameter Tuning - Manually</a>
```
train_scores = []
test_scores = []
```
### KNN
```
# create a different values of parameters
neighbours = range(1, 21)
# instantiate instance
knn = KNeighborsClassifier()
# loop through different n_neighbors
for i in neighbours:
# set param
knn.set_params(n_neighbors=i)
# fit model
knn.fit(X_train, y_train)
# get score
train_scores.append(knn.score(X_train, y_train))
test_scores.append(knn.score(X_test, y_test))
plt.plot(neighbours, train_scores, label='Train Score')
plt.plot(neighbours, test_scores, label='Test Score');
plt.xticks(np.arange(1,21,1))
plt.legend();
plt.xlabel('n_neighbor')
plt.ylabel('score');
print(f"Maximum KNN score on the test data: {max(test_scores) * 100:.2f}%")
```
-----
## <a name='randomized'>Hyperparameter Tuning - using RandomizedSearchCV</a>
We are going to tune the following models using RandomizedSearchCV.
* Logistic Regression
* RandomForest Classifier
```
# help(LogisticRegression)
np.logspace(-4, 4, 20)
# help(RandomForestClassifier)
```
#### Create Hyperparameter Grid
```
# create hyperparameter grid for Logistic Regression
log_reg_grid = {
'C': np.logspace(-4, 4, 20),
'solver': ['liblinear']
}
# create hyperparameter grid for Random Forest Classifier
rf_grid = {
'n_estimators': np.arange(10, 1000, 50),
'max_depth': [None, 3, 5, 10],
'min_samples_split': np.arange(2, 20, 2),
'min_samples_leaf': np.arange(1, 20, 2)
}
```
#### Create RandomizedSearchCV with created Hyperparameter Grid (Logistic Regression)
```
np.random.seed(42)
# set up random hyperparameter search for Logistic Regression
rs_log_reg = RandomizedSearchCV(LogisticRegression(),
log_reg_grid,
cv=5,
n_iter=20,
verbose=True)
# fit random hyperparameter search model for Logistic Regression
rs_log_reg.fit(X_train, y_train)
# check best parameters
rs_log_reg.best_params_
# check the score
rs_log_reg.score(X_test, y_test)
# comparing with baseline scores
model_scores
```
#### Create RandomizedSearchCV with created Hyperparameter Grid (Random Forest Classifier)
```
np.random.seed(42)
# set up random hyperparameter search for RandomForestClassifier
rs_rf = RandomizedSearchCV(RandomForestClassifier(), rf_grid, cv=5, n_iter=20, verbose=True)
# fit random hyperparameter search model
rs_rf.fit(X_train, y_train)
# check best parameters
rs_rf.best_params_
# check the score
rs_rf.score(X_test, y_test)
# comparing with baseline scores
model_scores
```
**We can see that between LogisticRegression and RandomForestClassifier using RandomizedSearchCV, LogisticRegression score is better.**
**So we will explore using LogisticRegression with GridSearchCV to further improve the performance.**
---------
## <a name='grid'>Hyperparameter Tuning - using GridSearchCV</a>
We are going to tune the following models using GridSearchCV.
* Logistic Regression
```
# create hyperparameter grid for Logistic Regression
log_reg_grid = {
'C': np.logspace(-4, 4, 20),
'solver': ['liblinear']
}
# set up grid hyperparameter search for Logistic Regression
gs_log_reg = GridSearchCV(LogisticRegression(),
log_reg_grid,
cv=5,
verbose=True)
# train the model
gs_log_reg.fit(X_train, y_train)
# get best parameters
gs_log_reg.best_params_
# get the score
gs_log_reg.score(X_test, y_test)
```
---------
### Evaluating Models
Evaluating our tuned machine learning classifiers, beyond accuracy
* ROC and AUC score
* Confusion Matrix, Plot Confusion Matrix
* Classification Report
* Precision
* Recall
* F1
```
# make predictions
y_preds = gs_log_reg.predict(X_test)
# ROC curve and AUC
plot_roc_curve(gs_log_reg, X_test, y_test);
confusion_matrix(y_test, y_preds)
plot_confusion_matrix(gs_log_reg, X_test, y_test);
print(classification_report(y_test, y_preds))
```
**NOTE: The `classification report` above only covers a single train/test split of the test data.**
**So we may want to use cross-validated precision, recall and F1 scores to get the full picture.**
--------
## Calculate evaluation metrics using Cross Validated Precision, Recall and F1 score
- we will use `cross_val_score` with a different `scoring` parameter for each metric.
- we will create a new model and validate it on the whole dataset.
```
# check current best parameter
gs_log_reg.best_params_
# create a new classifier with current best parameter
clf = LogisticRegression(C=0.23357214690901212, solver='liblinear')
# Cross Validated Accuracy
cv_accuracy = cross_val_score(clf, X, y, scoring='accuracy', cv=5)
cv_accuracy
# mean of cross validated accuracy
cv_accuracy = np.mean(cv_accuracy)
cv_accuracy
# Cross Validated Precision
cv_precision = cross_val_score(clf, X, y, scoring='precision', cv=5)
cv_precision = np.mean(cv_precision)
cv_precision
# Cross Validated Recall
cv_recall = cross_val_score(clf, X, y, scoring='recall', cv=5)
cv_recall = np.mean(cv_recall)
cv_recall
# Cross Validated F1
cv_f1 = cross_val_score(clf, X, y, scoring='f1', cv=5)
cv_f1 = np.mean(cv_f1)
cv_f1
# Visualize cross-validated metrics
cv_metrics = pd.DataFrame({'Accuracy': cv_accuracy,
'Precision': cv_precision,
'Recall': cv_recall,
'F1': cv_f1},
index=[0])
cv_metrics.T.plot.bar(legend=False);
plt.title('Cross Validated Classification Metrics')
plt.xticks(rotation=30);
```
-----------
## Feature Importance
Feature importance is another way of asking, "which features contributed most to the outcomes of the model, and how did they contribute?"
Finding Feature Importance is different for each machine learning model.
### Finding Feature Importance for Logistic Regression
```
model = LogisticRegression(C=0.23357214690901212, solver='liblinear')
model.fit(X_train, y_train)
# check Coefficient of features
model.coef_
df.head(2)
# Match coef's of features to columns name
feature_dict = dict(zip(df.columns, list(model.coef_[0])))
feature_dict
```
**NOTE: Unlike correlation, which is computed during EDA, the coefficient is model driven.**
We got those coef_ values after we have the model.
```
# Visualize Feature Importance
feature_df = pd.DataFrame(feature_dict, index=[0])
feature_df.T.plot.bar(title='Feature Importance of Logistic Regression', legend=False);
pd.crosstab(df['slope'], df['target'])
```
Based on the coef_, the higher the value of slope, the more the model tends to predict a higher target value (closer to 1, meaning more likely to have heart disease).
-------
```
pd.crosstab(df['sex'], df['target'])
72/24
93/114
```
Based on the coef_, as the value of sex increases (0 => 1), the model tends to predict a lower target value.
Example:
For Sex 0 (female), the ratio of Target 1 to Target 0 is 72/24 = 3.0
For Sex 1 (male), the ratio of Target 1 to Target 0 is 93/114 ≈ 0.816
So the ratio decreases from 3.0 to roughly 0.816 as sex goes from 0 to 1.
-------
## Additional Experimentation
To improve our evaluation metrics, we can
* collect more data.
* try different models like XGBoost or CatBoost (see the sketch below).
* improve the current model with additional hyperparameter tuning
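For instance, a gradient-boosted model could be tried in a few lines. This is only a hedged sketch: it assumes the `xgboost` package is installed (it is not part of the imports above), and the hyperparameters shown are illustrative defaults rather than tuned values.
```
# Hypothetical sketch: compare an XGBoost classifier against the tuned LogisticRegression
from xgboost import XGBClassifier  # assumes xgboost is installed

xgb_clf = XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
xgb_clf.fit(X_train, y_train)
xgb_clf.score(X_test, y_test)
```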
```
# save the model
from joblib import dump
# fit the tuned classifier on the training split before persisting it
# (cross_val_score above works on clones, so clf itself has not been fitted yet)
clf.fit(X_train, y_train)
dump(clf, 'model/mdl_logistic_regression')
```
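As a usage note (assuming the cell above has been run and the file exists), the persisted model can be loaded back and used for predictions:
```
# load the saved model and sanity-check it on a few test rows
from joblib import load

loaded_clf = load('model/mdl_logistic_regression')
loaded_clf.predict(X_test[:5])
```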
<pre>
Torch : Manipulating vectors (dot product, addition, etc.) and using the GPU
Numpy : Manipulating vectors
Pandas : Reading CSV file
Matplotlib : Plotting figure
</pre>
```
import numpy as np
import torch
import pandas as pd
from matplotlib import pyplot as plt
```
<pre>
O
O
O
O
O
O O
O O
O O
O O
O O
O O
O O
O O
O O
O O
O O
O O
O O
O |
O |
O |
O |
O |
| |
| |
| |
| |
| |
| |
| |
Visible Hidden/Feature
Layer Layer
(n_v) (n_h)
RBM : A class that initialize RBM with default values
</pre>
<pre>
Parameters
n_v : Number of visible inputs
Initialized by 0 but then take value of number of inputs
n_h : Number of features want to extract
Must be set by user
k : Sampling steps for contrastive divergence
Default value is 2 steps
epochs : Number of epochs for training RBM
Must be set by user
mini_batch_size : Size of mini batch for training
Must be set by user
alpha : Learning rate for updating parameters of RBM
Default value is 0.001
momentum : Reduces large jumps for updating parameters
weight_decay : Reduces the value of weight after every step of contrastive divergence
data : Data to be fitted for RBM
Must be given by the user, otherwise there is nothing to fit
</pre>
```
class RBM():
# Parameters
# n_v : Number of visible inputs
# Initialized by 0 but then take value of number of inputs
# n_h : Number of features want to extract
# Must be set by user
# k : Sampling steps for contrastive divergence
# Default value is 2 steps
# epochs : Number of epochs for training RBM
# Must be set by user
# mini_batch_size : Size of mini batch for training
# Must be set by user
# alpha : Learning rate for updating parameters of RBM
# Default value is 0.001
# momentum : Reduces large jumps for updating parameters
# weight_decay : Reduces the value of weight after every step of contrastive divergence
# data : Data to be fitted for RBM
# Must be given by the user, otherwise there is nothing to fit
def __init__(self, n_v=0, n_h=0, k=2, epochs=15, mini_batch_size=64, alpha=0.001, momentum=0.9, weight_decay=0.001):
self.number_features = 0
self.n_v = n_v
self.n_h = self.number_features
self.k = k
self.alpha = alpha
self.momentum = momentum
self.weight_decay = weight_decay
self.mini_batch_size = mini_batch_size
self.epochs = epochs
self.data = torch.randn(1, device="cuda")
# fit method is called to fit RBM for provided data
# First, data is converted in range of 0-1 cuda float tensors by dividing it by their maximum value
# Here, after calling this method, n_v is reinitialized to number of input values present in data
# number_features must be given by user before calling this method
# w Tensor of weights of RBM
# (n_v x n_h) Initialized with small random normal values (randn * 0.1)
# a Tensor of bias for visible units
# (n_v x 1) Initialized to 0.5
# b Tensor of bias for hidden units
# (n_h x 1) Initialized by 1's
# w_moment Momentum value for weights
# (n_v x n_h) Initialized by zeros
# a_moment Momentum values for visible units
# (n_v x 1) Initialized by zeros
# b_moment Momentum values for hidden units
# (n_h x 1) Initialized by zeros
def fit(self):
# convert to float first so the in-place normalization below is well defined
self.data = self.data.type(torch.cuda.FloatTensor)
self.data /= self.data.max()
self.n_v = len(self.data[0])
self.n_h = self.number_features
self.w = torch.randn(self.n_v, self.n_h, device="cuda") * 0.1
self.a = torch.ones(self.n_v, device="cuda") * 0.5
self.b = torch.ones(self.n_h, device="cuda")
self.w_moment = torch.zeros(self.n_v, self.n_h, device="cuda")
self.a_moment = torch.zeros(self.n_v, device="cuda")
self.b_moment = torch.zeros(self.n_h, device="cuda")
self.train()
# train This method splits dataset into mini_batch and run for given epoch number of times
def train(self):
for epoch_no in range(self.epochs):
ep_error = 0
for i in range(0, len(self.data), self.mini_batch_size):
mini_batch = self.data[i:i+self.mini_batch_size]
ep_error += self.contrastive_divergence(mini_batch)
print("Epoch Number : ", epoch_no, " Error : ", ep_error.item())
# contrastive_divergence : Performs contrastive divergence using the Gibbs sampling algorithm
# p_h_0 Probabilities of hidden units given the input visible units
# h_0 Activated hidden units, sampled as 0 or 1 by comparing p_h_0 with uniform random numbers
# g_0 Positive associations of RBM
# wv_a Sampled hidden state carried through the Gibbs chain
# p_v_h Probability of visible neurons being activated given values of hidden neurons
# p_h_v Probability of hidden neurons being activated given values of visible neurons
# p_v_k Value of visible units after k steps of Gibbs sampling
# p_h_k Value of hidden units after k steps of Gibbs sampling
# g_k Negative associations of RBM
# error Reconstruction error for the given mini_batch
def contrastive_divergence(self, v):
p_h_0 = self.sample_hidden(v)
h_0 = (p_h_0 >= torch.rand(self.n_h, device="cuda")).float()
g_0 = v.transpose(0, 1).mm(h_0)
wv_a = h_0
# Gibbs Sampling step
for step in range(self.k):
p_v_h = self.sample_visible(wv_a)
p_h_v = self.sample_hidden(p_v_h)
wv_a = (p_h_v >= torch.rand(self.n_h, device="cuda")).float()
p_v_k = p_v_h
p_h_k = p_h_v
g_k = p_v_k.transpose(0, 1).mm(p_h_k)
self.update_parameters(g_0, g_k, v, p_v_k, p_h_0, p_h_k)
error = torch.sum((v - p_v_k)**2)
return error
# p_h_v : Probability of hidden neurons being activated given values of visible neurons
# p_v_h : Probability of visible neurons being activated given values of hidden neurons
#-----------------------------------Bernoulli-Bernoulli RBM--------------------------------------------
# p_h_v = sigmoid ( visible x weight + hidden_bias )
# p_v_h = sigmoid ( hidden x weight.T + visible_bias )
#------------------------------------------------------------------------------------------------------
def sample_hidden(self, p_v_h): # Bernoulli-Bernoulli RBM
wv = p_v_h.mm(self.w)
wv_a = wv + self.b
p_h_v = torch.sigmoid(wv_a)
return p_h_v
def sample_visible(self, p_h_v): # Bernoulli-Bernoulli RBM
wh = p_h_v.mm(self.w.transpose(0, 1))
wh_b = wh + self.a
p_v_h = torch.sigmoid(wh_b)
return p_v_h
# Parameter updates per mini-batch (with momentum):
# del_w = ( positive_association - negative_association ) + momentum * previous del_w
# del_a = sum( input - reconstructed_visible_after_k_steps ) + momentum * previous del_a
# del_b = sum( initial_hidden_probs - hidden_probs_after_k_steps ) + momentum * previous del_b
# each parameter is then incremented by del * alpha / batch_size, and the weights are decayed by weight_decay
def update_parameters(self, g_0, g_k, v, p_v_k, p_h_0, p_h_k):
self.w_moment *= self.momentum
del_w = (g_0 - g_k) + self.w_moment
self.a_moment *= self.momentum
del_a = torch.sum(v - p_v_k, dim=0) + self.a_moment
self.b_moment *= self.momentum
del_b = torch.sum(p_h_0 - p_h_k, dim=0) + self.b_moment
batch_size = v.size(0)
self.w += del_w * self.alpha / batch_size
self.a += del_a * self.alpha / batch_size
self.b += del_b * self.alpha / batch_size
self.w -= (self.w * self.weight_decay)
self.w_moment = del_w
self.a_moment = del_a
self.b_moment = del_b
dataset = pd.read_csv("/home/pushpull/mount/intHdd/dataset/mnist/mnist_train.csv", header=None)
data = torch.tensor(np.array(dataset)[:, 1:], device="cuda")
mnist = RBM()
mnist.data = data
mnist.number_features = 300
error = mnist.fit()
```
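As a usage sketch (assuming the training above has completed), the learned representation of a few digits can be read off as the hidden-unit activation probabilities:
```
# hidden-unit activation probabilities act as the learned features (number_features columns)
features = mnist.sample_hidden(mnist.data[:5])
print(features.shape)
```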
# Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called [Cart-Pole](https://gym.openai.com/envs/CartPole-v0). In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.

We can simulate this game using [OpenAI Gym](https://gym.openai.com/). First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
```
import gym
import tensorflow as tf
import numpy as np
```
>**Note:** Make sure you have OpenAI Gym cloned into the same directory with this notebook. I've included `gym` as a submodule, so you can run `git submodule update --init --recursive` to pull the contents into the `gym` repo.
```
# Create the Cart-Pole game environment
env = gym.make('CartPole-v0')
```
We interact with the simulation through `env`. To show the simulation running, you can use `env.render()` to render one frame. Passing in an action as an integer to `env.step` will generate the next step in the simulation. You can see how many actions are possible from `env.action_space` and to get a random action you can use `env.action_space.sample()`. This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
Run the code below to watch the simulation run.
```
env.reset()
rewards = []
for _ in range(100):
env.render()
state, reward, done, info = env.step(env.action_space.sample()) # take a random action
rewards.append(reward)
if done:
rewards = []
env.reset()
```
To shut the window showing the simulation, use `env.close()`.
If you ran the simulation above, we can look at the rewards:
```
print(rewards[-20:])
```
The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
## Q-Network
We train our Q-learning agent using the Bellman Equation:
$$
Q(s, a) = r + \gamma \max{Q(s', a')}
$$
where $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$.
Before we used this equation to learn values for a Q-_table_. However, for this game there are a huge number of states available. The state has four values: the position and velocity of the cart, and the position and velocity of the pole. These are all real-valued numbers, so ignoring floating point precisions, you practically have infinite states. Instead of using a table then, we'll replace it with a neural network that will approximate the Q-table lookup function.
<img src="assets/deep-q-learning.png" width=450px>
Now, our Q value, $Q(s, a)$, is calculated by passing a state into the network. The output will be Q-values for each available action, computed through fully connected hidden layers.
<img src="assets/q-network.png" width=550px>
As I showed before, we can define our targets for training as $\hat{Q}(s,a) = r + \gamma \max{Q(s', a')}$. Then we update the weights by minimizing $(\hat{Q}(s,a) - Q(s,a))^2$.
For this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\hat{Q}$ then pass it back into the $Q$ network to run the optimizer and update the weights.
Below is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. Feel free to try it out.
```
class QNetwork:
def __init__(self, learning_rate=0.01, state_size=4,
action_size=2, hidden_size=10,
name='QNetwork'):
# state inputs to the Q-network
with tf.variable_scope(name):
self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')
# One hot encode the actions to later choose the Q-value for the action
self.actions_ = tf.placeholder(tf.int32, [None], name='actions')
one_hot_actions = tf.one_hot(self.actions_, action_size)
# Target Q values for training
self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')
# ReLU hidden layers
self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)
self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)
# Linear output layer
self.output = tf.contrib.layers.fully_connected(self.fc2, action_size,
activation_fn=None)
### Train with loss (targetQ - Q)^2
# output has length 2, for two actions. This next line chooses
# one value from output (per row) according to the one-hot encoded actions.
self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)
self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))
self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
```
## Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a `Memory` object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maximum capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a `Memory` object. If you're unfamiliar with `deque`, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
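For instance, here is a quick standalone illustration of that push-out behaviour (independent of the `Memory` class below):
```
from collections import deque

d = deque(maxlen=3)
for i in range(5):
    d.append(i)
print(d)  # deque([2, 3, 4], maxlen=3) -- the oldest items were pushed out
```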
```
from collections import deque
class Memory():
def __init__(self, max_size = 1000):
self.buffer = deque(maxlen=max_size)
def add(self, experience):
self.buffer.append(experience)
def sample(self, batch_size):
idx = np.random.choice(np.arange(len(self.buffer)),
size=batch_size,
replace=False)
return [self.buffer[ii] for ii in idx]
```
## Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an **$\epsilon$-greedy policy**.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called _exploitation_. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
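The decay schedule used later in the training loop can be sketched on its own. The values below simply mirror the exploration hyperparameters defined further down; this is an illustration, not part of the training code:
```
import numpy as np

explore_start, explore_stop, decay_rate = 1.0, 0.01, 0.0001
for step in [0, 1000, 10000, 50000]:
    explore_p = explore_stop + (explore_start - explore_stop) * np.exp(-decay_rate * step)
    print(step, round(explore_p, 3))
```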
## Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in _episodes_. One *episode* is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once meeting that goal. The game ends if the pole tilts over too far, or if the cart moves too far the left or right. When a game ends, we'll start a new episode. Now, to train the agent:
* Initialize the memory $D$
* Initialize the action-value network $Q$ with random weights
* **For** episode = 1, $M$ **do**
* **For** $t$, $T$ **do**
* With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s,a)$
* Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$
* Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$
* Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$
* Set $\hat{Q}_j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max_{a'}{Q(s'_j, a')}$
* Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$
* **endfor**
* **endfor**
## Hyperparameters
One of the more difficult aspects of reinforcement learning is the large number of hyperparameters. Not only are we tuning the network, but we're tuning the simulation.
```
train_episodes = 1000 # max number of episodes to learn from
max_steps = 200 # max steps in an episode
gamma = 0.99 # future reward discount
# Exploration parameters
explore_start = 1.0 # exploration probability at start
explore_stop = 0.01 # minimum exploration probability
decay_rate = 0.0001 # exponential decay rate for exploration prob
# Network parameters
hidden_size = 64 # number of units in each Q-network hidden layer
learning_rate = 0.0001 # Q-network learning rate
# Memory parameters
memory_size = 10000 # memory capacity
batch_size = 20 # experience mini-batch size
pretrain_length = batch_size # number experiences to pretrain the memory
tf.reset_default_graph()
mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)
```
## Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
```
# Initialize the simulation
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
memory = Memory(max_size=memory_size)
# Make a bunch of random actions and store the experiences
for ii in range(pretrain_length):
# Uncomment the line below to watch the simulation
# env.render()
# Make a random action
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)
if done:
# The simulation fails so no next state
next_state = np.zeros(state.shape)
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
```
## Training
Below we'll train our agent. If you want to watch it train, uncomment the `env.render()` line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
```
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
with tf.Session() as sess:
# Initialize variables
sess.run(tf.global_variables_initializer())
step = 0
for ep in range(1, train_episodes):
total_reward = 0
t = 0
while t < max_steps:
step += 1
# Uncomment this next line to watch the training
# env.render()
# Explore or Exploit
explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step)
if explore_p > np.random.rand():
# Make a random action
action = env.action_space.sample()
else:
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
total_reward += reward
if done:
# the episode ends so no next state
next_state = np.zeros(state.shape)
t = max_steps
print('Episode: {}'.format(ep),
'Total reward: {}'.format(total_reward),
'Training loss: {:.4f}'.format(loss),
'Explore P: {:.4f}'.format(explore_p))
rewards_list.append((ep, total_reward))
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
t += 1
# Sample mini-batch from memory
batch = memory.sample(batch_size)
states = np.array([each[0] for each in batch])
actions = np.array([each[1] for each in batch])
rewards = np.array([each[2] for each in batch])
next_states = np.array([each[3] for each in batch])
# Train network
target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})
# Set target_Qs to 0 for states where episode ends
episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)
target_Qs[episode_ends] = (0, 0)
targets = rewards + gamma * np.max(target_Qs, axis=1)
loss, _ = sess.run([mainQN.loss, mainQN.opt],
feed_dict={mainQN.inputs_: states,
mainQN.targetQs_: targets,
mainQN.actions_: actions})
saver.save(sess, "checkpoints/cartpole.ckpt")
```
## Visualizing training
Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.
```
%matplotlib inline
import matplotlib.pyplot as plt
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='grey', alpha=0.3)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
```
## Testing
Let's checkout how our trained agent plays the game.
```
test_episodes = 10
test_max_steps = 400
env.reset()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
for ep in range(1, test_episodes):
t = 0
while t < test_max_steps:
env.render()
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
if done:
t = test_max_steps
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
state = next_state
t += 1
env.close()
```
## Extending this
So, Cart-Pole is a pretty simple game. However, the same model can be used to train an agent to play something much more complicated like Pong or Space Invaders. Instead of a state like we're using here though, you'd want to use convolutional layers to get the state from the screen images.

I'll leave it as a challenge for you to use deep Q-learning to train an agent to play Atari games. Here's the original paper which will get you started: http://www.davidqiu.com:8888/research/nature14236.pdf.
## Train a model with Iris data using XGBoost algorithm
### Model is trained with XGBoost installed in notebook instance
### In the later examples, we will train using SageMaker's XGBoost algorithm
```
# Install xgboost in notebook instance.
#### Command to install xgboost
!pip install xgboost==1.2
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import itertools
import xgboost as xgb
from sklearn import preprocessing
from sklearn.metrics import classification_report, confusion_matrix
column_list_file = 'iris_train_column_list.txt'
train_file = 'iris_train.csv'
validation_file = 'iris_validation.csv'
columns = ''
with open(column_list_file,'r') as f:
columns = f.read().split(',')
columns
# Encode Class Labels to integers
# Labeled Classes
labels=[0,1,2]
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
le = preprocessing.LabelEncoder()
le.fit(classes)
# Specify the column names as the file does not have column header
df_train = pd.read_csv(train_file,names=columns)
df_validation = pd.read_csv(validation_file,names=columns)
df_train.head()
df_validation.head()
X_train = df_train.iloc[:,1:] # Features: 1st column onwards
y_train = df_train.iloc[:,0].ravel() # Target: 0th column
X_validation = df_validation.iloc[:,1:]
y_validation = df_validation.iloc[:,0].ravel()
# Launch a classifier
# XGBoost Training Parameter Reference:
# https://xgboost.readthedocs.io/en/latest/parameter.html
classifier = xgb.XGBClassifier(objective="multi:softmax",
num_class=3,
n_estimators=100)
classifier
classifier.fit(X_train,
y_train,
eval_set = [(X_train, y_train), (X_validation, y_validation)],
eval_metric=['mlogloss'],
early_stopping_rounds=10)
# early_stopping_rounds - needs to be passed in as a hyperparameter in SageMaker XGBoost implementation
# "The model trains until the validation score stops improving.
# Validation error needs to decrease at least every early_stopping_rounds to continue training.
# Amazon SageMaker hosting uses the best model for inference."
eval_result = classifier.evals_result()
training_rounds = range(len(eval_result['validation_0']['mlogloss']))
print(training_rounds)
plt.scatter(x=training_rounds,y=eval_result['validation_0']['mlogloss'],label='Training Error')
plt.scatter(x=training_rounds,y=eval_result['validation_1']['mlogloss'],label='Validation Error')
plt.grid(True)
plt.xlabel('Iteration')
plt.ylabel('LogLoss')
plt.title('Training Vs Validation Error')
plt.legend()
plt.show()
xgb.plot_importance(classifier)
plt.show()
df = pd.read_csv(validation_file,names=columns)
df.head()
X_test = df.iloc[:,1:]
print(X_test[:5])
result = classifier.predict(X_test)
result[:5]
df['predicted_class'] = result #le.inverse_transform(result)
df.head()
# Compare performance of Actual and Model 1 Prediction
plt.figure()
plt.scatter(df.index,df['encoded_class'],label='Actual')
plt.scatter(df.index,df['predicted_class'],label='Predicted',marker='^')
plt.legend(loc=4)
plt.yticks([0,1,2])
plt.xlabel('Sample')
plt.ylabel('Class')
plt.show()
```
<h2>Confusion Matrix</h2>
A confusion matrix is a table that summarizes the performance of a classification model.<br><br>
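As a tiny illustration (toy labels, not the Iris results), each row corresponds to a true class and each column to a predicted class:
```
from sklearn.metrics import confusion_matrix

# 4 toy samples: the single class-1 sample was misclassified as class 2
confusion_matrix([0, 1, 2, 2], [0, 2, 2, 2], labels=[0, 1, 2])
```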
```
# Reference:
# https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
#print("Normalized confusion matrix")
#else:
# print('Confusion matrix, without normalization')
#print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
# Compute confusion matrix
cnf_matrix = confusion_matrix(df['encoded_class'],
df['predicted_class'],labels=labels)
cnf_matrix
# Plot confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=classes,
title='Confusion matrix - Count')
# Plot confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=classes,
title='Confusion matrix - Count',normalize=True)
print(classification_report(
df['encoded_class'],
df['predicted_class'],
labels=labels,
target_names=classes))
```
# Handling uncertainty with quantile regression
```
%matplotlib inline
```
[Quantile regression](https://www.wikiwand.com/en/Quantile_regression) is useful when you're not so much interested in the accuracy of your model, but rather you want your model to be good at ranking observations correctly. The typical way to perform quantile regression is to use a special loss function, namely the quantile loss. The quantile loss takes a parameter, $\alpha$ (alpha), which indicates which quantile the model should be targeting. In the case of $\alpha = 0.5$, then this is equivalent to asking the model to predict the median value of the target, and not the most likely value which would be the mean.
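For reference, the quantile (pinball) loss itself is simple to write down. The function below is a standalone sketch of the definition, not river's internal implementation:
```
def pinball_loss(y_true, y_pred, alpha):
    """Under-predictions are weighted by alpha, over-predictions by (1 - alpha)."""
    diff = y_true - y_pred
    return alpha * max(diff, 0) + (1 - alpha) * max(-diff, 0)

# with alpha = 0.95, under-predicting is penalized much more than over-predicting
pinball_loss(10, 8, alpha=0.95), pinball_loss(10, 12, alpha=0.95)
```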
A nice thing we can do with quantile regression is to produce a prediction interval for each prediction. Indeed, if we predict the lower and upper quantiles of the target then we will be able to obtain a "trust region" in between which the true value is likely to belong. Of course, the likeliness will depend on the chosen quantiles. For a slightly more detailed explanation see [this](https://medium.com/the-artificial-impostor/quantile-regression-part-1-e25bdd8d9d43) blog post.
As an example, let us take the [simple time series model we built in another notebook](building-a-simple-time-series-model.md). Instead of predicting the mean value of the target distribution, we will predict the 5th, 50th, and 95th quantiles. This will require training three separate models, so we will encapsulate the model building logic in a function called `make_model`. We also have to slightly adapt the training loop, but not by much. Finally, we will draw the prediction interval along with the predictions for the 50th quantile (i.e. the median) and the true values.
```
import calendar
import math
import matplotlib.pyplot as plt
from river import compose
from river import datasets
from river import linear_model
from river import metrics
from river import optim
from river import preprocessing
from river import stats
from river import time_series
def get_ordinal_date(x):
return {'ordinal_date': x['month'].toordinal()}
def get_month_distances(x):
return {
calendar.month_name[month]: math.exp(-(x['month'].month - month) ** 2)
for month in range(1, 13)
}
def make_model(alpha):
extract_features = compose.TransformerUnion(get_ordinal_date, get_month_distances)
scale = preprocessing.StandardScaler()
learn = linear_model.LinearRegression(
intercept_lr=0,
optimizer=optim.SGD(3),
loss=optim.losses.Quantile(alpha=alpha)
)
model = extract_features | scale | learn
model = time_series.Detrender(regressor=model, window_size=12)
return model
metric = metrics.MAE()
models = {
'lower': make_model(alpha=0.05),
'center': make_model(alpha=0.5),
'upper': make_model(alpha=0.95)
}
dates = []
y_trues = []
y_preds = {
'lower': [],
'center': [],
'upper': []
}
for x, y in datasets.AirlinePassengers():
y_trues.append(y)
dates.append(x['month'])
for name, model in models.items():
y_preds[name].append(model.predict_one(x))
model.learn_one(x, y)
# Update the error metric
metric.update(y, y_preds['center'][-1])
# Plot the results
fig, ax = plt.subplots(figsize=(10, 6))
ax.grid(alpha=0.75)
ax.plot(dates, y_trues, lw=3, color='#2ecc71', alpha=0.8, label='Truth')
ax.plot(dates, y_preds['center'], lw=3, color='#e74c3c', alpha=0.8, label='Prediction')
ax.fill_between(dates, y_preds['lower'], y_preds['upper'], color='#e74c3c', alpha=0.3, label='Prediction interval')
ax.legend()
ax.set_title(metric);
```
An important thing to note is that the prediction interval we obtained should not be confused with a confidence interval. Simply put, a prediction interval represents uncertainty for where the true value lies, whereas a confidence interval encapsulates the uncertainty on the prediction. You can find out more by reading [this](https://stats.stackexchange.com/questions/16493/difference-between-confidence-intervals-and-prediction-intervals) CrossValidated post.
### Privatizing Histograms
Sometimes we want to release the counts of individual outcomes in a dataset.
When plotted, this makes a histogram.
The library currently has two approaches:
1. Known category set `make_count_by_categories`
2. Unknown category set `make_count_by`
The next code block just handles boilerplate: imports, data loading, and plotting.
```
import os
from opendp.meas import *
from opendp.mod import enable_features, binary_search_chain, Measurement, Transformation
from opendp.trans import *
from opendp.typing import *
enable_features("contrib")
max_influence = 1
budget = (1., 1e-8)
# public information
col_names = ["age", "sex", "educ", "race", "income", "married"]
data_path = os.path.join('.', 'data', 'PUMS_california_demographics_1000', 'data.csv')
size = 1000
with open(data_path) as input_data:
data = input_data.read()
def plot_histogram(sensitive_counts, released_counts):
"""Plot a histogram that compares true data against released data"""
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
fig = plt.figure()
ax = fig.add_axes([1,1,1,1])
plt.ylim([0,225])
tick_spacing = 1.
ax.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
plt.xlim(0,15)
width = .4
ax.bar(list([x+width for x in range(0, len(sensitive_counts))]), sensitive_counts, width=width, label='True Value')
ax.bar(list([x+2*width for x in range(0, len(released_counts))]), released_counts, width=width, label='DP Value')
ax.legend()
plt.title('Histogram of Education Level')
plt.xlabel('Years of Education')
plt.ylabel('Count')
plt.show()
```
### Private histogram via `make_count_by_categories`
This approach is only applicable if the set of potential values that the data may take on is public information.
If this information is not available, then use `make_count_by` instead.
It typically has greater utility than `make_count_by` until the size of the category set is greater than dataset size.
In this data, we know that the category set is public information:
strings consisting of the numbers between 1 and 20.
The counting aggregator computes a vector of counts in the same order as the input categories.
It also includes one extra count at the end of the vector,
consisting of the number of elements that were not members of the category set.
You'll notice that `make_base_geometric` has an additional argument that explicitly sets the type of the domain, `D`.
It defaults to `AllDomain[int]` which works in situations where the mechanism is noising a scalar.
However, in this situation, we are noising a vector of scalars,
and thus the appropriate domain is `VectorDomain[AllDomain[int]]`.
```
# public information
categories = list(map(str, range(1, 20)))
histogram = (
make_split_dataframe(separator=",", col_names=col_names) >>
make_select_column(key="educ", TOA=str) >>
# Compute counts for each of the categories and null
make_count_by_categories(categories=categories)
)
noisy_histogram = binary_search_chain(
lambda s: histogram >> make_base_geometric(scale=s, D=VectorDomain[AllDomain[int]]),
d_in=max_influence, d_out=budget[0])
sensitive_counts = histogram(data)
released_counts = noisy_histogram(data)
print("Educational level counts:\n", sensitive_counts[:-1])
print("DP Educational level counts:\n", released_counts[:-1])
print("DP estimate for the number of records that were not a member of the category set:", released_counts[-1])
plot_histogram(sensitive_counts, released_counts)
```
### Private histogram via `make_count_by`
This approach is applicable when the set of categories is unknown or very large.
Any categories with a noisy count less than a given threshold will be censored from the final release.
The noise scale influences the epsilon parameter of the budget, and the threshold influences the delta parameter in the budget.
`ptr` stands for Propose-Test-Release, a framework for censoring queries for which the local sensitivity is greater than some threshold.
Any category with a count sufficiently small is censored from the release.
It is sometimes referred to as a "stability histogram" because it only releases counts for "stable" categories that exist in all datasets that are considered "neighboring" to your private dataset.
I start out by defining a function that finds the tightest noise scale and threshold for which the stability histogram is (d_in, d_out)-close.
You may find this useful for your application.
```
def make_base_ptr_budget(
preprocess: Transformation,
d_in, d_out,
TK: RuntimeTypeDescriptor) -> Measurement:
"""Make a stability histogram that respects a given d_in, d_out.
:param preprocess: Transformation
:param d_in: Input distance to satisfy
:param d_out: Output distance to satisfy
:param TK: Type of Key (hashable)
"""
from opendp.mod import binary_search_param
def privatize(s, t=1e8):
return preprocess >> make_base_ptr(scale=s, threshold=t, TK=TK)
s = binary_search_param(lambda s: privatize(s=s), d_in, d_out)
t = binary_search_param(lambda t: privatize(s=s, t=t), d_in, d_out)
return privatize(s=s, t=t)
```
I now use the `make_base_ptr_budget` constructor defined above to release a private histogram on the education data.
The stability mechanism, as currently written, samples from a continuous noise distribution.
If you haven't already, please read about [floating-point behavior in the docs](https://docs.opendp.org/en/latest/user/measurement-constructors.html#floating-point).
```
from opendp.mod import enable_features
enable_features("floating-point")
preprocess = (
make_split_dataframe(separator=",", col_names=col_names) >>
make_select_column(key="educ", TOA=str) >>
make_count_by(MO=L1Distance[float], TK=str, TV=float)
)
noisy_histogram = make_base_ptr_budget(
preprocess,
d_in=max_influence, d_out=budget,
TK=str)
sensitive_counts = histogram(data)
released_counts = noisy_histogram(data)
# postprocess to make the results easier to compare
postprocessed_counts = {k: round(v) for k, v in released_counts.items()}
print("Educational level counts:\n", sensitive_counts)
print("DP Educational level counts:\n", postprocessed_counts)
def as_array(data):
return [data.get(k, 0) for k in categories]
plot_histogram(sensitive_counts, as_array(released_counts))
```
## Set up the dependencies
```
# for reading and validating data
import emeval.input.spec_details as eisd
import emeval.input.phone_view as eipv
import emeval.input.eval_view as eiev
# Visualization helpers
import emeval.viz.phone_view as ezpv
import emeval.viz.eval_view as ezev
# Metrics helpers
import emeval.metrics.segmentation as ems
# For plots
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Rectangle
%matplotlib inline
# For maps
import folium
import branca.element as bre
# For easier debugging while working on modules
import importlib
import pandas as pd
import numpy as np
pd.options.display.float_format = '{:.6f}'.format
import arrow
THIRTY_MINUTES = 30 * 60
TIME_THRESHOLD = THIRTY_MINUTES
importlib.reload(ems)
```
## The spec
The spec defines what experiments were done, and over which time ranges. Once the experiment is complete, most of the structure is read back from the data, but we use the spec to validate that it all worked correctly. The spec also contains the ground truth for the legs. Here, we read the specs for the three timelines, including the trip to UC Berkeley.
```
DATASTORE_URL = "http://cardshark.cs.berkeley.edu"
AUTHOR_EMAIL = "shankari@eecs.berkeley.edu"
sd_la = eisd.SpecDetails(DATASTORE_URL, AUTHOR_EMAIL, "unimodal_trip_car_bike_mtv_la")
sd_sj = eisd.SpecDetails(DATASTORE_URL, AUTHOR_EMAIL, "car_scooter_brex_san_jose")
sd_ucb = eisd.SpecDetails(DATASTORE_URL, AUTHOR_EMAIL, "train_bus_ebike_mtv_ucb")
```
## The views
There are two main views for the data - the phone view and the evaluation view.
### Phone view
In the phone view, the phone is primary, and there is a tree that you can traverse to get the data that you want. Traversing that tree typically involves nested for loops; the next cell loads the phone views, and a minimal traversal sketch follows it. You can replace the print statements with real code. When you are ready to check this in, please move the function to one of the python modules so that we can invoke it more generally.
```
importlib.reload(eipv)
pv_la = eipv.PhoneView(sd_la)
pv_sj = eipv.PhoneView(sd_sj)
pv_ucb = eipv.PhoneView(sd_ucb)
```
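Below is a minimal traversal sketch. It only prints structure; the key names (`role`, `evaluation_ranges`, `trip_id`, `evaluation_trip_ranges`) are the same ones used by `get_tradeoff_entries` further down.
```
# Minimal traversal sketch: walk the phone view tree (OS -> phone -> ranges)
# and print what is available at each level. Replace the prints with real code.
for phone_os, phone_map in pv_la.map().items():
    print(phone_os, list(phone_map.keys()))
    for phone_label, phone_detail_map in phone_map.items():
        print("    ", phone_label, phone_detail_map["role"])
        for r in phone_detail_map["evaluation_ranges"]:
            print("        ", r["trip_id"], len(r["evaluation_trip_ranges"]))
```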
## Number of detected trips versus ground truth trips
Checks to see how many spurious transitions there were
```
importlib.reload(ems)
ems.fill_sensed_trip_ranges(pv_la)
ems.fill_sensed_trip_ranges(pv_sj)
ems.fill_sensed_trip_ranges(pv_ucb)
pv_sj.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][4]["trip_id"]
```
### Start and end times mismatch
```
curr_run = pv_la.map()["android"]["ucb-sdb-android-2"]["evaluation_ranges"][0]
print(curr_run.keys())
ems.find_matching_segments(curr_run["evaluation_trip_ranges"], "trip_id", curr_run["sensed_trip_ranges"])
[1,2,3][1:2]
def get_tradeoff_entries(pv):
tradeoff_entry_list = []
for phone_os, phone_map in pv.map().items():
print(15 * "=*")
print(phone_os, phone_map.keys())
for phone_label, phone_detail_map in phone_map.items():
print(4 * ' ', 15 * "-*")
print(4 * ' ', phone_label, phone_detail_map.keys())
if "control" in phone_detail_map["role"]:
print("Ignoring %s phone %s since they are always on" % (phone_detail_map["role"], phone_label))
continue
# this spec does not have any calibration ranges, but evaluation ranges are actually cooler
for r in phone_detail_map["evaluation_ranges"]:
print(8 * ' ', 30 * "=")
print(8 * ' ',r.keys())
print(8 * ' ',r["trip_id"], r["eval_common_trip_id"], r["eval_role"], len(r["evaluation_trip_ranges"]))
bcs = r["battery_df"]["battery_level_pct"]
delta_battery = bcs.iloc[0] - bcs.iloc[-1]
print("Battery starts at %d, ends at %d, drain = %d" % (bcs.iloc[0], bcs.iloc[-1], delta_battery))
sensed_trips = len(r["sensed_trip_ranges"])
visit_reports = len(r["visit_sensed_trip_ranges"])
matching_trip_map = ems.find_matching_segments(r["evaluation_trip_ranges"], "trip_id", r["sensed_trip_ranges"])
print(matching_trip_map)
for trip in r["evaluation_trip_ranges"]:
sensed_trip_range = matching_trip_map[trip["trip_id"]]
results = ems.get_count_start_end_ts_diff(trip, sensed_trip_range)
print("Got results %s" % results)
tradeoff_entry = {"phone_os": phone_os, "phone_label": phone_label,
"timeline": pv.spec_details.curr_spec["id"],
"range_id": r["trip_id"],
"run": r["trip_run"], "duration": r["duration"],
"role": r["eval_role_base"], "battery_drain": delta_battery,
"trip_count": sensed_trips, "visit_reports": visit_reports,
"trip_id": trip["trip_id"]}
tradeoff_entry.update(results)
tradeoff_entry_list.append(tradeoff_entry)
return tradeoff_entry_list
# We are not going to look at battery life at the evaluation trip level; we will end with evaluation range
# since we want to capture the overall drain for the timeline
tradeoff_entries_list = []
tradeoff_entries_list.extend(get_tradeoff_entries(pv_la))
tradeoff_entries_list.extend(get_tradeoff_entries(pv_sj))
tradeoff_entries_list.extend(get_tradeoff_entries(pv_ucb))
tradeoff_df = pd.DataFrame(tradeoff_entries_list)
r2q_map = {"power_control": 0, "HAMFDC": 1, "MAHFDC": 2, "HAHFDC": 3, "accuracy_control": 4}
q2r_map = {0: "power", 1: "HAMFDC", 2: "MAHFDC", 3: "HAHFDC", 4: "accuracy"}
# Make a number so that can get the plots to come out in order
tradeoff_df["quality"] = tradeoff_df.role.apply(lambda r: r2q_map[r])
tradeoff_df["count_diff"] = tradeoff_df["count"] - 1
import itertools
```
## Trip count analysis
### Scatter plot
```
ifig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12,4))
errorboxes = []
for key, df in tradeoff_df.query("phone_os == 'android'").groupby(["role", "timeline"]):
print(key, df)
tcd = df.trip_count
bd = df.battery_drain
print("Plotting rect with params %s, %d, %d" % (str((tcd.min(), bd.min())),
tcd.max() - tcd.min(),
bd.max() - bd.min()))
print(tcd.min(), tcd.max(), tcd.std())
xerror = np.array([[tcd.min(), tcd.max()]])
print(xerror.shape)
ax.errorbar(x=tcd.mean(), y=bd.mean(), xerr=[[tcd.min()], [tcd.max()]], yerr=[[bd.min()], [bd.max()]], label=key)
plt.legend()
```
### Timeline + trip specific variation
How many sensed trips matched to each ground truth trip?
```
ifig, ax_array = plt.subplots(nrows=2,ncols=3,figsize=(9,6), sharex=False, sharey=True)
timeline_list = ["train_bus_ebike_mtv_ucb", "car_scooter_brex_san_jose", "unimodal_trip_car_bike_mtv_la"]
for i, tl in enumerate(timeline_list):
tradeoff_df.query("timeline == @tl & phone_os == 'android'").boxplot(ax = ax_array[0][i], column=["count_diff"], by=["quality"])
ax_array[0][i].set_title(tl)
tradeoff_df.query("timeline == @tl & phone_os == 'ios'").boxplot(ax = ax_array[1][i], column=["count_diff"], by=["quality"])
ax_array[1][i].set_title("")
# tradeoff_df.query("timeline == @tl & phone_os == 'ios'").boxplot(ax = ax_array[2][i], column=["visit_reports"], by=["quality"])
# ax_array[2][i].set_title("")
# print(android_ax_returned.shape, ios_ax_returned.shape)
for i, ax in enumerate(ax_array[0]):
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
for i, ax in enumerate(ax_array[1]):
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
# for ax in ax_array[1]:
# ax.set_xticklabels(q2r_ios_list[1:])
# ax.set_xlabel("")
# for ax in ax_array[2]:
# ax.set_xticklabels(q2r_ios_list[1:])
# ax.set_xlabel("")
ax_array[0][0].set_ylabel("Difference in trip counts (android)")
ax_array[1][0].set_ylabel("Difference in trip counts (ios)")
# ax_array[2][0].set_ylabel("Difference in visit reports (ios)")
ifig.suptitle("Trip count differences v/s configured quality over multiple timelines")
# ifig.tight_layout()
```
### Timeline specific variation
```
def plot_count_with_errors(ax_array, phone_os):
for i, (tl, trip_gt) in enumerate(timeline_trip_gt.items()):
ax_array[i].bar(0, trip_gt)
for q in range(1,4):
curr_df = tradeoff_df.query("timeline == @tl & phone_os == @phone_os & quality == @q")
print("%s %s %s values = %s %s %s" % (phone_os, tl, q2r_map[q], curr_df.trip_count.min(), curr_df.trip_count.mean(), curr_df.trip_count.max()))
lower_error = curr_df.trip_count.mean() - curr_df.trip_count.min()
upper_error = curr_df.trip_count.max() - curr_df.trip_count.mean()
ax_array[i].bar(x=q, height=curr_df.trip_count.mean(),
yerr=[[lower_error], [upper_error]])
print("%s %s %s errors = %s %s %s" % (phone_os, tl, q2r_map[q], lower_error, curr_df.trip_count.mean(), upper_error))
ax_array[i].set_title(tl)
ifig, ax_array = plt.subplots(nrows=2,ncols=3,figsize=(10,5), sharex=False, sharey=True)
timeline_trip_gt = {"train_bus_ebike_mtv_ucb": 3,
"car_scooter_brex_san_jose": 2,
"unimodal_trip_car_bike_mtv_la": 2}
plot_count_with_errors(ax_array[0], "android")
plot_count_with_errors(ax_array[1], "ios")
for ax in ax_array[0]:
ax.set_xticks(range(0,4))
ax.set_xticklabels(["truth"] + [q2r_map[r] for r in range(1,4)])
ax.set_yticks(range(0,tradeoff_df.trip_count.max(),3))
for ax in ax_array[1]:
ax.set_xticks(range(0,4))
ax.set_xticklabels(["truth"] + [q2r_map[r] for r in range(1,4)])
ax.set_yticks(range(0,tradeoff_df.trip_count.max(),3))
ax_array[0,0].set_ylabel("nTrips (android)")
ax_array[1,0].set_ylabel("nTrips (ios)")
ifig.tight_layout(pad=0.85)
```
## Start end results
```
for r, df in tradeoff_df.query("timeline == @tl & phone_os == 'android'").groupby("role"):
print(r, df.trip_count.mean() , df.trip_count.min(), df.trip_count.max())
```
The HAHFDC phone ran out of battery on all three runs of the `train_bus_ebike_mtv_ucb` timeline, so the trips never ended. Let's remove those so that they don't obfuscate the values from the other runs.
```
out_of_battery_phones = tradeoff_df.query("timeline=='train_bus_ebike_mtv_ucb' & role=='HAHFDC' & trip_id=='berkeley_to_mtv_SF_express_bus_0' & phone_os == 'android'")
for i in out_of_battery_phones.index:
tradeoff_df.loc[i,"end_diff_mins"] = float('nan')
tradeoff_df.query("timeline=='train_bus_ebike_mtv_ucb' & role=='HAHFDC' & trip_id=='berkeley_to_mtv_SF_express_bus_0' & phone_os == 'android'")
```
### Overall results
```
ifig, ax_array = plt.subplots(nrows=1,ncols=4,figsize=(16,4), sharex=False, sharey=True)
tradeoff_df.query("phone_os == 'android'").boxplot(ax = ax_array[0], column=["start_diff_mins"], by=["quality"])
ax_array[0].set_title("start time (android)")
tradeoff_df.query("phone_os == 'android'").boxplot(ax = ax_array[1], column=["end_diff_mins"], by=["quality"])
ax_array[1].set_title("end time (android)")
tradeoff_df.query("phone_os == 'ios'").boxplot(ax = ax_array[2], column=["start_diff_mins"], by=["quality"])
ax_array[2].set_title("start_time (ios)")
tradeoff_df.query("phone_os == 'ios'").boxplot(ax = ax_array[3], column=["end_diff_mins"], by=["quality"])
ax_array[3].set_title("end_time (ios)")
# print(android_ax_returned.shape, ios_ax_returned.shape)
ax_array[0].set_xticklabels([q2r_map[int(t.get_text())] for t in ax_array[0].get_xticklabels()])
ax_array[1].set_xticklabels([q2r_map[int(t.get_text())] for t in ax_array[1].get_xticklabels()])
ax_array[2].set_xticklabels([q2r_map[int(t.get_text())] for t in ax_array[2].get_xticklabels()])
ax_array[3].set_xticklabels([q2r_map[int(t.get_text())] for t in ax_array[3].get_xticklabels()])
for ax in ax_array:
ax.set_xlabel("")
ax_array[1].text(0.55,25,"Excluding trips where battery ran out")
ax_array[0].set_ylabel("Diff (mins)")
ifig.suptitle("Trip start end accuracy v/s configured quality")
ifig.tight_layout(pad=1.7)
```
### Timeline specific
```
ifig, ax_array = plt.subplots(nrows=4,ncols=3,figsize=(10,10), sharex=False, sharey=True)
timeline_list = ["train_bus_ebike_mtv_ucb", "car_scooter_brex_san_jose", "unimodal_trip_car_bike_mtv_la"]
for i, tl in enumerate(timeline_list):
tradeoff_df.query("timeline == @tl & phone_os == 'android'").boxplot(ax = ax_array[0][i], column=["start_diff_mins"], by=["quality"])
ax_array[0][i].set_title(tl)
tradeoff_df.query("timeline == @tl & phone_os == 'android'").boxplot(ax = ax_array[1][i], column=["end_diff_mins"], by=["quality"])
ax_array[1][i].set_title("")
tradeoff_df.query("timeline == @tl & phone_os == 'ios'").boxplot(ax = ax_array[2][i], column=["start_diff_mins"], by=["quality"])
ax_array[2][i].set_title("")
tradeoff_df.query("timeline == @tl & phone_os == 'ios'").boxplot(ax = ax_array[3][i], column=["end_diff_mins"], by=["quality"])
ax_array[3][i].set_title("")
# print(android_ax_returned.shape, ios_ax_returned.shape)
for ax in ax_array[0]:
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
for ax in ax_array[1]:
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
ax_array[1,0].text(0.55,25,"Excluding trips where battery ran out")
for ax in ax_array[2]:
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
for ax in ax_array[3]:
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
ax_array[0][0].set_ylabel("Start time diff (android)")
ax_array[1][0].set_ylabel("End time diff (android)")
ax_array[2][0].set_ylabel("Start time diff (ios)")
ax_array[3][0].set_ylabel("End time diff (ios)")
ifig.suptitle("Trip start end accuracy (mins) v/s configured quality over multiple timelines")
# ifig.tight_layout(pad=2.5)
```
## Outlier checks
We can have unexpected values for both time and count. Unfortunately, there is no overlap between the two (the intersection is zero), so we will look at a random sample from both cases.
```
expected_legs = "&".join(["not (trip_id == 'bus trip with e-scooter access_0' & count == 2)",
"not (trip_id == 'mtv_to_berkeley_sf_bart_0' & count == 3)"])
count_outliers = tradeoff_df.query("count > 1 & %s" % expected_legs)
count_outliers[["phone_os", "range_id", "trip_id", "run", "role", "count", "start_diff_mins", "end_diff_mins"]].head()
tradeoff_df.query("count < 1 & role == 'HAHFDC'")
time_outliers = tradeoff_df.query("start_diff_mins == 30 | end_diff_mins == 30")
time_outliers[["phone_os", "range_id", "trip_id", "run", "role", "start_diff_mins", "end_diff_mins"]].head()
print(len(time_outliers.index.union(count_outliers.index)), len(time_outliers.index.intersection(count_outliers.index)))
time_outliers.sample(n=3, random_state=1)[["phone_os", "range_id", "trip_id", "run", "role", "count", "start_diff_mins", "end_diff_mins"]]
count_outliers.sample(n=3, random_state=1)[["phone_os", "range_id", "trip_id", "run", "role", "count", "start_diff_mins", "end_diff_mins"]]
fmt = lambda ts: arrow.get(ts).to("America/Los_Angeles")
def check_outlier(eval_range, trip_idx, mismatch_key):
eval_trip_range = eval_range["evaluation_trip_ranges"][trip_idx]
print("Trip %s, ground truth experiment for metric %s, %s, trip %s" % (eval_range["trip_id"], mismatch_key, fmt(eval_range[mismatch_key]), fmt(eval_trip_range[mismatch_key])))
print(eval_trip_range["transition_df"][["transition", "fmt_time"]])
print("**** For entire experiment ***")
print(eval_range["transition_df"][["transition", "fmt_time"]])
if mismatch_key == "end_ts":
# print("Transitions after trip end")
# print(eval_range["transition_df"].query("ts > %s" % eval_trip_range["end_ts"])[["transition", "fmt_time"]])
return ezpv.display_map_detail_from_df(eval_trip_range["location_df"])
else:
return ezpv.display_map_detail_from_df(eval_trip_range["location_df"])
```
##### MAHFDC is just terrible
It looks like with MAHFDC, we essentially get no trip ends on android. Let's investigate these a bit further.
- run 0: trip never ended: trip actually ended just before next trip started `15:01:26`. And then next trip had geofence exit, but we didn't detect it because it never ended, so we didn't create a sensed range for it.
- run 1: trip ended but after 30 mins: similar behavior; trip ended just before next trip started `15:49:39`.
```
tradeoff_df.query("phone_os == 'android' & role == 'MAHFDC' & timeline == 'car_scooter_brex_san_jose'")[["range_id", "trip_id", "run", "role", "count", "start_diff_mins", "end_diff_mins"]]
FMT_STRING = "HH:mm:SS"
for t in pv_sj.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][3]["evaluation_trip_ranges"]:
print(sd_sj.fmt(t["start_ts"], FMT_STRING), "->", sd_sj.fmt(t["end_ts"], FMT_STRING))
pv_sj.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][3]["transition_df"]
FMT_STRING = "HH:mm:SS"
for t in pv_sj.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][4]["evaluation_trip_ranges"]:
print(sd_sj.fmt(t["start_ts"], FMT_STRING), "->", sd_sj.fmt(t["end_ts"], FMT_STRING))
pv_sj.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][4]["transition_df"]
```
##### Visit detection kicked in almost at the end of the trip
```
# 44 ios suburb_city_driving_weekend_0 1 HAMFDC 0 30.000000 30.000000
check_outlier(pv_la.map()["ios"]["ucb-sdb-ios-3"]["evaluation_ranges"][4], 0, "start_ts")
```
##### Trip end never detected
Trip ended at 14:11, experiment ended at 14:45. No stopped_moving for the last trip
```
# 65 android bus trip with e-scooter access_0 2 HAMFDC 1 3.632239 30.000000
check_outlier(pv_sj.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][2], 1, "end_ts")
```
##### Trip end detection errors on iOS
Original experiment, explanation for the outliers on the HAHFDC and MAHFDC first runs to San Jose
- HAHFDC: Trip end detected 1.5 hours after real end, but before next trip start
- MAHFDC: Trip end detected 5 hours after real end, at the end of the next trip
- MAHFDC: Clearly this was not even detected as a separate trip, so this is correct. There was a spurious trip from `17:42:22` - `17:44:22` which ended up matching this. But clearly because of the missing trip end detection, both the previous trip and this one were incorrect. You can click on the points at the Mountain View library to confirm when the trip ended.
```
fig = bre.Figure()
fig.add_subplot(1,3,1).add_child(check_outlier(pv_sj.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][0], 0, "end_ts"))
fig.add_subplot(1,3,2).add_child(check_outlier(pv_sj.map()["ios"]["ucb-sdb-ios-3"]["evaluation_ranges"][0], 0, "end_ts"))
fig.add_subplot(1,3,3).add_child(check_outlier(pv_sj.map()["ios"]["ucb-sdb-ios-3"]["evaluation_ranges"][0], 1, "start_ts"))
# check_outlier(pv_sj.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][0], 0, "end_ts")
```
##### No geofence exit ever detected
On the middle trip of the second round of data collection to the San Jose library, we got no geofence exits. The entire list of transitions is
```
transition fmt_time
3 T_VISIT_ENDED 2019-08-06T11:29:20.573817-07:00
6 T_VISIT_STARTED 2019-08-06T11:29:20.911773-07:00
8 T_VISIT_ENDED 2019-08-06T11:35:38.250980-07:00
9 T_VISIT_STARTED 2019-08-06T12:00:05.445936-07:00
12 T_TRIP_ENDED 2019-08-06T12:00:07.093790-07:00
15 T_VISIT_ENDED 2019-08-06T15:59:13.998068-07:00
18 T_VISIT_STARTED 2019-08-06T17:12:38.808743-07:00
21 T_TRIP_ENDED 2019-08-06T17:12:40.504285-07:00
```
We did get visit notifications, so we did track location points (albeit after a long time), and we did get the trip end notifications, but we have no sensed trips. Had to handle this in the code as well
```
check_outlier(pv_sj.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][4], 0, "start_ts")
```
##### No geofence exit ever detected
On the middle trip of the second round of data collection to the San Jose library, we got no geofence exits.
We did get visit notifications, so we did track location points (albeit after a long time), and we did get the trip end notifications, but we have no sensed trips. Had to handle this in the code as well
```
# 81 ios bus trip with e-scooter access_0 1 HAHFDC 0 30.000000 30.000000
check_outlier(pv_sj.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][4], 1, "end_ts")
```
### 7 mapped trips for one
This is essentially from the time that I wandered around looking for the bikeshare bike. This raises the question of whether I should filter out the points within the polygon in this case too. Overall, I think not. The only part within the polygon that we don't guarantee is the ground truth trajectory. We still do have the ground truth of the trip/section start end, and there really is no reason why we should have had so many "trips" when I was walking around. I certainly didn't wait for too long while walking and this was not semantically a "trip" by any stretch of the imagination.
```
# 113 android berkeley_to_mtv_SF_express_bus_0 2 HAMFDC 7 2.528077 3.356611
check_outlier(pv_ucb.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][2], 2, "end_ts")
```
### Trip split into two in medium accuracy *only*
Actual trip ends at `14:21`. In medium accuracy, detected trips were `14:12:15 -> 14:17:33` and `14:22:14 -> 14:24:15`. This was after we reached the destination, but there is a large gap because we basically got no points for a large part of the trip. This seems correct - it looks like iOS is just prematurely detecting the trip end in the MA case.
```
# 127 ios walk_urban_university_0 1 MAHFDC 2 4.002549 2.352913
fig = bre.Figure()
def compare_med_high_accuracy():
trip_idx = 1
mismatch_key = "end_ts"
ha_range = pv_ucb.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][1]
ha_trip_range = ha_range["evaluation_trip_ranges"][trip_idx]
eval_range = pv_ucb.map()["ios"]["ucb-sdb-ios-3"]["evaluation_ranges"][1]
eval_trip_range = eval_range["evaluation_trip_ranges"][trip_idx]
print("Trip %s, ground truth experiment for metric %s, %s, trip %s, high accuracy %s" %
(eval_range["trip_id"], mismatch_key,
fmt(eval_range[mismatch_key]), fmt(eval_trip_range[mismatch_key]), fmt(ha_trip_range[mismatch_key])))
print(eval_trip_range["transition_df"][["transition", "fmt_time"]])
print("**** Expanded ***")
print(eval_range["transition_df"].query("%s < ts < %s" %
((eval_trip_range["end_ts"] - 30*60), (eval_trip_range["end_ts"] + 30*60)))[["transition", "fmt_time"]])
fig = bre.Figure()
fig.add_subplot(1,2,1).add_child(ezpv.display_map_detail_from_df(ha_trip_range["location_df"]))
fig.add_subplot(1,2,2).add_child(ezpv.display_map_detail_from_df(eval_trip_range["location_df"]))
return fig
compare_med_high_accuracy()
[{'start_ts': fmt(1564089135.368705), 'end_ts': fmt(1564089453.8783798)},
{'start_ts': fmt(1564089734.305933), 'end_ts': fmt(1564089855.8683748)}]
```
### We just didn't detect any trip ends in the middle
We only detected a trip end at the Mountain View station. This is arguably more correct than the multiple trips that we get with a dwell time.
```
# 120 ios mtv_to_berkeley_sf_bart_0 2 HAHFDC 2 3.175024 1.046759
check_outlier(pv_ucb.map()["ios"]["ucb-sdb-ios-2"]["evaluation_ranges"][2], 0, "end_ts")
```
# The art of using pipelines
Pipelines are a natural way to think about a machine learning system. Indeed with some practice a data scientist can visualise data "flowing" through a series of steps. The input is typically some raw data which has to be processed in some manner. The goal is to represent the data in such a way that it can be ingested by a machine learning algorithm. Along the way some steps will extract features, while others will normalize the data and remove undesirable elements. Pipelines are simple, and yet they are a powerful way of designing sophisticated machine learning systems.
Both [scikit-learn](https://stackoverflow.com/questions/33091376/python-what-is-exactly-sklearn-pipeline-pipeline) and [pandas](https://tomaugspurger.github.io/method-chaining) make it possible to use pipelines. However it's quite rare to see pipelines being used in practice (at least on Kaggle). Sometimes you get to see people using scikit-learn's `pipeline` module, however the `pipe` method from `pandas` is sadly underappreciated. A big reason why pipelines are not given much love is that it's easier to think of batch learning in terms of a script or a notebook. Indeed many people doing data science seem to prefer a procedural style to a declarative style. Moreover in practice pipelines can be a bit rigid if one wishes to do unorthodox operations.
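As a point of reference, here is a tiny sketch of pandas' `pipe` method; the data and functions are made up for illustration and are unrelated to the dataset used below.
```
# Toy sketch of pandas' pipe method: each step is a plain function that takes
# and returns a DataFrame, so the whole chain reads top to bottom.
import pandas as pd

def add_total(df):
    return df.assign(total=df["a"] + df["b"])

def keep_positive(df):
    return df[df["total"] > 0]

raw = pd.DataFrame({"a": [1, -3, 2], "b": [4, 1, -1]})
print(raw.pipe(add_total).pipe(keep_positive))
```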
Although pipelines may be a bit of an odd fit for batch learning, they make complete sense when they are used for online learning. Indeed the UNIX philosophy has advocated the use of pipelines for data processing for many decades. If you can visualise data as a stream of observations then using pipelines should make a lot of sense to you. We'll attempt to convince you by writing a machine learning algorithm in a procedural way and then converting it to a declarative pipeline in small steps. Hopefully by the end you'll be convinced, or not!
In this notebook we'll manipulate data from the [Kaggle Recruit Restaurants Visitor Forecasting competition](https://www.kaggle.com/c/recruit-restaurant-visitor-forecasting). The data is directly available through `river`'s `datasets` module.
```
from pprint import pprint
from river import datasets
for x, y in datasets.Restaurants():
pprint(x)
pprint(y)
break
```
We'll start by building and running a model using a procedural coding style. The performance of the model doesn't matter, we're simply interested in the design of the model.
```
from river import feature_extraction
from river import linear_model
from river import metrics
from river import preprocessing
from river import stats
means = (
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))
)
scaler = preprocessing.StandardScaler()
lin_reg = linear_model.LinearRegression()
metric = metrics.MAE()
for x, y in datasets.Restaurants():
# Derive date features
x['weekday'] = x['date'].weekday()
x['is_weekend'] = x['date'].weekday() in (5, 6)
# Process the rolling means of the target
for mean in means:
x = {**x, **mean.transform_one(x)}
mean.learn_one(x, y)
# Remove the key/value pairs that aren't features
for key in ['store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude']:
x.pop(key)
# Rescale the data
x = scaler.learn_one(x).transform_one(x)
# Fit the linear regression
y_pred = lin_reg.predict_one(x)
lin_reg.learn_one(x, y)
# Update the metric using the out-of-fold prediction
metric.update(y, y_pred)
print(metric)
```
We're not using many features. We can print the last `x` to get an idea of the features (don't forget they've been scaled!)
```
pprint(x)
```
The above chunk of code is quite explicit but it's a bit verbose. The whole point of libraries such as `river` is to make life easier for users. Moreover there's too much space for users to mess up the order in which things are done, which increases the chance of there being target leakage. We'll now rewrite our model in a declarative fashion using a pipeline *à la sklearn*.
```
from river import compose
def get_date_features(x):
weekday = x['date'].weekday()
return {'weekday': weekday, 'is_weekend': weekday in (5, 6)}
model = compose.Pipeline(
('features', compose.TransformerUnion(
('date_features', compose.FuncTransformer(get_date_features)),
('last_7_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7))),
('last_14_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14))),
('last_21_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)))
)),
('drop_non_features', compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude')),
('scale', preprocessing.StandardScaler()),
('lin_reg', linear_model.LinearRegression())
)
metric = metrics.MAE()
for x, y in datasets.Restaurants():
# Make a prediction without using the target
y_pred = model.predict_one(x)
# Update the model using the target
model.learn_one(x, y)
# Update the metric using the out-of-fold prediction
metric.update(y, y_pred)
print(metric)
```
We use a `Pipeline` to arrange each step in a sequential order. A `TransformerUnion` is used to merge multiple feature extractors into a single transformer. The `for` loop is now much shorter and is thus easier to grok: we get the out-of-fold prediction, we fit the model, and finally we update the metric. This way of evaluating a model is typical of online learning, and so it is wrapped inside a function called `progressive_val_score`, which is part of the `evaluate` module. We can use it to replace the `for` loop.
```
from river import evaluate
model = compose.Pipeline(
('features', compose.TransformerUnion(
('date_features', compose.FuncTransformer(get_date_features)),
('last_7_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7))),
('last_14_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14))),
('last_21_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)))
)),
('drop_non_features', compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude')),
('scale', preprocessing.StandardScaler()),
('lin_reg', linear_model.LinearRegression())
)
evaluate.progressive_val_score(dataset=datasets.Restaurants(), model=model, metric=metrics.MAE())
```
Notice that you couldn't have used the `progressive_val_score` method if you wrote the model in a procedural manner.
Our code is getting shorter, but it's still a bit difficult on the eyes. Indeed there is a lot of boilerplate code associated with pipelines that can get tedious to write. However `river` has some special tricks up its sleeve to save you from a lot of pain.
The first trick is that the name of each step in the pipeline can be omitted. If no name is given for a step then `river` automatically infers one.
```
model = compose.Pipeline(
compose.TransformerUnion(
compose.FuncTransformer(get_date_features),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))
),
compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude'),
preprocessing.StandardScaler(),
linear_model.LinearRegression()
)
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
```
Under the hood a `Pipeline` inherits from `collections.OrderedDict`. Indeed this makes sense because if you think about it a `Pipeline` is simply a sequence of steps where each step has a name. The reason we mention this is because it means you can manipulate a `Pipeline` the same way you would manipulate an ordinary `dict`. For instance we can print the name of each step by iterating over the keys (here exposed through the `steps` attribute).
```
for name in model.steps:
print(name)
```
The first step is a `TransformerUnion` and its string representation contains the string representation of each of its elements. Not having to write names saves some time and space and is certainly less tedious.
The next trick is that we can use mathematical operators to compose our pipeline. For example we can use the `+` operator to merge `Transformer`s into a `TransformerUnion`.
```
model = compose.Pipeline(
compose.FuncTransformer(get_date_features) + \
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)) + \
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)) + \
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)),
compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude'),
preprocessing.StandardScaler(),
linear_model.LinearRegression()
)
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
```
Likewise we can use the `|` operator to assemble steps into a `Pipeline`.
```
model = (
compose.FuncTransformer(get_date_features) +
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)) +
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)) +
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))
)
to_discard = ['store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude']
model = model | compose.Discard(*to_discard) | preprocessing.StandardScaler()
model |= linear_model.LinearRegression()
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
```
Hopefully you'll agree that this is a powerful way to express machine learning pipelines. For some people this should be quite reminiscent of the UNIX pipe operator. One final trick we want to mention is that functions are automatically wrapped with a `FuncTransformer`, which can be quite handy.
```
model = get_date_features
for n in [7, 14, 21]:
model += feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(n))
model |= compose.Discard(*to_discard)
model |= preprocessing.StandardScaler()
model |= linear_model.LinearRegression()
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
```
Naturally some may prefer the procedural style we first used because they find it easier to work with. It all depends on your style and you should use what you feel comfortable with. However we encourage you to use operators because we believe that this will increase the readability of your code, which is very important. To each their own!
Before finishing we can take an interactive look at our pipeline.
```
model
```
#### Setup
```
# standard imports
import numpy as np
import torch
import matplotlib.pyplot as plt
from torch import optim
from ipdb import set_trace
from datetime import datetime
# jupyter setup
%matplotlib inline
%load_ext autoreload
%autoreload 2
# own modules
from dataloader import CAL_Dataset
from net import get_model
from dataloader import get_data, get_mini_data, load_json, save_json
from train import fit, custom_loss, validate
from metrics import calc_metrics
# paths
data_path = './dataset/'
```
uncomment the cell below if you want your experiments to always yield the same results
```
# manualSeed = 42
# np.random.seed(manualSeed)
# torch.manual_seed(manualSeed)
# # if you are using GPU
# torch.cuda.manual_seed(manualSeed)
# torch.cuda.manual_seed_all(manualSeed)
# torch.backends.cudnn.enabled = False
# torch.backends.cudnn.benchmark = False
# torch.backends.cudnn.deterministic = True
```
#### Training
Initialize the model. Possible values for the task block type: MLP, LSTM, GRU, TempConv
```
params = {'name': 'tempconv', 'type_': 'TempConv', 'lr': 3e-4, 'n_h': 128, 'p':0.5, 'seq_len':5}
save_json(params, f"models/{params['name']}")
model, opt = get_model(params)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
```
get the data loaders. `get_mini_data` loads only a subset of the training data, which we can use to check whether the model is able to overfit
```
train_dl, valid_dl = get_data(data_path, model.params.seq_len, batch_size=16)
# train_dl, valid_dl = get_mini_data(data_path, model.params.seq_len, batch_size=16, l=4000)
```
Train the model. We automatically save the model with the lowest val_loss. If you want to continue the training and keep the loss history, just pass it as an additional argument as shown below.
```
model, val_hist = fit(1, model, custom_loss, opt, train_dl, valid_dl)
# model, val_hist = fit(1, model, custom_loss, opt, train_dl, valid_dl, val_hist=val_hist)
```
uncomment the following two cells if the feature extractor should also be trained
```
# for name,param in model.named_parameters():
# param.requires_grad = True
# opt = optim.Adam(model.parameters())
# model, val_hist = fit(1, model, custom_loss, opt, train_dl, valid_dl)
plt.plot(val_hist)
```
#### evaluate the model
reload model
```
name = 'gru'
params = load_json(f"models/{name}")
model, _ = get_model(params)
model.load_state_dict(torch.load(f"./models/{name}.pth"));
model.eval().to(device);
_, valid_dl = get_data(data_path, model.params.seq_len, batch_size=16)
```
run evaluation on full val set
```
_, all_preds, all_labels = validate(model, valid_dl, custom_loss)
calc_metrics(all_preds, all_labels)
```
#### plot results
```
# for convenience, we can pass an integer instead of the full string
int2key = {0: 'red_light', 1:'hazard_stop', 2:'speed_sign',
3:'relative_angle', 4: 'center_distance', 5: 'veh_distance'}
def plot_preds(k, all_preds, all_labels, start=0, delta=1000):
if isinstance(k, int): k = int2key[k]
# get preds and labels
class_labels = ['red_light', 'hazard_stop', 'speed_sign']
pred = np.argmax(all_preds[k], axis=1) if k in class_labels else all_preds[k]
label = all_labels[k][:, 1] if k in class_labels else all_labels[k]
plt.plot(pred[start:start+delta], 'r--', label='Prediction', linewidth=2.0)
plt.plot(label[start:start+delta], 'g', label='Ground Truth', linewidth=2.0)
plt.legend()
plt.grid()
plt.show()
plot_preds(5, all_preds, all_labels, start=0, delta=4000)
```
#### param search
```
from numpy.random import choice
np.random.seed()
params = {'name': 'tempconv', 'type_': 'TempConv', 'lr': 3e-4, 'n_h': 128, 'p':0.5, 'seq_len':5}
def get_random_NN_parameters():
params = {}
params['type_'] = choice(['MLP', 'GRU', 'LSTM', 'TempConv'])
params['name'] = datetime.now().strftime("%Y_%m_%d_%H_%M")
params['lr'] = np.random.uniform(1e-5, 1e-2)
params['n_h'] = np.random.randint(5, 200)
params['p'] = np.random.uniform(0.25, 0.75)
params['seq_len'] = np.random.randint(1, 15)
return params
while True:
params = get_random_NN_parameters()
print('PARAMS: {}'.format(params))
# instantiate the model
model, opt = get_model(params)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
save_json(params, f"models/{params['name']}")
# get the data loaders
train_dl, valid_dl = get_data(data_path, model.params.seq_len, batch_size=16)
# start the training
model, val_hist = fit(5, model, custom_loss, opt, train_dl, valid_dl)
for name,param in model.named_parameters():
param.requires_grad = True
opt = optim.Adam(model.parameters())
model, val_hist = fit(5, model, custom_loss, opt, train_dl, valid_dl, val_hist=val_hist)
```
# The Python ecosystem
## Why Python?
### Python in a nutshell
[Python](https://www.python.org) is a multi-purpose programming language created in 1989 by [Guido van Rossum](https://en.wikipedia.org/wiki/Guido_van_Rossum) and developed under an open source license.
It has the following characteristics:
- multi-paradigm (procedural, functional, object-oriented);
- dynamic types;
- automatic memory management;
- and much more!
### The Python syntax
For more examples, see the [Python cheatsheet](../tools/python_cheatsheet).
```
def hello(name):
print(f"Hello, {name}")
friends = ["Lou", "David", "Iggy"]
for friend in friends:
hello(friend)
```
### Introduction to Data Science
- Main objective: extract insight from data.
- The term was coined in 1997 in the statistics community.
- "A Data Scientist is a statistician that lives in San Francisco".
- 2012 : "Sexiest job of the 21st century" (Harvard Business Review).
- [Controversy](https://en.wikipedia.org/wiki/Data_science#Relationship_to_statistics) on the expression's real usefulness.
[](https://en.wikipedia.org/wiki/Data_science)
[](http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram)
### Python, a standard for ML and Data Science
- Language qualities (ease of use, simplicity, versatility).
- Involvement of the scientific and academical communities.
- Rich ecosystem of dedicated open source libraries.
## Essential Python tools
### Anaconda
[Anaconda](https://www.anaconda.com/distribution/) is a scientific distribution including Python and many (1500+) specialized packages. It is the easiest way to set up a work environment for ML and Data Science with Python.
[](https://www.anaconda.com/distribution/)
### Jupyter Notebook
The Jupyter Notebook is an open-source web application that allows users to manage documents (_.ipynb_ files) that may contain live code, equations, visualizations and text.
It has become the *de facto* standard for sharing research results in numerical fields.
[](https://jupyter.org/)
### Google Colaboratory
A cloud environment for executing Jupyter notebooks on CPU, GPU or TPU.
[](https://colab.research.google.com)
### NumPy
[NumPy](https://numpy.org/) is a Python library providing support for multi-dimensional arrays, along with a large collection of mathematical functions to operate on these arrays.
It is the fundamental package for scientific computing in Python.
```
# Import the NumPy package under the alias "np"
import numpy as np
x = np.array([1, 4, 2, 5, 3])
print(x[:2])
print(x[2:])
print(np.sort(x))
```
### pandas
[pandas](https://pandas.pydata.org/) is a Python library providing high-performance, easy-to-use data structures and data analysis tools.
The primary data structures in **pandas** are implemented as two classes:
- **DataFrame**, which you can imagine as a relational data table, with rows and named columns.
- **Series**, which is a single column. A DataFrame contains one or more Series and a name for each Series.
The DataFrame is a commonly used abstraction for data manipulation.
```
import pandas as pd
# Create a DataFrame object containing two Series
pop = pd.Series({"CAL": 38332521, "TEX": 26448193, "NY": 19651127})
area = pd.Series({"CAL": 423967, "TEX": 695662, "NY": 141297})
pd.DataFrame({"population": pop, "area": area})
```
### Matplotlib and Seaborn
[Matplotlib](https://matplotlib.org/) is a Python library for 2D plotting. [Seaborn](https://seaborn.pydata.org) is another visualization library that improves presentation of matplotlib-generated graphics.
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Setup plots (should be done on a separate cell for better results)
%matplotlib inline
plt.rcParams["figure.figsize"] = 10, 8
%config InlineBackend.figure_format = "retina"
sns.set()
# Plot a single function
x = np.linspace(0, 10, 30)
plt.plot(x, np.cos(x), label="cosinus")
plt.plot(x, np.sin(x), '-ok', label="sinus")
plt.legend()
plt.show()
```
### scikit-learn
[scikit-learn](https://scikit-learn.org) is a multi-purpose library built on top of NumPy and Matplotlib, providing dozens of built-in ML algorithms and models.
It is the Swiss army knife of Machine Learning.
Fun fact: scikit-learn was originally created by [INRIA](https://www.inria.fr).
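As a minimal illustration of the API (the dataset and estimator below are arbitrary choices, not recommendations):
```
# Minimal scikit-learn sketch: fit a classifier on a built-in dataset and
# report its accuracy on a held-out test split.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")
```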
### Keras
[Keras](https://keras.io/) is a high-level, user-friendly API for creating and training neural nets.
Once compatible with several back-end engines (Theano, CNTK...), Keras is now the official high-level API of [TensorFlow](https://www.tensorflow.org/), Google's Machine Learning platform.
The [2.3.0 release](https://github.com/keras-team/keras/releases/tag/2.3.0) (Sept. 2019) was the last major release of multi-backend Keras.
See [this notebook](https://colab.research.google.com/drive/1UCJt8EYjlzCs1H1d1X0iDGYJsHKwu-NO) for an introduction to TF+Keras.
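Here is a minimal sketch of the Sequential API (layer sizes and the input dimension are arbitrary illustrations):
```
# Minimal Keras sketch: define and compile a small dense network.
# Layer sizes and the input dimension are arbitrary.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```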
### PyTorch
[PyTorch](https://pytorch.org) is a Machine Learning platform supported by Facebook and competing with [TensorFlow](https://www.tensorflow.org/) for the hearts and minds of ML practitioners worldwide. It provides:
- an array manipulation API similar to NumPy;
- an autodifferentiation engine for computing gradients;
- a neural network API.
It is based on previous work, notably [Torch](http://torch.ch/) and [Chainer](https://chainer.org/).
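A minimal sketch of the first two points (tensor math and automatic differentiation); the values are arbitrary:
```
# Minimal PyTorch sketch: NumPy-like tensor math plus automatic gradients.
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # y = 1 + 4 + 9 = 14
y.backward()         # compute dy/dx
print(y.item())      # 14.0
print(x.grad)        # tensor([2., 4., 6.]) since dy/dx = 2x
```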
# Coronagraph Basics
This set of exercises guides the user through a step-by-step process of simulating NIRCam coronagraphic observations of the HR 8799 exoplanetary system. The goal is to familiarize the user with basic `pynrc` classes and functions relevant to coronagraphy.
```
# If running Python 2.x, makes print and division act like Python 3
from __future__ import print_function, division
# Import the usual libraries
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# Enable inline plotting
%matplotlib inline
from IPython.display import display, Latex, clear_output
```
We will start by first importing `pynrc` along with the `obs_hci` (High Contrast Imaging) class, which lives in the `pynrc.obs_nircam` module.
```
import pynrc
from pynrc import nrc_utils # Variety of useful functions and classes
from pynrc.obs_nircam import obs_hci # High-contrast imaging observation class
# Disable informational messages and only include warnings and higher
pynrc.setup_logging(level='WARN')
```
## Source Definitions
The `obs_hci` class first requires two arguments describing the spectra of the science and reference sources (`sp_sci` and `sp_ref`, respectively). Each argument should be a Pysynphot spectrum already normalized to some known flux. `pynrc` includes built-in functions for generating spectra. The user may use either of these or supply their own, as long as it meets the requirements.
1. The `pynrc.stellar_spectrum` function provides the simplest way to define a new spectrum:
```python
bp_k = pynrc.bp_2mass('k') # Define bandpass to normalize spectrum
sp_sci = pynrc.stellar_spectrum('F0V', 5.24, 'vegamag', bp_k)
```
You can also be more specific about the stellar properties with `Teff`, `metallicity`, and `log_g` keywords.
```python
sp_sci = pynrc.stellar_spectrum('F0V', 5.24, 'vegamag', bp_k,
Teff=7430, metallicity=-0.47, log_g=4.35)
```
2. Alternatively, the `pynrc.source_spectrum` class ingests spectral information of a given target and generates a model fit to the known photometric SED. Two model routines can be fit. The first is a very simple scale factor that is applied to the input spectrum, while the second takes the input spectrum and adds an IR excess modeled as a modified blackbody function. The user can find the relevant photometric data at http://vizier.u-strasbg.fr/vizier/sed/ and click download data as a VOTable.
```
# Define 2MASS Ks bandpass and source information
bp_k = pynrc.bp_2mass('k')
# Science source, dist, age, sptype, Teff, [Fe/H], log_g, mag, band
args_sources = [('HR 8799', 39.0, 30, 'F0V', 7430, -0.47, 4.35, 5.24, bp_k)]
# References source, sptype, Teff, [Fe/H], log_g, mag, band
ref_sources = [('HD 220657', 'F8III', 5888, -0.01, 3.22, 3.04, bp_k)]
name_sci, dist_sci, age, spt_sci, Teff_sci, feh_sci, logg_sci, mag_sci, bp_sci = args_sources[0]
name_ref, spt_ref, Teff_ref, feh_ref, logg_ref, mag_ref, bp_ref = ref_sources[0]
# For the purposes of simplicity, we will use pynrc.stellar_spectrum()
sp_sci = pynrc.stellar_spectrum(spt_sci, mag_sci, 'vegamag', bp_sci,
Teff=Teff_sci, metallicity=feh_sci, log_g=logg_sci)
sp_sci.name = name_sci
# And the reference source
sp_ref = pynrc.stellar_spectrum(spt_ref, mag_ref, 'vegamag', bp_ref,
Teff=Teff_ref, metallicity=feh_ref, log_g=logg_ref)
sp_ref.name = name_ref
# Plot the two spectra
fig, ax = plt.subplots(1,1, figsize=(8,5))
xr = [2.5,5.5]
for sp in [sp_sci, sp_ref]:
w = sp.wave / 1e4
ind = (w>=xr[0]) & (w<=xr[1])
sp.convert('Jy')
f = sp.flux / np.interp(4.0, w, sp.flux)
ax.semilogy(w[ind], f[ind], lw=1.5, label=sp.name)
ax.set_ylabel('Flux (Jy) normalized at 4 $\mu m$')
sp.convert('flam')
ax.set_xlim(xr)
ax.set_xlabel(r'Wavelength ($\mu m$)')
ax.set_title('Spectral Sources')
# Overplot Filter Bandpass
bp = pynrc.read_filter('F444W', 'CIRCLYOT', 'MASK430R')
ax2 = ax.twinx()
ax2.plot(bp.wave/1e4, bp.throughput, color='C2', label=bp.name+' Bandpass')
ax2.set_ylim([0,0.8])
ax2.set_xlim(xr)
ax2.set_ylabel('Bandpass Throughput')
ax.legend(loc='upper left')
ax2.legend(loc='upper right')
fig.tight_layout()
```
## Initialize Observation
Now we will initialize the high-contrast imaging class `pynrc.obs_hci` using the spectral objects and various other settings. The `obs_hci` object is a subclass of the more generalized `NIRCam` class. It implements new settings and functions specific to high-contrast imaging observations for corongraphy and direct imaging.
For this tutorial, we want to observe these targets using the `MASK430R` coronagraph in the `F444W` filter. All circular coronagraphic masks such as the `430R` (R=round) should be paired with the `CIRCLYOT` pupil element, whereas wedge/bar masks are paired with `WEDGELYOT` pupil. Observations in the LW channel are most commonly observed in `WINDOW` mode with a 320x320 detector subarray size. Full detector sizes are also available.
The PSF simulation size (`fov_pix` keyword) should also be of similar size as the subarray window (we recommend avoiding anything above `fov_pix=1024` due to computation time and memory usage). Use odd numbers to center the PSF in the middle of the pixel. If `fov_pix` is specified as even, then PSFs get centered at the corners. This distinction really only matters for unocculted observations (i.e., where the PSF flux is concentrated in a tight central core).
We also need to specify a WFE drift value (`wfe_ref_drift` parameter), which defines the anticipated drift in nm between the science and reference sources. For the moment, let's initialize with a value of 0 nm. This prevents an initially long process by which `pynrc` calculates changes made to the PSF over a wide range of drift values.
Extended disk models can also be specified upon initialization using the `disk_hdu` keyword.
```
filt, mask, pupil = ('F444W', 'MASK430R', 'CIRCLYOT')
wind_mode, subsize = ('WINDOW', 320)
fov_pix, oversample = (320, 2)
wfe_ref_drift = 0
obs = pynrc.obs_hci(sp_sci, sp_ref, dist_sci, filter=filt, mask=mask, pupil=pupil,
wfe_ref_drift=wfe_ref_drift, fov_pix=fov_pix, oversample=oversample,
wind_mode=wind_mode, xpix=subsize, ypix=subsize, verbose=True)
```
All information for the reference observation is stored in the attribute `obs.nrc_ref`, which is simply its own isolated `NIRCam` (`nrc_hci`) class. After initialization, any updates made to the primary `obs` instrument configuration (e.g., filters, detector size, etc.) must also be made inside the `obs.nrc_ref` class. That is to say, changes do not automatically propagate. In many ways, it's best to think of these as two separate classes,
```python
obs_sci = obs
obs_ref = obs.nrc_ref
```
with some linked references between the two.
Now that we've successfully initialized the `obs_hci` observations, let's specify the `wfe_ref_drift`. If this is your first time, then the `nrc_utils.wfed_coeff` function is called to determine a relationship between PSFs in the presence of WFE drift. This relationship is saved to disk in the `PYNRC_DATA` directory as a set of polynomial coefficients. Future calculations utilize these coefficients to quickly generate a new PSF for any arbitrary drift value.
```
# WFE drift amount between rolls
# This only gets called during gen_roll_image()
# and temporarily updates obs.wfe_drift to create
# a new PSF.
obs.wfe_roll_drift = 2
# Drift amount between Roll 1 and reference
# This is simply a link to obs.nrc_ref.wfe_drift
obs.wfe_ref_drift = 10
```
## Exposure Settings
Optimization of exposure settings is demonstrated in another tutorial, so we will not repeat that process here. We can assume the optimization process was performed elsewhere to choose the `DEEP8` pattern with 16 groups and 5 total integrations. These settings apply to each roll position of the science observation as well as to the reference observation.
```
# Update both the science and reference observations
obs.update_detectors(read_mode='DEEP8', ngroup=16, nint=5, verbose=True)
obs.nrc_ref.update_detectors(read_mode='DEEP8', ngroup=16, nint=5)
```
## Add Planets
There are four known giant planets orbiting HR 8799 at various locations. Ideally, we would like to position them at their predicted locations on the anticipated observation date. For this case, we choose a plausible observation date of November 1, 2019. To convert between $(x,y)$ and $(r,\theta)$, use the `nrc_utils.xy_to_rtheta` and `nrc_utils.rtheta_to_xy` functions.
When adding the planets, it doesn't matter too much which exoplanet model spectrum we decide to use since the spectra are still fairly unconstrained at these wavelengths. We do know roughly the planets' luminosities, so we can simply choose some reasonable model and renormalize it to the appropriate filter brightness. Currently, the only exoplanet spectral models available to `pynrc` are those from Spiegel & Burrows (2012).
```
# Projected locations for date 11/01/2019
# These are preliminary positions, but within constrained orbital parameters
loc_list = [(-1.57, 0.64), (0.42, 0.87), (0.5, -0.45), (0.35, 0.20)]
# Estimated magnitudes within F444W filter
pmags = [16.0, 15.0, 14.6, 14.7]
# Add planet information to observation class.
# These are stored in obs.planets.
# Can be cleared using obs.kill_planets().
obs.kill_planets()
for i, loc in enumerate(loc_list):
obs.add_planet(mass=10, entropy=13, age=age, xy=loc, runits='arcsec',
renorm_args=(pmags[i], 'vegamag', obs.bandpass))
# Generate and plot a noiseless slope image to make sure things look right
PA1 = 85
im_planets = obs.gen_planets_image(PA_offset=PA1)
from matplotlib.patches import Circle
from pynrc.nrc_utils import (coron_ap_locs, build_mask_detid, fshift, pad_or_cut_to_size)
fig, ax = plt.subplots(figsize=(6,6))
xasec = obs.det_info['xpix'] * obs.pix_scale
yasec = obs.det_info['ypix'] * obs.pix_scale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
xylim = 4
vmin = 0
vmax = 0.5*im_planets.max()
ax.imshow(im_planets, extent=extent, vmin=vmin, vmax=vmax)
# Overlay the coronagraphic mask
detid = obs.Detectors[0].detid
im_mask = obs.mask_images[detid]
# Do some masked transparency overlays
masked = np.ma.masked_where(im_mask>0.99, im_mask)
#ax.imshow(1-masked, extent=extent, alpha=0.5)
ax.imshow(1-masked, extent=extent, alpha=0.3, cmap='Greys_r', vmin=-0.5)
xc_off = obs.bar_offset
for loc in loc_list:
xc, yc = loc
xc, yc = nrc_utils.xy_rot(xc, yc, PA1)
xc += xc_off
circle = Circle((xc,yc), radius=xylim/15., alpha=0.7, lw=1, edgecolor='red', facecolor='none')
ax.add_artist(circle)
xlim = ylim = np.array([-1,1])*xylim
xlim = xlim + xc_off
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.set_title('{} planets -- {} {}'.format(sp_sci.name, obs.filter, obs.mask))
color = 'grey'
ax.tick_params(axis='both', color=color, which='both')
for k in ax.spines.keys():
ax.spines[k].set_color(color)
nrc_utils.plotAxes(ax, width=1, headwidth=5, alength=0.15, angle=PA1,
position=(0.25,0.9), label1='E', label2='N')
fig.tight_layout()
```
As we can see, even with "perfect PSF subtraction" and no noise, it's difficult to make out planet e. This is primarily due to its location relative to the occulting mask reducing its throughput, along with confusion from bright diffraction spots of nearby sources.
## Estimated Performance
Now we are ready to determine contrast performance and sensitivities as a function of distance from the star.
### 1. Roll-Subtracted Images
First, we will create a quick simulated roll-subtracted image using the `gen_roll_image` method. For the selected observation date of 11/1/2019, APT shows a PA range of 84$^{\circ}$ to 96$^{\circ}$. So, we'll assume Roll 1 has PA1=85, while Roll 2 has PA2=95. In this case, "roll subtraction" simply creates two science images observed at different parallactic angles, then subtracts the same reference observation from each. The two results are then de-rotated to a common PA=0 and averaged.
There is also the option to create ADI images, where the other roll position becomes the reference star by setting `no_ref=True`.
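A minimal sketch of that ADI option (the same call used below, with `no_ref=True`):
```
# ADI-style reduction: the opposite roll position serves as the PSF reference,
# so the dedicated reference observation is not used.
hdulist_adi = obs.gen_roll_image(PA1=85, PA2=95, no_ref=True)
```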
```
# Cycle through a few WFE drift values
wfe_list = [0,5,10]
# PA values for each roll
PA1, PA2 = (85, 95)
# A dictionary of HDULists
hdul_dict = {}
for i, wfe_drift in enumerate(wfe_list):
print(wfe_drift)
# Update WFE reference drift value
obs.wfe_ref_drift = wfe_drift
# Set the final output image to be oversampled
hdulist = obs.gen_roll_image(PA1=PA1, PA2=PA2)
hdul_dict[wfe_drift] = hdulist
from pynrc.obs_nircam import plot_hdulist
from matplotlib.patches import Circle
fig, axes = plt.subplots(1,3, figsize=(14,4.3))
xylim = 2.5
xlim = ylim = np.array([-1,1])*xylim
for j, wfe_drift in enumerate(wfe_list):
ax = axes[j]
hdul = hdul_dict[wfe_drift]
plot_hdulist(hdul, xr=xlim, yr=ylim, ax=ax, vmin=0, vmax=8)
# Location of planet
for loc in loc_list:
circle = Circle(loc, radius=xylim/15., lw=1, edgecolor='red', facecolor='none')
ax.add_artist(circle)
ax.set_title('$\Delta$WFE = {:.0f} nm'.format(wfe_drift))
nrc_utils.plotAxes(ax, width=1, headwidth=5, alength=0.15, position=(0.9,0.7), label1='E', label2='N')
fig.suptitle('{} -- {} {}'.format(name_sci, obs.filter, obs.mask), fontsize=14)
fig.tight_layout()
fig.subplots_adjust(top=0.85)
```
**Note:** At first glance, it appears as if the innermost Planet e is getting brighter with increased WFE drift, which would be understandably confusing. However, upon further investigation, there just happens to be a bright residual speckle that lines up well with Planet e when observed at this specific parallactic angle. This was verified by adjusting the observed PA as well as removing the planets from the simulations.
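A sketch of that check, assuming we simply clear the planets, regenerate the roll-subtracted image, and inspect it near Planet e's expected location (the planets are re-added afterwards so later cells are unaffected):
```
# Regenerate the roll-subtracted image without any planets to confirm that
# the bright feature near Planet e is a residual stellar speckle.
obs.kill_planets()
hdul_speckle_check = obs.gen_roll_image(PA1=PA1, PA2=PA2)
# ...compare hdul_speckle_check with hdul_dict[10] around Planet e's location...
# Restore the planets exactly as before so subsequent cells see the same state.
for i, loc in enumerate(loc_list):
    obs.add_planet(mass=10, entropy=13, age=age, xy=loc, runits='arcsec',
                   renorm_args=(pmags[i], 'vegamag', obs.bandpass))
```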
### 2. Contrast Curves
Next, we will cycle through a few WFE drift values to get an idea of potential predicted sensitivity curves. The `calc_contrast` method returns a tuple of three arrays:
1. The radius in arcsec.
2. The n-sigma contrast.
3. The n-sigma magnitude sensitivity limit (vega mag).
```
# Cycle through varying levels of WFE drift and calculate contrasts
wfe_list = [0,5,10]
nsig = 5
# PA values for each roll
PA1, PA2 = (85, 95)
roll_angle = np.abs(PA2 - PA1)
curves = []
for i, wfe_drift in enumerate(wfe_list):
print(wfe_drift)
# Generate series of observations for each filter
obs.wfe_ref_drift = wfe_drift
# Generate contrast curves
result = obs.calc_contrast(roll_angle=roll_angle, nsig=nsig)
curves.append(result)
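# Each element of `curves` is the tuple described above:
# (radius in arcsec, n-sigma contrast, n-sigma sensitivity in vegamag)
rr, contrast, mag_sens = curves[-1]
print('{} radial points; deepest {}-sigma contrast: {:.1e}'.format(len(rr), nsig, contrast.min()))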
from pynrc.obs_nircam import plot_contrasts, plot_planet_patches, plot_contrasts_mjup
import matplotlib.patches as mpatches
# fig, ax = plt.subplots(figsize=(8,5))
fig, axes = plt.subplots(1,2, figsize=(14,4.5))
xr=[0,5]
yr=[24,8]
# 1a. Plot contrast curves and set x/y limits
ax = axes[0]
ax, ax2, ax3 = plot_contrasts(curves, nsig, wfe_list, obs=obs,
xr=xr, yr=yr, ax=ax, return_axes=True)
# 1b. Plot the locations of exoplanet companions
label = 'Companions ({})'.format(filt)
planet_dist = [np.sqrt(x**2+y**2) for x,y in loc_list]
ax.plot(planet_dist, pmags, marker='o', ls='None', label=label, color='k', zorder=10)
# 1c. Plot Spiegel & Burrows (2012) exoplanet fluxes (Hot Start)
plot_planet_patches(ax, obs, age=age, entropy=13, av_vals=None)
ax.legend(ncol=2)
# 2. Plot in terms of MJup using COND models
ax = axes[1]
plot_contrasts_mjup(curves, nsig, wfe_list, obs=obs, age=age,
ax=ax, twin_ax=True, xr=xr, yr=None)
ax.set_yscale('log')
ax.set_ylim([0.08,100])
ax.legend(loc='upper right', title='COND ({:.0f} Myr)'.format(age))
fig.suptitle('{} ({} + {})'.format(name_sci, obs.filter, obs.mask), fontsize=16)
fig.tight_layout()
fig.subplots_adjust(top=0.85, bottom=0.1 , left=0.05, right=0.97)
```
The innermost Planet e is right on the edge of the detection threshold as suggested by the simulated images.
### 3. Saturation Levels
Create an image showing level of saturation for each pixel. For NIRCam, saturation is important to track for purposes of accurate slope fits and persistence correction. In this case, we will plot the saturation levels both at `NGROUP=2` and `NGROUP=obs.det_info['ngroup']`. Saturation is defined at 80% well level, but can be modified using the `well_fill` keyword.
We want to perform this analysis for both science and reference targets.
```
# Saturation limits
ng_max = obs.det_info['ngroup']
sp_flat = pynrc.stellar_spectrum('flat')
print('NGROUP=2')
_ = obs.sat_limits(sp=sp_flat,ngroup=2,verbose=True)
print('')
print('NGROUP={}'.format(ng_max))
_ = obs.sat_limits(sp=sp_flat,ngroup=ng_max,verbose=True)
mag_sci = obs.star_flux('vegamag')
mag_ref = obs.star_flux('vegamag', sp=obs.sp_ref)
print('')
print('{} flux at {}: {:0.2f} mags'.format(obs.sp_sci.name, obs.filter, mag_sci))
print('{} flux at {}: {:0.2f} mags'.format(obs.sp_ref.name, obs.filter, mag_ref))
```
In this case, we don't expect HR 8799 to saturate. However, the reference source should have some saturated pixels before the end of an integration.
```
# Well level of each pixel for science source
sci_levels1 = obs.saturation_levels(ngroup=2)
sci_levels2 = obs.saturation_levels(ngroup=ng_max)
# Which pixels are saturated?
sci_mask1 = sci_levels1 > 0.8
sci_mask2 = sci_levels2 > 0.8
# Well level of each pixel for reference source
ref_levels1 = obs.saturation_levels(ngroup=2, do_ref=True)
ref_levels2 = obs.saturation_levels(ngroup=ng_max, do_ref=True)
# Which pixels are saturated?
ref_mask1 = ref_levels1 > 0.8
ref_mask2 = ref_levels2 > 0.8
# How many saturated pixels?
nsat1_sci = len(sci_levels1[sci_mask1])
nsat2_sci = len(sci_levels2[sci_mask2])
print(obs.sp_sci.name)
print('{} saturated pixel at NGROUP=2'.format(nsat1_sci))
print('{} saturated pixel at NGROUP={}'.format(nsat2_sci,ng_max))
# How many saturated pixels?
nsat1_ref = len(ref_levels1[ref_mask1])
nsat2_ref = len(ref_levels2[ref_mask2])
print('')
print(obs.sp_ref.name)
print('{} saturated pixel at NGROUP=2'.format(nsat1_ref))
print('{} saturated pixel at NGROUP={}'.format(nsat2_ref,ng_max))
# Saturation Mask for science target
nsat1, nsat2 = (nsat1_sci, nsat2_sci)
sat_mask1, sat_mask2 = (sci_mask1, sci_mask2)
sp = obs.sp_sci
nrc = obs
# Only display saturation masks if there are saturated pixels
if nsat2 > 0:
fig, axes = plt.subplots(1,2, figsize=(10,5))
xasec = nrc.det_info['xpix'] * nrc.pix_scale
yasec = nrc.det_info['ypix'] * nrc.pix_scale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
axes[0].imshow(sat_mask1, extent=extent)
axes[1].imshow(sat_mask2, extent=extent)
axes[0].set_title('{} Saturation (NGROUP=2)'.format(sp.name))
axes[1].set_title('{} Saturation (NGROUP={})'.format(sp.name,ng_max))
for ax in axes:
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.tick_params(axis='both', color='white', which='both')
for k in ax.spines.keys():
ax.spines[k].set_color('white')
fig.tight_layout()
else:
print('No saturation detected.')
# Saturation Mask for reference
nsat1, nsat2 = (nsat1_ref, nsat2_ref)
sat_mask1, sat_mask2 = (ref_mask1, ref_mask2)
sp = obs.sp_ref
nrc = obs.nrc_ref
# Only display saturation masks if there are saturated pixels
if nsat2 > 0:
fig, axes = plt.subplots(1,2, figsize=(10,5))
xasec = nrc.det_info['xpix'] * nrc.pix_scale
yasec = nrc.det_info['ypix'] * nrc.pix_scale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
axes[0].imshow(sat_mask1, extent=extent)
axes[1].imshow(sat_mask2, extent=extent)
axes[0].set_title('{} Saturation (NGROUP=2)'.format(sp.name))
axes[1].set_title('{} Saturation (NGROUP={})'.format(sp.name,ng_max))
for ax in axes:
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.tick_params(axis='both', color='white', which='both')
for k in ax.spines.keys():
ax.spines[k].set_color('white')
fig.tight_layout()
else:
print('No saturation detected.')
```
# Exploratory Data Analysis Case Study -
##### Conducted by Nirbhay Tandon & Naveen Sharma
## 1. Import libraries and set required parameters
```
#import all the libraries and modules
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import re
from scipy import stats
# Enable autocomplete in Jupyter Notebook.
%config IPCompleter.greedy=True
# Suppress warnings
import warnings
warnings.filterwarnings('ignore')
import os
## Set the max display columns to None so that pandas doesn't truncate the output
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 40)
```
### Reading and analysing Data
```
applicationData=pd.read_csv("./application_data.csv")
applicationData.head()
```
## 2. Data Inspection
```
#shape of application_data.csv data
applicationData.shape
#take information about the data
applicationData.info()
#get the information about the numerical data
applicationData.describe()
## print the column names for application_data.csv
applicationData.columns
## print the various datatypes of application_data.csv
applicationData.dtypes
```
## 3. Data Cleaning & Quality Check
In this section we will perform various checks and balances on the application_data.csv file.
We will:
* Perform a check for the number of missing/null values on each column
* Perform a check for the percentage of missing/null values of each column
* Drop the columns that have a high percentage of null values, i.e. over 60%
* Print the names of the dropped columns
* Verify that the columns were dropped by comparing the shape of the new dataframe created
* For columns with around 13% of null values we will discuss the best way to handle the missing/null values in the columns
* Check the data types of these columns and determine if they are categorical in nature or not
* Check the data types for all the columns in the dataframe and convert them to numerical data types if required
* Check for any outliers in any 3 numerical columns and treat them accordingly
* Create bins for continuous variables and analyse them
```
### Let us create a utility function to generate a list of null values in different dataframes
### We will utilize this function extensively throughout the notebook.
def generateNullValuesPercentageTable(dataframe):
totalNullValues = dataframe.isnull().sum().sort_values(ascending=False)
percentageOfNullValues = round((dataframe.isnull().sum()*100/len(dataframe)).sort_values(ascending=False),2)
columnNamesWithPrcntgOfNullValues = pd.concat([totalNullValues, percentageOfNullValues], axis=1, keys=['Total Null Values', 'Percentage of Null Values'])
return columnNamesWithPrcntgOfNullValues
## Check the number of null values in each column and display them in
## descending order along with the percentage of null values
generateNullValuesPercentageTable(applicationData)
### Assess the shape of the dataframe before dropping
### columns with a high percentage of
### null values
print("The Initial shape of the DataFrame is: ", applicationData.shape)
#Drop all the columns where the
## percentage of missing values is above 60% in application_data.csv
droppedColumns = applicationData.columns[applicationData.isnull().mean() > 0.60]
applicationDataAfterDroppedColumns = applicationData.drop(droppedColumns, axis = 1)
print("The new shape of the DataFrame is: ", applicationDataAfterDroppedColumns.shape)
## verify the dataframe looks correct after dropping the columns
applicationDataAfterDroppedColumns.head()
```
### Observation:
As you can see, the shape of the data has changed from (307511, 122) to (307511, 105), which means we have dropped 17 columns that had over 60% null values. The dropped columns are listed below.
```
print("The columns that have been dropped are: ", droppedColumns)
## print the percentage of null values for each column in the
## new data frame after the columns have been dropped
generateNullValuesPercentageTable(applicationDataAfterDroppedColumns)
#### Check dataframe shape to confirm no other columns were dropped
applicationDataAfterDroppedColumns.shape
```
### Observation:
As you can see above, there are still a few columns that have above 30% null/missing values. We can deal with those null/missing values using various methods of imputation.
##### Some key points:
- The columns with above 60% of null values have successfully been dropped
- The column with the highest percentage of null values after the drop is "LANDAREA_MEDI" with 59.38% null values, whereas earlier it was "COMMONAREA_MEDI" with 69.87% null values
- The new shape of the dataframe is (307511, 105)
Checking the dataframe after dropping the high-null columns
```
applicationDataAfterDroppedColumns.head()
### Analyzing Columns with null values around 14% to determine
### what might be the best way to impute such values
listOfColumnsWithLessValuesOfNull = applicationDataAfterDroppedColumns.columns[applicationDataAfterDroppedColumns.isnull().mean() < 0.14]
applicationDataWithLessPrcntgOfNulls = applicationDataAfterDroppedColumns.loc[:, listOfColumnsWithLessValuesOfNull]
print(applicationDataWithLessPrcntgOfNulls.shape)
applicationDataWithLessPrcntgOfNulls.head(20)
### Analysing columns with around 13.5% null values
columnsToDescribe = ['AMT_REQ_CREDIT_BUREAU_QRT', 'AMT_REQ_CREDIT_BUREAU_YEAR', 'AMT_REQ_CREDIT_BUREAU_MON', 'AMT_REQ_CREDIT_BUREAU_DAY', 'AMT_REQ_CREDIT_BUREAU_HOUR','AMT_REQ_CREDIT_BUREAU_WEEK', 'OBS_30_CNT_SOCIAL_CIRCLE', 'OBS_60_CNT_SOCIAL_CIRCLE', 'EXT_SOURCE_2']
applicationDataAfterDroppedColumns[columnsToDescribe].describe()
### Let us plot a boxplot to see the various variables
fig, axes = plt.subplots(nrows=3, ncols = 2, figsize=(40,25))
sns.boxplot(data=applicationDataAfterDroppedColumns.AMT_REQ_CREDIT_BUREAU_YEAR, ax=axes[0][0])
axes[0][0].set_title('AMT_REQ_CREDIT_BUREAU_YEAR')
sns.boxplot(data=applicationDataAfterDroppedColumns.AMT_REQ_CREDIT_BUREAU_MON, ax=axes[0][1])
axes[0][1].set_title('AMT_REQ_CREDIT_BUREAU_MON')
sns.boxplot(data=applicationDataAfterDroppedColumns.AMT_REQ_CREDIT_BUREAU_DAY, ax=axes[1][0])
axes[1][0].set_title('AMT_REQ_CREDIT_BUREAU_DAY')
sns.boxplot(applicationDataAfterDroppedColumns.AMT_REQ_CREDIT_BUREAU_HOUR, ax=axes[1][1])
axes[1][1].set_title('AMT_REQ_CREDIT_BUREAU_HOUR')
sns.boxplot(applicationDataAfterDroppedColumns.AMT_REQ_CREDIT_BUREAU_WEEK, ax=axes[2][0])
axes[2][0].set_title('AMT_REQ_CREDIT_BUREAU_WEEK')
plt.show()
```
### Observation
As you can see above, when we take a look at the columns that have a low number of null values, the shape of the data changes to (307511, 71) compared to (307511, 105). We lose 34 columns in the process.
Checking columns having a low number of null values (around 13% or so) and analysing the best metric
to impute the missing/null values in those columns, based on whether the column/variable is categorical or continuous (a sketch of this imputation follows the list below):
- AMT_REQ_CREDIT_BUREAU_HOUR (99.4% of the values are 0.0 with 4.0 and 3.0 values being outliers. It's safe to impute the missing values with 0.0)
- AMT_REQ_CREDIT_BUREAU_DAY (99.4% of the values are 0.0 with 9.0 and 8.0 values being outliers. It's safe to impute the missing values with 0.0)
- AMT_REQ_CREDIT_BUREAU_WEEK (96.8% of the values are 0.0 with 8.0 and 7.0 values being outliers. It's safe to impute the missing values with 0.0)
- AMT_REQ_CREDIT_BUREAU_MON (83.6% of the values are 0.0. It's safe to impute the missing values with the mode, 0.0)
- AMT_REQ_CREDIT_BUREAU_YEAR (It seems fine to use the median value 1.0 here for imputing the missing values)
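Below is a minimal sketch of the imputation strategy just described, applied to a throwaway copy of the dataframe for illustration only (the notebook itself only imputes `AMT_REQ_CREDIT_BUREAU_HOUR` further down):
```
## Illustration only -- work on a copy so the main dataframe is untouched.
imputationSketch = applicationDataAfterDroppedColumns.copy()
zeroFillCols = ['AMT_REQ_CREDIT_BUREAU_HOUR', 'AMT_REQ_CREDIT_BUREAU_DAY',
                'AMT_REQ_CREDIT_BUREAU_WEEK', 'AMT_REQ_CREDIT_BUREAU_MON']
## These columns are overwhelmingly 0.0, so fill missing values with 0.0
imputationSketch[zeroFillCols] = imputationSketch[zeroFillCols].fillna(0.0)
## Yearly enquiries: fill with the column median (1.0 per the describe() output above)
imputationSketch['AMT_REQ_CREDIT_BUREAU_YEAR'] = imputationSketch['AMT_REQ_CREDIT_BUREAU_YEAR'].fillna(
    imputationSketch['AMT_REQ_CREDIT_BUREAU_YEAR'].median())
imputationSketch[zeroFillCols + ['AMT_REQ_CREDIT_BUREAU_YEAR']].isnull().sum()
```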
```
### Checking for categorical data
categoricalDataColumns = applicationDataAfterDroppedColumns.nunique().sort_values()
categoricalDataColumns
```
### Observation:
Given the number of columns with only a handful of unique values, we will convert all columns with up to 5 unique values into categorical columns
```
listOfColumnsWithMaxTenUniqueValues = [i for i in applicationDataAfterDroppedColumns.columns if applicationDataAfterDroppedColumns[i].nunique() <= 5]
for col in listOfColumnsWithMaxTenUniqueValues:
applicationDataAfterDroppedColumns[col] = applicationDataAfterDroppedColumns[col].astype('category')
applicationDataAfterDroppedColumns.shape
applicationDataAfterDroppedColumns.head()
## Check for datatypes of all columns in the new dataframe
applicationDataAfterDroppedColumns.info()
```
### Observation:
We notice above that after dropping the null columns we still have:
- 43 Categorical
- 48 Float
- 6 Integer
- 8 Object data types
```
## Convert the categorical data columns into individual columns with numeric values for better analysis
## we will do this using one-hot-encoding method
convertedCategoricalColumnsDataframe = pd.get_dummies(applicationDataAfterDroppedColumns, columns=listOfColumnsWithMaxTenUniqueValues, prefix=listOfColumnsWithMaxTenUniqueValues)
convertedCategoricalColumnsDataframe.head()
## Converting these columns has changed the shape of the data to
print("Shape of Application Data after categorical column conversion: ", convertedCategoricalColumnsDataframe.shape)
```
### Observation
As you can see above we have successfully converted the various categorical datatypes into their own columns.
The new shape of the data is (307511, 158) compared to (307511, 105). We have introduced 53 new columns. These will help us identify the best possible method to use for imputing values.
```
### Count the number of missing values in the new dataframe
generateNullValuesPercentageTable(convertedCategoricalColumnsDataframe)
```
### Observation
Let us take the following columns - AMT_REQ_CREDIT_BUREAU_YEAR, AMT_REQ_CREDIT_BUREAU_MON, OBS_30_CNT_SOCIAL_CIRCLE, OBS_60_CNT_SOCIAL_CIRCLE, EXT_SOURCE_2.
Determine their datatypes and, using the describe output above, try to identify what values can be used to impute into the null columns.
```
listOfCols = ['AMT_REQ_CREDIT_BUREAU_YEAR', 'AMT_REQ_CREDIT_BUREAU_MON', 'OBS_30_CNT_SOCIAL_CIRCLE', 'OBS_60_CNT_SOCIAL_CIRCLE', 'EXT_SOURCE_2']
convertedCategoricalColumnsDataframe[listOfCols].dtypes
applicationDataAfterDroppedColumns['AMT_REQ_CREDIT_BUREAU_HOUR'].fillna(0.0, inplace = True)
applicationDataAfterDroppedColumns['AMT_REQ_CREDIT_BUREAU_HOUR'] = applicationDataAfterDroppedColumns['AMT_REQ_CREDIT_BUREAU_HOUR'].astype(int)
## convert DAYS_BIRTH to years
def func_age_yrs(x):
return round(abs(x/365),0)
applicationDataAfterDroppedColumns['DAYS_BIRTH'] = applicationDataAfterDroppedColumns['DAYS_BIRTH'].apply(func_age_yrs)
```
### Observation
In all the selected columns we can see that we can use the median to impute the values in the dataframe. They all correspond to 0.00 except EXT_SOURCE_2. For EXT_SOURCE_2 we observe that the mean and the median values are roughly similar at 5.143927e-01 for mean & 5.659614e-01 for median. So we could use either of those values to impute.
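A short sketch of that median-based imputation, again on a throwaway copy and using the `listOfCols` defined in the previous cell:
```
## Illustration only: median imputation for the remaining low-null columns
medianSketch = applicationDataAfterDroppedColumns.copy()
for col in listOfCols:
    medianSketch[col] = medianSketch[col].fillna(medianSketch[col].median())
medianSketch[listOfCols].isnull().sum()
```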
Let us now check for outliers on 6 numerical columns.
For this we can use our dataset from after we dropped the columns with over 60% null values.
```
### We will use boxplots to inspect the outliers in AMT_CREDIT, AMT_ANNUITY, AMT_GOODS_PRICE, AMT_INCOME_TOTAL, DAYS_BIRTH and DAYS_EMPLOYED
fig, axes = plt.subplots(nrows=3, ncols = 2, figsize=(50,50))
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_CREDIT.dropna(), ax=axes[0][0])
axes[0][0].set_title('AMT_CREDIT')
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_ANNUITY.dropna(), ax=axes[0][1])
axes[0][1].set_title('AMT_ANNUITY')
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_GOODS_PRICE.dropna(), ax=axes[1][0])
axes[1][0].set_title('AMT_GOODS_PRICE')
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_INCOME_TOTAL.dropna(), ax=axes[1][1])
axes[1][1].set_title('AMT_INCOME_TOTAL')
sns.boxplot(data= applicationDataAfterDroppedColumns.DAYS_BIRTH.dropna(), ax=axes[2][0])
axes[2][0].set_title('DAYS_BIRTH')
sns.boxplot(data= applicationDataAfterDroppedColumns.DAYS_EMPLOYED.dropna(), ax=axes[2][1])
axes[2][1].set_title('DAYS_EMPLOYED')
plt.show()
```
### Observation
The box plots clearly show many outliers that have to be removed for a better analysis. In the next part of the code we remove outliers using the function `remove_outlier`, which accepts a dataframe and a column name (the column in which we want to remove outliers) as arguments and returns the dataframe with the outliers removed.
Analysing outliers in Numeric variables and Handling/Treating them with appropriate methods.
- AMT_REQ_CREDIT_BUREAU_HOUR (99.4% of the values are 0.0 with value '4' and '3' being outliers. Should be retained)
Considering that its the number of enquiries made by the company to credit bureau, this could significantly mean that the company was extremely cautious in making a decision of whether to grant loan/credit to this particular client or not. This might imply that it could be a case of 'High Risk' client and can influence the Target variable. Its better to retain these outlier values
- AMT_INCOME_TOTAL ( Clearly 117000000.0 is an outlier here.)
The above outlier can be dropped in order to not skew the analysis. We can use the IQR to remove this value.
- DAYS_BIRTH ( There is no outlier in this column)
- DAYS_EMPLOYED ( Clearly 1001 is an outlier here and should be deleted. 18% of the column values are 1001)
Clearly 1001 is an outlier here, and 18% of the column values are 1001. Since this represents the number of years of employment as on the application date, these rows should be deleted. Values of 40 to 49 years of employment seem questionable as well, but let's not drop them for now, considering exceptional cases.
Another way to see the distribution is to use a distribution plot.
```
fig, axes = plt.subplots(nrows=3, ncols = 2, figsize=(50,50))
sns.distplot(applicationDataAfterDroppedColumns.AMT_CREDIT.dropna(), ax=axes[0][0])
axes[0][0].set_title('AMT_CREDIT')
sns.distplot(applicationDataAfterDroppedColumns.AMT_ANNUITY.dropna(), ax=axes[0][1])
axes[0][1].set_title('AMT_ANNUITY')
sns.distplot(applicationDataAfterDroppedColumns.AMT_GOODS_PRICE.dropna(), ax=axes[1][0])
axes[1][0].set_title('AMT_GOODS_PRICE')
sns.distplot(applicationDataAfterDroppedColumns.AMT_INCOME_TOTAL.dropna(), ax=axes[1][1])
axes[1][1].set_title('AMT_INCOME_TOTAL')
sns.distplot(applicationDataAfterDroppedColumns.DAYS_BIRTH.dropna(), ax=axes[2][0])
axes[2][0].set_title('DAYS_BIRTH')
sns.distplot(applicationDataAfterDroppedColumns.DAYS_EMPLOYED.dropna(), ax=axes[2][1])
axes[2][1].set_title('DAYS_EMPLOYED')
plt.show()
```
### Observation
As you can see from the distplots above, a few extreme values keep the distributions from looking properly normalised.
The 'DAYS_EMPLOYED' column is heavily skewed towards the negative side of the plot.
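To put a number on that skew, we can compute the sample skewness with the `stats` module imported at the top of the notebook (positive values indicate a right tail, negative values a left tail):
```
## Quantify the skewness of each plotted column
for col in ['AMT_CREDIT', 'AMT_ANNUITY', 'AMT_GOODS_PRICE',
            'AMT_INCOME_TOTAL', 'DAYS_BIRTH', 'DAYS_EMPLOYED']:
    print('{:18s} skew = {:7.2f}'.format(col, stats.skew(applicationDataAfterDroppedColumns[col].dropna())))
```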
```
#Function for removing outliers
def remove_outlier(df, col_name):
q1 = df[col_name].quantile(0.25)
q3 = df[col_name].quantile(0.75)
iqr = q3-q1 #Interquartile range
l = q1-1.5*iqr
h = q3+1.5*iqr
dfOutput = df.loc[(df[col_name] > l) & (df[col_name] < h)]
return dfOutput
cols=['AMT_CREDIT','AMT_ANNUITY', 'AMT_GOODS_PRICE', 'AMT_INCOME_TOTAL', 'DAYS_EMPLOYED']
for i in cols:
applicationDataAfterDroppedColumns=remove_outlier(applicationDataAfterDroppedColumns,i)
applicationDataAfterDroppedColumns.head()
### Plot the box plot again after removing outliers
fig, axes = plt.subplots(nrows=3, ncols = 2, figsize=(50,50))
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_CREDIT.dropna(), ax=axes[0][0])
axes[0][0].set_title('AMT_CREDIT')
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_ANNUITY.dropna(), ax=axes[0][1])
axes[0][1].set_title('AMT_ANNUITY')
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_GOODS_PRICE.dropna(), ax=axes[1][0])
axes[1][0].set_title('AMT_GOODS_PRICE')
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_INCOME_TOTAL.dropna(), ax=axes[1][1])
axes[1][1].set_title('AMT_INCOME_TOTAL')
sns.boxplot(data= applicationDataAfterDroppedColumns.DAYS_BIRTH.dropna(), ax=axes[2][0])
axes[2][0].set_title('DAYS_BIRTH')
sns.boxplot(data= applicationDataAfterDroppedColumns.DAYS_EMPLOYED.dropna(), ax=axes[2][1])
axes[2][1].set_title('DAYS_EMPLOYED')
plt.show()
```
### Observation
After dropping the outliers we observe that there are only a few outlier points left on the box plots above.
```
### Plotting the distribution plot after removing the outliers
fig, axes = plt.subplots(nrows=3, ncols = 2, figsize=(50,50))
sns.distplot(applicationDataAfterDroppedColumns.AMT_CREDIT.dropna(), ax=axes[0][0])
axes[0][0].set_title('AMT_CREDIT')
sns.distplot(applicationDataAfterDroppedColumns.AMT_ANNUITY.dropna(), ax=axes[0][1])
axes[0][1].set_title('AMT_ANNUITY')
sns.distplot(applicationDataAfterDroppedColumns.AMT_GOODS_PRICE.dropna(), ax=axes[1][0])
axes[1][0].set_title('AMT_GOODS_PRICE')
sns.distplot(applicationDataAfterDroppedColumns.AMT_INCOME_TOTAL.dropna(), ax=axes[1][1])
axes[1][1].set_title('AMT_INCOME_TOTAL')
sns.distplot(applicationDataAfterDroppedColumns.DAYS_BIRTH.dropna(), ax=axes[2][0])
axes[2][0].set_title('DAYS_BIRTH')
sns.distplot(applicationDataAfterDroppedColumns.DAYS_EMPLOYED.dropna(), ax=axes[2][1])
axes[2][1].set_title('DAYS_EMPLOYED')
plt.show()
```
### Observation
Based on the distplots above you can see that there is a marked difference between the minimum values for various columns, particularly the DAYS_EMPLOYED column where the minimum value increased from -7500 to -6000. This indicates that the treatment of outliers was successful
```
applicationDataAfterDroppedColumns.shape
```
### Observation
We observe that after removing the outliers the boxplots show a slight shift in the maximum ranges.
The distribution plot gives us a more significant display in changes. There is a significant reduction in the max ranges on the x-axis for all the three variables we chose.
As we can see above, after treating the outliers for various columns the shape of our dataset has changed significantly. The shape of the dataframe after dropping columns with high number of null values was (307511, 105) & after treating for outliers is (209624, 105).
Let us now create bins for 3 different continuous variables and plot them. We will use AMT_INCOME_TOTAL, AMT_CREDIT & DAYS_BIRTH to create our bins.
```
## Creating bins for Income range based on AMT_INCOME_TOTAL
bins=[0,100000,200000,300000,400000,500000,600000,20000000]
range_period=['0-100000','100000-200000','200000-300000','300000-400000','400000-500000','500000-600000','600000 and above']
applicationDataAfterDroppedColumns['Income_amount_range']=pd.cut(applicationDataAfterDroppedColumns['AMT_INCOME_TOTAL'],bins,labels=range_period)
plotIncomeAmountRange = applicationDataAfterDroppedColumns['Income_amount_range'].value_counts().plot(kind='bar', title='Income Range Bins Plot')
plotIncomeAmountRange.set_xlabel('Income Range Bins')
plotIncomeAmountRange.set_ylabel('Count')
```
### Observation
As you can clearly see from the plot above:
- Most people earn between 100000-200000
- The number of people who earn between 200000-300000 is less than half the number of people in the 100000-200000 range
- No one earns above 300000 after the outlier treatment.
```
#create bins for credit amount
bins=[0,50000,100000,150000,200000,250000,300000,400000]
range_period=['0-50000','50000-100000','100000-150000','150000-200000','200000-250000','250000-300000','300000-400000']
applicationDataAfterDroppedColumns['credit_amount_range']=pd.cut(applicationDataAfterDroppedColumns['AMT_CREDIT'],bins,labels=range_period)
plotCreditAmountRange = applicationDataAfterDroppedColumns['credit_amount_range'].value_counts().plot(kind='bar', title='Credit Amount Range Plots')
plotCreditAmountRange.set_xlabel('Credit Amount Range Bins')
plotCreditAmountRange.set_ylabel('Count')
```
### Observation
As you can see from the plots above:
- Very few people borrow money between 0-50000
- The highest number of people borrow money between 250000-300000
```
##Creating bins for age range for DAYS_BIRTH in years
bins = [10, 20, 30, 40, 50, 60, 70, 80]
labels = ['10-20','21-30','31-40','41-50','51-60','61-70','71-80']
applicationDataAfterDroppedColumns['BINNED_AGE'] = pd.cut(applicationDataAfterDroppedColumns['DAYS_BIRTH'], bins=bins,labels=labels)
plotAgeRange = applicationDataAfterDroppedColumns['BINNED_AGE'].value_counts().plot(kind='bar', title='Age Range Plot')
plotAgeRange.set_xlabel('Age Range')
plotAgeRange.set_ylabel('Count')
```
### Observation
- People between the ages of 71-80 & 10-20 are not borrowing any money.
- For people in the age range of 10-20, the lack of borrowing could suggest that children/teenagers/young adults have just opened new bank accounts with their parents or have just joined university, so they do not yet have a need to borrow money
- People between the ages of 31-40 form a significantly higher share of borrowers. This could be suggestive of various personal expenses, and it would be beneficial for the firm to identify why they borrow more so that it can introduce newer products at more competitive interest rates for these customers
## 4. Data Analysis
In this section we will perform in-depth analysis on the application_data.csv file.
This will be achieved by:
- Checking the imbalance percentage in the dataset
- Dividing the dataset based on the "TARGET" column into 2 separate dataframes
- Performing univariate analysis for categorical variables on both Target = 0 & Target = 1 columns
- Identifying the correlation between the numerical columns for both Target = 0 & Target = 1 columns
- Comparing the results across continuous variables
- Performing bivariate analysis for numerical variables on both Target = 0 & Target = 1 columns
## Selecting relevant columns from 'applicationDataAfterDroppedColumns' which will be used for further EDA
- Selecting only the relevant columns (25 or so) from 'applicationDataAfterDroppedColumns', i.e. removing those columns which aren't relevant for analysis, out of a total of 105 columns
```
applicationDataWithRelevantColumns = applicationDataAfterDroppedColumns.loc[:,['SK_ID_CURR',
'TARGET',
'NAME_CONTRACT_TYPE',
'CODE_GENDER',
'FLAG_OWN_CAR',
'FLAG_OWN_REALTY',
'CNT_CHILDREN',
'AMT_INCOME_TOTAL',
'AMT_CREDIT',
'AMT_ANNUITY',
'AMT_GOODS_PRICE',
'NAME_INCOME_TYPE',
'NAME_EDUCATION_TYPE',
'NAME_FAMILY_STATUS',
'NAME_HOUSING_TYPE',
'REGION_POPULATION_RELATIVE',
'BINNED_AGE',
'DAYS_EMPLOYED',
'DAYS_REGISTRATION',
'DAYS_ID_PUBLISH',
'FLAG_CONT_MOBILE',
'OCCUPATION_TYPE',
'CNT_FAM_MEMBERS',
'REGION_RATING_CLIENT',
'REGION_RATING_CLIENT_W_CITY',
'ORGANIZATION_TYPE',
'AMT_REQ_CREDIT_BUREAU_HOUR',
'AMT_REQ_CREDIT_BUREAU_DAY']]
```
We will now use applicationDataWithRelevantColumns as our dataframe to run further analysis
```
### Checking shape of the new dataframe
applicationDataWithRelevantColumns.shape
applicationDataWithRelevantColumns['CODE_GENDER'].value_counts()
```
Since the number of Females is higher than Males, we can safely impute XNA values with F.
```
applicationDataWithRelevantColumns.loc[applicationDataWithRelevantColumns['CODE_GENDER']=='XNA','CODE_GENDER']='F'
applicationDataWithRelevantColumns['CODE_GENDER'].value_counts()
#Check the total percentage of target value as 0 and 1.
imbalancePercentage = applicationDataWithRelevantColumns['TARGET'].value_counts()*100/len(applicationDataAfterDroppedColumns)
imbalancePercentage
imbalancePercentage.plot(kind='bar',rot=0)
```
### Observation
We can easily see that this data is highly imbalanced. Rows with target value 0 make up 90.61% of the data, while rows with target value 1 make up only 9.39%.
This also means that only about 9.39% of all the loan applicants default while paying back their loans.
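A quick follow-up calculation of the class imbalance ratio implied by those percentages:
```
## Roughly 9-10 non-defaulters for every defaulter
nNonDefault = (applicationDataWithRelevantColumns['TARGET'] == 0).sum()
nDefault = (applicationDataWithRelevantColumns['TARGET'] == 1).sum()
print('Imbalance ratio (TARGET=0 per TARGET=1): {:.2f}'.format(nNonDefault / nDefault))
```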
```
#Splitting the data based on target values
one_df = applicationDataWithRelevantColumns.loc[applicationDataWithRelevantColumns['TARGET']==1]
zero_df = applicationDataWithRelevantColumns.loc[applicationDataWithRelevantColumns['TARGET']==0]
## Inspecting data with TARGET = 1
one_df.head()
one_df.info()
one_df.shape
## Inspecting data with TARGET = 0
zero_df.head()
zero_df.describe()
zero_df.shape
zero_df.info()
```
We will now use the following columns to perform Univariate & Bivariate analysis
- CODE_GENDER
- NAME_CONTRACT_TYPE
- NAME_INCOME_TYPE
- NAME_EDUCATION_TYPE
- NAME_FAMILY_STATUS
- NAME_HOUSING_TYPE
- OCCUPATION_TYPE
- ORGANIZATION_TYPE
### Univariate Analysis:-
Univariate Analysis on one_df dataset
```
#Univariate Analysis for categorical variable 'CODE_GENDER' in dataframe one_df.
sns.countplot(x ='CODE_GENDER', data = one_df)
plt.title('Number of applications by Gender')
plt.ylabel('Number of Applications')
plt.xlabel('Gender')
plt.show()
```
### Observation
As you can see above the number of Female applicants is higher than the number of Male applicants.
```
#Univariate Analysis for categorical variable 'NAME_EDUCATION_TYPE' in dataframe one_df.
sns.countplot(x ='NAME_EDUCATION_TYPE', data = one_df)
plt.title("Number of applications by Client's Education Level")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Education Level")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
From the plot above we can infer that:
- The highest number of applications for credit were made by people with Secondary/secondary special education, and these people defaulted on their loans. This could mean that they have trouble managing their money effectively or hold jobs that pay less / are contractual in nature
- People with higher education also applied for credit and defaulted on their loans
```
#Univariate Analysis for categorical variable 'NAME_CONTRACT_TYPE' in dataframe one_df.
sns.countplot(x ='NAME_CONTRACT_TYPE', data = one_df)
plt.title('Number of applications by Contract Type')
plt.ylabel('Number of Applications')
plt.xlabel('Contract Type')
plt.show()
```
### Observation
- A high number of applicants who defaulted applied for cash loans
```
#Univariate Analysis for categorical variable 'NAME_INCOME_TYPE' in dataframe one_df.
sns.countplot(x ='NAME_INCOME_TYPE', data = one_df)
plt.title("Number of applications by Client's Income Type")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Income Type")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Mostly working professionals apply for credit, and they are also the ones that default on paying back their loans on time
- State servants have a very low number of defaulters
```
#Univariate Analysis for categorical variable 'NAME_FAMILY_STATUS' in dataframe one_df.
sns.countplot(x ='NAME_FAMILY_STATUS', data = one_df)
plt.title("Number of applications by Client's Family Status")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Family Status")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Married applicants make a higher number of applications as compared to other categories
- It would be beneficial for the bank to introduce newer products for people in such a category to attract more customers
```
#Univariate Analysis for categorical variable 'NAME_HOUSING_TYPE' in dataframe one_df.
sns.countplot(x ='NAME_HOUSING_TYPE', data = one_df)
plt.title("Number of applications by Client's Housing Status")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Housing Status")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- People who live in their own apartment/house apply for loans almost 160 times more than those who live with their parents.
- People living in office apartments default significantly less. This could be because their houses are rent free or they pay minimum charges to live in the house.
```
#Univariate Analysis for categorical variable 'OCCUPATION_TYPE' in dataframe one_df.
sns.countplot(x ='OCCUPATION_TYPE', data = one_df)
plt.title("Number of applications by Client's Occupation Type")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Occupation Type")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Labourers apply for a lot of loans and default on being able to repay them. This could be because of the contractual nature of their work and the unsteady + low income they might earn from their daily jobs
- IT & HR Staff make very few applications for credit and default the least on their loan applications. This could be, in stark contrast to the labourers, because of the stable job & salaried nature of their work. Thus enabling them to be better at handling monthly expenses.
```
# Since there are subcategories like Type 1, 2, etc. under a few categories
# (Business Entity, Trade, Transport, Industry), there are too many categories,
# making it difficult to analyse the data.
# It's better to collapse the sub-types into their main category.
org_type_replacements = {
    r'Business Entity Type \d+': 'Business Entity',
    r'Trade: type \d+': 'Trade',
    r'Transport: type \d+': 'Transport',
    r'Industry: type \d+': 'Industry',
}
one_df['ORGANIZATION_TYPE'] = one_df['ORGANIZATION_TYPE'].replace(org_type_replacements, regex=True)
one_df['ORGANIZATION_TYPE'].value_counts()
#Univariate Analysis for categorical variable 'ORGANIZATION_TYPE' in dataframe one_df.
plt.figure(figsize = (14,14))
sns.countplot(x ='ORGANIZATION_TYPE', data = one_df)
plt.title("Number of applications by Client's Organization Type")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Organization Type")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Based on the plot above we can see that Business Entity employees have the maximum number of loan applications
- Religious people, priests etc. don't seem to be making any credit applications at all
- Self-employed people also make a lot of loan applications. This could be to boost their business or to repay other loans.
##### Continuous - Continuous Bivariate Analysis for one_df dataframe
```
## Plotting cont-cont Client Income vs Credit Amount
plt.figure(figsize=(12,12))
sns.scatterplot(x="AMT_INCOME_TOTAL", y="AMT_CREDIT",
hue="CODE_GENDER", style="CODE_GENDER", data=one_df)
plt.xlabel('Income of client')
plt.ylabel('Credit Amount of loan')
plt.title('Client Income vs Credit Amount')
plt.show()
```
### Observation
- We do see some outliers here, where females having an income of less than 50000 have applied for loans with a credit amount of approximately 1300000
- Most of the loans seem to be concentrated between credit amounts of 200000 & 6000000 for incomes ranging from 50000-150000
```
## Plotting cont-cont Client Income vs Region population
plt.figure(figsize=(12,12))
sns.scatterplot(x="AMT_INCOME_TOTAL", y="REGION_POPULATION_RELATIVE",
hue="CODE_GENDER", style="CODE_GENDER", data=one_df)
plt.xlabel('Income of client')
plt.ylabel('Population of region where client lives')
plt.title('Client Income vs Region population')
plt.show()
```
### Observation
- Very few people live in a highly dense/populated region
- Most of the clients live in regions with a relative population density between 0.00 and 0.04
##### Univariate analysis for zero_df dataframe
```
#Univariate Analysis for categorical variable 'CODE_GENDER' in dataframe zero_df.
sns.countplot(x ='CODE_GENDER', data = zero_df)
plt.title('Number of applications by Gender')
plt.ylabel('Number of Applications')
plt.xlabel('Gender')
plt.show()
```
### Observation
As you can see above the number of Female applicants is higher than the number of Male applicants.
```
#Univariate Analysis for categorical variable 'NAME_CONTRACT_TYPE' in dataframe zero_df.
sns.countplot(x ='NAME_CONTRACT_TYPE', data = zero_df)
plt.title('Number of applications by Contract Type')
plt.ylabel('Number of Applications')
plt.xlabel('Contract Type')
plt.show()
```
### Observation
Applicants prefer cash loans over revolving loans
```
#Univariate Analysis for categorical variable 'NAME_INCOME_TYPE' in dataframe zero_df.
sns.countplot(x ='NAME_INCOME_TYPE', data = zero_df)
plt.title("Number of applications by Client's Income Type")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Income Type")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Working people make the most applications and are able to successfully repay their loans as well.
- The number of applications from Students, Pensioners, Businessmen and applicants on Maternity leave is close to 0. This could be due to a multitude of reasons.
```
#Univariate Analysis for categorical variable 'NAME_EDUCATION_TYPE' in dataframe zero_df.
sns.countplot(x ='NAME_EDUCATION_TYPE', data = zero_df)
plt.title("Number of applications by Client's Education Level")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Education Level")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
From the plot above we can infer that:
- The highest number of applications for credit were made by people with Secondary/secondary special education, and these people paid back their loans without defaulting.
- People with higher education also applied for credit and were able to repay it successfully
```
#Univariate Analysis for categorical variable 'NAME_FAMILY_STATUS' in dataframe zero_df.
sns.countplot(x ='NAME_FAMILY_STATUS', data = zero_df)
plt.title("Number of applications by Client's Family Status")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Family Status")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
From the plot above we can infer that:
- Married people apply for credit the most.
- Married people are able to repay their loans without any defaults as well
```
#Univariate Analysis for categorical variable 'NAME_HOUSING_TYPE' in dataframe zero_df.
sns.countplot(x ='NAME_HOUSING_TYPE', data = zero_df)
plt.title("Number of applications by Client's Housing Status")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Housing Status")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- People who live in their own apartment/house apply for loans almost 160 times more than those who live with their parents.
- People living in office apartments apply for loans significantly less. This could be because their houses are rent free or they pay minimum charges to live in the house.
- People in rented apartments apply for loans significantly less. This could be because the added expense of rent and other utility bills leaves them with less capital to comfortably service a loan.
```
#Univariate Analysis for categorical variable 'OCCUPATION_TYPE' in dataframe zero_df.
sns.countplot(x ='OCCUPATION_TYPE', data = zero_df)
plt.title("Number of applications by Client's Occupation Type")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Occupation Type")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Labourers apply for a lot of loans.
- IT & HR Staff make very few applications for credit. This could be, in stark contrast to the labourers, because of the stable job & salaried nature of their work. Thus enabling them to be better at handling monthly expenses.
```
# Since there are subcategories like Type 1, 2, etc. under a few categories
# (Business Entity, Trade, Transport, Industry), there are too many categories,
# making it difficult to analyse the data.
# It's better to collapse the sub-types into their main category.
org_type_replacements = {
    r'Business Entity Type \d+': 'Business Entity',
    r'Trade: type \d+': 'Trade',
    r'Transport: type \d+': 'Transport',
    r'Industry: type \d+': 'Industry',
}
zero_df['ORGANIZATION_TYPE'] = zero_df['ORGANIZATION_TYPE'].replace(org_type_replacements, regex=True)
zero_df['ORGANIZATION_TYPE'].value_counts()
#Univariate Analysis for categorical variable 'ORGANIZATION_TYPE' in dataframe zero_df.
plt.figure(figsize = (14,14))
sns.countplot(x ='ORGANIZATION_TYPE', data = zero_df)
plt.title("Number of applications by Client's Organization Type")
plt.ylabel('Number of Applications')
plt.xlabel("Client's Organization Type")
plt.xticks(rotation = 90)
plt.show()
```
### Observation
- Based on the plot above we can see that Business Entity employees have the maximum number of loan applications
- Religious people, priests etc. don't seem to be making a lot of credit applications at all. They are able to repay their loans on time as well.
- Self-employed people also make a lot of loan applications. This could be to boost their business or to repay other loans.
### Bivariate Analysis for zero_df
```
### Let us create a helper function to help with
### plotting various graphs
def uniplot(df,col,title,hue =None):
    ## Frequency-ordered countplot of `col`, optionally split by `hue`
    sns.set_style('whitegrid')
    sns.set_context('talk')
    plt.rcParams["axes.labelsize"] = 20
    plt.rcParams['axes.titlesize'] = 22
    plt.rcParams['axes.titlepad'] = 30
    temp = pd.Series(data = hue)
    fig, ax = plt.subplots()
    ## Scale the figure width with the number of categories and hue levels
    width = len(df[col].unique()) + 7 + 4*len(temp.unique())
    fig.set_size_inches(width , 8)
    plt.xticks(rotation=45)
    plt.title(title)
    ax = sns.countplot(data = df, x= col, order=df[col].value_counts().index, hue = hue,
                       palette='magma')
    plt.show()
# Plotting for income range
uniplot(zero_df,col='NAME_INCOME_TYPE',title='Distribution of Income type',hue='CODE_GENDER')
```
### Observation
- For the income types ‘Working’, ‘Commercial associate’, and ‘State servant’ the number of credits is higher than for the others.
- Within these categories, females make more credit applications than males.
```
uniplot(zero_df,col='NAME_CONTRACT_TYPE',title='Distribution of contract type',hue='CODE_GENDER')
```
### Observation
- The ‘Cash loans’ contract type has a higher number of credits than the ‘Revolving loans’ contract type.
- Here as well, females apply for credit a lot more than males.
```
uniplot(zero_df,col='NAME_FAMILY_STATUS',title='Distribution of Family status',hue='CODE_GENDER')
```
### Observation
- As observed above the number of married females applying for loans is almost 3.5 times the number of single females.
- No male widowers are applying for credit
```
uniplot(zero_df,col='NAME_EDUCATION_TYPE',title='Distribution of education level',hue='CODE_GENDER')
```
### Observation
- No person with an 'Academic Degree' is applying for a loan
- The number of females with 'Higher Education' that apply for a loan is almost double the number of males for the same category
```
uniplot(zero_df,col='NAME_HOUSING_TYPE',title='Distribution of Housing Type',hue='CODE_GENDER')
```
### Observation
- Females living in their own apartments/houses apply for more loans and are able to successfully pay them back.
- A very small number of females living in Co-op apartments apply for loans
```
uniplot(zero_df,col='OCCUPATION_TYPE',title='Distribution Occupation Type',hue='CODE_GENDER')
```
### Observation
- Male Labourers & Drivers take more loans and are able to successfully pay them back in time.
- Female Care staff & Sales staff are also able to take loans and pay them back in time
### Bivariate Analysis on one_df
We now repeat the same analysis for the records having TARGET value 1; the correlation between their numerical columns is computed further below.
```
uniplot(one_df,col='NAME_INCOME_TYPE',title='Distribution of Income type',hue='CODE_GENDER')
```
### Observation
- For the income types ‘Working’, ‘Commercial associate’, and ‘State servant’ the number of credits is higher than for the others.
- Females have a higher number of credit applications than males in all the categories.
```
uniplot(one_df,col='NAME_CONTRACT_TYPE',title='Distribution of contract type',hue='CODE_GENDER')
```
### Observation
- The ‘Cash loans’ contract type has a higher number of credits than the ‘Revolving loans’ contract type.
- Here as well, females apply for credit a lot more than males.
- Since one_df contains the defaulters, the higher female count here reflects their higher overall application volume rather than better repayment.
```
uniplot(one_df,col='NAME_FAMILY_STATUS',title='Distribution of Family status',hue='CODE_GENDER')
```
### Observation
- As observed above, the number of married females applying for loans is almost 3.5 times the number of single females.
- Hardly any male widowers appear among these applications
- The number of unmarried/single males who are unable to pay back their loans is higher than the corresponding number of females
- A very small number of male widowers are unable to pay back their loans
```
uniplot(one_df,col='NAME_EDUCATION_TYPE',title='Distribution of education level',hue='CODE_GENDER')
```
### Observation
- Males with lower secondary education make more loan applications and default more compared to females
- There is very little difference between the number of defaulters for males and females with secondary education compared to the non-defaulters we saw above
```
uniplot(one_df,col='NAME_HOUSING_TYPE',title='Distribution of Housing Type',hue='CODE_GENDER')
```
### Observation
- Males living with their parents tend to apply and default more on their loans
- Almost an equal number of males and females default on loans if they are living in rented apartments
```
uniplot(one_df,col='OCCUPATION_TYPE',title='Distribution Occupation Type',hue='CODE_GENDER')
```
### Observations
- The number of male applicants who default on paying back their loans is almost double the amount of female applicants
- Irrespective of gender, managers seem to default on their loans equally
#### Categorical vs Numerical Analysis
```
# Box plotting for Credit amount for zero_df based on education type and family status
plt.figure(figsize=(40,20))
plt.xticks(rotation=45)
sns.boxplot(data =zero_df, x='NAME_EDUCATION_TYPE',y='AMT_CREDIT', hue ='NAME_FAMILY_STATUS',orient='v')
plt.title('Credit amount vs Education Status')
plt.show()
```
### Observation
- Widows with secondary education borrow a notably high median credit amount
- Widows with an academic degree have a higher median borrowing than any other category
- People in civil marriages, those who are separated, and widows with secondary education have the same median values and usually borrow around 400000
```
# Box plotting for Income amount for zero_df based on their education type & family status
plt.figure(figsize=(40,20))
plt.xticks(rotation=45)
plt.yscale('log')
sns.boxplot(data =zero_df, x='NAME_EDUCATION_TYPE',y='AMT_INCOME_TOTAL', hue ='NAME_FAMILY_STATUS',orient='v')
plt.title('Income amount vs Education Status')
plt.show()
```
### Observation
- Except widows, the median earning for all other family status types with an incomplete higher education is the same
- Median income for all family status categories is the same for people with a secondary education
```
# Box plotting for Credit amount for one_df
plt.figure(figsize=(16,12))
plt.xticks(rotation=45)
sns.boxplot(data =one_df, x='NAME_EDUCATION_TYPE',y='AMT_CREDIT', hue ='NAME_FAMILY_STATUS',orient='v')
plt.title('Credit amount vs Education Status')
plt.show()
```
### Observation
- Widows with secondary education borrow a very high median credit amount and default on paying back their loans as well. It would be better to be wary of lending to them
- Married people have a consistently high median across all categories of education except secondary education
```
# Box plotting for Income amount for one_df
plt.figure(figsize=(40,20))
plt.xticks(rotation=45)
plt.yscale('log')
sns.boxplot(data =one_df, x='NAME_EDUCATION_TYPE',y='AMT_INCOME_TOTAL', hue ='NAME_FAMILY_STATUS',orient='v')
plt.title('Income amount vs Education Status')
plt.show()
```
### Observation
- The median income for all family status types is the same for people with education type as Secondary/secondary special
- The median income for widows is the lowest across all the education types
```
### Perform correlation between CNT_CHILDREN, AMT_INCOME_TOTAL, AMT_CREDIT, AMT_GOODS_PRICE, REGION_POPULATION_RELATIVE
### and AMT_ANNUITY. Then make correlation matrix across the one_df dataframe
columns=['CNT_CHILDREN','AMT_INCOME_TOTAL','AMT_CREDIT','AMT_GOODS_PRICE','REGION_POPULATION_RELATIVE', 'AMT_ANNUITY']
corr=one_df[columns].corr()
corr.style.background_gradient(cmap='coolwarm')
```
### Observation
In the heatmap above, the closer a cell is to red the stronger the relationship, and the closer it is to blue the weaker the relationship.
As we can see from the correlation matrix above, there is a very strong relationship between AMT_GOODS_PRICE & AMT_CREDIT.
AMT_ANNUITY & AMT_CREDIT have a medium/strong relationship. Annuity has a similar relationship with AMT_GOODS_PRICE.
```
### Sorting based on the correlation and extracting top 10 relationships on the defaulters in one_df
corrOneDf = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool)).unstack().reset_index()
corrOneDf.columns = ['VAR1','VAR2','Correlation']
corrOneDf.sort_values('Correlation', ascending = False).nlargest(10, 'Correlation')
```
### Observation
In the correlation matrix we can identify:

Columns with high correlation:
1. AMT_GOODS_PRICE and AMT_CREDIT

Columns with medium correlation:
1. REGION_POPULATION_RELATIVE and AMT_INCOME_TOTAL
2. REGION_POPULATION_RELATIVE and AMT_GOODS_PRICE
3. REGION_POPULATION_RELATIVE and AMT_CREDIT

Columns with low correlation:
1. AMT_INCOME_TOTAL and CNT_CHILDREN

We also observed that the top 10 correlation pairs (VAR1 & VAR2: correlation) are:
- AMT_GOODS_PRICE & AMT_CREDIT: 0.981276
- AMT_ANNUITY & AMT_CREDIT: 0.748446
- AMT_ANNUITY & AMT_GOODS_PRICE: 0.747315
- AMT_ANNUITY & AMT_INCOME_TOTAL: 0.390809
- AMT_GOODS_PRICE & AMT_INCOME_TOTAL: 0.317123
- AMT_CREDIT & AMT_INCOME_TOTAL: 0.313347
- REGION_POPULATION_RELATIVE & AMT_INCOME_TOTAL: 0.141307
- AMT_ANNUITY & REGION_POPULATION_RELATIVE: 0.065024
- REGION_POPULATION_RELATIVE & AMT_GOODS_PRICE: 0.055120
- REGION_POPULATION_RELATIVE & AMT_CREDIT: 0.050097
Next, we perform the same correlation analysis on the numerical columns for records having TARGET value 0 (non-defaulters, zero_df).
```
#Perform correlation between CNT_CHILDREN, AMT_INCOME_TOTAL, AMT_CREDIT, AMT_GOODS_PRICE and REGION_POPULATION_RELATIVE
#Then make correlation matrix
corrZero=zero_df[columns].corr()
corrZero.style.background_gradient(cmap='coolwarm')
```
### Observation
In the heatmap above, the closer a cell is to red the stronger the relationship, and the closer it is to blue the weaker the relationship.
As we can see from the correlation matrix, there is a very strong relationship between AMT_GOODS_PRICE & AMT_CREDIT.
AMT_ANNUITY & AMT_CREDIT have a medium/strong relationship, and AMT_ANNUITY has a similar relationship with AMT_GOODS_PRICE.
These relationships are consistent with the ones we saw for the defaulters in the one_df dataframe, confirming that they hold across TARGET values (see the check below).
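One way to verify this consistency numerically is to compare the two correlation matrices element-wise. A small sketch, assuming `corr` (one_df) and `corrZero` (zero_df) are the matrices computed above:
```
# Largest absolute difference between the defaulter and non-defaulter correlation matrices
corr_diff = (corr - corrZero).abs()
print(corr_diff.max().max())
corr_diff.style.background_gradient(cmap='coolwarm')
```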
```
corrZeroDf = corrZero.where(np.triu(np.ones(corrZero.shape), k=1).astype(bool)).unstack().reset_index()
corrZeroDf.columns = ['VAR1','VAR2','Correlation']
# corrZeroDf.dropna(subset=['Correlation'], inplace=True)
corrZeroDf.sort_values('Correlation', ascending = False).nlargest(10, 'Correlation')
```
### Observation
In the correlation matrix we can identify:

Columns with high correlation:
1. AMT_GOODS_PRICE and AMT_CREDIT

Columns with medium correlation:
1. AMT_INCOME_TOTAL and AMT_CREDIT
2. AMT_INCOME_TOTAL and AMT_GOODS_PRICE

Columns with low correlation:
1. AMT_GOODS_PRICE and CNT_CHILDREN

We also observed that the top 10 correlation pairs (VAR1 & VAR2: correlation) are:
- AMT_GOODS_PRICE & AMT_CREDIT: 0.981276
- AMT_ANNUITY & AMT_CREDIT: 0.748446
- AMT_ANNUITY & AMT_GOODS_PRICE: 0.747315
- AMT_ANNUITY & AMT_INCOME_TOTAL: 0.390809
- AMT_GOODS_PRICE & AMT_INCOME_TOTAL: 0.317123
- AMT_CREDIT & AMT_INCOME_TOTAL: 0.313347
- REGION_POPULATION_RELATIVE & AMT_INCOME_TOTAL: 0.141307
- AMT_ANNUITY & REGION_POPULATION_RELATIVE: 0.065024
- REGION_POPULATION_RELATIVE & AMT_GOODS_PRICE: 0.055120
- REGION_POPULATION_RELATIVE & AMT_CREDIT: 0.050097
#### Key Observation
The top correlation pair is the same in both dataframes (zero_df and one_df):
AMT_GOODS_PRICE & AMT_CREDIT: 0.981276
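A quick programmatic check of this claim, assuming `corrOneDf` and `corrZeroDf` were built as shown above (the NaN rows left by the upper-triangle mask are pushed to the end by `sort_values`):
```
# The strongest pair should be identical in both dataframes
top_one = corrOneDf.sort_values('Correlation', ascending=False).iloc[0]
top_zero = corrZeroDf.sort_values('Correlation', ascending=False).iloc[0]
print(top_one['VAR1'], top_one['VAR2'], top_one['Correlation'])
print(top_zero['VAR1'], top_zero['VAR2'], top_zero['Correlation'])
```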
### Analysing Numerical Data
```
#Box plot on the numerical columns having TARGET value as 1
plt.figure(figsize=(25,25))
plt.subplot(2,2,1)
plt.title('CHILDREN COUNT')
sns.boxplot(one_df['CNT_CHILDREN'])
plt.subplot(2,2,2)
plt.title('AMT_INCOME_TOTAL')
sns.boxplot(one_df['AMT_INCOME_TOTAL'])
plt.subplot(2,2,3)
plt.title('AMT_CREDIT')
sns.boxplot(one_df['AMT_CREDIT'])
plt.subplot(2,2,4)
plt.title('AMT_GOODS_PRICE')
sns.boxplot(one_df['AMT_GOODS_PRICE'])
plt.show()
```
### Observation
- From the box plots above we can safely say that having children has no visible impact on whether someone defaults on paying back their loans
- The median credit amount taken by the defaulters is roughly 450,000 (the medians computed below confirm this)
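The visual read can be cross-checked numerically; a minimal sketch, assuming `one_df` is available as above:
```
# Median values of the plotted numerical columns for the defaulters
one_df[['CNT_CHILDREN', 'AMT_INCOME_TOTAL', 'AMT_CREDIT', 'AMT_GOODS_PRICE']].median()
```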
```
#Box plot on the numerical columns having TARGET value as 0
plt.figure(figsize=(25,25))
plt.subplot(2,2,1)
plt.title('CHILDREN COUNT')
sns.boxplot(zero_df['CNT_CHILDREN'])
plt.subplot(2,2,2)
plt.title('AMT_INCOME_TOTAL')
sns.boxplot(zero_df['AMT_INCOME_TOTAL'])
plt.subplot(2,2,3)
plt.title('AMT_CREDIT')
sns.boxplot(zero_df['AMT_CREDIT'])
plt.subplot(2,2,4)
plt.title('AMT_GOODS_PRICE')
sns.boxplot(zero_df['AMT_GOODS_PRICE'])
plt.show()
```
### Observation
- From the box plots above we can safely say that having children has no visible impact on a person's ability to repay their loans
- The median credit amount taken by the non-defaulters is also roughly 450,000
- There are no outliers in the amount of goods price
- The median income lies just below 150,000 (the side-by-side medians below confirm this)
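A side-by-side comparison of the medians for both groups makes the similarity explicit. A minimal sketch, assuming `zero_df` and `one_df` are available:
```
# Compare medians for non-defaulters vs defaulters
num_cols = ['CNT_CHILDREN', 'AMT_INCOME_TOTAL', 'AMT_CREDIT', 'AMT_GOODS_PRICE']
pd.DataFrame({'zero_df': zero_df[num_cols].median(), 'one_df': one_df[num_cols].median()})
```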
### Bivariate Analysis on zero_df for continuous - continuous (Target value =0)
```
## Plotting cont-cont Client Income vs Credit Amount
plt.figure(figsize=(12,12))
sns.scatterplot(x="AMT_INCOME_TOTAL", y="AMT_CREDIT",
hue="CODE_GENDER", style="CODE_GENDER", data=zero_df)
plt.xlabel('Income of client')
plt.ylabel('Credit Amount of loan')
plt.title('Client Income vs Credit Amount')
plt.show()
```
### Observation
- We do see some outliers here, where female clients with an income below 50,000 have applied for loans with a credit amount of approximately 1,300,000 (the filter below pulls these rows out)
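To inspect those outliers directly, the scatter points can be pulled out with a simple filter. A sketch, assuming `CODE_GENDER` is coded as 'F'/'M' in `zero_df`; the thresholds 50,000 and 1,200,000 are illustrative:
```
# Low-income female clients with very high credit amounts
outliers = zero_df[(zero_df['CODE_GENDER'] == 'F') &
                   (zero_df['AMT_INCOME_TOTAL'] < 50000) &
                   (zero_df['AMT_CREDIT'] > 1200000)]
outliers[['CODE_GENDER', 'AMT_INCOME_TOTAL', 'AMT_CREDIT']].head()
```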
```
## Plotting cont-cont Client Income vs Region population
plt.figure(figsize=(12,12))
sns.scatterplot(x="AMT_INCOME_TOTAL", y="REGION_POPULATION_RELATIVE",
hue="CODE_GENDER", style="CODE_GENDER", data=zero_df)
plt.xlabel('Income of client')
plt.ylabel('Population of region where client lives')
plt.title('Client Income vs Region population')
plt.show()
```
### Observation
- Very few clients live in highly dense/populated regions (REGION_POPULATION_RELATIVE > 0.07)
- Most clients live in regions with a population density between 0.00 and 0.04 (the shares computed below confirm this)
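The shares behind these two statements can be computed directly; a minimal sketch assuming `zero_df` is available:
```
# Share of clients by region population density band
dense_share = (zero_df['REGION_POPULATION_RELATIVE'] > 0.07).mean()
typical_share = zero_df['REGION_POPULATION_RELATIVE'].between(0.00, 0.04).mean()
print(f"Clients in regions with density > 0.07: {dense_share:.2%}")
print(f"Clients in regions with density 0.00-0.04: {typical_share:.2%}")
```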
# 5 PREVIOUS DATA
Read the dataset file previous_application.csv, which contains the customers' previous loan applications.
```
previousApplicationData=pd.read_csv("./previous_application.csv")
previousApplicationData.head()
```
### Analysing previous application data
```
previousApplicationData.shape
previousApplicationData.describe()
previousApplicationData.columns
previousApplicationData.dtypes
### Join the previous application data and application data files using merge
mergedApplicationDataAndPreviousData = pd.merge(applicationDataWithRelevantColumns, previousApplicationData, how='left', on=['SK_ID_CURR'])
mergedApplicationDataAndPreviousData.head()
```
### Observation
We merge on the 'SK_ID_CURR' column because SK_ID_CURR has duplicate values in previousApplicationData (one row per previous application), while every value in the application data file is unique (see the sanity check below).
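A quick sanity check of the key columns before trusting the merge, assuming the two dataframes used in the merge above:
```
# SK_ID_CURR should be unique in the application data but repeat in the previous-application data
print(applicationDataWithRelevantColumns['SK_ID_CURR'].is_unique)
print(previousApplicationData['SK_ID_CURR'].duplicated().sum())
```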
```
mergedApplicationDataAndPreviousData.shape
mergedApplicationDataAndPreviousData.NAME_CONTRACT_STATUS.value_counts(normalize=True)
```
### Analysis
We will be focusing on analysing the NAME_CONTRACT_STATUS Column and the various relationships based on that.
## Univariate Analysis
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution of contract status type', hue=None)
```
### Observation
- A large number of applications were approved for the clients
- Some clients who received the offer did not use their loan offers
- The number of refused & cancelled applications is roughly the same
## Bivariate Analysis
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution Occupation Type',hue='NAME_INCOME_TYPE')
```
### Observation
Based on the plot above we can conclude that:
- Working professionals have the highest number of approved loan applications.
- Working professionals also have the highest number of refused or cancelled loan applications
- Students, pensioners, businessmen and applicants on maternity leave have statistically low or no application status data present
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution based on Gender',hue='CODE_GENDER')
```
### Observation
- Female applicants make more applications and have a higher number of applications approved
- They also have a higher number of applications refused or canceled
- The number of male applicant statuses is lower than the female ones across the board. This could be because of the lower number of males present in the dataset.
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution Target',hue='TARGET')
```
### Observation
- Based on the TARGET column, we see that a high number of applicants with a history of being able to repay their loans are approved for new loans
- A very low number of defaulters are approved for new loans, which suggests that the bank follows a cautious approach towards defaulters (the crosstab below quantifies this)
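The approval split by repayment history can be quantified with a crosstab; a minimal sketch on the merged dataframe:
```
# Share of defaulters (TARGET=1) within each previous-application status
pd.crosstab(mergedApplicationDataAndPreviousData['NAME_CONTRACT_STATUS'],
            mergedApplicationDataAndPreviousData['TARGET'], normalize='index')
```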
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution based on Family Status',hue='NAME_FAMILY_STATUS')
```
### Observation
- A large number of married people make loan applications & are approved for loans
- Separated individuals have a very low number of applications in the unused offer
- For single/not married applicants, the number of refused or cancelled applications is less than half the number of approved ones.
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution based Application Start Day',hue='WEEKDAY_APPR_PROCESS_START')
```
### Observation
- Most applicants start their loan applications on a Saturday and are successfully approved
- Applicants who start their applications on Friday have a higher chance of getting rejected or cancelling their application compared to the other 2 weekend days, Saturday and Sunday
- The number of cancelled applications is highest on Monday. This could suggest that after starting the application on the weekend, the client changed their mind on a workday.
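The weekday pattern can also be read off a normalised crosstab rather than the bar chart alone; a small sketch on the merged dataframe:
```
# Distribution of application statuses for each weekday the application was started
pd.crosstab(mergedApplicationDataAndPreviousData['WEEKDAY_APPR_PROCESS_START'],
            mergedApplicationDataAndPreviousData['NAME_CONTRACT_STATUS'], normalize='index')
```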
```
uniplot(mergedApplicationDataAndPreviousData,col='NAME_CONTRACT_STATUS',title='Distribution of Age on Loans',hue='BINNED_AGE')
```
### Observation
- People between the ages of 31-40 apply for the most number of loans and have consistently higher values across all application statuses
- People above the age of 71 and below 20 don't make any loan applications
- The people in the ages of 31-40 could be applying for more loans as they are married or living with a partner
```
# catplot creates its own figure, so size it via height/aspect instead of an unused plt.figure call
sns.catplot(x="NAME_CONTRACT_STATUS", hue="TARGET", col="CODE_GENDER",
            data=mergedApplicationDataAndPreviousData, kind="count", height=8, aspect=1.2)
plt.show()
```
### Observation
- Female population has high chances of getting the loans approved
- Cancellation of loans by females is significant across defaulters and non defaulters
### Continous & Categorical Plots
```
### Plotting the relationship between NAME_CONTRACT_STATUS vs AMT_CREDIT_x
### from the merged application data and splitting on the basis of family status
plt.figure(figsize=(40,25))
plt.xticks(rotation=45)
sns.boxplot(data =mergedApplicationDataAndPreviousData, x='NAME_CONTRACT_STATUS',y='AMT_CREDIT_x', hue ='NAME_FAMILY_STATUS',orient='v')
plt.title('Income amount vs Application Status based on Family Status')
plt.show()
```
### Observation
- Married people take a higher amount of credit and have a higher median chance of getting approved
- People in Civil marriage, widows & separated applicants have a consistently similar median value across all the application statuses
```
### Plotting the relationship between NAME_CONTRACT_STATUS vs AMT_INCOME_TOTAL
### from the merged application data and splitting on the basis of family status
plt.figure(figsize=(40,25))
plt.xticks(rotation=45)
plt.yscale('log')
sns.boxplot(data =mergedApplicationDataAndPreviousData, x='NAME_CONTRACT_STATUS',y='AMT_INCOME_TOTAL', hue ='NAME_FAMILY_STATUS',orient='v')
plt.title('Income amount vs Application status based on Family Status')
plt.show()
```
### Observation
- People who are married, live in civil marriages & single/not married earn consistently well across all application status types
- Their median income is also the same
- Widows earn less than all the other categories
### Continous & Continuous Plots
```
plt.figure(figsize=(30,20))
plt.scatter(mergedApplicationDataAndPreviousData.AMT_APPLICATION, mergedApplicationDataAndPreviousData.AMT_CREDIT_y)
plt.title("Final Amount Approved vs Credit Amount Applied")
plt.xlabel("Credit Amount applied by Client")
plt.ylabel("Final Amount approved by Bank")
plt.show()
```
### Observation
- The credit amount applied for vs the final amount approved shows a fairly linear relationship up to 2,000,000.
- Beyond 2,000,000, however, we see a good number of outliers where the approved amount is considerably less than the amount applied for
- Applications with a credit amount above 3,500,000 are quite rare, and the chances of the full amount being approved are low (the correlation and count below back this up)
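The strength of that linear relationship, and how rare the very large applications are, can be checked directly; a minimal sketch on the merged dataframe:
```
# Correlation between amount applied for and amount approved, plus count of very large applications
print(mergedApplicationDataAndPreviousData[['AMT_APPLICATION', 'AMT_CREDIT_y']].corr())
print((mergedApplicationDataAndPreviousData['AMT_APPLICATION'] > 3500000).sum())
```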
# Conclusion
Through this case study we have made the following conclusions:
- The most popular day for making applications is Saturday. The bank could focus on keeping offices open longer on Saturdays to help clients complete their applications.
- Most popular age group for taking loans or credit is 31-40 with the most number of applications. The firm should focus on exploring more lucrative options for clients in that age range. They could be offered lower interest rates, longer repayment holidays etc.
- Married people have the highest chance of making a loan application and being approved for a loan.
- Because of the imbalance in the data, Females appear to be making the most number of loan applications. They also have a higher chance of getting approved and being able to repay the loans on time
- Widows with secondary education have a very high median credit amount borrowing and default on paying back loans as well. It would be better to be wary of lending to them
- Male labourers have high number of applications and also a high number of defaults as compared to females. It would be better for the bank to assess whether the person borrowing in this occupation type could be helped with staged loans or with loans on a lower interest rate than the other categories
- Applications with a credit amount above 3,500,000 are quite rare, and the chances of the full amount being approved are low
- Cancellation of loans by females is significant across defaulters and non defaulters
```
# Distribution of missing values in AMT_ANNUITY (before dropping columns)
sns.boxplot(data=applicationData.AMT_ANNUITY.head(500000).isnull())
plt.title('AMT_ANNUITY')
plt.show()
print(applicationDataAfterDroppedColumns.AMT_ANNUITY.head(500000).isnull().sum())
print(applicationData.AMT_ANNUITY.head(500000).isnull().sum())
sns.boxplot(data= applicationDataAfterDroppedColumns.AMT_ANNUITY.dropna())
plt.show()
```
# END OF FILE