```
src data
========
196 242 3 881250949
186 302 3 891717742
22 377 1 878887116
244 51 2 880606923
166 346 1 886397596
298 474 4 884182806
115 265 2 881171488
253 465 5 891628467
305 451 3 886324817
6 86 3 883603013
62 257 2 879372434
286 1014 5 879781125
200 222 5 876042340
210 40 3 891035994
224 29 3 888104457
train_data
========== seq target user
0 [1, 290, 492, 381, 752] [467, 523, 11] 1
1 [290, 492, 381, 752, 467] [523, 11, 673] 1
2 [492, 381, 752, 467, 523] [11, 673, 1046] 1
3 [381, 752, 467, 523, 11] [673, 1046, 650] 1
4 [752, 467, 523, 11, 673] [1046, 650, 378] 1
5 [467, 523, 11, 673, 1046] [650, 378, 180] 1
6 [523, 11, 673, 1046, 650] [378, 180, 390] 1
7 [11, 673, 1046, 650, 378] [180, 390, 666] 1
8 [673, 1046, 650, 378, 180] [390, 666, 513] 1
9 [1046, 650, 378, 180, 390] [666, 513, 432] 1
test_data
========= seq target user
0 [633, 657, 1007, 948, 364] [522, 0, 0] 1
1 [26, 247, 49, 531, 146] [32, 0, 0] 2
2 [459, 477, 369, 770, 15] [306, 0, 0] 3
3 [1093, 946, 1101, 690, 1211] [526, 0, 0] 4
4 [732, 266, 669, 188, 253] [986, 0, 0] 5
5 [410, 446, 104, 782, 96] [26, 0, 0] 6
6 [146, 817, 536, 694, 186] [525, 0, 0] 7
7 [395, 669, 281, 289, 98] [731, 0, 0] 8
8 [588, 671, 369, 292, 304] [250, 0, 0] 9
9 [472, 222, 82, 716, 8] [131, 0, 0] 10
```
```
!wget -q --show-progress https://github.com/sparsh-ai/general-recsys/raw/T894414/data/v1/u.data
!head u.data
import pandas as pd
import pickle
def make_datasets(file, target_counts, seq_counts, isSave=True):
    file_path = file
    names = ['user', 'item', 'rating', 'timestamps']
    data = pd.read_csv(file_path, header=None, sep='\t', names=names)
    # Remap item ids to a contiguous range starting at 1 (0 is reserved for padding)
    item_unique = data['item'].unique().tolist()
    item_map = dict(zip(item_unique, range(1, len(item_unique) + 1)))
    item_map[-1] = 0
    all_item_count = len(item_map)
    data['item'] = data['item'].apply(lambda x: item_map[x])
    # Remap user ids in the same way
    user_unique = data['user'].unique().tolist()
    user_map = dict(zip(user_unique, range(1, len(user_unique) + 1)))
    user_map[-1] = 0
    all_user_count = len(user_map)
    data['user'] = data['user'].apply(lambda x: user_map[x])
    # Build each user's chronologically ordered item session
    data = data.sort_values(by=['user', 'timestamps']).reset_index(drop=True)
    user_sessions = data.groupby('user')['item'].apply(lambda x: x.tolist()) \
        .reset_index().rename(columns={'item': 'item_list'})
    train_users = []
    train_seqs = []
    train_targets = []
    test_users = []
    test_seqs = []
    test_targets = []
    user_all_items = {}
    for index, row in user_sessions.iterrows():
        user = row['user']
        items = row['item_list']
        user_all_items[user] = items
        # Sliding window: the previous seq_counts items predict the next target_counts items
        for i in range(seq_counts, len(items) - target_counts):
            targets = items[i:i + target_counts]
            seqs = items[max(0, i - seq_counts):i]
            train_users.append(user)
            train_seqs.append(seqs)
            train_targets.append(targets)
        # One test example per user: the final item is the target, padded with zeros
        # (the padding below is hard-coded for target_counts=3)
        last_item = [items[-1], 0, 0]
        last_seq = items[-1 - seq_counts:-1]
        test_users.append(user)
        test_seqs.append(last_seq)
        test_targets.append(last_item)
    train = pd.DataFrame({'user': train_users, 'seq': train_seqs, 'target': train_targets})
    test = pd.DataFrame({'user': test_users, 'seq': test_seqs, 'target': test_targets})
    if isSave:
        train.to_csv('train.csv', index=False)
        test.to_csv('test.csv', index=False)
        with open('info.pkl', 'wb+') as f:
            pickle.dump(user_all_items, f, pickle.HIGHEST_PROTOCOL)
            pickle.dump(all_user_count, f, pickle.HIGHEST_PROTOCOL)
            pickle.dump(all_item_count, f, pickle.HIGHEST_PROTOCOL)
            pickle.dump(user_map, f, pickle.HIGHEST_PROTOCOL)
            pickle.dump(item_map, f, pickle.HIGHEST_PROTOCOL)
    return train, test, \
        user_all_items, all_user_count, \
        all_item_count, user_map, item_map

if __name__ == '__main__':
    make_datasets('u.data', target_counts=3, seq_counts=5)
!head train.csv
!head test.csv
```
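Note that `to_csv` writes the list-valued `seq` and `target` columns as plain strings, so they need to be parsed back into Python lists when the files are reloaded. A minimal sketch:
```
import ast
import pandas as pd

# The list columns are stored as strings such as "[1, 290, 492, 381, 752]";
# ast.literal_eval converts them back into Python lists.
train = pd.read_csv('train.csv')
for col in ['seq', 'target']:
    train[col] = train[col].apply(ast.literal_eval)
print(train.head())
```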
# Develop Model
In this notebook, we will go through the steps to load the pre-trained InceptionV3 model, pre-process images into the required format, and call the model to find the top predictions.
```
import os
import numpy as np
import tensorflow as tf
from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing import image
from keras.applications.imagenet_utils import decode_predictions
from keras.applications.inception_v3 import preprocess_input
from keras import backend as K
from keras.models import Model
from PIL import Image
from numba import cuda
print(tf.__version__)
```
We first load the model in test mode.
```
K.set_learning_phase(0) # (0=test, 1=train)
%%time
model = InceptionV3(input_shape=(299, 299, 3), include_top=True, weights='imagenet')
```
Here is the summary of the model.
```
model.summary()
```
Next, we serialize the model and get its weights to re-build the model with learning phase hard coded to 0.
```
config = model.get_config()
weights = model.get_weights()
del model
new_model = Model.from_config(config)
new_model.set_weights(weights)
```
Let's test our model with an image of a Lynx.
```
!wget https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Lynx_lynx_poing.jpg/220px-Lynx_lynx_poing.jpg
img_path = '220px-Lynx_lynx_poing.jpg'
print(Image.open(img_path).size)
Image.open(img_path)
```
Below, we load the image, resize it to (299, 299), and preprocess it using the Keras preprocessing utilities.
```
img = image.load_img(img_path, target_size=(299, 299))
img = image.img_to_array(img)
img = np.expand_dims(img, axis=0)
img = preprocess_input(img)
```
Now, let's call the model on our image to predict the top 3 labels.
```
%%time
preds = new_model.predict(img)
print('Predicted:', decode_predictions(preds, top=3))
```
The model's top guess is Lynx. We can now move on to exporting the model for TensorFlow Serving.
## Save Model as a TensorFlow Servable
TensorFlow Serving requires that the model is saved in [SavedModel](https://www.tensorflow.org/api_docs/python/tf/saved_model) format. We will first create a directory hierarchy which will include a version number for our model and then save it in the required format.
```
MODEL_DIRECTORY = 'models'
VERSION = '1'
export_path = os.path.join(MODEL_DIRECTORY, VERSION)
print('export_path = {}'.format(export_path))
if os.path.isdir(export_path):
print('Already saved a model, cleaning up.')
!rm -r {export_path}
```
We will now fetch the Keras session and save the model as a servable.
```
tf.saved_model.simple_save(
K.get_session(),
export_path,
inputs={'input_image': new_model.input},
outputs={t.name:t for t in new_model.outputs})
```
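Before handing the export to TensorFlow Serving, it can be useful to confirm that it contains the expected signature. A quick check, assuming the `saved_model_cli` utility that ships with TensorFlow is available on the `PATH`:
```
# Inspect the exported SavedModel's signatures (inputs/outputs) before serving.
!saved_model_cli show --dir {export_path} --all
```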
Let's clear GPU memory before we move on to the next notebook.
```
K.clear_session()
del new_model
# cuda.select_device(0)
# cuda.close()
```
Next, we will [serve the exported model with TensorFlow serving image and test locally](01_ServeModelLocally.ipynb).
Exploratory data analysis of the Hotel Booking Demand dataset
================
- contributors: Debananda Sarkar, Chen Zhao, Jared Splinter, Peter Yang
- Created on: 2020-11-21
# Summary of the data set
The data set used in this project comes from the Hotel Booking demand datasets from [Antonio, Almeida and Nunes, 2019](https://www.sciencedirect.com/science/article/pii/S2352340918315191#ack0005) and the data can be found from the GitHub Repository [here](https://github.com/rfordatascience/tidytuesday/tree/master/data/2020/2020-02-11). The dataset contains real world data obtained from two hotels: one resort hotel and one city hotel. Each row represents an individual hotel booking due to arrive between July 1st, 2015 and August 31st, 2017. There are 119390 observations in the data set, and 31 features. The following table shows the counts of observations for each hotel.
| Resort Hotel | City Hotel |
| -----------: | ---------: |
| 40060 | 79330 |
Table 1: Counts of observations for each hotel.
# Import Packages and Load Data
```
# common packages
import numpy as np
import pandas as pd
# ML packages
from sklearn.model_selection import train_test_split
# Visualization packages
import altair as alt
from altair_saver import save
# Save a vega-lite spec and a PNG blob for each plot in the notebook
alt.renderers.enable('mimetype')
# Handle large data sets without embedding them in the notebook
alt.data_transformers.enable('data_server')
# set seed
seed = 2020
# Load data:
hotels_df = pd.read_csv("../data/raw/hotels_dataset.csv")
# Split data:
# 80% of observations are in the training and 20% of observations are in the test set
train_df, test_df = train_test_split(hotels_df, test_size=0.2, random_state=seed)
# Split the features and targets:
X_train = train_df.drop(["is_canceled"], axis=1)
y_train = train_df["is_canceled"]
X_test = test_df.drop(["is_canceled"], axis=1)
y_test = test_df["is_canceled"]
# Separate Resort and City Hotel:
resort_train = X_train.loc[(X_train["hotel"] == "Resort Hotel")].copy()
city_train = X_train.loc[(X_train["hotel"] == "City Hotel")].copy()
```
## Splitting the data set into training and test data sets
- 80% of observations are in the training and 20% of observations are in the test set
| Data partition | Is Canceled | Is Not Canceled |
| :------------- | -----------: | ---------: |
| Training | 35407 | 60105 |
| Test | 8817 | 15061 |
Table 2: Counts of observations by cancellation status for each data partition.
There is a class imbalance. We would like to predict cancellations as accurately as possible so that the hotel does not get an unwanted surprise; hence, we would like to maximize recall. However, in the process of maximizing recall we might over-predict cancellations. That would be an adverse scenario, as management may panic and start introducing promotions that take a toll on hotel revenue. Hence, we would like to keep precision high as well. Since we are interested in keeping both precision and recall high, the **f1-score** is a good evaluation metric here.
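To make the metric concrete, here is a minimal sketch of how precision, recall, and f1 could be computed with scikit-learn; the all-ones "predict every booking as canceled" baseline below is purely illustrative and not part of this analysis:
```
from sklearn.metrics import f1_score, precision_score, recall_score

# Illustrative baseline: pretend a classifier predicted "canceled" for every booking.
# In the real analysis, y_pred would come from the trained model instead.
y_pred = np.ones_like(y_train)
print("precision:", precision_score(y_train, y_pred))
print("recall:   ", recall_score(y_train, y_pred))
print("f1:       ", f1_score(y_train, y_pred))
```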
# Viewing the Train data set
```
train_df.head()
train_df.info()
# Check Null values
null_df = train_df.isna().sum().reset_index(name="count_of_nulls").query("count_of_nulls != 0")
null_df["perc"] = np.round(null_df["count_of_nulls"] / train_df.shape[0] * 100, 2)
null_df
```
### Train data set observations
Our train data set has a number of numeric and categorical features to be explored, as well as many that may not be useful for prediction. A few features also have a large number of missing values, most notably "company", which can likely be omitted from the analysis since ~94% of its observations are null. Next steps will require delving further into the features to see whether they serve any purpose for training or should be omitted.
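A sketch of how such sparse features could be dropped before modelling; the 90% threshold is an illustrative choice, not one made in this analysis:
```
# Drop any feature where more than 90% of the training observations are missing
# (given the ~94% null rate noted above, this should catch "company").
null_fraction = train_df.isna().mean()
high_null_cols = null_fraction[null_fraction > 0.9].index.tolist()
print("Candidates to drop:", high_null_cols)
train_reduced = train_df.drop(columns=high_null_cols)
```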
# Exploratory Analysis Visualizations
## Feature Distributions
```
numeric_features = [
"lead_time",
"stays_in_weekend_nights",
"stays_in_week_nights",
"adults",
"children",
"babies",
"previous_cancellations",
"previous_bookings_not_canceled",
"booking_changes",
"days_in_waiting_list",
"adr",
"required_car_parking_spaces",
"total_of_special_requests"
]
train_df = train_df.copy()
train_df["is_canceled_cat"] = train_df["is_canceled"].apply(lambda x: "Canceled" if x == 1 else "Not Canceled")#.copy()
(alt.Chart(train_df)
.mark_line(interpolate='step').encode(
alt.X(alt.repeat(), type='quantitative'),
alt.Y('count()', title = ""),
alt.Color('is_canceled_cat', title = ""))) .properties(width=150, height=150).repeat(numeric_features,columns = 4)
```
### Distribution Observations:
For our numeric feature distributions, we find that many of the numeric features are right-skewed, as they are dominated by `0` values. This may mean many of these features are not good predictors of the target and will receive low coefficient weights. A few numeric features that look promising for prediction are `total_of_special_requests`, `required_car_parking_spaces`, `stays_in_week_nights` and `stays_in_weekend_nights`, as they have wider distributions.
```
# categorical features against target graph
categorical_features = [
"hotel",
"meal",
"market_segment",
"distribution_channel",
"reserved_room_type",
"deposit_type",
"customer_type",
"is_repeated_guest",
]
cat_vs_target = (
alt.Chart(train_df)
.mark_rect()
.encode(
alt.X(alt.repeat(), type="nominal"),
alt.Y("is_canceled_cat", title=""),
alt.Color("count()", title="Number of Observations"),
)
.properties(width=150, height=150)
.repeat(
categorical_features, columns=4, title="Categorical features with target"
)
)
cat_vs_target
```
## Feature Correlations
```
# correlation chart all variable
corr_df = train_df.corr().stack().reset_index(name="corr")
corr_df["round_corr"] = np.round(corr_df["corr"], 2)
corr_plot = (
alt.Chart(
corr_df.query("level_0 != 'is_canceled' & level_1 != 'is_canceled'"),
title="Feature Correlation",
)
.mark_rect()
.encode(
x="level_0",
y="level_1",
tooltip="corr",
color=alt.Color(
"corr", scale=alt.Scale(domain=(-1, 1), scheme="purpleorange")
),
)
.properties(width=500, height=500)
)
corr_text = (
alt.Chart(corr_df.query("level_0 != 'is_canceled' & level_1 != 'is_canceled'"))
.mark_text(size=8)
.encode(
x=alt.X("level_0", title="Features"),
y=alt.Y("level_1", title="Features"),
text="round_corr",
)
.properties(width=500, height=500)
)
corr_all = corr_plot + corr_text
corr_all
# correlation against target chart
corr_plot = (
alt.Chart(
corr_df[corr_df.level_1 == "is_canceled"], title="Feature Correlation"
)
.mark_rect()
.encode(
x=alt.X("level_0", title="Features"),
y=alt.Y("level_1", title="Target"),
tooltip="corr",
color=alt.Color(
"corr", scale=alt.Scale(domain=(-1, 1), scheme="purpleorange")
),
)
.properties(width=600)
)
corr_text = (
alt.Chart(corr_df[corr_df.level_1 == "is_canceled"])
.mark_text(size=8)
.encode(
x=alt.X("level_0", title="Features"),
y=alt.Y("level_1", title="Target"),
text="round_corr",
)
.properties(width=600)
)
corr_target = corr_plot + corr_text
corr_target
```
### Correlation Observations:
There is a moderate correlation between `arrival_date_week_number` and `arrival_date_year`, as well as between `stays_in_week_nights` and `stays_in_weekend_nights`. These correlations may be expected; however, we need to explore these relationships further with regard to training our model. `lead_time` and `total_of_special_requests` also show some correlation with the target. Further analysis will reveal whether these are useful features for predicting the target.
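The same correlation matrix can also be used to rank features by the strength of their linear relationship with the target. A quick sketch, reusing the `train_df.corr()` call from above:
```
# Rank numeric features by the absolute value of their correlation with is_canceled.
target_corr = train_df.corr()["is_canceled"].drop("is_canceled")
print(target_corr.abs().sort_values(ascending=False).head(10))
```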
```
null_df = (
train_df.isna()
.sum()
.reset_index(name="missing_count")
.query("missing_count != 0")
)
null_df["missing_percentage"] = np.round(
null_df["missing_count"] / train_df.shape[0] * 100, 2
)
null_df = null_df.rename({"index": "feature"}, axis=1)
null_df
```
## Feature Examination
```
# feature examination charts
top_20_countries = (
X_train.groupby("country")
.size()
.reset_index(name="counts")
.sort_values(by="counts", ascending=False)[:20]
)
countries = (
alt.Chart(top_20_countries, title="Top 20 home country of guests")
.mark_bar()
.encode(
alt.X("counts", title="Guests numbers"),
alt.Y("country", sort="-x", title="Country"),
alt.Tooltip("country"),
)
)
X_train["adr_ac"] = X_train["adr"] / (X_train["adults"] + X_train["children"])
room_price = X_train[["hotel", "reserved_room_type", "adr_ac"]].sort_values(
"reserved_room_type"
)
room_price = (
alt.Chart(room_price)
.mark_boxplot(extent="min-max", clip=True)
.encode(
alt.X("adr_ac", title="Price [EUR]", scale=alt.Scale(domain=(0, 120))),
alt.Y("hotel", title="Hotel"),
color="hotel",
)
.facet(
"reserved_room_type",
columns=2,
title="Price per night and person for different room types",
)
)
resort_train["total_nights"] = (
resort_train["stays_in_weekend_nights"] + resort_train["stays_in_week_nights"]
)
city_train["total_nights"] = (
city_train["stays_in_weekend_nights"] + city_train["stays_in_week_nights"]
)
num_nights_resort = list(resort_train["total_nights"].value_counts().index)
num_bookings_resort = list(resort_train["total_nights"].value_counts())
rel_bookings_resort = (
resort_train["total_nights"].value_counts() / sum(num_bookings_resort) * 100
) # convert to percent
num_nights_city = list(city_train["total_nights"].value_counts().index)
num_bookings_city = list(city_train["total_nights"].value_counts())
rel_bookings_city = (
city_train["total_nights"].value_counts() / sum(num_bookings_city) * 100
) # convert to percent
resort_nights = pd.DataFrame(
{
"hotel": "Resort hotel",
"num_nights": num_nights_resort,
"rel_num_bookings": rel_bookings_resort,
}
)
city_nights = pd.DataFrame(
{
"hotel": "City hotel",
"num_nights": num_nights_city,
"rel_num_bookings": rel_bookings_city,
}
)
nights_data = pd.concat([resort_nights, city_nights], ignore_index=True)
nights_data
stay = (
alt.Chart(nights_data)
.mark_bar()
.encode(
alt.X("num_nights", title="Number of nights"),
alt.Y("rel_num_bookings", title="Percent of guests"),
color=alt.Color("hotel", legend=None),
)
.facet("hotel", title="Length of guests stay")
)
feature_exam = (countries.properties(height=300, width=200) | stay) & room_price
feature_exam
```
### Examination Observations
Looking in depth at a few features, we find some interesting results. First, we notice that most of our observations come from European countries, specifically Portugal. It may be that the model performs better when predicting on guests from non-European countries; or perhaps it will perform worse, given the limitations of a data set that comes from only two hotels whose guests are mainly European.
Second, looking at the number of nights stayed, we find a difference between the hotels. For the city hotel, guests tend to stay for 1-4 nights. For the resort hotel the distribution is similar, but more guests stay up to 7 nights, which could reflect a tendency for resort hotel guests to stay longer.
We can hypothesize that room price could be a good predictor of cancellation. We find a difference in room prices between the resort and city hotels across the different room types, so perhaps room-type prices play a role in prediction. One limitation of this feature is that the dataset contains no currency information for price, but since most guests are from European countries it may be safe to assume that all prices are in EUR. Additionally, for guest anonymity the room types are only given as letters, so we cannot tell which specific room types are good predictors.
### Price per night varies over the year
```
# price versus month graph
prices_monthly = X_train[["hotel", "arrival_date_month", "adr_ac"]].sort_values(
"arrival_date_month"
)
# order by month:
months_ordered = [
"January",
"February",
"March",
"April",
"May",
"June",
"July",
"August",
"September",
"October",
"November",
"December",
]
prices_monthly["arrival_date_month"] = pd.Categorical(
prices_monthly["arrival_date_month"], categories=months_ordered, ordered=True
)
prices_monthly = prices_monthly.sort_values("arrival_date_month")
prices_points = (
alt.Chart(prices_monthly, title="Room price per night over the year")
.mark_point()
.encode(
alt.X("arrival_date_month", title="Month", sort=months_ordered),
alt.Y("adr_ac", title="Price [EUR]"),
alt.Color("hotel"),
)
.properties(width=500, height=400)
)
price_vs_month = prices_points.encode(y="mean(adr_ac)").mark_line()
price_vs_month
```
##### Plot Summary
- During the summer, prices at the Resort Hotel are much higher than at the City Hotel.
- Prices at the Resort Hotel vary a lot and peak during the summer.
- Prices at the City Hotel vary less and are highest during the spring.
### Most busy month
```
# guest versus month graph
rguests_monthly = resort_train.groupby("arrival_date_month")["hotel"].count()
cguests_monthly = city_train.groupby("arrival_date_month")["hotel"].count()
rguest_data = pd.DataFrame(
{
"month": list(rguests_monthly.index),
"hotel": "Resort hotel",
"guests": list(rguests_monthly.values),
}
)
cguest_data = pd.DataFrame(
{
"month": list(cguests_monthly.index),
"hotel": "City hotel",
"guests": list(cguests_monthly.values),
}
)
guest_data = pd.concat([rguest_data, cguest_data], ignore_index=True)
guest_data["month"] = pd.Categorical(
guest_data["month"], categories=months_ordered, ordered=True
)
guest_data = guest_data.sort_values("month")
# Dataset contains July and August date from 3 years, the other month from 2 years. Normalize data:
guest_data.loc[
(guest_data["month"] == "July") | (guest_data["month"] == "August"), "guests"
] /= 3
guest_data.loc[
~((guest_data["month"] == "July") | (guest_data["month"] == "August")), "guests"
] /= 2
guests_points = (
alt.Chart(guest_data, title="Number of guests over the year")
.mark_point()
.encode(
alt.X("month", title="Month", sort=months_ordered),
alt.Y("guests", title="Number of guests"),
alt.Color("hotel"),
)
.properties(width=500, height=400)
)
guest_vs_month = guests_points.mark_line()
guest_vs_month
```
##### Plot Summary
- Both hotels have fewer guests during the winter.
- The City Hotel has more guests during the spring and autumn, when its prices are also at their highest; in summer it has fewer guests and lower prices.
- The Resort Hotel's guest numbers vary less; when its prices peak in summer, it has fewer guests.
### Repeated guests with previous booking
```
# guest repeat booking with cancel history graph
guests_prev_cancel = X_train[
["is_repeated_guest", "previous_bookings_not_canceled"]
]
rep_guests_prev_cancel = (
alt.Chart(
guests_prev_cancel, title="Guests repeat booking with cancellation history"
)
.mark_bar()
.encode(
alt.X(
"sum(previous_bookings_not_canceled)",
title="Total number of previous bookings not cancelled",
),
alt.Y("is_repeated_guest:O", title="Repeated guests"),
)
)
rep_guests_prev_cancel
```
##### Plot Summary
- For "Repeated guests", 1 means this guest is a repeated guest, and 0 means the opposite
- When there are more previous bookings are not canceled, the guest tends to be a repeated guest, and vice versa.
# References
<div id="refs" class="references">
<div id="ref-Hotel2019">
Antonio, Nuno, Ana de Almeida, and Luis Nunes. 2019. "Hotel booking demand datasets." Data in brief 22: 41-49. <https://doi.org/10.1016/j.dib.2018.11.126>
</div>
</div>

# Discriminating qubit states using Qiskit
In this Jupyter notebook we show how to convert level 1 data, such as IQ data for superconducting qubits, to level 2 data, i.e. qubit states. We do this by training a discriminator using calibration circuits and creating a filter, based on the fitted discriminator, which can be applied to subsequent measurements. This notebook performs the following steps:
1) Create pulse schedules to run on the IBM Q devices. The schedule list has calibration schedules and experiment schedules.
2) Use the calibration schedules to train a discriminator.
3) Create a filter from the discriminator to discriminate the qubit states for the experiment schedule.
```
import matplotlib.pyplot as plt
import numpy as np
from copy import deepcopy
from sklearn.svm import SVC
%matplotlib inline
plt.rcParams['font.size'] = 16
import qiskit
from qiskit.ignis.measurement.discriminator.iq_discriminators import \
LinearIQDiscriminator, QuadraticIQDiscriminator, SklearnIQDiscriminator
from qiskit.result.models import ExperimentResultData
from qiskit import IBMQ
from qiskit.pulse import MeasureChannel, DriveChannel
from qiskit.tools.visualization import plot_histogram
import qiskit.pulse as pulse
import qiskit.pulse.pulse_lib as pulse_lib
from qiskit.compiler import assemble
#Set to False to run the discriminator, set to True
#to use pickled data
use_prerun_data = True
```
## 1) Creating Pulse Schedules to run on IBM Q
```
account_provider = IBMQ.load_account()
hub = account_provider.credentials.hub
group = account_provider.credentials.group
project = account_provider.credentials.project
provider = IBMQ.get_provider(hub=hub, group=group, project=project)
backend = provider.get_backend('ibmq_almaden')
back_config = backend.configuration().to_dict()
defaults = backend.defaults()
# command definition from defaults.
cmd_def = pulse.CmdDef.from_defaults(defaults.cmd_def, defaults.pulse_library)
```
We create schedules to measure two qubits. The calibration schedules `cal_00` and `cal_11` serve to calibrate the 0 and 1 states of the qubits. The schedule `X90p` is our experiment, where we apply a pi-half pulse to both qubits.
```
qubits = [0, 1]
schedules = []
meas_buffer = 2
shots = 512
experiment_name = 'X90p'
meas = cmd_def.get('measure', qubits=tuple(range(20)))
# Create a calibration schedule for the ground state.
schedule_no_pi = pulse.Schedule(name='cal_00')
schedule_no_pi += meas
# Create a calibration schedule for the excited state.
schedule_pi = pulse.Schedule(name='cal_11')
for q in qubits:
xgate = cmd_def.get('x', qubits=q)
schedule_pi += xgate
schedule_pi += meas << (schedule_pi.duration + meas_buffer)
# Measurement schedule. Do an X90p gate on both qubits.
schedule_x90p = pulse.Schedule(name=experiment_name)
for q in qubits:
x90p = cmd_def.get('u3', qubits=q, P0=np.pi/2., P1=0., P2=0.)
schedule_x90p += x90p
schedule_x90p += meas << (schedule_x90p.duration + meas_buffer)
schedules = [schedule_no_pi, schedule_pi, schedule_x90p]
plt_chs = []
for q in [0, 1]:
plt_chs.append(MeasureChannel(q))
plt_chs.append(DriveChannel(q))
schedules[2].draw(channels=plt_chs, scaling=10.)
```
### Obtaining level 1 data
Level 1 data can be obtained either by running the above schedules on the IBM Q hardware or by loading the data from an existing experiment that has been saved in the 10_discriminator_data.pickle file.
#### Running on IBM Q hardware
```
qobj = assemble(schedules, backend, meas_level=1, meas_return='single', shots=shots)
if not use_prerun_data:
job = backend.run(qobj)
else:
print('Not running job, will use prerun data')
if not use_prerun_data:
job.status()
else:
print('Not running job, will use prerun data')
if use_prerun_data:
#Use pickle to load existing data
import pickle
from qiskit.result import Result
with open('10_discriminator_data.pickle', 'rb') as handle:
res = pickle.load(handle)
result = Result.from_dict(res)
else:
result = job.result(timeout=3600)
```
## 2) Fitting and using a discriminator
We use the calibration schedules `cal_00` and `cal_11` to fit two linear discriminant analysis discriminators, one for each qubit. Doing so implies that the qubit-qubit correlations in the single shot data are neglected. The discrimination does not necessarily need to be done in this way. Indeed, a single discriminator could be used to account for qubit-qubit correlations. The decision boundary is, however, harder to illustrate, since two qubits imply a 4D decision space.
### Single-qubit discriminator
The routine below is the **core of the discriminant analysis**. A call to the constructor also fits the discriminator to the provided results. We fit one discriminator per qubit.
```
discriminators = {}
for q in qubits:
discriminators[q] = LinearIQDiscriminator(result, [q], ['0', '1'])
```
We can retrieve the state of an I/Q point by calling the discriminator's **discriminate(iq_data) method**. The code below illustrates this using two points in the IQ plane, `(0, 0)` and `(0, -2e11)`. Depending on the results, these points will correspond to a $|0\rangle$ or $|1\rangle$ state of the `test_qubit`.
```
test_qubit = 0
test_iq_data = [[0.0, 0.0], [0.0, -2.0e11]]
test_states = discriminators[test_qubit].discriminate(test_iq_data)
print('Example results for qubit %i:' % test_qubit)
for idx, iq_point in enumerate(test_iq_data):
print('IQ point ({:.0f}, {:.0f}) corresponds to state '.format(iq_point[0], iq_point[1]) + test_states[idx])
```
The code below illustrates the use of the discriminator through plots. Discriminators have a dedicated plot method.
```
plt.rcParams['font.size'] = 14
discriminators[0].plot();
```
For discriminators that discriminate a single qubit based on the IQ data from a single qubit, we may plot the two-dimensional decision boundary. All discriminators also have the option to flag the misclassified IQ points in the scatter plot. This is done by setting the option `flag_misclassified` to `True`.
```
fig, ax = plt.subplots(1, 2, figsize=(14,5))
discriminators[0].plot(ax[0], flag_misclassified=True, show_boundary=True)
discriminators[1].plot(ax[1], flag_misclassified=False, show_boundary=True);
```
It is also possible to use an sklearn classifier as a discriminator by passing it to the *SklearnIQDiscriminator*. The example below shows how to do this with a support vector classifier (SVC).
Note: Each discriminator should be passed its own classifier. If a classifier were shared, discriminators created later would override the fitting of the classifier to earlier results.
```
svcs = {
0: SVC(C=1., kernel="rbf", gamma="scale"),
1: SVC(C=1., kernel="rbf", gamma="scale")
}
svc_discriminators = {}
for q in qubits:
svc_discriminators[q] = SklearnIQDiscriminator(svcs[q], result, [q], ['0', '1'])
```
The results of fitting the SVC discriminators are plotted below. Note the non-linear decision boundaries generated by the SVCs.
```
fig, ax = plt.subplots(1, 2, figsize=(14,5))
svc_discriminators[0].plot(ax[0], flag_misclassified=True, show_boundary=True)
svc_discriminators[1].plot(ax[1], flag_misclassified=False, show_boundary=True);
```
### Multi-qubit discriminator
The code below illustrates a discriminator that discriminates the '00' from the '11' state. This is thus a multi-qubit discriminator with a four-dimensional decision space. In this case it is no longer possible to properly illustrate the four-dimensional decision boundary using two-dimensional plots.
```
discriminators_2q = LinearIQDiscriminator(result, [0, 1])
fig, axs = plt.subplots(1, 2, figsize=(14,5))
axs, _ = discriminators_2q.plot(axs, flag_misclassified=True, title=False)
```
We may elect to plot only the data for a single qubit; see the example below, where only the data for qubit 0 is plotted. This functionality is useful when we wish to discriminate a single qubit based on the IQ data from more than one qubit.
```
axs, _ = discriminators_2q.plot(qubits_to_plot=[0], flag_misclassified=True)
```
## 3) Creating a filter associated to the discriminator
We can create a filter based on the discriminator to convert level 1 data into level 2 data. This filter can then be used to discriminate subsequent data.
```
from qiskit.ignis.measurement.discriminator.filters import DiscriminationFilter
```
Create the filters based on the discriminators.
```
filters = {}
for q in qubits:
filters[q] = DiscriminationFilter(discriminators[q])
```
Use the filters to create new results with the level 2 data.
```
results_lvl2 = {}
for q in qubits:
results_lvl2[q] = filters[q].apply(result)
```
Plot the result. Note that the `plot_xdata` of the discriminator will only plot the non-calibration data in the result. To plot the calibration data that was used to fit the discriminator one should use the `plot` function of the discriminator as was done above.
```
fig, ax = plt.subplots(2, 2, figsize=(14,10))
for q in [0, 1]:
discriminators[q].plot(ax[q, 0], show_fitting_data=False, show_boundary=True)
discriminators[q].plot_xdata(ax[q, 0], result) # This only plots non-cal data.
counts = results_lvl2[q].get_counts(experiment_name)
plot_histogram(counts, ax=ax[q, 1])
fig.tight_layout()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
# Example queries for biological entities on COVID-19-Net Knowledge Graph
[Work in progress]
This notebook demonstrates how to run Cypher queries to retrieve data about biological entities in the Knowledge Graph.
```
import datetime
import pandas as pd
from py2neo import Graph
pd.options.display.max_rows = None # display all rows
pd.options.display.max_columns = None # display all columns
```
#### Connect to COVID-19-Net Knowledge Graph
```
graph = Graph("bolt://132.249.238.185:7687", user="reader", password="demo")
```
### What kind of entities, including bioentities, does the KG contain?
```
query = """
MATCH (n:NodeMetadata)
RETURN n.name, n.shortDescription, n.description, n.example, n.definitionSource, n.dataProviders
"""
graph.run(query).to_data_frame()
```
### Fuzzy full-text search for bioentities
Results are ordered by match score.
Note, proteins may have one or more protein names. Proteins that have been cleaved have the fullLength flag set to `False`.
```
query = """
CALL db.index.fulltext.queryNodes("bioentities", "Spike") YIELD node, score
RETURN node.name AS name, node.fullLength AS fullLength, labels(node), score
"""
df = graph.run(query).to_data_frame()
df.head(25)
```
### Exact full-text search for bioentities
For exact matches, enclose the phrase with `\"`.
```
query = """
CALL db.index.fulltext.queryNodes("bioentities", '\"Spike glycoprotein\"') YIELD node
RETURN node.name AS name, node.start as start, node.end as end, node.fullLength as fullLength, node.accession as accession, node.proId as proId, labels(node)
"""
df = graph.run(query).to_data_frame()
df.head(25)
```
### Which organisms have been identified as hosts for SARS-CoV-2?
```
query = """
MATCH (h:Host)
RETURN h.name AS name, h.scientificName AS scientificName, h.id AS taxonomyId
"""
graph.run(query).to_data_frame()
```
### Which Pathogens have caused Coronavirus Outbreaks?
```
query = """
MATCH (p:Pathogen)-[:CAUSES]->(o:Outbreak)
RETURN p.name AS name, p.scientificName AS scientificName, p.id AS taxonomyId, o.id AS outbreak, o.startDate AS startDate
"""
graph.run(query).to_data_frame()
```
### Which PubMedCentral articles mention strain datasets about different hosts?
```
query = """
MATCH (p:Publication)-[:MENTIONS]->(s:Strain)<-[:CARRIES]-(h:Host)
RETURN p.id AS pmc, s.name AS name, s.collectionDate AS collectionDate, h.name AS host, h.id AS hostTaxonomyId
ORDER by s.collectionDate
"""
graph.run(query).to_data_frame().head(20)
```
### What are the genes and gene products of the SARS-CoV-2 Virus?
This query lists the genes and proteins encoded by the SARS-CoV-2 NC_045512 reference genome. MN908947 represents the original un-annotated dataset submitted to NCBI. This is the first sequenced genome of SARS-CoV-2 collected in Wuhan.
```
query = """
MATCH (n:Genome{id: 'refseq:NC_045512'})-[:HAS_GENE]->(g:Gene)-[:ENCODES]->(p:Protein)
RETURN n.id as referenceGenome, n.name as name,
g.name as gene, g.id as geneId, p.name as protein, p.accession as accession, p.sequence as sequence
"""
graph.run(query).to_data_frame()
```
### How many complete SARS-CoV-2 genomes have been sequenced in each country?
Here we aggregate the number of genomes over up to 3 hops to the UN Region level:
`City -> Admin2(county) -> Admin1(state, province) -> Country`
using the variable-length relationship `[:IN*0..3]`.
This number includes only complete high-quality genomes as determined by the [China National Center for Bioinformation, 2019 Novel Coronavirus Resource (2019nCoVR)](https://bigd.big.ac.cn/ncov/release_genome).
Note, some strains have been deposited under different names in multiple repositories. Therefore, the numbers below include some duplicates.
```
query = """
MATCH (s:Strain)-[:FOUND_IN]->(l:Location)-[:IN*0..3]->(c:Country)
RETURN count(s) AS count, c.name AS country
ORDER by count DESC
"""
graph.run(query).to_data_frame()
```
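Because the variable-length `[:IN*0..3]` pattern can match a strain once per path, a defensive variant is to count distinct strain nodes instead of rows. This is only a sketch: it guards against path multiplicity but cannot merge strains that were deposited under different names.
```
query = """
MATCH (s:Strain)-[:FOUND_IN]->(l:Location)-[:IN*0..3]->(c:Country)
RETURN count(DISTINCT s) AS count, c.name AS country
ORDER by count DESC
"""
graph.run(query).to_data_frame()
```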
### How many complete SARS-CoV-2 genomes have been sequenced in each UN Region?
Here we aggregate the number of genomes over up to 6 hops to the UN Region level:
`City -> Admin2(county) -> Admin1(state, province) -> Country -> UNSubRegion -> UNIntermediateRegion -> UNRegion`
using the variable-length relationship `[:IN*0..6]`.
```
query = """
MATCH (s:Strain)-[:FOUND_IN]->(l:Location)-[:IN*0..6]->(u:UNRegion)
RETURN count(s) AS count, u.name
ORDER by count DESC
"""
graph.run(query).to_data_frame()
```
```
knitr::opts_chunk$set(warning = FALSE, message = FALSE)
```
***
This notebook contains the code samples found in Chapter 6, Section 2 of [Deep Learning with R](https://www.manning.com/books/deep-learning-with-r). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
***
## A first recurrent layer in Keras
The process you just naively implemented in R corresponds to an actual Keras layer -- `layer_simple_rnn()`.
```
layer_simple_rnn(units = 32)
```
There is one minor difference: `layer_simple_rnn()` processes batches of sequences, like all other Keras layers, not a single sequence as in the R example. This means it takes inputs of shape `(batch_size, timesteps, input_features)`, rather than `(timesteps, input_features)`.
Like all recurrent layers in Keras, `layer_simple_rnn()` can be run in two different modes: it can return either the full sequences of successive outputs for each timestep (a 3D tensor of shape `(batch_size, timesteps, output_features)`) or only the last output for each input sequence (a 2D tensor of shape `(batch_size, output_features)`). These two modes are controlled by the `return_sequences` constructor argument. Let's look at an example that uses `layer_simple_rnn()` and returns the last state:
```
library(keras)
model <- keras_model_sequential() %>%
layer_embedding(input_dim = 10000, output_dim = 32) %>%
layer_simple_rnn(units = 32)
summary(model)
model <- keras_model_sequential() %>%
layer_embedding(input_dim = 10000, output_dim = 32) %>%
layer_simple_rnn(units = 32, return_sequences = TRUE)
summary(model)
```
It is sometimes useful to stack several recurrent layers one after the other in order to increase the representational power of a network.
In such a setup, you have to get all intermediate layers to return full sequences:
```
model <- keras_model_sequential() %>%
layer_embedding(input_dim = 10000, output_dim = 32) %>%
layer_simple_rnn(units = 32, return_sequences = TRUE) %>%
layer_simple_rnn(units = 32, return_sequences = TRUE) %>%
layer_simple_rnn(units = 32, return_sequences = TRUE) %>%
layer_simple_rnn(units = 32) # This last layer only returns the last outputs.
summary(model)
```
Now let's try to use such a model on the IMDB movie review classification problem. First, let's preprocess the data:
```
library(keras)
max_features <- 10000 # Number of words to consider as features
maxlen <- 500 # Cuts off texts after this many words (among the max_features most common words)
batch_size <- 32
cat("Loading data...\n")
imdb <- dataset_imdb(num_words = max_features)
c(c(input_train, y_train), c(input_test, y_test)) %<-% imdb
cat(length(input_train), "train sequences\n")
cat(length(input_test), "test sequences")
cat("Pad sequences (samples x time)\n")
input_train <- pad_sequences(input_train, maxlen = maxlen)
input_test <- pad_sequences(input_test, maxlen = maxlen)
cat("input_train shape:", dim(input_train), "\n")
cat("input_test shape:", dim(input_test), "\n")
```
Let's train a simple recurrent network using a `layer_embedding()` and `layer_simple_rnn()`.
```
model <- keras_model_sequential() %>%
layer_embedding(input_dim = max_features, output_dim = 32) %>%
layer_simple_rnn(units = 32) %>%
layer_dense(units = 1, activation = "sigmoid")
model %>% compile(
optimizer = "rmsprop",
loss = "binary_crossentropy",
metrics = c("acc")
)
history <- model %>% fit(
input_train, y_train,
epochs = 10,
batch_size = 128,
validation_split = 0.2
)
```
Let's display the training and validation loss and accuracy:
```
plot(history)
```
As a reminder, in chapter 3, the first naive approach to this dataset got you to a test accuracy of 88%. Unfortunately, this small recurrent network doesn't perform well compared to this baseline (only 84% validation accuracy). Part of the problem is that your inputs only consider the first 500 words, rather than full sequences -- hence the RNN has access to less information than the earlier baseline model. The remainder of the problem is that `layer_simple_rnn()` isn't good at processing long sequences, such as text. Other types of recurrent layers perform much better. Let's look at some more advanced layers.
## A concrete LSTM example in Keras
Now let's switch to more practical concerns: we will set up a model using an LSTM layer and train it on the IMDB data. Here's the network, similar to the one with `layer_simple_rnn()` that we just presented. We only specify the output dimensionality of the LSTM layer, and leave every other argument (there are lots) to the Keras defaults. Keras has good defaults, and things will almost always "just work" without you having to spend time tuning parameters by hand.
```
model <- keras_model_sequential() %>%
layer_embedding(input_dim = max_features, output_dim = 32) %>%
layer_lstm(units = 32) %>%
layer_dense(units = 1, activation = "sigmoid")
model %>% compile(
optimizer = "rmsprop",
loss = "binary_crossentropy",
metrics = c("acc")
)
history <- model %>% fit(
input_train, y_train,
epochs = 10,
batch_size = 128,
validation_split = 0.2
)
plot(history)
```
# NumPy
NumPy (or Numpy) is a Linear Algebra Library for Python. The reason it is so important for Data Science with Python is that almost all of the libraries in the PyData Ecosystem rely on NumPy as one of their main building blocks.
Numpy is also incredibly fast, as it has bindings to C libraries. For more info on why you would want to use Arrays instead of lists, check out this great [StackOverflow post](http://stackoverflow.com/questions/993984/why-numpy-instead-of-python-lists).
We will only learn the basics of NumPy; to get started we need to install it!
## Installation Instructions
**It is highly recommended you install Python using the Anaconda distribution to make sure all underlying dependencies (such as Linear Algebra libraries) all sync up with the use of a conda install. If you have Anaconda, install NumPy by going to your terminal or command prompt and typing:**
conda install numpy
**If you do not have Anaconda and can not install it, please refer to [Numpy's official documentation on various installation instructions.](http://docs.scipy.org/doc/numpy-1.10.1/user/install.html)**
## Using NumPy
Once you've installed NumPy you can import it as a library:
```
import numpy as np
```
Numpy has many built-in functions and capabilities. We won't cover them all; instead we will focus on some of the most important aspects of Numpy: vectors, arrays, matrices, and number generation. Let's start by discussing arrays.
# Numpy Arrays
NumPy arrays are the main way we will use Numpy throughout the course. Numpy arrays essentially come in two flavors: vectors and matrices. Vectors are strictly 1-d arrays and matrices are 2-d (but you should note a matrix can still have only one row or one column).
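For instance, a quick sketch of that distinction (shapes and dimensions shown as comments):
```python
import numpy as np

v = np.array([1, 2, 3])      # vector: 1-d, shape (3,)
m = np.array([[1, 2, 3]])    # matrix with a single row: still 2-d, shape (1, 3)

v.ndim, v.shape              # (1, (3,))
m.ndim, m.shape              # (2, (1, 3))
```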
Let's begin our introduction by exploring how to create NumPy arrays.
## Creating NumPy Arrays
### From a Python List
We can create an array by directly converting a list or list of lists:
```
my_list = [1,2,3]
my_list
np.array(my_list)
my_matrix = [[1,2,3],[4,5,6],[7,8,9]]
my_matrix
np.array(my_matrix)
```
## Built-in Methods
There are lots of built-in ways to generate Arrays
### arange
Return evenly spaced values within a given interval.
```
np.arange(0,10)
np.arange(0,11,2)
```
### zeros and ones
Generate arrays of zeros or ones
```
np.zeros(3)
np.zeros((5,5))
np.ones(3)
np.ones((3,3))
```
### linspace
Return evenly spaced numbers over a specified interval.
```
np.linspace(0,10,3)
np.linspace(0,10,50)
```
### eye
Creates an identity matrix
```
np.eye(4)
```
## Random
Numpy also has lots of ways to create random number arrays:
### rand
Create an array of the given shape and populate it with
random samples from a uniform distribution
over ``[0, 1)``.
```
np.random.rand(2)
np.random.rand(5,5)
```
### randn
Return a sample (or samples) from the "standard normal" distribution, unlike `rand` which is uniform:
```
np.random.randn(2)
np.random.randn(5,5)
```
### randint
Return random integers from `low` (inclusive) to `high` (exclusive).
```
np.random.randint(1,100)
np.random.randint(1,100,10)
```
## Array Attributes and Methods
Let's discuss some useful attributes and methods of an array:
```
arr = np.arange(25)
ranarr = np.random.randint(0,50,10)
arr
ranarr
```
## Reshape
Returns an array containing the same data with a new shape.
```
arr.reshape(5,5)
# -1 tells NumPy to infer this dimension from the array's size
arr.reshape(-1,5)
```
### max, min, argmax, argmin
These are useful methods for finding the max or min values, or their index locations using argmax and argmin:
```
ranarr
ranarr.max()
ranarr.argmax()
ranarr.min()
ranarr.argmin()
```
## Shape
Shape is an attribute that arrays have (not a method):
```
# Vector
arr.shape
# Notice the two sets of brackets
arr.reshape(1,25)
arr.reshape(1,25).shape
arr.reshape(25,1)
arr.reshape(25,1).shape
```
### dtype
You can also grab the data type of the objects in the array:
```
arr.dtype
```
# Transformer: Attention is all you need
This Jupyter notebook is a TensorFlow implementation of the paper [Attention is all you need](https://arxiv.org/pdf/1706.03762.pdf). The task is translating a human-readable source datetime to a fixed target datetime format **yyyy-mm-dd**, e.g. "24th Aug 19" -> "2019-08-24". The best way to start implementing a model from scratch is to use a small, simple dataset.
```
import numpy as np
import tqdm
from faker import Faker
from babel.dates import format_date
from nmt_utils import load_dataset_v2, preprocess_data, string_to_int, int_to_string, softmax
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
import os
m = 40000
dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset_v2(m)
human_vocab
machine_vocab
Tx = 30
Ty = 10
X, Y = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty+1)
print("X.shape:", X.shape)
print("Y.shape:", Y.shape)
import tensorflow as tf
tf.enable_eager_execution()
L = tf.keras.layers
```
## Transformer model with Tensorflow.
### Hyperparameter:
$d_{model}$: dimension of word embedding, output of **Multi-head Attention** layer, output of **Feed Forward** layer.
$d_k$: dimension of matrix Q, K
$d_v$: dimension of matrix V
$d_{ff}$: dimension of intermediate **Feed forward** layer
$h$: number of heads at each block.
### Positional Encoding:
Since the Transformer isn't a sequential model like an RNN or CNN, the computation is parallel over the whole input sentence flowing from the Embedding Layer, so we need to inject the relative or absolute position of the words. The authors use a non-trainable/fixed sinusoid function:
$$PE_{(pos, 2i)} = sin\left(\frac{pos}{10000^{2i/d_{model}}}\right) \mbox{ (even indices)}$$
$$PE_{(pos, 2i+1)} = cos\left(\frac{pos}{10000^{2i/d_{model}}}\right) \mbox{ (odd indices)}$$
where $pos$ is position in the sequence and $i$ is the dimension.
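As a quick illustration, here is a minimal NumPy sketch of this encoding, mirroring the loop used in the model class further below (`seq_len` and `d_model` are illustrative values, and `d_model` is assumed even):
```python
import numpy as np

def positional_encoding(seq_len, d_model):
    pe = np.zeros((seq_len, d_model))
    for pos in range(seq_len):
        for i in range(0, d_model, 2):          # step over the even dimensions
            pe[pos, i] = np.sin(pos / 10000 ** ((2 * i) / d_model))
            pe[pos, i + 1] = np.cos(pos / 10000 ** ((2 * i) / d_model))
    return pe

positional_encoding(30, 32).shape  # (30, 32)
```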
### Scaled Dot-Product Attention:
<img style="width:300px; height:300px" src="https://i.imgur.com/HuXNlr0.png" />
$$Attention(Q, K, V) = softmax\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$
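A minimal single-head NumPy sketch of this formula, assuming 2-D `Q`, `K`, `V` without a batch dimension (shapes are illustrative):
```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # (len_q, len_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)             # softmax over the keys
    return weights @ V                                         # (len_q, d_v)

Q = np.random.rand(5, 16)   # 5 query positions, d_k = 16
K = np.random.rand(7, 16)   # 7 key positions
V = np.random.rand(7, 16)
scaled_dot_product_attention(Q, K, V).shape  # (5, 16)
```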
### (Encoder-Decoder) Multi-Head Attention:
<img style="weight:300px; height:300px" src="https://i.imgur.com/vgfOLR2.png" />
$$MultiHead(Q, K, V) = Concat(head_1, head_2, ..., head_h)W^O$$
$$\mbox{where } head_i = Attention(Q, K, V)$$
### Feed forward:
$$FFN(x) = max(0, xW_1 + b_1)W_2 + b_2$$
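A short NumPy sketch of this position-wise feed-forward network (the weight shapes below are illustrative assumptions matching `d_model=32`, `d_ff=64`):
```python
import numpy as np

def ffn(x, W1, b1, W2, b2):
    return np.maximum(0, x @ W1 + b1) @ W2 + b2   # ReLU, then a linear projection

x = np.random.rand(10, 32)                        # (seq_len, d_model)
W1, b1 = np.random.rand(32, 64), np.zeros(64)     # d_model -> d_ff
W2, b2 = np.random.rand(64, 32), np.zeros(32)     # d_ff -> d_model
ffn(x, W1, b1, W2, b2).shape                      # (10, 32)
```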
### Encoder blocks:
Each encoder block includes 2 layers: a **Multi-head Attention Mechanism** and a **Position-wise Feed Forward** layer. The output of each layer uses a residual connection with its input, followed by [Layer Normalization](https://arxiv.org/pdf/1607.06450.pdf): $LayerNorm(x + f(x))$
### Decoder blocks:
Each decoder block includes 3 layers: **Multi-head Attention Mechanism**, **Encoder-Decoder Multi-head Attention** and **Position-wise Feed Forward**. As in the encoder blocks, the output of each layer uses a residual connection with its input, followed by Layer Normalization.
<img src="https://i.imgur.com/1NUHvLi.jpg" />
```
class Transformer(tf.keras.Model):
def __init__(self, num_blocks, num_heads, vocab_size, seq_len, d_model, d_k, d_v, d_ff):
super(Transformer, self).__init__()
self.num_blocks = num_blocks
self.num_heads = num_heads
self.vocab_size = vocab_size
self.d_model = d_model
self.seq_len = seq_len
self.d_k = d_k
self.d_v = d_v
self.d_ff = d_ff
self.word_embed = L.Embedding(vocab_size, d_model)
def _format(self, block, head):
return str(block) + str(head)
def _init_structure(self, decoder_part=False):
assert not hasattr(self, "pos_enc"), "The structure is initialized already."
self.pos_enc = np.zeros(shape=(1, self.seq_len, self.d_model))
for pos in range(self.seq_len):
for i in range(0, self.d_model, 2):
self.pos_enc[:, pos, i] = np.sin(pos / (10000 ** ((2 * i)/self.d_model)))
self.pos_enc[:, pos, i + 1] = np.cos(pos / (10000 ** ((2 * i)/self.d_model)))
if decoder_part:
self.mask = [[0]*(i+1) + [-1e9]*(self.seq_len-(i+1)) for i in range(self.seq_len)]
self.mask = np.array([self.mask])
for block_id in range(self.num_blocks):
setattr(self, "Q" + str(block_id), L.Dense(self.d_k*self.num_heads))
setattr(self, "K" + str(block_id), L.Dense(self.d_k*self.num_heads))
setattr(self, "V" + str(block_id), L.Dense(self.d_v*self.num_heads))
if decoder_part:
setattr(self, "Qenc" + str(block_id), L.Dense(self.d_k*self.num_heads))
setattr(self, "Kenc" + str(block_id), L.Dense(self.d_k*self.num_heads))
setattr(self, "Venc" + str(block_id), L.Dense(self.d_v*self.num_heads))
setattr(self, "O" + str(block_id), L.Dense(self.d_model))
setattr(self, "FFN1" + str(block_id), L.Dense(self.d_ff, activation="relu"))
setattr(self, "FFN2" + str(block_id), L.Dense(self.d_model))
def _ffn(self, block_id, attention_output):
ffn1 = getattr(self, "FFN1" + str(block_id))(attention_output)
ffn2 = getattr(self, "FFN2" + str(block_id))(ffn1)
return ffn2
def _scaled_dot_product(self, Q, K, V, mask=False):
score = tf.matmul(Q, K, transpose_b=True)
if mask:
            # apply the mask to the scores to prevent future words from affecting the current word
score = score + self.mask[:, :score.shape[1], :score.shape[1]]
score = tf.nn.softmax(score/np.sqrt(self.d_k), axis=-1)
score = tf.matmul(score, V)
return score
def _multi_head_attention(self, block_id, Q, K, V, connection_head=False, mask=False):
if connection_head:
Q = getattr(self, "Qenc" + str(block_id))(Q)
K = getattr(self, "Kenc" + str(block_id))(K)
V = getattr(self, "Venc" + str(block_id))(V)
else:
Q = getattr(self, "Q" + str(block_id))(Q)
K = getattr(self, "K" + str(block_id))(K)
V = getattr(self, "V" + str(block_id))(V)
score = self._scaled_dot_product(Q, K, V, mask)
head_output = getattr(self, "O" + str(block_id))(score)
return head_output
def _block_computation(self, *args, **kwargs):
raise NotImplementedError("Transformer is abstract class. You must implement this function!")
def call(self, *args, **kwargs):
raise NotImplementedError("Transformer is abstract class. You must implement this function!")
class Encoder(Transformer):
def __init__(self, num_blocks, num_heads, vocab_size, seq_len, d_model, d_k, d_v, d_ff):
super(Encoder, self).__init__(num_blocks, num_heads, vocab_size, seq_len, d_model, d_k, d_v, d_ff)
self._init_structure()
def _block_computation(self, block_id, x):
attention_output = self._multi_head_attention(block_id, x, x, x, connection_head=False, mask=False)
attention_output = L.LayerNormalization()(attention_output + x)
block_output = self._ffn(block_id, attention_output)
block_output = L.LayerNormalization()(block_output + attention_output)
return block_output
def call(self, x):
word_embed = self.word_embed(x)
word_embed = word_embed + self.pos_enc
block_output = word_embed
for block_id in range(self.num_blocks):
block_output = self._block_computation(block_id, block_output)
return block_output
class Decoder(Transformer):
def __init__(self, num_blocks, num_heads, vocab_size, seq_len, d_model, d_k, d_v, d_ff):
super(Decoder, self).__init__(num_blocks, num_heads, vocab_size, seq_len, d_model, d_k, d_v, d_ff)
self._init_structure(decoder_part=True)
self.logits = L.Dense(units=vocab_size)
def _block_computation(self, block_id, x, encoder_output):
attention_output = self._multi_head_attention(block_id, x, x, x, connection_head=False, mask=True)
attention_output = L.LayerNormalization()(attention_output + x)
connection_output = self._multi_head_attention(block_id, attention_output, encoder_output,
encoder_output, connection_head=True, mask=False)
connection_output = L.LayerNormalization()(connection_output + attention_output)
block_output = self._ffn(block_id, connection_output)
block_output = L.LayerNormalization()(block_output + connection_output)
return block_output
def call(self, x, encoder_output):
word_embed = self.word_embed(x)
word_embed = word_embed + self.pos_enc[:, :word_embed.shape[1], :]
block_output = word_embed
for block_id in range(self.num_blocks):
block_output = self._block_computation(block_id, block_output, encoder_output)
logits = self.logits(block_output)
return logits
def loss_function(labels, logits):
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
return tf.reduce_mean(tf.reduce_sum(loss, axis=1), axis=0)
```
### Define hyperparameters for the Transformer model
```
NUM_BLOCKS = 2
NUM_HEADS = 2
DIMENSION_MODEL = 32
DIMENSION_K = 16
DIMENSION_V = 16
DIMENSION_FF = 64
encoder = Encoder(num_blocks=NUM_BLOCKS, num_heads=NUM_HEADS, vocab_size=len(human_vocab), seq_len=Tx,
d_model=DIMENSION_MODEL, d_k=DIMENSION_K, d_v=DIMENSION_V, d_ff=DIMENSION_FF)
decoder = Decoder(num_blocks=NUM_BLOCKS, num_heads=NUM_HEADS, vocab_size=len(machine_vocab), seq_len=Ty,
d_model=DIMENSION_MODEL, d_k=DIMENSION_K, d_v=DIMENSION_V, d_ff=DIMENSION_FF)
epochs = 3
batch_size = 64
num_batches = X.shape[0]//batch_size if X.shape[0] % batch_size == 0 else X.shape[0]//batch_size + 1
data = tf.concat([X, Y], axis=1)
optimizer = tf.train.AdamOptimizer()
for e in range(epochs):
data = tf.random.shuffle(data)
X, Y = data[:, :Tx], data[:, Tx:]
pbar = tqdm.tqdm_notebook(range(0, num_batches), desc="Epoch " + str(e+1))
train_loss = 0
for it in pbar:
start = it*batch_size
end = (it+1)*batch_size
with tf.GradientTape() as tape:
encoder_output = encoder(X[start:end])
logits = decoder(Y[start:end, :-1], encoder_output)
loss = loss_function(Y[start:end, 1:], logits)
train_loss += loss
pbar.set_description("Epoch %s - Training loss: %f" % (e+1, (train_loss / (it+1))))
variables = encoder.variables + decoder.variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']
for example in EXAMPLES:
source = string_to_int(example, Tx, human_vocab)
source = np.array([source])
encoder_output = encoder(source)
sentence = [machine_vocab["#"]]
for t in range(Ty):
logits = decoder(np.array([sentence]), encoder_output)
prediction = tf.nn.softmax(logits, axis=-1)
prediction = np.argmax(prediction, axis=-1)
sentence.append(prediction[0][-1])
sequential_output = [inv_machine_vocab[s] for s in sentence[1:]]
parallel_output = [inv_machine_vocab[s] for s in prediction[0]]
print("source:", example)
print("sequential output:", ''.join(sequential_output))
print("parallel output:", ''.join(parallel_output))
print("-----------------------------------------------")
```
# Image classification via fine-tuning with EfficientNet
**Author:** [Yixing Fu](https://github.com/yixingfu)<br>
**Date created:** 2020/06/30<br>
**Last modified:** 2020/07/16<br>
**Description:** Use EfficientNet with weights pre-trained on imagenet for Stanford Dogs classification.
## Introduction: what is EfficientNet
EfficientNet, first introduced in [Tan and Le, 2019](https://arxiv.org/abs/1905.11946)
is among the most efficient models (i.e. requiring least FLOPS for inference)
that reaches State-of-the-Art accuracy on both
imagenet and common image classification transfer learning tasks.
The smallest base model is similar to [MnasNet](https://arxiv.org/abs/1807.11626), which
reached near-SOTA with a significantly smaller model. By introducing a heuristic way to
scale the model, EfficientNet provides a family of models (B0 to B7) that represents a
good combination of efficiency and accuracy on a variety of scales. Such a scaling
heuristic (compound scaling; see
[Tan and Le, 2019](https://arxiv.org/abs/1905.11946) for details) allows the
efficiency-oriented base model (B0) to surpass models at every scale, while avoiding
an extensive grid search of hyperparameters.
A summary of the latest updates on the model is available
[here](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet), where various
augmentation schemes and semi-supervised learning approaches are applied to further
improve the imagenet performance of the models. These extensions of the model can be used
by updating weights without changing model architecture.
## B0 to B7 variants of EfficientNet
*(This section provides some details on "compound scaling", and can be skipped
if you're only interested in using the models)*
Based on the [original paper](https://arxiv.org/abs/1905.11946), people may have the
impression that EfficientNet is a continuous family of models created by arbitrarily
choosing a scaling factor as in Eq. (3) of the paper. However, the choice of resolution,
depth and width is also restricted by many factors:
- Resolution: Resolutions not divisible by 8, 16, etc. cause zero-padding near boundaries
of some layers which wastes computational resources. This especially applies to smaller
variants of the model, hence the input resolution for B0 and B1 are chosen as 224 and
240.
- Depth and width: The building blocks of EfficientNet demands channel size to be
multiples of 8.
- Resource limit: Memory limitations may bottleneck resolution when depth
and width can still increase. In such a situation, increasing depth and/or
width while keeping resolution fixed can still improve performance.
As a result, the depth, width and resolution of each variant of the EfficientNet models
are hand-picked and proven to produce good results, though they may be significantly
off from the compound scaling formula.
Therefore, the Keras implementation (detailed below) only provides these 8 models, B0 to B7,
instead of allowing an arbitrary choice of width / depth / resolution parameters.
## Keras implementation of EfficientNet
An implementation of EfficientNet B0 to B7 has been shipped with tf.keras since TF2.3. To
use EfficientNetB0 for classifying 1000 classes of images from imagenet, run:
```python
from tensorflow.keras.applications import EfficientNetB0
model = EfficientNetB0(weights='imagenet')
```
This model takes input images of shape (224, 224, 3), and the input data should be in the range
[0, 255]. Normalization is included as part of the model.
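For instance, a hedged sketch of running the model above on raw [0, 255] pixel values (here a random placeholder image, since no preprocessing step is required):
```python
import numpy as np
from tensorflow.keras.applications.efficientnet import decode_predictions

dummy = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype("float32")  # raw pixel values
preds = model.predict(dummy)          # `model` from the snippet above
print(decode_predictions(preds, top=3))
```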
Because training EfficientNet on ImageNet takes a tremendous amount of resources and
relies on several techniques that are not part of the model architecture itself, the Keras
implementation by default loads pre-trained weights obtained via training with
[AutoAugment](https://arxiv.org/abs/1805.09501).
For B0 to B7 base models, the input shapes are different. Here is a list of input shape
expected for each model:
| Base model | resolution|
|----------------|-----|
| EfficientNetB0 | 224 |
| EfficientNetB1 | 240 |
| EfficientNetB2 | 260 |
| EfficientNetB3 | 300 |
| EfficientNetB4 | 380 |
| EfficientNetB5 | 456 |
| EfficientNetB6 | 528 |
| EfficientNetB7 | 600 |
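For example, to use the B3 variant at its native resolution, input images should be resized to 300x300 (a sketch; the pre-trained weights are downloaded on first use):
```python
from tensorflow.keras.applications import EfficientNetB3

model = EfficientNetB3(weights="imagenet")  # expects inputs of shape (300, 300, 3)
```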
When the model is intended for transfer learning, the Keras implementation
provides an option to remove the top layers:
```
model = EfficientNetB0(include_top=False, weights='imagenet')
```
This option excludes the final `Dense` layer that turns 1280 features on the penultimate
layer into prediction of the 1000 ImageNet classes. Replacing the top layer with custom
layers allows using EfficientNet as a feature extractor in a transfer learning workflow.
Another argument in the model constructor worth noting is `drop_connect_rate`, which controls
the dropout rate responsible for [stochastic depth](https://arxiv.org/abs/1603.09382).
This parameter serves as a toggle for extra regularization in fine-tuning, but does not
affect loaded weights. For example, when stronger regularization is desired, try:
```python
model = EfficientNetB0(weights='imagenet', drop_connect_rate=0.4)
```
The default value is 0.2.
## Example: EfficientNetB0 for Stanford Dogs.
EfficientNet is capable of a wide range of image classification tasks.
This makes it a good model for transfer learning.
As an end-to-end example, we will show using pre-trained EfficientNetB0 on
[Stanford Dogs](http://vision.stanford.edu/aditya86/ImageNetDogs/main.html) dataset.
```
# IMG_SIZE is determined by EfficientNet model choice
IMG_SIZE = 224
```
## Setup and data loading
This example requires TensorFlow 2.3 or above.
To use a TPU, the TPU runtime must match the currently running TensorFlow
version. If there is a mismatch, try:
```python
from cloud_tpu_client import Client
c = Client()
c.configure_tpu_version(tf.__version__, restart_type="always")
```
```
import tensorflow as tf
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
print("Running on TPU ", tpu.cluster_spec().as_dict()["worker"])
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
except ValueError:
print("Not connected to a TPU runtime. Using CPU/GPU strategy")
strategy = tf.distribute.MirroredStrategy()
```
### Loading data
Here we load data from [tensorflow_datasets](https://www.tensorflow.org/datasets)
(hereafter TFDS).
Stanford Dogs dataset is provided in
TFDS as [stanford_dogs](https://www.tensorflow.org/datasets/catalog/stanford_dogs).
It features 20,580 images that belong to 120 classes of dog breeds
(12,000 for training and 8,580 for testing).
By simply changing `dataset_name` below, you may also try this notebook for
other datasets in TFDS such as
[cifar10](https://www.tensorflow.org/datasets/catalog/cifar10),
[cifar100](https://www.tensorflow.org/datasets/catalog/cifar100),
[food101](https://www.tensorflow.org/datasets/catalog/food101),
etc. When the images are much smaller than the size of the EfficientNet input,
we can simply upsample the input images. It has been shown in
[Tan and Le, 2019](https://arxiv.org/abs/1905.11946) that the transfer learning
result is better with increased resolution, even if the input images remain small.
For TPU: if using TFDS datasets,
a [GCS bucket](https://cloud.google.com/storage/docs/key-terms#buckets)
location is required to save the datasets. For example:
```python
tfds.load(dataset_name, data_dir="gs://example-bucket/datapath")
```
Also, both the current environment and the TPU service account must have
proper [access](https://cloud.google.com/tpu/docs/storage-buckets#authorize_the_service_account)
to the bucket. Alternatively, for small datasets you may try loading data
into memory and using `tf.data.Dataset.from_tensor_slices()`.
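A minimal sketch of that in-memory alternative, using small placeholder arrays in place of real data:
```python
import numpy as np
import tensorflow as tf

# Hypothetical in-memory arrays standing in for a small dataset
images = np.zeros((8, 224, 224, 3), dtype="uint8")   # 224 = IMG_SIZE for B0
labels = np.array([0, 1, 2, 3, 0, 1, 2, 3])

small_ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(4)
```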
```
import tensorflow_datasets as tfds
batch_size = 64
dataset_name = "stanford_dogs"
(ds_train, ds_test), ds_info = tfds.load(
dataset_name, split=["train", "test"], with_info=True, as_supervised=True
)
NUM_CLASSES = ds_info.features["label"].num_classes
```
When the dataset includes images of various sizes, we need to resize them to a
shared size. The Stanford Dogs dataset includes only images that are at least 200x200
pixels in size. Here we resize the images to the input size needed for EfficientNet.
```
size = (IMG_SIZE, IMG_SIZE)
ds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label))
ds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label))
```
### Visualizing the data
The following code shows the first 9 images with their labels.
```
import matplotlib.pyplot as plt
def format_label(label):
string_label = label_info.int2str(label)
return string_label.split("-")[1]
label_info = ds_info.features["label"]
for i, (image, label) in enumerate(ds_train.take(9)):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image.numpy().astype("uint8"))
plt.title("{}".format(format_label(label)))
plt.axis("off")
```
### Data augmentation
We can use preprocessing layers APIs for image augmentation.
```
from tensorflow.keras.layers.experimental import preprocessing
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
img_augmentation = Sequential(
[
preprocessing.RandomRotation(factor=0.15),
preprocessing.RandomTranslation(height_factor=0.1, width_factor=0.1),
preprocessing.RandomFlip(),
preprocessing.RandomContrast(factor=0.1),
],
name="img_augmentation",
)
```
This `Sequential` model object can be used both as a part of
the model we later build, and as a function to preprocess
data before feeding it into the model. Using it as a function makes
it easy to visualize the augmented images. Here we plot 9 examples
of the augmentation result for a given image.
```
for image, label in ds_train.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
aug_img = img_augmentation(tf.expand_dims(image, axis=0))
plt.imshow(aug_img[0].numpy().astype("uint8"))
plt.title("{}".format(format_label(label)))
plt.axis("off")
```
### Prepare inputs
Once we verify that the input data and augmentation are working correctly,
we prepare the dataset for training. The input data are resized to a uniform
`IMG_SIZE`. The labels are put into one-hot
(a.k.a. categorical) encoding. The dataset is batched.
Note: `prefetch` and `AUTOTUNE` may in some situations improve
performance, but this depends on the environment and the specific dataset used.
See this [guide](https://www.tensorflow.org/guide/data_performance)
for more information on data pipeline performance.
```
# One-hot / categorical encoding
def input_preprocess(image, label):
label = tf.one_hot(label, NUM_CLASSES)
return image, label
ds_train = ds_train.map(
input_preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
ds_train = ds_train.batch(batch_size=batch_size, drop_remainder=True)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)
ds_test = ds_test.map(input_preprocess)
ds_test = ds_test.batch(batch_size=batch_size, drop_remainder=True)
```
## Training a model from scratch
We build an EfficientNetB0 with 120 output classes, initialized from scratch.
Note: the accuracy will increase very slowly and the model may overfit.
```
from tensorflow.keras.applications import EfficientNetB0
with strategy.scope():
inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
x = img_augmentation(inputs)
outputs = EfficientNetB0(include_top=True, weights=None, classes=NUM_CLASSES)(x)
model = tf.keras.Model(inputs, outputs)
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
model.summary()
epochs = 40 # @param {type: "slider", min:10, max:100}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2)
```
Training the model is relatively fast (taking only 20 seconds per epoch on the TPUv2 that is
available on Colab). This might make it sound easy to simply train EfficientNet on any
dataset from scratch. However, training EfficientNet on smaller datasets,
especially those with lower resolution like CIFAR-100, faces the significant challenge of
overfitting.
Hence training from scratch requires a very careful choice of hyperparameters, and it is
difficult to find suitable regularization. It would also be much more demanding in resources.
Plotting the training and validation accuracy
makes it clear that validation accuracy stagnates at a low value.
```
import matplotlib.pyplot as plt
def plot_hist(hist):
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("model accuracy")
plt.ylabel("accuracy")
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left")
plt.show()
plot_hist(hist)
```
## Transfer learning from pre-trained weights
Here we initialize the model with pre-trained ImageNet weights,
and we fine-tune it on our own dataset.
```
from tensorflow.keras.layers.experimental import preprocessing
def build_model(num_classes):
inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
x = img_augmentation(inputs)
model = EfficientNetB0(include_top=False, input_tensor=x, weights="imagenet")
# Freeze the pretrained weights
model.trainable = False
# Rebuild top
x = layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
x = layers.BatchNormalization()(x)
top_dropout_rate = 0.2
x = layers.Dropout(top_dropout_rate, name="top_dropout")(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax", name="pred")(x)
# Compile
model = tf.keras.Model(inputs, outputs, name="EfficientNet")
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
return model
```
The first step to transfer learning is to freeze all layers and train only the top
layers. For this step, a relatively large learning rate (1e-2) can be used.
Note that validation accuracy and loss will usually be better than training
accuracy and loss. This is because the regularization is strong, which only
suppresses training-time metrics.
Note that the convergence may take up to 50 epochs depending on choice of learning rate.
If image augmentation layers were not
applied, the validation accuracy may only reach ~60%.
```
with strategy.scope():
model = build_model(num_classes=NUM_CLASSES)
epochs = 25 # @param {type: "slider", min:8, max:80}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2)
plot_hist(hist)
```
The second step is to unfreeze a number of layers and fit the model using a smaller
learning rate. In this example we show unfreezing all layers, but depending on the
specific dataset it may be desirable to only unfreeze a fraction of all layers.
When feature extraction with the
pretrained model works well enough, this step gives only a very limited gain in
validation accuracy. In our case we only see a small improvement,
as ImageNet pretraining already exposed the model to a good amount of dogs.
On the other hand, when we use pretrained weights on a dataset that is more different
from ImageNet, this fine-tuning step can be crucial, as the feature extractor also
needs to be adjusted by a considerable amount. Such a situation can be demonstrated
by choosing the CIFAR-100 dataset instead, where fine-tuning boosts validation accuracy
by about 10% to pass 80% on `EfficientNetB0`.
In such a case the convergence may take more than 50 epochs.
A side note on freezing/unfreezing models: setting `trainable` of a `Model` will
simultaneously set all layers belonging to the `Model` to the same `trainable`
attribute. Each layer is trainable only if both the layer itself and the model
containing it are trainable. Hence when we need to partially freeze/unfreeze
a model, we need to make sure the `trainable` attribute of the model is set
to `True`.
```
def unfreeze_model(model):
# We unfreeze the top 20 layers while leaving BatchNorm layers frozen
for layer in model.layers[-20:]:
if not isinstance(layer, layers.BatchNormalization):
layer.trainable = True
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
unfreeze_model(model)
epochs = 10 # @param {type: "slider", min:8, max:50}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2)
plot_hist(hist)
```
### Tips for fine tuning EfficientNet
On unfreezing layers:
- The `BatchNormalization` layers need to be kept frozen
([more details](https://keras.io/guides/transfer_learning/)).
If they are also turned to trainable, the
first epoch after unfreezing will significantly reduce accuracy.
- In some cases it may be beneficial to open up only a portion of layers instead of
unfreezing all. This will make fine-tuning much faster when going to larger models like
B7.
- Each block needs to be all turned on or off. This is because the architecture includes
a shortcut from the first layer to the last layer for each block. Not respecting blocks
also significantly harms the final performance (see the sketch after this list).
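Here is a hedged sketch of block-wise unfreezing; `unfreeze_block` is a hypothetical helper, and it assumes the bundled Keras EfficientNet keeps the block prefix (e.g. `block7`) in its layer names:
```python
def unfreeze_block(model, block_prefix="block7"):
    # Unfreeze every layer of one block, keeping BatchNorm layers frozen
    for layer in model.layers:
        if layer.name.startswith(block_prefix) and not isinstance(
            layer, layers.BatchNormalization
        ):
            layer.trainable = True
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )

unfreeze_block(model)
```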
Some other tips for utilizing EfficientNet:
- Larger variants of EfficientNet do not guarantee improved performance, especially for
tasks with less data or fewer classes. In such a case, the larger the EfficientNet variant
chosen, the harder it is to tune hyperparameters.
- EMA (Exponential Moving Average) is very helpful in training EfficientNet from scratch,
but not so much for transfer learning.
- Do not use the RMSprop setup as in the original paper for transfer learning. The
momentum and learning rate are too high for transfer learning. It will easily corrupt the
pretrained weight and blow up the loss. A quick check is to see if loss (as categorical
cross entropy) is getting significantly larger than log(NUM_CLASSES) after the same
epoch. If so, the initial learning rate/momentum is too high.
- A smaller batch size benefits validation accuracy, possibly by effectively providing
regularization.
## Using the latest EfficientNet weights
Since the initial paper, EfficientNet has been improved by various methods for data
preprocessing and for using unlabelled data to enhance learning results. These
improvements are relatively hard and computationally costly to reproduce, and require
extra code; but the weights are readily available in the form of TF checkpoint files. The
model architecture has not changed, so loading the improved checkpoints is possible.
To use a checkpoint provided at
[the official model repository](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet), first
download the checkpoint. As an example, here we download the noisy-student version of B1:
```
!wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet\
/noisystudent/noisy_student_efficientnet-b1.tar.gz
!tar -xf noisy_student_efficientnet-b1.tar.gz
```
Then use the script `efficientnet_weight_update_util.py` to convert the ckpt file to an h5 file.
```
!python efficientnet_weight_update_util.py --model b1 --notop --ckpt \
efficientnet-b1/model.ckpt --o efficientnetb1_notop.h5
```
When creating the model, use the following to load the new weights (note that the converted B1 weights must be loaded into the matching B1 architecture):
```python
model = EfficientNetB1(weights="efficientnetb1_notop.h5", include_top=False)
```
```
# Problem solving using Uniform Cost Search Algorithm
def min_cost_and_path_to_travel(current_node):
queue = {} # Used to store the path along with the cost
    min_fare = 0
    current_node_cost = 0 # Covers the edge case where the initial node is already the goal 'G'
while(current_node[-1] != 'G'): # Condition to check whether we reached the destination node or not
expansion_node = current_node[-1] # Contains the node that is to be further expanded
current_node_expansion = {} # The dictionary which records the path and cost of the node expanded
for key, value in graph[expansion_node].items():
if key not in current_node: # Condition to check if we are visiting a node that is already visited
current_node_expansion[current_node+key] = graph[expansion_node][key] + min_fare # Calculating the total cost to travel from initial node to the current node
### Updating to queue
for key, value in current_node_expansion.items():
last_node_dict = {key[-1]:key for key in list(queue.keys())} # Mapping the last node explored and the best path to reach the last node explored
if key[-1] in last_node_dict.keys():
if current_node_expansion[key] < queue[last_node_dict[key[-1]]]: # Checking whether the cost to reach the current_node is less than updated in queue
del queue[last_node_dict[key[-1]]]
queue[key] = current_node_expansion[key] # Updating the queue with least cost
else:
queue[key] = value
min_fare = min(queue.values()) # Finding the min fare to travel from initial node to current node
for key in queue.keys():
if queue[key] == min_fare:
current_node = key
current_node_cost = queue[current_node]
del queue[current_node] # Deleting the node that is going to be explored further
break
return (current_node, current_node_cost) # Returning the path and the min cost to travel from initial to goal node
'''
We are using a nested dictionary to represent the graph
{'A':{'B':500,'D':850,'C':650}} denotes that A is connected to nodes B, D and C
and the cost to travel between these nodes from A is 500, 850 and 650 respectively
'''
graph = {'A':{'B':500,'D':850,'C':650},
'B':{'A':500, 'C':1000,'D':590,'G':1250},
'C':{'A':650, 'B':1000, 'D':600},
'D':{'A':850, 'B':590, 'C':600,'E':700,'G':1500},
'E':{'D':700,'G':2500},
'G':{'B':1250,'D':1500,'E':2500}}
initial_node = input("Enter initial node:")
if initial_node not in graph.keys():
    print('Initial node is incorrect!!!')
else:
    path, cost = min_cost_and_path_to_travel(initial_node)
    print("Path taken by driver based on minimum cost of travel is {}".format(path))
    print("Minimum cost required to travel from {} to G is Rs {}".format(initial_node, cost))
```
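With initial node `A`, for instance, the search should return the path `ABG` with a minimum cost of Rs 1750 (A → B → G), which beats the alternatives A → D → G (Rs 2350) and A → C → D → G (Rs 2750).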
# Slow-waves detection
This notebook demonstrates how to use YASA to automatically detect slow waves on single-channel EEG.
Please make sure to install the latest version of YASA first by typing the following line in your terminal or command prompt:
`pip install --upgrade yasa`
```
import yasa
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from mne.filter import filter_data
sns.set(font_scale=1.2)
```
## Data loading
Let's load 30 seconds of N3 sleep on a single frontal EEG channel sampled at 100 Hz.
```
# Load data
data = np.load('data_full_6hrs_100Hz_Cz+Fz+Pz.npz').get('data')
ch_names = ['Cz', 'Fz', 'Pz']
hypno = np.load('data_full_6hrs_100Hz_hypno.npz').get('hypno')
# Keep only Fz, during an N3 sleep period with (huge) slow-waves
data = data[1, 669000:672000].astype(np.float64)
hypno = hypno[669000:672000]
# Define sampling frequency and time vector
sf = 100.
times = np.arange(data.size) / sf
# Plot the signal
fig, ax = plt.subplots(1, 1, figsize=(16, 4))
plt.plot(times, data, lw=1.5, color='k')
plt.xlabel('Time (seconds)')
plt.ylabel('Amplitude (uV)')
plt.xlim([times.min(), times.max()])
plt.title('N3 sleep EEG data')
sns.despine()
```
## Apply the detection using absolute thresholds (default)
We use the [yasa.sw_detect](https://raphaelvallat.com/yasa/build/html/generated/yasa.sw_detect.html#yasa.sw_detect) function to apply the detection. The different input and output parameters are described in the [documentation of the function](https://raphaelvallat.com/yasa/build/html/generated/yasa.sw_detect.html#yasa.sw_detect).
**Note**: as explained below, you can also use relative amplitude thresholds (e.g. z-score or percentiles) instead of absolute physical thresholds (in uV).
```
from yasa import sw_detect
# Short version
# sw = sw_detect(data, sf, hypno=hypno)
# Long version (with all the optional implicit arguments)
sw = sw_detect(data, sf, hypno=hypno, include=(2, 3), freq_sw=(0.3, 1.5),
dur_neg=(0.3, 1.5), dur_pos=(0.1, 1), amp_neg=(40, 200),
amp_pos=(10, 150), amp_ptp=(75, 350), coupling=False,
remove_outliers=False, verbose=False)
# To get the full detection dataframe, we use the .summary() method
events = sw.summary()
events.round(2)
```
The output of the slow-waves detection is a [SWResults](https://raphaelvallat.com/yasa/build/html/generated/yasa.SWResults.html#yasa.SWResults) class, which comes with some pre-compiled functions (also called methods). For instance, the [summary](https://raphaelvallat.com/yasa/build/html/generated/yasa.SWResults.html#yasa.SWResults.summary) method returns a [pandas DataFrame](http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe) with all the detected slow-waves and their properties. The different slow-waves properties are explained in the figure below:
<img src="https://raw.githubusercontent.com/raphaelvallat/yasa/master/docs/pictures/slow_waves.png" alt="slow-waves" style="width: 600px;"/>
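If you just want to list which of these properties were computed, you can inspect the columns of the summary DataFrame directly. A minimal pandas sketch, using the `events` DataFrame created above:
```
# List the slow-wave properties returned by sw.summary()
print(events.columns.tolist())
# Quick look at the distribution of durations (s) and peak-to-peak amplitudes (uV)
events[['Duration', 'PTP']].describe()
```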
Using the ``grp_chan`` argument of the [summary](https://raphaelvallat.com/yasa/build/html/generated/yasa.SWResults.html#yasa.SWResults.summary) method, we can also easily get the average parameters of all detected slow-waves:
```
sw.summary(grp_chan=True, aggfunc='mean')
```
### Plot the detected slow-waves
First, we need to create a boolean array of the same size as the data, indicating for each sample whether it is part of a detected slow-wave. This is done using the [get_mask](https://raphaelvallat.com/yasa/build/html/generated/yasa.SWResults.html#yasa.SWResults.get_mask) method.
```
# Let's get a boolean mask indicating, for each sample, whether it is part of a detected slow-wave
mask = sw.get_mask()
mask
sw_highlight = data * mask
sw_highlight[sw_highlight == 0] = np.nan
plt.figure(figsize=(16, 4.5))
plt.plot(times, data, 'k')
plt.plot(times, sw_highlight, 'indianred')
plt.plot(events['NegPeak'], sw_highlight[(events['NegPeak'] * sf).astype(int)], 'bo', label='Negative peaks')
plt.plot(events['PosPeak'], sw_highlight[(events['PosPeak'] * sf).astype(int)], 'go', label='Positive peaks')
plt.plot(events['Start'], data[(events['Start'] * sf).astype(int)], 'ro', label='Start')
plt.xlabel('Time (seconds)')
plt.ylabel('Amplitude (uV)')
plt.xlim([0, times[-1]])
plt.title('N3 sleep EEG data')
plt.legend()
sns.despine()
```
You may notice that some of the peaks and start points look a bit out of sync. This is because the slow-wave detection is performed on a bandpass-filtered signal and not on the raw signal. Let's do the same plot with the filtered signal:
```
# The 1D filtered data can be obtained using:
data_filt = np.squeeze(sw._data_filt)
data_filt
sw_highlight = data_filt * mask
sw_highlight[sw_highlight == 0] = np.nan
plt.figure(figsize=(16, 4.5))
plt.plot(times, data_filt, 'k')
plt.plot(times, sw_highlight, 'indianred')
plt.plot(events['NegPeak'], events['ValNegPeak'], 'bo', label='Negative peaks')
plt.plot(events['PosPeak'], events['ValPosPeak'], 'go', label='Positive peaks')
plt.plot(events['Start'], np.squeeze(sw._data_filt)[(events['Start'] * sf).astype(int)], 'ro', label='Start')
plt.xlabel('Time (seconds)')
plt.ylabel('Amplitude (uV)')
plt.xlim([0, times[-1]])
plt.title('N3 sleep EEG data (filtered)')
plt.legend()
sns.despine()
```
Finally, we can plot an average template of all detected slow-waves with the [plot_average](https://raphaelvallat.com/yasa/build/html/generated/yasa.SWResults.html#yasa.SWResults.plot_average) method:
```
sw.plot_average(time_before=0.4, time_after=0.8, center="NegPeak");
```
### Computation time
```
%timeit sw_detect(data, sf)
# Line profiling
# %load_ext line_profiler
# %lprun -f sw_detect sw_detect(data, sf)
```
*****
## Using relative thresholds
For a variety of reasons, one may prefer to use relative amplitude thresholds rather than absolute (physical) units. For instance, older adults typically have lower-amplitude slow-waves, for which the default thresholds defined by the AASM may not work properly (for more details, refer to [Muehlroth & Werkle-Bergner, 2020](https://doi.org/10.1111/psyp.13523)).
The script below demonstrates how to apply the detection on previously-normalized data. The amplitude thresholds are defined in terms of z-scores, i.e. the number of standard deviations from the mean.
```
from scipy.stats import zscore
# Z-score the data
data_zscored = zscore(data)
# Detect all events with a relative peak-to-peak
# amplitude between 3 and 10 z-scores, and positive/negative
# peak amplitudes > 1 standard deviation
sw = sw_detect(data_zscored, sf,
amp_neg=(1, None),
amp_pos=(1, None),
amp_ptp=(3, 10))
sw.summary().round(2)
```
Even without z-scoring the data, one can directly use a percentile threshold on the raw data to determine the amplitude. In the code below, we show how to detect any peaks that exceed the 75th percentile of the raw data, e.g. from [Helfrich et al. 2018](https://www.ncbi.nlm.nih.gov/pubmed/29249289):
> *(1) Slow oscillations: In brief, we first filtered the continuous signal between 0.16 and 1.25 Hz and detected all the zero crossings. Then events were selected based on time (0.8 – 2 s duration) and amplitude (75% percentile) criteria.*
Also note how we can disable the positive and negative amplitude thresholds by simply passing `None`:
```
thresh = np.percentile(np.abs(data), 75)
print('75th percentile threshold: %.2f uV' % thresh)
sw = sw_detect(data, sf,
amp_neg=(None, None), # Disabled
amp_pos=(None, None), # Disabled
amp_ptp=(thresh, np.inf) # No upper threshold: np.inf
)
sw.summary().round(2)
```
**************************
## Step-by-step description of the algorithm
The slow-waves detection algorithm of YASA is a custom adaptation from:
- Massimini, M., Huber, R., Ferrarelli, F., Hill, S. & Tononi, G. (2004). [The sleep slow oscillation as a traveling wave](https://doi.org/10.1523/JNEUROSCI.1318-04.2004). *J. Neurosci.*.
- Carrier, J. et al. (2011). [Sleep slow wave changes during the middle years of life](https://doi.org/10.1111/j.1460-9568.2010.07543.x). *Eur. J. Neurosci*.
**The steps are:**
1. Bandpass filtering between 0.3 and 1.5 Hz using a [FIR filter](https://martinos.org/mne/stable/auto_tutorials/plot_background_filtering.html#designing-fir-filters) with a transition band of 0.2 Hz.
2. Detection of all the negative peaks in the filtered signal with an amplitude between -40 and -200 $\mu$V, and all the positive peaks with an amplitude between 10 and 150 $\mu$V. This is done using the [scipy.signal.find_peaks](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks.html) function.
3. For each negative peak (= slow-wave trough), the nearest following positive peak is found and several metrics are computed, such as the peak-to-peak amplitude, durations of the negative and positive phase, frequency, etc.
4. A set of logical thresholds are applied to determine the *true* slow-waves.
5. A pandas DataFrame is created, where each row is a detected slow-wave and each column a property of this slow-wave. An optional automatic outlier rejection is applied on this dataframe to further remove abnormal slow-waves.
### 1. Bandpass filtering
```
# Slow-waves FIR bandpass filter
freq_sw = (0.3, 2)
data_filt = filter_data(data, sf, freq_sw[0], freq_sw[1], method='fir', verbose=1,
l_trans_bandwidth=0.2, h_trans_bandwidth=0.2)
# Plot the signal
fig, ax = plt.subplots(1, 1, figsize=(16, 4))
plt.plot(times, data_filt, lw=1.5, color='k')
plt.xlabel('Time (seconds)')
plt.ylabel('Amplitude (uV)')
plt.xlim([times.min(), times.max()])
plt.axhline(0, color='coral', ls=':', lw=2)
plt.title('Filtered data')
sns.despine()
```
### 2. Peaks detection
```
from scipy.signal import find_peaks
# Negative peaks with value comprised between -40 to -200 uV
idx_neg_peaks, _ = find_peaks(-1 * data_filt, height=(40, 200))
# Positive peaks with values comprised between 10 to 150 uV
idx_pos_peaks, _ = find_peaks(data_filt, height=(10, 150))
display(idx_neg_peaks)
display(idx_pos_peaks)
# For each negative peak, we find the closest following positive peak
pk_sorted = np.searchsorted(idx_pos_peaks, idx_neg_peaks)
closest_pos_peaks = idx_pos_peaks[pk_sorted] - idx_neg_peaks
closest_pos_peaks = closest_pos_peaks[np.nonzero(closest_pos_peaks)]
idx_pos_peaks = idx_neg_peaks + closest_pos_peaks
idx_pos_peaks
```
### 3. Amplitude and duration criteria
```
# Now we check that the total PTP amplitude is within our bounds (75 to 350 uV)
sw_ptp = np.abs(data_filt[idx_neg_peaks]) + data_filt[idx_pos_peaks]
good_ptp = np.logical_and(sw_ptp > 75, sw_ptp < 350)
display(np.round(sw_ptp))
display(good_ptp)
# Remove the slow-waves with peak-to-peak amplitude outside the bounds
sw_ptp = sw_ptp[good_ptp]
idx_neg_peaks = idx_neg_peaks[good_ptp]
idx_pos_peaks = idx_pos_peaks[good_ptp]
idx_neg_peaks
# Then we check the negative and positive phase duration. To do so,
# we first need to compute the zero crossings of the filtered signal:
zero_crossings = yasa.others._zerocrossings(data_filt)
zero_crossings
fig, ax = plt.subplots(1, 1, figsize=(16, 4.5))
plt.plot(times, data_filt, lw=1.5, color='k')
plt.plot(times[zero_crossings], data_filt[zero_crossings], 'ro', label='Zero crossing')
plt.plot(times[idx_neg_peaks], data_filt[idx_neg_peaks], 'bo', label='Negative peaks')
plt.plot(times[idx_pos_peaks], data_filt[idx_pos_peaks], 'go', label='Positive peaks')
plt.xlabel('Time (seconds)')
plt.ylabel('Amplitude (uV)')
plt.xlim([times.min(), times.max()])
plt.title('Filtered data')
plt.legend()
sns.despine()
# Safety check: Make sure that there is a zero-crossing after the last detected peak
if zero_crossings[-1] < max(idx_pos_peaks[-1], idx_neg_peaks[-1]):
# If not, append the index of the last peak
zero_crossings = np.append(zero_crossings,
max(idx_pos_peaks[-1], idx_neg_peaks[-1]))
# For each negative peak, we find the previous and following zero-crossings
neg_sorted = np.searchsorted(zero_crossings, idx_neg_peaks)
previous_neg_zc = zero_crossings[neg_sorted - 1] - idx_neg_peaks
following_neg_zc = zero_crossings[neg_sorted] - idx_neg_peaks
# And from that we calculate the duration of the negative phase
neg_phase_dur = (np.abs(previous_neg_zc) + following_neg_zc) / sf
neg_phase_dur
# For each positive peak, we find the previous and following zero-crossings
pos_sorted = np.searchsorted(zero_crossings, idx_pos_peaks)
previous_pos_zc = zero_crossings[pos_sorted - 1] - idx_pos_peaks
following_pos_zc = zero_crossings[pos_sorted] - idx_pos_peaks
# And from that we calculate the duration of the positive phase
pos_phase_dur = (np.abs(previous_pos_zc) + following_pos_zc) / sf
pos_phase_dur
# Now we can start computing the properties of each detected slow-waves
sw_start = times[idx_neg_peaks + previous_neg_zc]
sw_end = times[idx_pos_peaks + following_pos_zc]
sw_dur = sw_end - sw_start # Same as pos_phase_dur + neg_phase_dur
sw_midcrossing = times[idx_neg_peaks + following_neg_zc]
sw_idx_neg, sw_idx_pos = times[idx_neg_peaks], times[idx_pos_peaks]
sw_slope = sw_ptp / (sw_midcrossing - sw_idx_neg) # Slope between peak trough and midcrossing
# Finally we apply a set of logical thresholds to exclude "bad" slow waves
good_sw = np.logical_and.reduce((
# Data edges
previous_neg_zc != 0,
following_neg_zc != 0,
previous_pos_zc != 0,
following_pos_zc != 0,
# Duration criteria
neg_phase_dur > 0.3,
neg_phase_dur < 1.5,
pos_phase_dur > 0.1,
pos_phase_dur < 1,
# Sanity checks
sw_midcrossing > sw_start,
sw_midcrossing < sw_end,
sw_slope > 0,
))
good_sw
```
### 4. Dataframe creation
```
# Create the dataframe
events = pd.DataFrame({'Start': sw_start,
'NegPeak': sw_idx_neg,
'MidCrossing': sw_midcrossing,
'PosPeak': sw_idx_pos,
'End': sw_end,
'Duration': sw_dur,
'ValNegPeak': data_filt[idx_neg_peaks],
'ValPosPeak': data_filt[idx_pos_peaks],
'PTP': sw_ptp,
'Slope': sw_slope,
'Frequency': 1 / sw_dur,
})[good_sw]
# Remove all duplicates and reset index
events.drop_duplicates(subset=['Start'], inplace=True, keep=False)
events.drop_duplicates(subset=['End'], inplace=True, keep=False)
events.reset_index(drop=True, inplace=True)
events.round(3)
```
**********************
## Appendix
### 1. Display the time points in HH:MM:SS format
```
for c in ['Start', 'NegPeak', 'MidCrossing', 'PosPeak', 'End']:
events[c] = pd.to_timedelta(events[c], unit='s').dt.round('s')
events.head()
```
### 2. Get additional information with logging
YASA uses the [logging](https://docs.python.org/3/library/logging.html) module to selectively print relevant messages. The default level of the logger is set to "WARNING", which means that a message will only be displayed if a warning occurs. However, you can easily set this parameter to "INFO" to get some relevant infos about the detection pipeline and the data.
This can be useful to debug the detection and/or if you feel that the detection is not working well on your data.
```
yasa.sw_detect(data, sf, verbose='INFO').summary().head()
```
## Quick Run
This notebook is publicly available for any use as part of our data imputation project. Please click [**transdim**](https://github.com/xinychen/transdim).
We start by importing the necessary dependencies. We will make use of `numpy` and `scipy`.
```
import numpy as np
from numpy.linalg import inv as inv
```
# Part 1: Matrix Computation Concepts
## 1) Kronecker product
- **Definition**:
Given two matrices $A\in\mathbb{R}^{m_1\times n_1}$ and $B\in\mathbb{R}^{m_2\times n_2}$, then, the **Kronecker product** between these two matrices is defined as
$$A\otimes B=\left[ \begin{array}{cccc} a_{11}B & a_{12}B & \cdots & a_{1n_1}B \\ a_{21}B & a_{22}B & \cdots & a_{2n_1}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m_11}B & a_{m_12}B & \cdots & a_{m_1n_1}B \\ \end{array} \right]$$
where the symbol $\otimes$ denotes the Kronecker product, and the size of the resulting $A\otimes B$ is $(m_1m_2)\times (n_1n_2)$ (i.e., $m_1m_2$ rows and $n_1n_2$ columns).
- **Example**:
If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]$ and $B=\left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10 \\ \end{array} \right]$, then, we have
$$A\otimes B=\left[ \begin{array}{cc} 1\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 2\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ 3\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 4\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ \end{array} \right]$$
$$=\left[ \begin{array}{cccccc} 5 & 6 & 7 & 10 & 12 & 14 \\ 8 & 9 & 10 & 16 & 18 & 20 \\ 15 & 18 & 21 & 20 & 24 & 28 \\ 24 & 27 & 30 & 32 & 36 & 40 \\ \end{array} \right]\in\mathbb{R}^{4\times 6}.$$
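As a quick numerical check of this example, NumPy's built-in `np.kron` computes the Kronecker product directly (the other operations below each come with their own small code cell):
```
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6, 7], [8, 9, 10]])

# Each entry a_ij of A scales a full copy of B
print(np.kron(A, B))
print('size:', np.kron(A, B).shape)  # (4, 6)
```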
## 2) Khatri-Rao product (`kr_prod`)
- **Definition**:
Given two matrices $A=\left( \boldsymbol{a}_1,\boldsymbol{a}_2,...,\boldsymbol{a}_r \right)\in\mathbb{R}^{m\times r}$ and $B=\left( \boldsymbol{b}_1,\boldsymbol{b}_2,...,\boldsymbol{b}_r \right)\in\mathbb{R}^{n\times r}$ with the same number of columns, the **Khatri-Rao product** (or **column-wise Kronecker product**) between $A$ and $B$ is given as follows,
$$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2,...,\boldsymbol{a}_r\otimes \boldsymbol{b}_r \right)\in\mathbb{R}^{(mn)\times r},$$
where the symbol $\odot$ denotes Khatri-Rao product, and $\otimes$ denotes Kronecker product.
- **Example**:
If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]=\left( \boldsymbol{a}_1,\boldsymbol{a}_2 \right) $ and $B=\left[ \begin{array}{cc} 5 & 6 \\ 7 & 8 \\ 9 & 10 \\ \end{array} \right]=\left( \boldsymbol{b}_1,\boldsymbol{b}_2 \right) $, then, we have
$$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2 \right) $$
$$=\left[ \begin{array}{cc} \left[ \begin{array}{c} 1 \\ 3 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 5 \\ 7 \\ 9 \\ \end{array} \right] & \left[ \begin{array}{c} 2 \\ 4 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 6 \\ 8 \\ 10 \\ \end{array} \right] \\ \end{array} \right]$$
$$=\left[ \begin{array}{cc} 5 & 12 \\ 7 & 16 \\ 9 & 20 \\ 15 & 24 \\ 21 & 32 \\ 27 & 40 \\ \end{array} \right]\in\mathbb{R}^{6\times 2}.$$
```
def kr_prod(a, b):
return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8], [9, 10]])
print(kr_prod(A, B))
```
## 3) CP decomposition (`cp_combine`)
- **Definition**:
The CP decomposition factorizes a tensor into a sum of outer products of vectors. For example, for a third-order tensor $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$, the CP decomposition can be written as
$$\hat{\mathcal{Y}}=\sum_{s=1}^{r}\boldsymbol{u}_{s}\circ\boldsymbol{v}_{s}\circ\boldsymbol{x}_{s},$$
or element-wise,
$$\hat{y}_{ijt}=\sum_{s=1}^{r}u_{is}v_{js}x_{ts},\forall (i,j,t),$$
where vectors $\boldsymbol{u}_{s}\in\mathbb{R}^{m},\boldsymbol{v}_{s}\in\mathbb{R}^{n},\boldsymbol{x}_{s}\in\mathbb{R}^{f}$ are columns of factor matrices $U\in\mathbb{R}^{m\times r},V\in\mathbb{R}^{n\times r},X\in\mathbb{R}^{f\times r}$, respectively. The symbol $\circ$ denotes vector outer product.
- **Example**:
Given matrices $U=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]\in\mathbb{R}^{2\times 2}$, $V=\left[ \begin{array}{cc} 1 & 3 \\ 2 & 4 \\ 5 & 6 \\ \end{array} \right]\in\mathbb{R}^{3\times 2}$ and $X=\left[ \begin{array}{cc} 1 & 5 \\ 2 & 6 \\ 3 & 7 \\ 4 & 8 \\ \end{array} \right]\in\mathbb{R}^{4\times 2}$, then, setting $\hat{\mathcal{Y}}=\sum_{s=1}^{r}\boldsymbol{u}_{s}\circ\boldsymbol{v}_{s}\circ\boldsymbol{x}_{s}$, we have
$$\hat{Y}_1=\hat{\mathcal{Y}}(:,:,1)=\left[ \begin{array}{ccc} 31 & 42 & 65 \\ 63 & 86 & 135 \\ \end{array} \right],$$
$$\hat{Y}_2=\hat{\mathcal{Y}}(:,:,2)=\left[ \begin{array}{ccc} 38 & 52 & 82 \\ 78 & 108 & 174 \\ \end{array} \right],$$
$$\hat{Y}_3=\hat{\mathcal{Y}}(:,:,3)=\left[ \begin{array}{ccc} 45 & 62 & 99 \\ 93 & 130 & 213 \\ \end{array} \right],$$
$$\hat{Y}_4=\hat{\mathcal{Y}}(:,:,4)=\left[ \begin{array}{ccc} 52 & 72 & 116 \\ 108 & 152 & 252 \\ \end{array} \right].$$
```
def cp_combine(U, V, X):
return np.einsum('is, js, ts -> ijt', U, V, X)
U = np.array([[1, 2], [3, 4]])
V = np.array([[1, 3], [2, 4], [5, 6]])
X = np.array([[1, 5], [2, 6], [3, 7], [4, 8]])
print(cp_combine(U, V, X))
print()
print('tensor size:')
print(cp_combine(U, V, X).shape)
```
## 4) Tensor Unfolding (`ten2mat`) and Matrix Folding (`mat2ten`)
Using numpy reshape to perform 3rd rank tensor unfold operation. [[**link**](https://stackoverflow.com/questions/49970141/using-numpy-reshape-to-perform-3rd-rank-tensor-unfold-operation)]
```
import numpy as np
def ten2mat(tensor, mode):
return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F')
X = np.array([[[1, 2, 3, 4], [3, 4, 5, 6]], [[5, 6, 7, 8], [7, 8, 9, 10]], [[9, 10, 11, 12], [11, 12, 13, 14]]])
print('tensor size:')
print(X.shape)
print('original tensor:')
print(X)
print()
print('(1) mode-1 tensor unfolding:')
print(ten2mat(X, 0))
print()
print('(2) mode-2 tensor unfolding:')
print(ten2mat(X, 1))
print()
print('(3) mode-3 tensor unfolding:')
print(ten2mat(X, 2))
def mat2ten(mat, tensor_size, mode):
index = list()
index.append(mode)
for i in range(tensor_size.shape[0]):
if i != mode:
index.append(i)
return np.moveaxis(np.reshape(mat, list(tensor_size[index]), order = 'F'), 0, mode)
```
# Part 2: Temporal Regularized Tensor Factorization (TRTF)
```
def TRTF(dense_tensor, sparse_tensor, U, V, X, theta, time_lags, multi_steps,
lambda_u, lambda_v, lambda_ar, eta, lambda_theta, maxiter):
dim1, dim2, dim3 = dense_tensor.shape
binary_tensor = np.zeros((dim1, dim2, dim3))
position = np.where(sparse_tensor > 0)
binary_tensor[position] = 1
pos = np.where((dense_tensor > 0) & (sparse_tensor == 0))
d = len(time_lags)
rank = U.shape[1]
for iters in range(maxiter):
var1 = kr_prod(X, V).T
var2 = kr_prod(var1, var1)
var3 = np.matmul(var2, ten2mat(binary_tensor, 0).T).reshape([rank, rank, dim1]) + np.dstack([lambda_u * np.eye(rank)] * dim1)
var4 = np.matmul(var1, ten2mat(sparse_tensor, 0).T)
for i in range(dim1):
var_Lambda1 = var3[ :, :, i]
inv_var_Lambda1 = np.linalg.inv((var_Lambda1 + var_Lambda1.T)/2)
U[i, :] = np.matmul(inv_var_Lambda1, var4[:, i])
var1 = kr_prod(X, U).T
var2 = kr_prod(var1, var1)
var3 = np.matmul(var2, ten2mat(binary_tensor, 1).T).reshape([rank, rank, dim2]) + np.dstack([lambda_v * np.eye(rank)] * dim2)
var4 = np.matmul(var1, ten2mat(sparse_tensor, 1).T)
for j in range(dim2):
var_Lambda1 = var3[ :, :, j]
inv_var_Lambda1 = np.linalg.inv((var_Lambda1 + var_Lambda1.T)/2)
V[j, :] = np.matmul(inv_var_Lambda1, var4[:, j])
var1 = kr_prod(V, U).T
var2 = kr_prod(var1, var1)
var3 = np.matmul(var2, ten2mat(binary_tensor, 2).T).reshape([rank, rank, dim3])
var4 = np.matmul(var1, ten2mat(sparse_tensor, 2).T)
for t in range(dim3):
Mt = np.zeros((rank, rank))
Nt = np.zeros(rank)
if t < max(time_lags):
Pt = np.zeros((rank, rank))
Qt = np.zeros(rank)
else:
Pt = np.eye(rank)
Qt = np.einsum('ij, ij -> j', theta, X[t - time_lags, :])
if t < dim3 - np.min(time_lags):
if t >= np.max(time_lags) and t < dim3 - np.max(time_lags):
index = list(range(0, d))
else:
index = list(np.where((t + time_lags >= np.max(time_lags)) & (t + time_lags < dim3)))[0]
for k in index:
theta0 = theta.copy()
theta0[k, :] = 0
Mt = Mt + np.diag(theta[k, :] ** 2);
Nt = Nt + np.multiply(theta[k, :], (X[t + time_lags[k], :]
- np.einsum('ij, ij -> j', theta0,
X[t + time_lags[k] - time_lags, :])))
X[t, :] = np.matmul(np.linalg.inv(var3[:, :, t]
+ lambda_ar * Pt + lambda_ar * Mt + lambda_ar * eta * np.eye(rank)),
(var4[:, t] + lambda_ar * Qt + lambda_ar * Nt))
elif t >= dim3 - np.min(time_lags):
X[t, :] = np.matmul(np.linalg.inv(var3[:, :, t]
+ lambda_ar * Pt +
lambda_ar * eta * np.eye(rank)), (var4[:, t] + Qt))
for k in range(d):
var1 = X[np.max(time_lags) - time_lags[k] : dim3 - time_lags[k], :]
var2 = np.linalg.inv(np.diag(np.einsum('ij, ij -> j', var1, var1))
+ (lambda_theta / lambda_ar) * np.eye(rank))
var3 = np.zeros(rank)
for t in range(np.max(time_lags) - time_lags[k], dim3 - time_lags[k]):
var3 += np.multiply(X[t, :], (X[t + time_lags[k], :]
- np.einsum('ij, ij -> j', theta, X[t + time_lags[k] - time_lags, :])
+ np.multiply(theta[k, :], X[t, :])))
theta[k, :] = np.matmul(var2, var3)
tensor_hat = cp_combine(U, V, X)
mape = np.sum(np.abs(dense_tensor[pos] -
tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
rmse = np.sqrt(np.sum((dense_tensor[pos] -
tensor_hat[pos])**2)/dense_tensor[pos].shape[0])
if (iters + 1) % 100 == 0:
print('Iter: {}'.format(iters + 1))
print('MAPE: {:.6}'.format(mape))
print('RMSE: {:.6}'.format(rmse))
print()
X_new = np.zeros((dim3 + multi_steps, rank))
X_new[0 : dim3, :] = X.copy()
for t0 in range(multi_steps):
X_new[dim3 + t0, :] = np.einsum('ij, ij -> j', theta, X_new[dim3 + t0 - time_lags, :])
return cp_combine(U, V, X_new[dim3 : dim3 + multi_steps, :]), U, V, X_new, theta
```
## Multi-step prediction
In the multi-step prediction task, to keep the training data informative at each rolling step, we no longer use an online implementation.
For rolling prediction tasks, there are two crucial inputs:
- **`pred_time_steps`**: the total number of time steps to forecast, e.g., if we want to forecast the time series 5 days ahead at 144 time slots/steps per day, then `pred_time_steps` is $5\times 144=720$;
- **`multi_steps`**: the number of steps forecast in each rolling round, e.g., if we want to forecast 2 hours ahead at 6 time slots/steps per hour, then `multi_steps` is $2\times 6=12$.
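In the New York experiments below, for instance, `pred_time_steps = 24 * 7 = 168` and `multi_steps = 24`, so the rolling forecast consists of `168 / 24 = 7` successive one-day (24-step) forecasting rounds.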
```
def multi_prediction(dense_tensor, sparse_tensor, pred_time_steps, rank, time_lags, multi_steps,
lambda_u, lambda_v, lambda_ar, eta, lambda_theta, maxiter):
T = dense_tensor.shape[2]
start_time = T - pred_time_steps
dim1 = dense_tensor.shape[0]
dim2 = dense_tensor.shape[1]
d = time_lags.shape[0]
tensor_hat = np.zeros((dim1, dim2, pred_time_steps))
for t in range(int(pred_time_steps/multi_steps)):
if t == 0:
ten, U, V, X, theta = TRTF(dense_tensor[:, :, 0 : start_time], sparse_tensor[:, :, 0 : start_time],
0.1 * np.random.rand(dim1, rank), 0.1 * np.random.rand(dim2, rank),
0.1 * np.random.rand(start_time, rank), 0.1 * np.random.rand(d, rank),
time_lags, multi_steps,
lambda_u, lambda_v, lambda_ar, eta, lambda_theta, maxiter[0])
else:
ten, U, V, X, theta = TRTF(dense_tensor[:, :, 0 : start_time + t * multi_steps],
sparse_tensor[:, :, 0 : start_time + t * multi_steps],
U, V, X, theta, time_lags, multi_steps,
lambda_u, lambda_v, lambda_ar, eta, lambda_theta, maxiter[1])
tensor_hat[:, :, t * multi_steps : (t + 1) * multi_steps] = ten[:, :, ten.shape[2] - multi_steps : ten.shape[2]]
small_dense_tensor = dense_tensor[:, :, start_time : dense_tensor.shape[2]]
pos = np.where(small_dense_tensor != 0)
final_mape = np.sum(np.abs(small_dense_tensor[pos] -
tensor_hat[pos])/small_dense_tensor[pos])/small_dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((small_dense_tensor[pos] -
tensor_hat[pos]) ** 2)/small_dense_tensor[pos].shape[0])
print('Final MAPE: {:.6}'.format(final_mape))
print('Final RMSE: {:.6}'.format(final_rmse))
print()
return tensor_hat
```
# Part 3: Experiments on New York Data Set
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/NYC-data-set/tensor.mat')
dense_tensor = tensor['tensor']
rm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/rm_tensor.mat')
rm_tensor = rm_tensor['rm_tensor']
nm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/nm_tensor.mat')
nm_tensor = nm_tensor['nm_tensor']
missing_rate = 0.0
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_tensor = np.round(rm_tensor + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
pred_time_steps = 24 * 7
multi_steps = 24
rank = 10
time_lags = np.array([1, 2, 3, 24, 24+1, 24+2, 7*24, 7*24+1, 7*24+2])
maxiter = np.array([200, 20])
theta = 0.1 * np.random.rand(time_lags.shape[0], rank)
lambda_u = 500
lambda_v = 500
lambda_ar = 500
eta = 2e-2
lambda_theta = 100
tensor_hat = multi_prediction(dense_tensor, sparse_tensor, pred_time_steps, rank, time_lags, multi_steps,
lambda_u, lambda_v, lambda_ar, eta, lambda_theta, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/NYC-data-set/tensor.mat')
dense_tensor = tensor['tensor']
rm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/rm_tensor.mat')
rm_tensor = rm_tensor['rm_tensor']
nm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/nm_tensor.mat')
nm_tensor = nm_tensor['nm_tensor']
missing_rate = 0.1
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_tensor = np.round(rm_tensor + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
pred_time_steps = 24 * 7
multi_steps = 24
rank = 10
time_lags = np.array([1, 2, 3, 24, 24+1, 24+2, 7*24, 7*24+1, 7*24+2])
maxiter = np.array([200, 20])
theta = 0.1 * np.random.rand(time_lags.shape[0], rank)
lambda_u = 500
lambda_v = 500
lambda_ar = 500
eta = 2e-2
lambda_theta = 100
tensor_hat = multi_prediction(dense_tensor, sparse_tensor, pred_time_steps, rank, time_lags, multi_steps,
lambda_u, lambda_v, lambda_ar, eta, lambda_theta, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/NYC-data-set/tensor.mat')
dense_tensor = tensor['tensor']
rm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/rm_tensor.mat')
rm_tensor = rm_tensor['rm_tensor']
nm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/nm_tensor.mat')
nm_tensor = nm_tensor['nm_tensor']
missing_rate = 0.3
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_tensor = np.round(rm_tensor + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
pred_time_steps = 24 * 7
multi_steps = 24
rank = 10
time_lags = np.array([1, 2, 3, 24, 24+1, 24+2, 7*24, 7*24+1, 7*24+2])
maxiter = np.array([200, 20])
theta = 0.1 * np.random.rand(time_lags.shape[0], rank)
lambda_u = 500
lambda_v = 500
lambda_ar = 500
eta = 2e-2
lambda_theta = 100
tensor_hat = multi_prediction(dense_tensor, sparse_tensor, pred_time_steps, rank, time_lags, multi_steps,
lambda_u, lambda_v, lambda_ar, eta, lambda_theta, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/NYC-data-set/tensor.mat')
dense_tensor = tensor['tensor']
rm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/rm_tensor.mat')
rm_tensor = rm_tensor['rm_tensor']
nm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/nm_tensor.mat')
nm_tensor = nm_tensor['nm_tensor']
missing_rate = 0.1
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(dense_tensor.shape)
for i1 in range(dense_tensor.shape[0]):
for i2 in range(dense_tensor.shape[1]):
for i3 in range(61):
binary_tensor[i1, i2, i3 * 24 : (i3 + 1) * 24] = np.round(nm_tensor[i1, i2, i3]
+ 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
pred_time_steps = 24 * 7
multi_steps = 24
rank = 10
time_lags = np.array([1, 2, 3, 24, 24+1, 24+2, 7*24, 7*24+1, 7*24+2])
maxiter = np.array([200, 20])
theta = 0.1 * np.random.rand(time_lags.shape[0], rank)
lambda_u = 500
lambda_v = 500
lambda_ar = 500
eta = 2e-2
lambda_theta = 100
tensor_hat = multi_prediction(dense_tensor, sparse_tensor, pred_time_steps, rank, time_lags, multi_steps,
lambda_u, lambda_v, lambda_ar, eta, lambda_theta, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
import scipy.io
tensor = scipy.io.loadmat('../datasets/NYC-data-set/tensor.mat')
dense_tensor = tensor['tensor']
rm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/rm_tensor.mat')
rm_tensor = rm_tensor['rm_tensor']
nm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/nm_tensor.mat')
nm_tensor = nm_tensor['nm_tensor']
missing_rate = 0.3
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(dense_tensor.shape)
for i1 in range(dense_tensor.shape[0]):
for i2 in range(dense_tensor.shape[1]):
for i3 in range(61):
binary_tensor[i1, i2, i3 * 24 : (i3 + 1) * 24] = np.round(nm_tensor[i1, i2, i3]
+ 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
pred_time_steps = 24 * 7
multi_steps = 24
rank = 10
time_lags = np.array([1, 2, 3, 24, 24+1, 24+2, 7*24, 7*24+1, 7*24+2])
maxiter = np.array([200, 20])
theta = 0.1 * np.random.rand(time_lags.shape[0], rank)
lambda_u = 500
lambda_v = 500
lambda_ar = 500
eta = 2e-2
lambda_theta = 100
tensor_hat = multi_prediction(dense_tensor, sparse_tensor, pred_time_steps, rank, time_lags, multi_steps,
lambda_u, lambda_v, lambda_ar, eta, lambda_theta, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of multi-step prediction with missing values using TRTF:
| scenario |`back_steps`|`rank`|`time_lags`| `maxiter` | mape | rmse |
|:----------|-----:|-----:|---------:|---------:|-----------:|----------:|
|**Original data**| - | 10 | (1,2,3,24,24+1,24+2,7$\times$24,7$\times$24+1,7$\times$24+2) | (200,20) | **0.8687** | **7.13**|
|**10%, RM**| - | 10 | (1,2,3,24,24+1,24+2,7$\times$24,7$\times$24+1,7$\times$24+2) | (200,20) | **0.8679** | **7.14**|
|**30%, RM**| - | 10 | (1,2,3,24,24+1,24+2,7$\times$24,7$\times$24+1,7$\times$24+2) | (200,20) | **0.8740** | **7.30**|
|**10%, NM**| - | 10 | (1,2,3,24,24+1,24+2,7$\times$24,7$\times$24+1,7$\times$24+2) | (200,20) | **0.8714** | **7.18**|
|**30%, NM**| - | 10 | (1,2,3,24,24+1,24+2,7$\times$24,7$\times$24+1,7$\times$24+2) | (200,20) | **0.8604** | **7.22**|
# Example of station forecasts postprocessing
In this notebook, we use Pythie to postprocess the 2 metre temperature forecasts at a station. We postprocess them using the 2 metre temperature itself, the maximum 2 metre temperature over the last 6 hours and the soil temperature as predictors.
We use the observation data of the [WMO](https://public.wmo.int/en)-compliant [DWD](https://www.dwd.de) meteorological station of [Soltau](https://en.wikipedia.org/wiki/Soltau) from 1997 to 2016.
The station is located at the point [52°57'37.5"N, 9°47'35.0"E](https://www.google.com/maps/place/52%C2%B057'37.5%22N+9%C2%B047'35.0%22E/@52.9507203,9.7805715,14.5z). The data have been downloaded from the DWD [Climate Data Center](https://cdc.dwd.de/portal/).
The postprocessing is done by regressing, at each lead time, the station observations on the reforecasts at a nearby (5.3 km) grid point [53°00'00.0"N, 9°45'00.0"E](https://www.google.com/maps/place/53%C2%B000'00.0%22N+9%C2%B045'00.0%22E/@53.0205719,9.7147325,12.25z). For verification, the result of this regression is then applied to the reforecasts themselves (the training set).
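To make this concrete, here is a minimal, self-contained sketch of what "a regression at each lead time" means. It uses synthetic arrays and a plain linear fit on the ensemble mean — not the Pythie `Data` objects or the MBM postprocessors used below — so treat it as an illustration only.
```
import numpy as np

# Toy data (purely illustrative): n_dates past reforecasts, n_lead lead times, n_members ensemble members
rng = np.random.default_rng(0)
n_dates, n_lead, n_members = 20, 48, 11
ens = 278.0 + rng.normal(0.0, 3.0, size=(n_dates, n_lead, n_members))        # reforecast 2m temperature [K]
obs = ens.mean(axis=2) + 1.5 + rng.normal(0.0, 1.0, size=(n_dates, n_lead))  # "observed" truth with a bias

# One simple linear correction per lead time: obs ≈ alpha + beta * ensemble mean
alpha = np.empty(n_lead)
beta = np.empty(n_lead)
for t in range(n_lead):
    x = ens[:, t, :].mean(axis=1)   # ensemble mean of each past reforecast at lead time t
    beta[t], alpha[t] = np.polyfit(x, obs[:, t], deg=1)

# For verification, apply the per-lead-time correction back to the training reforecasts
corrected_mean = alpha[np.newaxis, :] + beta[np.newaxis, :] * ens.mean(axis=2)
```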
The reforecasts at the grid point have been extracted from the reforecasts gridded data available in the gridded reforecasts and reanalysis dataset.
**Note:** *In the following example, we drop the initial condition of the reforecasts because the maximum 2 metre temperature predictor is not defined at this lead time! As a result, we do not postprocess lead time 0.*
**Warning:** To perform the computation of this example notebook, you first need to download the gridded observation and reforecast dataset from [Zenodo](https://zenodo.org/). You also need to install the extra packages. See the [README.md](../README.md) file for more information.
#### Observation data source
Source: [Deutscher Wetterdienst](https://www.dwd.de/), [DWD CDC portal](https://cdc.dwd.de/portal/)
#### Reforecast data source
Source: www.ecmwf.int
Creative Commons Attribution 4.0 International (CC BY 4.0)
Copyright © 2021 European Centre for Medium-Range Weather Forecasts (ECMWF).
See the attached ECMWF_LICENSE.txt file included with the data for more details.
## Preliminaries
Setting the path
```
import sys, os
sys.path.extend([os.path.abspath('../')])
os.chdir('../')
```
Importing external modules
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib import rc
rc('font',**{'family':'serif','sans-serif':['Times'],'size':16})
```
Importing internal modules
```
from core.data import Data
import postprocessors.MBM as MBM
```
Setting some parameters
```
# Date of the forecast
date = "01-02"
# Year to correct
year = 2017
# Number of reforecasts
years_back = 20
# Parameter of the observations (to be postprocessed)
param = '2t'
# Parameters of the predictors
params = ['2t', 'mx2t6', 'stl1']
# Location of the data
data_folder = './data/soltau/'
# Station considered
station = 4745
```
## Loading and creating the Data objects
**This section shows how to load Data objects from csv files using pandas**
Loading the reforecast data
```
# Temperature
# First create a list of pandas Dataframes from the csv files
reforecasts_temp = list()
for y in range(year-years_back, year):
# note : we skip the first row to drop the forecast initial condition
reforecasts_temp.append(pd.read_csv(data_folder + 'reforecasts_2t_' + str(y) + '-' + date + '_' + str(station) + '.csv', index_col=0, parse_dates=True, skiprows=[1]))
# Then a Data object from this list, loading it along the observation axis, and each member of the list along the member axis
reforecasts_data_2t = Data()
reforecasts_data_2t.load_scalars(reforecasts_temp, load_axis=['obs', 'member'], columns='all')
reforecasts_data_list = list()
reforecasts_data_list.append(reforecasts_data_2t)
# Same for the maximum temperature over the last 6 hours
reforecasts_data_mx2t6 = Data()
reforecasts_mx2t6 = list()
for y in range(year-years_back, year):
# note : we skip the first row to drop the forecast initial condition
reforecasts_mx2t6.append(pd.read_csv(data_folder + 'reforecasts_mx2t6_' + str(y) + '-' + date + '_' + str(station) + '.csv', index_col=0, parse_dates=True, skiprows=[1]))
reforecasts_data_mx2t6.load_scalars(reforecasts_mx2t6, load_axis=['obs', 'member'], columns='all')
reforecasts_data_list.append(reforecasts_data_mx2t6)
# Same for the soil temperature
reforecasts_data_stl1 = Data()
reforecasts_stl1 = list()
for y in range(year-years_back, year):
# note : we skip the first row to drop the forecast initial condition
reforecasts_stl1.append(pd.read_csv(data_folder + 'reforecasts_stl1_' + str(y) + '-' + date + '_' + str(station) + '.csv', index_col=0, parse_dates=True, skiprows=[1]))
reforecasts_data_stl1.load_scalars(reforecasts_stl1, load_axis=['obs', 'member'], columns='all')
reforecasts_data_list.append(reforecasts_data_stl1)
# saving the first predictor (the variable itself) for later
reforecast_data_1st_predictor = reforecasts_data_list[0].copy()
# Then loading all the predictors into one single Data object
reforecasts_data = reforecasts_data_list[0].copy()
for reforecast in reforecasts_data_list[1:]:
reforecasts_data.append_predictors(reforecast)
```
Loading the observations corresponding to the reforecast data
```
# skipping the initial condition of the forecast and taking 6-hourly observations to match the reforecast timestep
skiprows = lambda x: x==1 or (x != 0 and (x-1) % 6 != 0)
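# (this keeps the header row, drops file row 1, and keeps every 6th row after it — rows 7, 13, 19, ... — matching the 6-hourly reforecast steps)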
# Temperature
# First create a list of pandas Dataframes from the csv files
past_obs_temp = list()
for y in range(year-years_back, year):
past_obs_temp.append(pd.read_csv(data_folder + 'past_observations_2t_' + str(y) + '-' + date + '_' + str(station) + '.csv', index_col=0, parse_dates=True, skiprows=skiprows))
# Then a Data object from this list, loading it along the observation axis, and each member of the list along the member axis
past_obs_data = Data()
for obs in past_obs_temp:
past_obs_data.load_scalars(obs, load_axis='obs', columns='2t', concat_axis='obs')
```
## Training the PostProcessors
In this section, we train the different postprocessors of the Member-By-Member (MBM) module with the data loaded above.
```
# List to hold the trained PostProcessors
postprocessors = list()
proc_labels = list()
```
### Simple bias correction
```
%%time
ebc = MBM.BiasCorrection()
ebc.train(past_obs_data, reforecasts_data)
postprocessors.append(ebc)
proc_labels.append('Bias correction')
```
### Ensemble Mean correction
```
%%time
emc = MBM.EnsembleMeanCorrection()
emc.train(past_obs_data, reforecasts_data)
postprocessors.append(emc)
proc_labels.append('Ensemble Mean correction')
```
### Ensemble Spread Scaling correction
```
%%time
essc = MBM.EnsembleSpreadScalingCorrection()
essc.train(past_obs_data, reforecasts_data)
postprocessors.append(essc)
proc_labels.append('Ensemble Spread Scaling correction')
```
### Ensemble Spread Scaling correction with Absolute norm CRPS minimization
```
%%time
essacc = MBM.EnsembleSpreadScalingAbsCRPSCorrection()
essacc.train(past_obs_data, reforecasts_data, ntrial=10)
postprocessors.append(essacc)
proc_labels.append('Ensemble Spread Scaling Abs. CRPS min. correction')
```
### Ensemble Spread Scaling correction + Climatology nudging with Absolute norm CRPS minimization
```
%%time
eacc = MBM.EnsembleAbsCRPSCorrection()
eacc.train(past_obs_data, reforecasts_data, ntrial=10)
postprocessors.append(eacc)
proc_labels.append('Ensemble Spread Scaling + Clim. Nudging Abs. CRPS min. correction')
```
### Ensemble Spread Scaling correction + Climatology nudging with Ngr CRPS minimization
```
%%time
encc = MBM.EnsembleNgrCRPSCorrection()
encc.train(past_obs_data, reforecasts_data, ntrial=10)
postprocessors.append(encc)
proc_labels.append('Ensemble Spread Scaling + Clim. Nudging Ngr CRPS min. correction')
```
## Verification
Here we are going to postprocess the reforecasts themselves to see how well they perform:
```
# Lists to store the experiment results and labels
exp_results = list()
exp_results.append(reforecast_data_1st_predictor)
exp_labels = list()
exp_labels.append('Raw forecasts')
timestamps = np.array(range(reforecast_data_1st_predictor.number_of_time_steps))
for label, postprocessor in zip(proc_labels, postprocessors):
exp_results.append(postprocessor(reforecasts_data))
exp_labels.append(label)
```
### Computing scores
Computing the bias
```
# List to store the bias Data objects
bias = list()
for label, result in zip(exp_labels, exp_results):
bias.append(result.bias(past_obs_data))
```
Computing the ensemble mean RMSE
```
# List to store the RMSE Data objects
rmse = list()
for label, result in zip(exp_labels, exp_results):
rmse.append(result.ensemble_mean_RMSE(past_obs_data))
```
Computing the CRPS
```
# List to store the CRPS Data object
crps = list()
for label, result in zip(exp_labels, exp_results):
crps.append(result.CRPS(past_obs_data))
```
### Plotting the scores
```
fig = plt.figure(figsize=(15,10))
ax = fig.gca()
first = True
for c, lab in zip(crps, exp_labels):
if first:
c.plot(ax=ax, global_label=lab, timestamps=timestamps, lw=3., ls="--")
first = False
else:
c.plot(ax=ax, global_label=lab, timestamps=timestamps)
ax.legend()
ax.set_title('CRPS Score at station '+str(station))
ax.set_ylabel('CRPS [C°]')
ax.set_xlabel('time x 6hrs');
ax.set_ylim(0., 3.5);
fig = plt.figure(figsize=(15,10))
ax = fig.gca()
first = True
for c, lab in zip(rmse, exp_labels):
if first:
c.plot(ax=ax, global_label=lab, timestamps=timestamps, lw=3., ls="--")
first = False
else:
c.plot(ax=ax, global_label=lab, timestamps=timestamps)
ax.legend()
ax.set_title('RMSE Score at station '+str(station))
ax.set_ylabel('RMSE [C°]')
ax.set_xlabel('time x 6hrs');
ax.set_ylim(0., 5.5);
fig = plt.figure(figsize=(15,10))
ax = fig.gca()
first = True
for c, lab in zip(bias, exp_labels):
if first:
c.plot(ax=ax, global_label=lab, timestamps=timestamps, lw=3., ls="--")
first = False
else:
c.plot(ax=ax, global_label=lab, timestamps=timestamps)
ax.legend()
ax.set_title('Bias at station '+str(station))
ax.set_ylabel('Bias [C°]')
ax.set_xlabel('time x 6hrs');
ax.set_ylim(-3., 1.);
```
### Example of a postprocessing parameters plot
```
fig = plt.figure(figsize=(15,10))
ax = fig.gca()
postprocessors[-2].plot_parameters(ax=ax);
ax.set_ylim(-8.,8.)
ax.set_xlabel('Timestep [hours]')
ax.set_title('Postprocessing parameters\n('+exp_labels[-2]+')');
```
### Example of a reforecast plot
```
a = Data(reforecast_data_1st_predictor[0,-2][np.newaxis, np.newaxis,...], timestamps=[reforecast_data_1st_predictor.timestamps[0,-2]])
b = Data(exp_results[-1][0,-2][np.newaxis, np.newaxis,...], timestamps=[reforecast_data_1st_predictor.timestamps[0,-2]])
bb = Data(exp_results[-2][0,-2][np.newaxis, np.newaxis,...], timestamps=[reforecast_data_1st_predictor.timestamps[0,-2]])
c = Data(past_obs_data[0,-2][np.newaxis, np.newaxis,...], timestamps=[past_obs_data.timestamps[0,-2]])
fig = plt.figure(figsize=(15,10))
ax = fig.gca()
a.plot(color='tab:blue', ax=ax, global_label='Raw ensemble reforecast')
b.plot(color='tab:orange', ax=ax, global_label='Ngr Corrected ensemble reforecast')
c.plot(color='g', ax=ax, label='Station Observation', lw=4.)
ax.set_title('Reforecasts at station '+str(station))
ax.set_ylabel('Date')
ax.set_xlabel('2m Temperature [C°]')
ax.legend();
fig = plt.figure(figsize=(15,10))
ax = fig.gca()
a.plot(color='tab:blue', ax=ax, global_label='Raw ensemble reforecast')
bb.plot(color='tab:orange', ax=ax, global_label='Abs Corrected ensemble reforecast')
c.plot(color='g', ax=ax, label='Station Observation', lw=4.)
ax.set_title('Reforecasts at station '+str(station))
ax.set_ylabel('Date')
ax.set_xlabel('2m Temperature [C°]')
ax.legend();
```
|
github_jupyter
|
import sys, os
sys.path.extend([os.path.abspath('../')])
os.chdir('../')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib import rc
rc('font',**{'family':'serif','sans-serif':['Times'],'size':16})
from core.data import Data
import postprocessors.MBM as MBM
# Date of the forecast
date = "01-02"
# Year to correct
year = 2017
# Number of reforecasts
years_back = 20
# Parameter of the observations (to be postprocessed)
param = '2t'
# Parameters of the predictors
params = ['2t', 'mx2t6', 'stl1']
# Locaction of the data
data_folder = './data/soltau/'
# Station considered
station = 4745
# Temperature
# First create a list of pandas Dataframes from the csv files
reforecasts_temp = list()
for y in range(year-years_back, year):
# note : we skip the first row to drop the forecast initial condition
reforecasts_temp.append(pd.read_csv(data_folder + 'reforecasts_2t_' + str(y) + '-' + date + '_' + str(station) + '.csv', index_col=0, parse_dates=True, skiprows=[1]))
# Then a Data object from this list, loading it along the observation axis, and each member of the list along the member axis
reforecasts_data_2t = Data()
reforecasts_data_2t.load_scalars(reforecasts_temp, load_axis=['obs', 'member'], columns='all')
reforecasts_data_list = list()
reforecasts_data_list.append(reforecasts_data_2t)
# Same for the maximum temperature over the last 6 hours
reforecasts_data_mx2t6 = Data()
reforecasts_mx2t6 = list()
for y in range(year-years_back, year):
# note : we skip the first row to drop the forecast initial condition
reforecasts_mx2t6.append(pd.read_csv(data_folder + 'reforecasts_mx2t6_' + str(y) + '-' + date + '_' + str(station) + '.csv', index_col=0, parse_dates=True, skiprows=[1]))
reforecasts_data_mx2t6.load_scalars(reforecasts_mx2t6, load_axis=['obs', 'member'], columns='all')
reforecasts_data_list.append(reforecasts_data_mx2t6)
# Same for the soil temperature
reforecasts_data_stl1 = Data()
reforecasts_stl1 = list()
for y in range(year-years_back, year):
# note : we skip the first row to drop the forecast initial condition
reforecasts_stl1.append(pd.read_csv(data_folder + 'reforecasts_stl1_' + str(y) + '-' + date + '_' + str(station) + '.csv', index_col=0, parse_dates=True, skiprows=[1]))
reforecasts_data_stl1.load_scalars(reforecasts_stl1, load_axis=['obs', 'member'], columns='all')
reforecasts_data_list.append(reforecasts_data_stl1)
# saving the first predictor (the variable itself) for latter
reforecast_data_1st_predictor = reforecasts_data_list[0].copy()
# Then loading all the predictors into one single Data object
reforecasts_data = reforecasts_data_list[0].copy()
for reforecast in reforecasts_data_list[1:]:
reforecasts_data.append_predictors(reforecast)
# skipping the initial condition of the forecast and taking 6-hourly observations to match the reforecasts timestep
skiprows = lambda x: x==1 or (x != 0 and (x-1) % 6 != 0)
# Temperature
# First create a list of pandas Dataframes from the csv files
past_obs_temp = list()
for y in range(year-years_back, year):
past_obs_temp.append(pd.read_csv(data_folder + 'past_observations_2t_' + str(y) + '-' + date + '_' + str(station) + '.csv', index_col=0, parse_dates=True, skiprows=skiprows))
# Then a Data object from this list, loading it along the observation axis, and each member of the list along the member axis
past_obs_data = Data()
for obs in past_obs_temp:
past_obs_data.load_scalars(obs, load_axis='obs', columns='2t', concat_axis='obs')
# List to hold the trained PostProcessors
postprocessors = list()
proc_labels = list()
%%time
ebc = MBM.BiasCorrection()
ebc.train(past_obs_data, reforecasts_data)
postprocessors.append(ebc)
proc_labels.append('Bias correction')
%%time
emc = MBM.EnsembleMeanCorrection()
emc.train(past_obs_data, reforecasts_data)
postprocessors.append(emc)
proc_labels.append('Ensemble Mean correction')
%%time
essc = MBM.EnsembleSpreadScalingCorrection()
essc.train(past_obs_data, reforecasts_data)
postprocessors.append(essc)
proc_labels.append('Ensemble Spread Scaling correction')
%%time
essacc = MBM.EnsembleSpreadScalingAbsCRPSCorrection()
essacc.train(past_obs_data, reforecasts_data, ntrial=10)
postprocessors.append(essacc)
proc_labels.append('Ensemble Spread Scaling Abs. CRPS min. correction')
%%time
eacc = MBM.EnsembleAbsCRPSCorrection()
eacc.train(past_obs_data, reforecasts_data, ntrial=10)
postprocessors.append(eacc)
proc_labels.append('Ensemble Spread Scaling + Clim. Nudging Abs. CRPS min. correction')
%%time
encc = MBM.EnsembleNgrCRPSCorrection()
encc.train(past_obs_data, reforecasts_data, ntrial=10)
postprocessors.append(encc)
proc_labels.append('Ensemble Spread Scaling + Clim. Nudging Ngr CRPS min. correction')
# List to store the experiment names
exp_results = list()
exp_results.append(reforecast_data_1st_predictor)
exp_labels = list()
exp_labels.append('Raw forecasts')
timestamps = np.array(range(reforecast_data_1st_predictor.number_of_time_steps))
for label, postprocessor in zip(proc_labels, postprocessors):
exp_results.append(postprocessor(reforecasts_data))
exp_labels.append(label)
# List to store the CRPS Data object
bias = list()
for label, result in zip(exp_labels, exp_results):
bias.append(result.bias(past_obs_data))
# List to store the CRPS Data object
rmse = list()
for label, result in zip(exp_labels, exp_results):
rmse.append(result.ensemble_mean_RMSE(past_obs_data))
# List to store the CRPS Data object
crps = list()
for label, result in zip(exp_labels, exp_results):
crps.append(result.CRPS(past_obs_data))
fig = plt.figure(figsize=(15,10))
ax = fig.gca()
first = True
for c, lab in zip(crps, exp_labels):
if first:
c.plot(ax=ax, global_label=lab, timestamps=timestamps, lw=3., ls="--")
first = False
else:
c.plot(ax=ax, global_label=lab, timestamps=timestamps)
ax.legend()
ax.set_title('CRPS Score at station '+str(station))
ax.set_ylabel('CRPS [C°]')
ax.set_xlabel('time x 6hrs');
ax.set_ylim(0., 3.5);
fig = plt.figure(figsize=(15,10))
ax = fig.gca()
first = True
for c, lab in zip(rmse, exp_labels):
if first:
c.plot(ax=ax, global_label=lab, timestamps=timestamps, lw=3., ls="--")
first = False
else:
c.plot(ax=ax, global_label=lab, timestamps=timestamps)
ax.legend()
ax.set_title('RMSE Score at station '+str(station))
ax.set_ylabel('RMSE [C°]')
ax.set_xlabel('time x 6hrs');
ax.set_ylim(0., 5.5);
fig = plt.figure(figsize=(15,10))
ax = fig.gca()
first = True
for c, lab in zip(bias, exp_labels):
if first:
c.plot(ax=ax, global_label=lab, timestamps=timestamps, lw=3., ls="--")
first = False
else:
c.plot(ax=ax, global_label=lab, timestamps=timestamps)
ax.legend()
ax.set_title('Bias at station '+str(station))
ax.set_ylabel('Bias [C°]')
ax.set_xlabel('time x 6hrs');
ax.set_ylim(-3., 1.);
fig = plt.figure(figsize=(15,10))
ax = fig.gca()
postprocessors[-2].plot_parameters(ax=ax);
ax.set_ylim(-8.,8.)
ax.set_xlabel('Timestep [hours]')
ax.set_title('Postprocessing parameters\n('+exp_labels[-2]+')');
a = Data(reforecast_data_1st_predictor[0,-2][np.newaxis, np.newaxis,...], timestamps=[reforecast_data_1st_predictor.timestamps[0,-2]])
b = Data(exp_results[-1][0,-2][np.newaxis, np.newaxis,...], timestamps=[reforecast_data_1st_predictor.timestamps[0,-2]])
bb = Data(exp_results[-2][0,-2][np.newaxis, np.newaxis,...], timestamps=[reforecast_data_1st_predictor.timestamps[0,-2]])
c = Data(past_obs_data[0,-2][np.newaxis, np.newaxis,...], timestamps=[past_obs_data.timestamps[0,-2]])
fig = plt.figure(figsize=(15,10))
ax = fig.gca()
a.plot(color='tab:blue', ax=ax, global_label='Raw ensemble reforecast')
b.plot(color='tab:orange', ax=ax, global_label='Ngr Corrected ensemble reforecast')
c.plot(color='g', ax=ax, label='Station Observation', lw=4.)
ax.set_title('Reforecasts at station '+str(station))
ax.set_ylabel('Date')
ax.set_xlabel('2m Temperature [C°]')
ax.legend();
fig = plt.figure(figsize=(15,10))
ax = fig.gca()
a.plot(color='tab:blue', ax=ax, global_label='Raw ensemble reforecast')
bb.plot(color='tab:orange', ax=ax, global_label='Abs Corrected ensemble reforecast')
c.plot(color='g', ax=ax, label='Station Observation', lw=4.)
ax.set_title('Reforecasts at station '+str(station))
ax.set_ylabel('Date')
ax.set_xlabel('2m Temperature [C°]')
ax.legend();
| 0.345989 | 0.957675 |
<a href="https://colab.research.google.com/github/GodingWal/Tesla-Stock-Forecast/blob/master/Tesla_Attempt.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install pandas-profiling==2.*
!pip install pyramid-arima
!pip install stepwise
!pip install pmdarima
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import pyplot
from pandas import read_csv
from matplotlib import pyplot
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import accuracy_score
import numpy as np
from statsmodels.tsa.stattools import adfuller
from sklearn.impute import SimpleImputer
sns.set()
from sklearn.metrics import r2_score, median_absolute_error, mean_absolute_error
from sklearn.metrics import median_absolute_error, mean_squared_error, mean_squared_log_error
from scipy.optimize import minimize
import statsmodels.tsa.api as smt
import statsmodels.api as sm
from tqdm import tqdm_notebook
from itertools import product
import warnings
warnings.filterwarnings('ignore')
from statsmodels.tsa.arima_model import ARIMA
import datetime
from sklearn.model_selection import train_test_split
from statsmodels.tsa.stattools import acf, pacf
import statsmodels.tsa.stattools as ts
from google.colab import files
uploaded = files.upload()
tesla = 'Tesla.csv - Tesla.csv.csv'
df = pd.read_csv(tesla)
df = df.drop(['High', 'Low', 'Close', 'Volume', 'Adj Close'], axis=1)
df['Date_Time'] = pd.to_datetime(df['Date'])
df = df.set_index('Date_Time')
df.drop(['Date'], axis=1, inplace=True)
df.shape
sns.distplot(df['Open']);
df.describe()
df['Open'].skew()
lnprice = np.log(df['Open'])
lnprice
plt.plot(lnprice)
plt.show()
acf_1 = acf(lnprice)[1:20]
test_df = pd.DataFrame([acf_1]).T
test_df.columns = ['Autocorrelation']
test_df.index += 1
test_df.plot(kind='bar')
plt.show()
pacf_1 = pacf(lnprice)[1:20]
test_df = pd.DataFrame([pacf_1]).T
test_df.columns = ['Partial Autocorrelation']
test_df.index += 1
test_df.plot(kind='bar')
plt.show()
X = lnprice
result = adfuller(pacf_1)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
print('Critical Values:')
for key, value in result[4].items():
print('\t%s: %.3f' % (key, value))
results = ts.adfuller(lnprice, 1)
results
lnprice_diff=lnprice-lnprice.shift()
diff=lnprice_diff.dropna()
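# first difference of the log price; one difference is what d=1 means in the ARIMA orders fitted below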
acf_1_diff = acf(diff)[1:20]
test_df = pd.DataFrame([acf_1_diff]).T
test_df.columns = ['First Difference Autocorrelation']
test_df.index += 1
test_df.plot(kind='bar')
pacf_1_diff = pacf(diff)[1:20]
plt.plot(pacf_1_diff)
plt.show()
test_df = pd.DataFrame([pacf_1_diff]).T
test_df.columns = ['First Difference Partial Autocorrelation']
test_df.index += 1
test_df.plot(kind='bar')
lnprice.head()
cutoff = pd.to_datetime('2015-10-01')
train = lnprice[lnprice.index < cutoff]
test = lnprice[lnprice.index > cutoff]
import math
price_matrix = train
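# ARIMA(0,1,0): no AR or MA terms and one difference, i.e. essentially a random-walk model for the log price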
model = ARIMA(train, order=(0,1,0))
model_fit = model.fit(disp=0)
print(model_fit.summary())
predictions=model_fit.predict(start=1, typ='levels')
predictions
predictionsadjusted=np.exp(predictions)
predictionsadjusted
plt.plot(predictionsadjusted)
plt.title('Forecasted Price')
plt.show();
prediction = model_fit.predict(start=1, typ='levels')
from pyramid.arima import auto_arima
stepwise_arima = auto_arima(train, start_p=2, start_q=2, max_d=5,
max_p=16, max_q=5, m=12, scoring='mse',
start_P=2, max_order=20, random_state=78, seasonal=False,
d=1, D=1, trace=True, information_criterion='aic',
error_action='ignore', stationary=True,
suppress_warnings=True, with_intercept=False,
stepwise=True, maxiter=100, n_jobs=50, n_fits=20)
stepwise_arima.fit(train)
stepwise_arima.summary()
walk_forward, walk_forward_conf_int = stepwise_arima.predict(n_periods=367, return_conf_int=True)
dd = pd.DataFrame(np.column_stack([test[:367], walk_forward])).plot()
# Actual vs Fitted
model_fit.plot_predict(dynamic=False)
plt.show()
# Make as pandas series
fc_series = pd.Series(walk_forward, index=test.index)
lower_series = pd.Series(walk_forward_conf_int[:, 0], index=test.index)
upper_series = pd.Series(walk_forward_conf_int[:, 1], index=test.index)
# Plot
plt.figure(figsize=(12,5), dpi=100)
plt.plot(train, label='training')
plt.plot(test, label='actual')
plt.plot(fc_series, label='forecast')
plt.fill_between(lower_series.index, lower_series, upper_series,
color='k', alpha=.15)
plt.title('Forecast vs Actuals')
plt.legend(loc='upper left', fontsize=8)
plt.show()
```
|
github_jupyter
|
%%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install pandas-profiling==2.*
!pip install pyramid-arima
!pip install stepwise
!pip install pmdarima
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import pyplot
from pandas import read_csv
from matplotlib import pyplot
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import accuracy_score
import numpy as np
from statsmodels.tsa.stattools import adfuller
from sklearn.impute import SimpleImputer
sns.set()
from sklearn.metrics import r2_score, median_absolute_error, mean_absolute_error
from sklearn.metrics import median_absolute_error, mean_squared_error, mean_squared_log_error
from scipy.optimize import minimize
import statsmodels.tsa.api as smt
import statsmodels.api as sm
from tqdm import tqdm_notebook
from itertools import product
import warnings
warnings.filterwarnings('ignore')
from statsmodels.tsa.arima_model import ARIMA
import datetime
from sklearn.model_selection import train_test_split
from statsmodels.tsa.stattools import acf, pacf
import statsmodels.tsa.stattools as ts
from google.colab import files
uploaded = files.upload()
tesla = 'Tesla.csv - Tesla.csv.csv'
df = pd.read_csv(tesla)
df = df.drop(['High', 'Low', 'Close', 'Volume', 'Adj Close'], axis=1)
df['Date_Time'] = pd.to_datetime(df['Date'])
df = df.set_index('Date_Time')
df.drop(['Date'], axis=1, inplace=True)
df.shape
sns.distplot(df['Open']);
df.describe()
df['Open'].skew()
lnprice = np.log(df['Open'])
lnprice
plt.plot(lnprice)
plt.show
acf_1 = acf(lnprice)[1:20]
test_df = pd.DataFrame([acf_1]).T
test_df.columns = ['Autocorrelation']
test_df.index += 1
test_df.plot(kind='bar')
plt.show()
pacf_1 = pacf(lnprice)[1:20]
test_df = pd.DataFrame([pacf_1]).T
test_df.columns = ['Partial Autocorrelation']
test_df.index += 1
test_df.plot(kind='bar')
plt.show()
X = lnprice
result = adfuller(pacf_1)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
print('Critical Values:')
for key, value in result[4].items():
print('\t%s: %.3f' % (key, value))
results = ts.adfuller(lnprice, 1)
results
lnprice_diff=lnprice-lnprice.shift()
diff=lnprice_diff.dropna()
acf_1_diff = acf(diff)[1:20]
test_df = pd.DataFrame([acf_1_diff]).T
test_df.columns = ['First Difference Autocorrelation']
test_df.index += 1
test_df.plot(kind='bar')
pacf_1_diff = pacf(diff)[1:20]
plt.plot(pacf_1_diff)
plt.show()
test_df = pd.DataFrame([pacf_1_diff]).T
test_df.columns = ['First Difference Partial Autocorrelation']
test_df.index += 1
test_df.plot(kind='bar')
lnprice.head()
cutoff = pd.to_datetime('2015-10-01')
train = lnprice[lnprice.index < cutoff]
test = lnprice[lnprice.index > cutoff]
import math
price_matrix = train
model = ARIMA(train, order=(0,1,0))
model_fit = model.fit(disp=0)
print(model_fit.summary())
predictions=model_fit.predict(start=1, typ='levels')
predictions
predictionsadjusted=np.exp(predictions)
predictionsadjusted
plt.plot(predictionsadjusted)
plt.title('Forecasted Price')
plt.show();
prediction = model_fit.predict(start=1, typ='levels')
from pyramid.arima import auto_arima
stepwise_arima = auto_arima(train, start_p=2, start_q=2, max_d=5,
max_p=16, max_q=5, m=12, scoring='mse',
start_P=2, max_order=20, random_state=78, seasonal=False,
d=1, D=1, trace=True, information_criterion='aic',
error_action='ignore', stationary=True,
suppress_warnings=True, with_intercept=False,
stepwise=True, maxiter=100, n_jobs=50, n_fits=20)
stepwise_arima.fit(train)
stepwise_arima.summary()
walk_forward, walk_forward_conf_int = stepwise_arima.predict(n_periods=367, return_conf_int=True)
dd = pd.DataFrame(pd.np.column_stack([test[:367], walk_forward])).plot()
# Actual vs Fitted
model_fit.plot_predict(dynamic=False)
plt.show()
# Make as pandas series
fc_series = pd.Series(walk_forward, index=test.index)
lower_series = pd.Series(walk_forward_conf_int[:, 0], index=test.index)
upper_series = pd.Series(walk_forward_conf_int[:, 1], index=test.index)
# Plot
plt.figure(figsize=(12,5), dpi=100)
plt.plot(train, label='training')
plt.plot(test, label='actual')
plt.plot(fc_series, label='forecast')
plt.fill_between(lower_series.index, lower_series, upper_series,
color='k', alpha=.15)
plt.title('Forecast vs Actuals')
plt.legend(loc='upper left', fontsize=8)
plt.show()
| 0.512449 | 0.817064 |
# Credit Risk Resampling Techniques
```
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
from pathlib import Path
from collections import Counter
```
# Read the CSV and Perform Basic Data Cleaning
```
columns = [
"loan_amnt", "int_rate", "installment", "home_ownership",
"annual_inc", "verification_status", "issue_d", "loan_status",
"pymnt_plan", "dti", "delinq_2yrs", "inq_last_6mths",
"open_acc", "pub_rec", "revol_bal", "total_acc",
"initial_list_status", "out_prncp", "out_prncp_inv", "total_pymnt",
"total_pymnt_inv", "total_rec_prncp", "total_rec_int", "total_rec_late_fee",
"recoveries", "collection_recovery_fee", "last_pymnt_amnt", "next_pymnt_d",
"collections_12_mths_ex_med", "policy_code", "application_type", "acc_now_delinq",
"tot_coll_amt", "tot_cur_bal", "open_acc_6m", "open_act_il",
"open_il_12m", "open_il_24m", "mths_since_rcnt_il", "total_bal_il",
"il_util", "open_rv_12m", "open_rv_24m", "max_bal_bc",
"all_util", "total_rev_hi_lim", "inq_fi", "total_cu_tl",
"inq_last_12m", "acc_open_past_24mths", "avg_cur_bal", "bc_open_to_buy",
"bc_util", "chargeoff_within_12_mths", "delinq_amnt", "mo_sin_old_il_acct",
"mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op", "mo_sin_rcnt_tl", "mort_acc",
"mths_since_recent_bc", "mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl",
"num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl",
"num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0",
"num_sats", "num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m",
"num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75", "pub_rec_bankruptcies",
"tax_liens", "tot_hi_cred_lim", "total_bal_ex_mort", "total_bc_limit",
"total_il_high_credit_limit", "hardship_flag", "debt_settlement_flag"
]
target = ["loan_status"]
# Load the data
file_path = Path('Resources/LoanStats_2019Q1.csv/LoanStats_2019Q1.csv')
df = pd.read_csv(file_path, skiprows=1)[:-2]
df = df.loc[:, columns].copy()
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
# Remove the `Issued` loan status
issued_mask = df['loan_status'] != 'Issued'
df = df.loc[issued_mask]
# convert interest rate to numerical
df['int_rate'] = df['int_rate'].str.replace('%', '')
df['int_rate'] = df['int_rate'].astype('float') / 100
# Convert the target column values to low_risk and high_risk based on their values
x = {'Current': 'low_risk'}
df = df.replace(x)
x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period'], 'high_risk')
df = df.replace(x)
df.reset_index(inplace=True, drop=True)
pd.options.display.max_columns = 999
df.head()
# List how many labels each variable has
for col in df.columns:
print(col, ' : ', len(df[col].unique()), ' labels')
df = pd.get_dummies(df, columns=["hardship_flag", "debt_settlement_flag", "home_ownership", "verification_status", "issue_d", "pymnt_plan", "initial_list_status", "next_pymnt_d", "application_type"])
df.head()
```
# Split the Data into Training and Testing
```
# Create our features
X = df.drop(columns=['loan_status'])
# Create our target
y = df.loan_status
X.describe()
# Check the balance of our target values
y.value_counts()
from sklearn.model_selection import train_test_split
# Create X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_test_split(X,
y,
shuffle=True)
```
# Oversampling
In this section, you will compare two oversampling algorithms to determine which results in the best performance. You will oversample the data using the naive random oversampling algorithm and the SMOTE algorithm. For each algorithm, be sure to complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests
### Naive Random Oversampling
```
# Resample the training data with the RandomOversampler
from imblearn.over_sampling import RandomOverSampler
ros = RandomOverSampler(random_state=1)
X_resampled, y_resampled = ros.fit_resample(X_train, y_train)
from collections import Counter
Counter(y_resampled)
Counter(y_train)
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs',
random_state=1
)
model.fit(X_resampled, y_resampled)
# Calculated the balanced accuracy score
from sklearn.metrics import balanced_accuracy_score
y_predictions = model.predict(X_test)
balanced_accuracy_score(y_test, y_predictions)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
# y_predictions = model.predict(X_test)
confusion_matrix(y_test, y_predictions)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
target_names = ["High_risk", "Low_risk"]
print(classification_report_imbalanced(y_test, y_predictions, target_names=target_names))
```
### SMOTE Oversampling
```
# Resample the training data with SMOTE
from imblearn.over_sampling import SMOTE
from collections import Counter
X_resamp, y_resamp = SMOTE(random_state=1, sampling_strategy=1.0).fit_resample(X_train, y_train)
Counter(y_resamp)
# Train the Logistic Regression model using the resampled data
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resamp, y_resamp)
# Calculated the balanced accuracy score
y_prediction = model.predict(X_test)
balanced_accuracy_score(y_test, y_prediction)
# Display the confusion matrix
confusion_matrix(y_test, y_prediction)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_prediction))
```
# Undersampling
In this section, you will test an undersampling algorithm to determine whether it performs better than the oversampling algorithms above. You will undersample the data using the Cluster Centroids algorithm and complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests
```
# Resample the data using the ClusterCentroids resampler
from imblearn.under_sampling import ClusterCentroids
cc = ClusterCentroids(random_state=1)
X_resampled, y_resampled = cc.fit_resample(X_train, y_train)
from collections import Counter
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
# Calculated the balanced accuracy score
y_pred = model.predict(X_test)
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_pred))
```
# Combination (Over and Under) Sampling
In this section, you will test a combined over- and under-sampling algorithm to determine whether it performs better than the other sampling algorithms above. You will resample the data using the SMOTEENN algorithm and complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests
```
# Resample the training data with SMOTEENN
from imblearn.combine import SMOTEENN
sm = SMOTEENN(random_state=1)
X_resampled, y_resampled = sm.fit_resample(X_train, y_train)
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model2 = LogisticRegression(solver='lbfgs', random_state=1)
model2.fit(X_resampled, y_resampled)
# Calculated the balanced accuracy score
y_pred2 = model2.predict(X_test)
balanced_accuracy_score(y_test, y_pred2)
# 0.4728126865477952
# Display the confusion matrix
confusion_matrix(y_test, y_pred2)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred2))
```
> Which model had the best balanced accuracy score?
The best balanced accuracy score came from the RandomOverSampler at 59.6%, with SMOTE a close second at 58%. ClusterCentroids scored the lowest at 47%, and SMOTEENN reached 53%.
> Which model had the best recall score?
Using 0.5 as a rough benchmark for a good recall score, the RandomOverSampler classifier reached 54% recall for the high-risk class and 65% for the low-risk class. Since the high-risk class is the one we care about for this target, that makes the RandomOverSampler the model with the best recall. The other models scored lower on high-risk recall: 49% for SMOTE, 48% for ClusterCentroids and 49% for SMOTEENN.
> Which model had the best geometric mean score?
The RandomOverSampler classifier has the highest geometric mean score at 59%, meaning it balances accuracy across the 'high risk' and 'low risk' classes best. The other classifiers scored 57% for SMOTE, 47% for ClusterCentroids and 54% for SMOTEENN.
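For reference, these scores can be computed directly from scikit-learn and imbalanced-learn; the snippet below is only an illustration and assumes `y_test` together with the predictions `y_pred` of one of the models trained above.
```
from sklearn.metrics import balanced_accuracy_score, recall_score
from imblearn.metrics import geometric_mean_score

# assumes y_test and y_pred from one of the models fitted above
print("balanced accuracy:", balanced_accuracy_score(y_test, y_pred))
print("high-risk recall :", recall_score(y_test, y_pred, pos_label="high_risk"))
print("geometric mean   :", geometric_mean_score(y_test, y_pred))
```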
|
github_jupyter
|
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
from pathlib import Path
from collections import Counter
columns = [
"loan_amnt", "int_rate", "installment", "home_ownership",
"annual_inc", "verification_status", "issue_d", "loan_status",
"pymnt_plan", "dti", "delinq_2yrs", "inq_last_6mths",
"open_acc", "pub_rec", "revol_bal", "total_acc",
"initial_list_status", "out_prncp", "out_prncp_inv", "total_pymnt",
"total_pymnt_inv", "total_rec_prncp", "total_rec_int", "total_rec_late_fee",
"recoveries", "collection_recovery_fee", "last_pymnt_amnt", "next_pymnt_d",
"collections_12_mths_ex_med", "policy_code", "application_type", "acc_now_delinq",
"tot_coll_amt", "tot_cur_bal", "open_acc_6m", "open_act_il",
"open_il_12m", "open_il_24m", "mths_since_rcnt_il", "total_bal_il",
"il_util", "open_rv_12m", "open_rv_24m", "max_bal_bc",
"all_util", "total_rev_hi_lim", "inq_fi", "total_cu_tl",
"inq_last_12m", "acc_open_past_24mths", "avg_cur_bal", "bc_open_to_buy",
"bc_util", "chargeoff_within_12_mths", "delinq_amnt", "mo_sin_old_il_acct",
"mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op", "mo_sin_rcnt_tl", "mort_acc",
"mths_since_recent_bc", "mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl",
"num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl",
"num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0",
"num_sats", "num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m",
"num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75", "pub_rec_bankruptcies",
"tax_liens", "tot_hi_cred_lim", "total_bal_ex_mort", "total_bc_limit",
"total_il_high_credit_limit", "hardship_flag", "debt_settlement_flag"
]
target = ["loan_status"]
# Load the data
file_path = Path('Resources/LoanStats_2019Q1.csv/LoanStats_2019Q1.csv')
df = pd.read_csv(file_path, skiprows=1)[:-2]
df = df.loc[:, columns].copy()
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
# Remove the `Issued` loan status
issued_mask = df['loan_status'] != 'Issued'
df = df.loc[issued_mask]
# convert interest rate to numerical
df['int_rate'] = df['int_rate'].str.replace('%', '')
df['int_rate'] = df['int_rate'].astype('float') / 100
# Convert the target column values to low_risk and high_risk based on their values
x = {'Current': 'low_risk'}
df = df.replace(x)
x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period'], 'high_risk')
df = df.replace(x)
df.reset_index(inplace=True, drop=True)
pd.options.display.max_columns = 999
df.head()
# List how many labels each variable has
for col in df.columns:
print(col, ' : ', len(df[col].unique()), ' labels')
df = pd.get_dummies(df, columns=["hardship_flag", "debt_settlement_flag", "home_ownership", "verification_status", "issue_d", "pymnt_plan", "initial_list_status", "next_pymnt_d", "application_type"])
df.head()
# Create our features
X = df.drop(columns=['loan_status'])
# Create our target
y = df.loan_status
X.describe()
# Check the balance of our target values
y.value_counts()
from sklearn.model_selection import train_test_split
# Create X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_test_split(X,
y,
shuffle=True)
# Resample the training data with the RandomOversampler
from imblearn.over_sampling import RandomOverSampler
ros = RandomOverSampler(random_state=1)
X_resampled, y_resampled = ros.fit_resample(X_train, y_train)
from collections import Counter
Counter(y_resampled)
Counter(y_train)
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs',
random_state=1
)
model.fit(X_resampled, y_resampled)
# Calculated the balanced accuracy score
from sklearn.metrics import balanced_accuracy_score
y_predictions = model.predict(X_test)
balanced_accuracy_score(y_test, y_predictions)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
# y_predictions = model.predict(X_test)
confusion_matrix(y_test, y_predictions)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
target_names = ["High_risk", "Low_risk"]
print(classification_report_imbalanced(y_test, y_predictions, target_names=target_names))
# Resample the training data with SMOTE
from imblearn.over_sampling import SMOTE
from collections import Counter
X_resamp, y_resamp = SMOTE(random_state=1, sampling_strategy=1.0).fit_resample(X_train, y_train)
Counter(y_resamp)
# Train the Logistic Regression model using the resampled data
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resamp, y_resamp)
# Calculated the balanced accuracy score
y_prediction = model.predict(X_test)
balanced_accuracy_score(y_test, y_prediction)
# Display the confusion matrix
confusion_matrix(y_test, y_prediction)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_prediction))
# Resample the data using the ClusterCentroids resampler
from imblearn.under_sampling import ClusterCentroids
cc = ClusterCentroids(random_state=1)
X_resampled, y_resampled = cc.fit_resample(X_train, y_train)
from collections import Counter
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
# Calculated the balanced accuracy score
y_pred = model.predict(X_test)
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_pred))
# Resample the training data with SMOTEENN
from imblearn.combine import SMOTEENN
sm = SMOTEENN(random_state=1)
X_resampled, y_resampled = sm.fit_resample(X_train, y_train)
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model2 = LogisticRegression(solver='lbfgs', random_state=1)
model2.fit(X_resampled, y_resampled)
# Calculated the balanced accuracy score
y_pred2 = model2.predict(X_test)
balanced_accuracy_score(y_test, y_pred2)
# 0.4728126865477952
# Display the confusion matrix
confusion_matrix(y_test, y_pred2)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred2))
| 0.523664 | 0.811676 |
```
%reset
import tensorflow as tf
import numpy as np
import os
import time
import matplotlib.pyplot as plt
%matplotlib inline
st = time.time()
# data is assumed to be [V_vec I_vec C_vec]
# all vec are of the same size
# change the file name if using a different set
data = np.loadtxt(os.path.expanduser('~/quantum-ml/data/var_K_I_V_1000_3_10meV.txt'))
# data randomly permuted to avoid bias in the way the data is generated.
data = np.random.permutation(data)
n_tot = data.shape[0]
# train_total_factor : size of training set in comparison to the total
# 0.8 sound good
train_total_factor = 0.8
n_train = int(train_total_factor*n_tot)
n_test = n_tot - n_train
# input parameters
n_inp = int(data.shape[1]/3)
n_out = int(data.shape[1]/3)
print("Number of inputs:", n_inp)
print("Number of outputs:", n_out)
max_charge_state = int(np.max(data[:,n_inp:]))
num_classes = max_charge_state + 1
print("Number of classes: ",num_classes)
x_train_data = data[:n_train,n_inp:2*n_inp].reshape((n_train,n_inp))
# convert the y_data into the form of the output of a classifier
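# (one-hot encoding: y_train_data[i, j, c] = 1 where c is the charge state of output j for sample i)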
y_train_data = np.zeros((n_train,n_out,num_classes))
for i in range(n_train):
charge_state_vec = data[i,2*n_inp:].astype(int)
for j in range(len(charge_state_vec)):
y_train_data[i,j,charge_state_vec[j]] = 1.0
x_test_data = data[n_train:,n_inp:2*n_inp].reshape((n_test,n_inp))
y_test_data = np.zeros((n_test,n_out,num_classes))
for i in range(n_train,n_tot):
charge_state_vec = data[i,2*n_inp:].astype(int)
for j in range(len(charge_state_vec)):
y_test_data[i - n_train,j,charge_state_vec[j]] = 1.0
print("Total, Training, Test")
print(n_tot, n_train, n_test)
x = tf.placeholder(tf.float32, [None, n_inp])
W = tf.Variable(tf.zeros([n_inp,n_out,num_classes]))
b = tf.Variable(tf.zeros([n_out,num_classes]))
W_mul = tf.reshape(W,[n_inp,n_out*num_classes])
prod = tf.matmul(x, W_mul)
Wx = tf.reshape(prod,[-1,n_out,num_classes])
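# Wx has shape [batch, n_out, num_classes]; the softmax below acts on the last axis, so each of the n_out outputs gets its own class distribution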
logits = Wx + b
y = tf.nn.softmax(logits)
# this node holds the expected output data
y_ = tf.placeholder(tf.float32, [None, n_out, num_classes])
# the cross entropy is computed from the raw logits; passing the softmax output here would apply softmax twice
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_)
cost = tf.reduce_mean(cross_entropy)
train_step = tf.train.GradientDescentOptimizer(10).minimize(cost)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
N_steps = 10000
for _ in range(N_steps):
train_data = np.random.permutation(data[:n_train])
x_train_data = train_data[:n_train,n_inp:2*n_inp].reshape((n_train,n_inp))
# convert the y_data into the form of the output of a classifier
y_train_data = np.zeros((n_train,n_out,num_classes))
for i in range(n_train):
charge_state_vec = train_data[i,2*n_inp:].astype(int)
for j in range(len(charge_state_vec)):
y_train_data[i,j,charge_state_vec[j]] = 1.0
batch_xs, batch_ys = x_train_data,y_train_data
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
if (_ % (N_steps/10) == 0):
print(_)
correct_prediction = tf.equal(tf.argmax(y,-1), tf.argmax(y_,-1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: x_test_data, y_: y_test_data}))
print("Completed in ",time.time()-st,"seconds.")
z = tf.argmax(y,-1)
z_corr = tf.argmax(y_,-1)
output_model = sess.run([z,z_corr],{x:x_test_data,y_: y_test_data})
i = 9
plt.plot(output_model[1][i],'b+')
plt.plot(output_model[0][i],'rx')
plt.legend(['Actual','Prediction'])
plt.xlabel('V_d',fontsize=16)
plt.ylabel('Charge state',fontsize=16)
print(x_train_data.shape)
import tensorflow as tf
```
|
github_jupyter
|
%reset
import tensorflow as tf
import numpy as np
import os
import time
import matplotlib.pyplot as plt
%matplotlib inline
st = time.time()
# data is assumed to be [V_vec I_vec C_vec]
# all vec are of the same size
# change the file name if using a different set
data = np.loadtxt(os.path.expanduser('~/quantum-ml/data/var_K_I_V_1000_3_10meV.txt'))
# data randomly permuted to avoid bias in the way the data is generated.
data = np.random.permutation(data)
n_tot = data.shape[0]
# train_total_factor : size of training set in comparison to the total
# 0.8 sound good
train_total_factor = 0.8
n_train = int(train_total_factor*n_tot)
n_test = n_tot - n_train
# input parameters
n_inp = int(data.shape[1]/3)
n_out = int(data.shape[1]/3)
print("Number of inputs:", n_inp)
print("Number of outputs:", n_out)
max_charge_state = int(np.max(data[:,n_inp:]))
num_classes = max_charge_state + 1
print("Number of classes: ",num_classes)
x_train_data = data[:n_train,n_inp:2*n_inp].reshape((n_train,n_inp))
# convert the y_data into the form of the output of a classifier
y_train_data = np.zeros((n_train,n_out,num_classes))
for i in range(n_train):
charge_state_vec = data[i,2*n_inp:].astype(int)
for j in range(len(charge_state_vec)):
y_train_data[i,j,charge_state_vec[j]] = 1.0
x_test_data = data[n_train:,n_inp:2*n_inp].reshape((n_test,n_inp))
y_test_data = np.zeros((n_test,n_out,num_classes))
for i in range(n_train,n_tot):
charge_state_vec = data[i,2*n_inp:].astype(int)
for j in range(len(charge_state_vec)):
y_test_data[i - n_train,j,charge_state_vec[j]] = 1.0
print("Total, Training, Test")
print(n_tot, n_train, n_test)
x = tf.placeholder(tf.float32, [None, n_inp])
W = tf.Variable(tf.zeros([n_inp,n_out,num_classes]))
b = tf.Variable(tf.zeros([n_out,num_classes]))
W_mul = tf.reshape(W,[n_inp,n_out*num_classes])
prod = tf.matmul(x, W_mul)
Wx = tf.reshape(prod,[-1,n_out,num_classes])
y = tf.nn.softmax(Wx + b)
# this node holds the expected output data
y_ = tf.placeholder(tf.float32, [None, n_out, num_classes])
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=y,labels=y_)
cost = tf.reduce_mean(cross_entropy)
train_step = tf.train.GradientDescentOptimizer(10).minimize(cost)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
N_steps = 10000
for _ in range(N_steps):
train_data = np.random.permutation(data[:n_train])
x_train_data = train_data[:n_train,n_inp:2*n_inp].reshape((n_train,n_inp))
# convert the y_data into the form of the output of a classifier
y_train_data = np.zeros((n_train,n_out,num_classes))
for i in range(n_train):
charge_state_vec = train_data[i,2*n_inp:].astype(int)
for j in range(len(charge_state_vec)):
y_train_data[i,j,charge_state_vec[j]] = 1.0
batch_xs, batch_ys = x_train_data,y_train_data
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
if (_ % (N_steps/10) == 0):
print(_)
correct_prediction = tf.equal(tf.argmax(y,-1), tf.argmax(y_,-1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: x_test_data, y_: y_test_data}))
print("Completed in ",time.time()-st,"seconds.")
z = tf.argmax(y,-1)
z_corr = tf.argmax(y_,-1)
output_model = sess.run([z,z_corr],{x:x_test_data,y_: y_test_data})
i = 9
plt.plot(output_model[1][i],'b+')
plt.plot(output_model[0][i],'rx')
plt.legend(['Acutal','Prediction'])
plt.xlabel('V_d',fontsize=16)
plt.ylabel('Charge state',fontsize=16)
print(x_train_data.shape)
import tensorflow as tf
| 0.274351 | 0.592755 |
# VacationPy
----
#### Note
* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
```
### Store Part I results into DataFrame
* Load the csv exported in Part I to a DataFrame
```
weather_data= os.path.join("../output_data/weather_data.csv")
vacation_data = pd.read_csv(weather_data)
vacation_data
```
### Humidity Heatmap
* Configure gmaps.
* Use the Lat and Lng as locations and Humidity as the weight.
* Add Heatmap layer to map.
```
gmaps.configure(api_key=g_key)
locations = vacation_data[["Lat", "Lng"]]
weights = vacation_data ["Humidity"]
fig = gmaps.figure()
heat_layer = gmaps.heatmap_layer(locations, weights=weights, dissipating=False)
heat_layer.max_intensity = 75
heat_layer.point_radius = 2
fig.add_layer(heat_layer)
fig
```
### Create new DataFrame fitting weather criteria
* Narrow down the cities to fit weather conditions.
* Drop any rows with null values.
```
hotel_df = vacation_data[(vacation_data['Max_temp'] >= 25) & (vacation_data['Max_temp'] <= 75)]
hotel_df= hotel_df[(hotel_df['Windspeed'] <= 10) & (hotel_df['Cloudiness'] <= 10) & (hotel_df['Humidity'] <= 75)]
hotel_df
```
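The criteria above also call for dropping rows with null values; the cell as written only filters on the weather columns, so a minimal (hypothetical) addition would be:
```
# drop any remaining rows with null values (not part of the original notebook)
hotel_df = hotel_df.dropna()
```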
### Hotel Map
* Store into variable named `hotel_df`.
* Add a "Hotel Name" column to the DataFrame.
* Set parameters to search for hotels with 5000 meters.
* Hit the Google Places API for each city's coordinates.
* Store the first Hotel result into the DataFrame.
* Plot markers on top of the heatmap.
```
hotel_df["Hotel_name"]=""
hotel_df
hotel_df.count()
for index, row in hotel_df.iterrows():
try:
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
params = {
"keyword": "hotel",
"radius": 5000,
"key": g_key,
}
lat = row['Lat']
lng = row['Lng']
params['location'] = f"{lat}, {lng}"
response = requests.get(base_url, params=params).json()
hotel_df.loc[index, "Hotel_name"] = response["results"][0]["name"]
except IndexError:
hotel_df.loc[index, "Hotel_name"] = "NaN"
hotel_df
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel_name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
# Add marker layer on top of heat map
markers = gmaps.marker_layer(locations, info_box_content=hotel_info)
fig.add_layer(markers)
# Display figure
fig
```
|
github_jupyter
|
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
weather_data= os.path.join("../output_data/weather_data.csv")
vacation_data = pd.read_csv(weather_data)
vacation_data
gmaps.configure(api_key=g_key)
locations = vacation_data[["Lat", "Lng"]]
weights = vacation_data ["Humidity"]
fig = gmaps.figure()
heat_layer = gmaps.heatmap_layer(locations, weights=weights, dissipating=False)
heat_layer.max_intensity = 75
heat_layer.point_radius = 2
fig.add_layer(heat_layer)
fig
hotel_df = vacation_data[(vacation_data['Max_temp'] >= 25) & (vacation_data['Max_temp'] <= 75)]
hotel_df= hotel_df[(hotel_df['Windspeed'] <= 10) & (hotel_df['Cloudiness'] <= 10) & (hotel_df['Humidity'] <= 75)]
hotel_df
hotel_df["Hotel_name"]=""
hotel_df
hotel_df.count()
for index, row in hotel_df.iterrows():
try:
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
params = {
"keyword": "hotel",
"radius": 5000,
"key": g_key,
}
lat = row['Lat']
lng = row['Lng']
params['location'] = f"{lat}, {lng}"
response = requests.get(base_url, params=params).json()
hotel_df.loc[index, "Hotel_name"] = response["results"][0]["name"]
except IndexError:
hotel_df.loc[index, "Hotel_name"] = "NaN"
hotel_df
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel_name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
# Add marker layer on top of heat map
markers = gmaps.marker_layer(locations, info_box_content=hotel_info)
fig.add_layer(markers)
# Display figure
fig
| 0.322419 | 0.86771 |
# Draw segments on the picture
This tutorial uses the DeepLab semantic segmentation model to draw segments on a picture.
```
import (
"log"
"github.com/danielinspring/go-tflite"
)
```
Load the model.
```
model := tflite.NewModelFromFile("deeplabv3_257_mv_gpu.tflite")
if model == nil {
log.Fatal("cannot load model")
}
interpreter := tflite.NewInterpreter(model, nil)
if interpreter == nil {
log.Fatal("cannot create interpreter")
}
interpreter.AllocateTensors()
input := interpreter.GetInputTensor(0)
wanted_height := input.Dim(1)
wanted_width := input.Dim(2)
wanted_type := input.Type()
```
Then load an example image.
```
import (
"os"
"image"
_ "image/jpeg"
_ "image/png"
"github.com/nfnt/resize"
)
f, err := os.Open("example.jpg")
if err != nil {
log.Fatal(err)
}
img, _, err := image.Decode(f)
if err != nil {
log.Fatal(err)
}
resized := resize.Resize(uint(wanted_width), uint(wanted_height), img, resize.NearestNeighbor)
resized
```
The input tensor is an array of float32, so fill it with the pixels of the loaded image, normalized with the model's mean/std_dev.
```
bounds := resized.Bounds()
dx, dy := bounds.Dx(), bounds.Dy()
ff := input.Float32s()
for y := 0; y < dy; y++ {
for x := 0; x < dx; x++ {
col := resized.At(x, y)
r, g, b, _ := col.RGBA()
ff[(y*wanted_width+x)*3+0] = ((float32(r) / 255) - 127.0) / 127.0
ff[(y*wanted_width+x)*3+1] = ((float32(g) / 255) - 127.0) / 127.0
ff[(y*wanted_width+x)*3+2] = ((float32(b) / 255) - 127.0) / 127.0
}
}
interpreter.Invoke()
```
For each pixel, the output tensor holds 21 candidate class scores; the class with the highest score wins. Below is a table of the 21 colors used to draw the classes.
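As a rough illustration of that layout (this Python/NumPy sketch is not part of the tutorial and uses toy random data in place of the real output buffer), the per-pixel argmax performed by the Go loop below looks like:
```
import numpy as np

# toy stand-in for the DeepLab output: height x width pixels, 21 class scores each
height, width, num_classes = 257, 257, 21
scores = np.random.rand(height, width, num_classes).astype(np.float32)

class_index = scores.argmax(axis=-1)  # per-pixel winning class, shape (height, width)
print(class_index.shape, class_index.min(), class_index.max())
```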
```
import "image/color"
var colors = [21][3]uint8{
{0, 0, 0},
{128, 0, 0},
{0, 128, 0},
{128, 128, 0},
{0, 0, 128},
{128, 0, 128},
{0, 128, 128},
{128, 128, 128},
{64, 0, 0},
{192, 0, 0},
{64, 128, 0},
{192, 128, 0},
{64, 0, 128},
{192, 0, 128},
{64, 128, 128},
{192, 128, 128},
{0, 64, 0},
{128, 64, 0},
{0, 192, 0},
{128, 192, 0},
{0, 64, 128},
}
canvas := image.NewRGBA(resized.Bounds())
output := interpreter.GetOutputTensor(0)
ff := output.Float32s()
for y := 0; y < dy; y++ {
for x := 0; x < dx; x++ {
ci := 0
cv := float32(-32767)
off := (y*dx + x) * 21
for i := 0; i < 21; i++ {
v := ff[off+i]
if cv < v {
cv = v
ci = i
}
}
c := colors[ci]
canvas.Set(x, y, color.RGBA{R: c[0], G: c[1], B: c[2], A: 100})
}
}
canvas
```
Finally, overlay the segments on the original picture.
```
import "image/draw"
canvasImg := resize.Resize(uint(img.Bounds().Dx()), uint(img.Bounds().Dy()), canvas, resize.NearestNeighbor)
base := image.NewRGBA(img.Bounds())
draw.Draw(base, base.Bounds(), img, image.Pt(0, 0), draw.Src)
draw.Draw(base, base.Bounds(), canvasImg, image.Pt(0, 0), draw.Over)
resize.Resize(500, 0, base, resize.NearestNeighbor)
```
|
github_jupyter
|
import (
"log"
"github.com/danielinspring/go-tflite"
)
model := tflite.NewModelFromFile("deeplabv3_257_mv_gpu.tflite")
if model == nil {
log.Fatal("cannot load model")
}
interpreter := tflite.NewInterpreter(model, nil)
if interpreter == nil {
log.Fatal("cannot create interpreter")
}
interpreter.AllocateTensors()
input := interpreter.GetInputTensor(0)
wanted_height := input.Dim(1)
wanted_width := input.Dim(2)
wanted_type := input.Type()
import (
"os"
"image"
_ "image/jpeg"
_ "image/png"
"github.com/nfnt/resize"
)
f, err := os.Open("example.jpg")
if err != nil {
log.Fatal(err)
}
img, _, err := image.Decode(f)
if err != nil {
log.Fatal(err)
}
resized := resize.Resize(uint(wanted_width), uint(wanted_height), img, resize.NearestNeighbor)
resized
bounds := resized.Bounds()
dx, dy := bounds.Dx(), bounds.Dy()
ff := input.Float32s()
for y := 0; y < dy; y++ {
for x := 0; x < dx; x++ {
col := resized.At(x, y)
r, g, b, _ := col.RGBA()
ff[(y*wanted_width+x)*3+0] = ((float32(r) / 255) - 127.0) / 127.0
ff[(y*wanted_width+x)*3+1] = ((float32(g) / 255) - 127.0) / 127.0
ff[(y*wanted_width+x)*3+2] = ((float32(b) / 255) - 127.0) / 127.0
}
}
interpreter.Invoke()
import "image/color"
var colors = [21][3]uint8{
{0, 0, 0},
{128, 0, 0},
{0, 128, 0},
{128, 128, 0},
{0, 0, 128},
{128, 0, 128},
{0, 128, 128},
{128, 128, 128},
{64, 0, 0},
{192, 0, 0},
{64, 128, 0},
{192, 128, 0},
{64, 0, 128},
{192, 0, 128},
{64, 128, 128},
{192, 128, 128},
{0, 64, 0},
{128, 64, 0},
{0, 192, 0},
{128, 192, 0},
{0, 64, 128},
}
canvas := image.NewRGBA(resized.Bounds())
output := interpreter.GetOutputTensor(0)
ff := output.Float32s()
for y := 0; y < dy; y++ {
for x := 0; x < dx; x++ {
ci := 0
cv := float32(-32767)
off := (y*dx + x) * 21
for i := 0; i < 21; i++ {
v := ff[off+i]
if cv < v {
cv = v
ci = i
}
}
c := colors[ci]
canvas.Set(x, y, color.RGBA{R: c[0], G: c[1], B: c[2], A: 100})
}
}
canvas
import "image/draw"
canvasImg := resize.Resize(uint(img.Bounds().Dx()), uint(img.Bounds().Dy()), canvas, resize.NearestNeighbor)
base := image.NewRGBA(img.Bounds())
draw.Draw(base, base.Bounds(), img, image.Pt(0, 0), draw.Src)
draw.Draw(base, base.Bounds(), canvasImg, image.Pt(0, 0), draw.Over)
resize.Resize(500, 0, base, resize.NearestNeighbor)
| 0.485112 | 0.901053 |
```
from gravipy.tensorial import *
from sympy import *
```
Compute the Christoffel symbols of the Schwarzschild metric.
We use [gravipy](https://github.com/wojciechczaja/GraviPy).
The tutorial ipynb file in the docs directory of the repository was used as a reference.
Below we adopt a metric of signature $(−,+,+,+)$.
```
t, r, theta, phi, M = symbols('t, r, theta, phi, M')
chi = Coordinates('\chi', [t, r, theta, phi])
Metric = diag(-(1-2*M/r), 1/(1-2*M/r), r**2, r**2*sin(theta)**2) # Schwarzschild metric
g = MetricTensor('g', chi, Metric)
Ga = Christoffel('Ga', g)
# metric tensor
g(All, All)
# Christoffel symbols
Ga(-All, All, All)
```
Next, compute the four-acceleration given by the geodesic equation
$$
a^{\alpha} = \frac{d^2x^{\alpha}}{d\lambda^2}= -\Gamma^{\alpha}_{\mu \nu} \frac{dx^{\mu}}{d\lambda} \frac{dx^{\nu}}{d\lambda} = -\Gamma^{\alpha}_{\mu \nu} v^{\mu}v^{\nu}
$$
where $\lambda$ is some parameter (not the proper time).
```
from itertools import product
var("v_0, v_1, v_2, v_3")
var("a_0, a_1, a_2, a_3")
a_list = [a_0, a_1, a_2, a_3]
v_list = [v_0, v_1, v_2, v_3]
for i in range(4):
a_list[i] = 0
# contract the indices
for i, j, k in product(range(4), repeat=3):
a_list[i] -= Ga( -i-1, j + 1, k + 1)*v_list[j]*v_list[k]
for i in range(4):
display(a_list[i])
```
For execution speed, convert the sympy expressions into functions.
Reference: https://docs.sympy.org/latest/modules/utilities/lambdify.html
```
from sympy.utilities.lambdify import lambdify
a_func = lambdify((t, r, theta, phi, M, v_0, v_1, v_2, v_3), a_list)
```
Define a function that takes the position and velocity four-vectors $x^\mu, v^\mu$ and returns the four-acceleration $a^\mu$.
```
import numpy as np
a = lambda x, v: np.array(a_func(x[0], x[1], x[2], x[3], 1, v[0], v[1], v[2], v[3]))
```
The time evolution is computed with the Runge–Kutta method ([wikipedia](https://ja.wikipedia.org/wiki/%E3%83%AB%E3%83%B3%E3%82%B2%EF%BC%9D%E3%82%AF%E3%83%83%E3%82%BF%E6%B3%95)).
A Runge–Kutta simulation of Newton's equations of motion was used as a reference:
https://www.compadre.org/PICUP/resources/Numerical-Integration/
The problem we want to solve is
$$
\begin{align}
&\frac{dv^\mu}{d\lambda} = a^\mu(x^\mu, v^\mu)\\
&\frac{dx^\mu}{d\lambda} = v^\mu
\end{align}
$$
so we compute
$$
\begin{align}
&k^\mu_{1v} = a^\mu(x^\mu, v^\mu)d\lambda \\
&k^\mu_{1x} = v^\mu d\lambda \\
&k^\mu_{2v} = a^\mu(x^\mu + \frac{k^\mu_{1x}}{2}, v^\mu+ \frac{k^\mu_{1v}}{2})d\lambda\\
&k^\mu_{2x} = ( v^\mu+ \frac{k^\mu_{1v}}{2})d\lambda \\
&k^\mu_{3v} = a^\mu(x^\mu + \frac{k^\mu_{2x}}{2}, v^\mu+ \frac{k^\mu_{2v}}{2})d\lambda\\
&k^\mu_{3x} = ( v^\mu+ \frac{k^\mu_{2v}}{2})d\lambda\\
&k^\mu_{4v} = a^\mu(x^\mu + k^\mu_{3x}, v^\mu + k^\mu_{3v})d\lambda\\
&k^\mu_{4x} = (v^\mu + k^\mu_{3v})d\lambda\\
\end{align}
$$
and then update $x^\mu$ and $v^\mu$ as
$$
\begin{align}
x^\mu_{\mathrm{next}} = x^\mu + \frac{1}{6}(k^\mu_{1x} + 2k^\mu_{2x} + 2k^\mu_{3x} + k^\mu_{4x}) \\
v^\mu_{\mathrm{next}} = v^\mu + \frac{1}{6}(k^\mu_{1v} + 2k^\mu_{2v} + 2k^\mu_{3v} + k^\mu_{4v})
\end{align}
$$
Note that the parameter $\lambda$ plays the role that $t$ plays in Newton's equations of motion.
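The loop below spells out these four stages explicitly. As a side sketch (not part of the original notebook), the same update could be packaged into a helper that takes any acceleration function `a(x, v)` returning a NumPy array:
```
def rk4_step(x, v, a, dlam):
    # one 4th-order Runge-Kutta step for dx/d\lambda = v, dv/d\lambda = a(x, v)
    k1v = a(x, v) * dlam
    k1x = v * dlam
    k2v = a(x + k1x / 2, v + k1v / 2) * dlam
    k2x = (v + k1v / 2) * dlam
    k3v = a(x + k2x / 2, v + k2v / 2) * dlam
    k3x = (v + k2v / 2) * dlam
    k4v = a(x + k3x, v + k3v) * dlam
    k4x = (v + k3v) * dlam
    x_next = x + (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    v_next = v + (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return x_next, v_next
```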
```
N = 10**5 # number of integration steps
x = np.array([0.0, 17.32050808, 0.95531662, -0.78539816]) # initial position
# give the theta component (x[2]) a nonzero value; theta = 0 would cause a 0/0 computation
v = np.array([1, -0.02886728, -0.00824957, 0.01750001]) # initial velocity
# choosing \lambda = t near t = 0 gives dt/d\lambda = 1, so the time component of the velocity is set to 1
# the spatial components of the velocity are chosen so that the orbit is an ellipse
dlam = 0.1 # \lambda step per iteration
R = []
Theta = []
Phi = []
T = []
for _ in range(N):
T.append(x[0])
R.append(x[1])
Theta.append(x[2])
Phi.append(x[3])
k1v = a(x, v)*dlam
k1x = v*dlam
k2v = a(x+k1x/2, v+k1v/2)*dlam
k2x = (v+k1v/2)*dlam
k3v = a(x+k2x/2, v+k2v/2)*dlam
k3x = (v+k2v/2)*dlam
k4v = a(x+k3x, v+k3v)*dlam
k4x = (v+k3v)*dlam
v = v + (1/6)*(k1v+2*k2v+2*k3v+k4v)
x = x + (1/6)*(k1x+2*k2x+2*k3x+k4x)
X = R*np.cos(Phi)*np.sin(Theta)
Y = R*np.sin(Phi)*np.sin(Theta)
Z = R*np.cos(Theta)
```
Right now $x$, $y$, $z$ are parameterized by the auxiliary parameter $\lambda$; we want to re-parameterize them by $t$, so we interpolate the time series.
Reference:
https://qiita.com/kenichi-hamaguchi/items/3c5e63e195e06a21d1da
```
dt = 10 # time step
T_new = np.arange(0, T[-1], dt)
X_new = np.interp(T_new, T, X)
Y_new = np.interp(T_new, T, Y)
Z_new = np.interp(T_new, T, Z)
```
Implement the animation.
```
%matplotlib nbagg
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.animation as animation
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
L = 50 #描画空間のサイズ
def update(i):
ax.clear()
ax.scatter(0, 0, 0, marker="o", c="orange", s=100)
ax.plot(X_new[:i], Y_new[:i], Z_new[:i], c="black", alpha = 0.4)
ax.scatter(X_new[i], Y_new[i], Z_new[i], marker="o", c="blue", s=10)
ax.set_title(r"$t=$"+str(int(T_new[i])))
ax.view_init(elev=30, azim=225)
ax.set_xlim(-L, L)
ax.set_ylim(-L, L)
ax.set_zlim(-L, L)
ani = animation.FuncAnimation(fig, update, frames=len(T_new), interval=1)
```
# Computing the proper time
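With the $(−,+,+,+)$ signature used above, the proper time increment along a timelike trajectory follows directly from the line element,
$$
d\tau^2 = -ds^2 = -g_{\mu\nu}\,dx^{\mu}dx^{\nu},
$$
which is exactly what `dtau` computes below; the `+0j` keeps the square root well-defined even if the interval momentarily fails to be timelike.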
```
var("ds2, dx_0, dx_1, dx_2, dx_3")
dx_list = [dx_0, dx_1, dx_2, dx_3]
ds2 = 0
for i, j in product(range(4), repeat=2):
ds2 += g(i+1,j+1)*dx_list[i]*dx_list[j]
ds2_func = lambdify((t, r, theta, phi, M, dx_0, dx_1, dx_2, dx_3), ds2)
dtau = lambda x, dx: np.sqrt(-ds2_func(x[0], x[1], x[2], x[3], 1, dx[0], dx[1], dx[2], dx[3]) + 0j)
N = 10**5
x = np.array([0.0, 17.32050808, 0.95531662, -0.78539816])
v = np.array([1, -0.02886728, -0.00824957, 0.01750001])
dlam = 0.1
R = []
Theta = []
Phi = []
T = []
tau = 0 # proper time
Tau = []
for _ in range(N):
Tau.append(tau)
T.append(x[0])
R.append(x[1])
Theta.append(x[2])
Phi.append(x[3])
k1v = a(x, v)*dlam
k1x = v*dlam
k2v = a(x+k1x/2, v+k1v/2)*dlam
k2x = (v+k1v/2)*dlam
k3v = a(x+k2x/2, v+k2v/2)*dlam
k3x = (v+k2v/2)*dlam
k4v = a(x+k3x, v+k3v)*dlam
k4x = (v+k3v)*dlam
v = v + (1/6)*(k1v+2*k2v+2*k3v+k4v)
x = x + (1/6)*(k1x+2*k2x+2*k3x+k4x)
tau = tau + dtau(x, (1/6)*(k1x+2*k2x+2*k3x+k4x))
X = R*np.cos(Phi)*np.sin(Theta)
Y = R*np.sin(Phi)*np.sin(Theta)
Z = R*np.cos(Theta)
dt = 10 # time step
T_new = np.arange(0, T[-1], dt)
X_new = np.interp(T_new, T, X)
Y_new = np.interp(T_new, T, Y)
Z_new = np.interp(T_new, T, Z)
R_new = np.interp(T_new, T, R)
Tau_new = np.interp(T_new, T, Tau)
Dtau_new = np.diff(Tau_new) # d\tau per time step dt
%matplotlib inline
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax1.plot(T_new[:-1], Dtau_new.real, label=r"$d\tau$")
ax2.plot(T_new[:-1], R_new[:-1], c="orange", label=r"$r$")
ax1.set_xlabel(r"$t$")
ax1.set_ylabel(r"$d\tau$")
ax2.set_ylabel(r"$r$")
handler1, label1 = ax1.get_legend_handles_labels()
handler2, label2 = ax2.get_legend_handles_labels()
ax1.legend(handler1 + handler2, label1 + label2, loc=2, borderaxespad=0.)
```
We can see that proper time runs more slowly when the particle approaches the gravitational source and faster when it moves away.
# The case of falling into the event horizon
Near $r=2M$, $d\lambda$ must be made small and the integration carried out carefully.
```
N = 10**5
x = np.array([0, 4, np.pi/2, 0])
# give the theta component a nonzero value;
# theta = 0 would cause a 0/0 computation
v = np.array([1, 0, 0, 0.11])
R = []
Theta = []
Phi = []
T = []
tau = 0 # proper time
Tau = []
np.seterr(all="raise") # stop the calculation when a floating-point error occurs
for _ in range(N):
try:
        # shrink d\lambda as we approach the event horizon
dlam = 0.01*(np.abs(x[1] - 2))
Tau.append(tau)
T.append(x[0])
R.append(x[1])
Theta.append(x[2])
Phi.append(x[3])
k1v = a(x, v)*dlam
k1x = v*dlam
k2v = a(x+k1x/2, v+k1v/2)*dlam
k2x = (v+k1v/2)*dlam
k3v = a(x+k2x/2, v+k2v/2)*dlam
k3x = (v+k2v/2)*dlam
k4v = a(x+k3x, v+k3v)*dlam
k4x = (v+k3v)*dlam
v = v + (1/6)*(k1v+2*k2v+2*k3v+k4v)
x = x + (1/6)*(k1x+2*k2x+2*k3x+k4x)
tau = tau + dtau(x, (1/6)*(k1x+2*k2x+2*k3x+k4x))
except FloatingPointError:
break
X = R*np.cos(Phi)*np.sin(Theta)
Y = R*np.sin(Phi)*np.sin(Theta)
Z = R*np.cos(Theta)
dt = 1 # time step
T_new = np.arange(0, T[-1], dt)
X_new = np.interp(T_new, T, X)
Y_new = np.interp(T_new, T, Y)
Z_new = np.interp(T_new, T, Z)
R_new = np.interp(T_new, T, R)
Tau_new = np.interp(T_new, T, Tau)
Dtau_new = np.diff(Tau_new) # d\tau per time step dt
%matplotlib nbagg
fig = plt.figure()
ax = fig.add_subplot(111)
circle_phi = np.linspace(0, 2*np.pi, 100)
circle_x = 2*np.cos(circle_phi)
circle_y = 2*np.sin(circle_phi)
L = 6 # size of the plotted region
def update(i):
ax.clear()
ax.plot(circle_x, circle_y, c="black")
ax.plot(X_new[:i], Y_new[:i], c="black", alpha=0.6)
ax.scatter(X_new[i], Y_new[i], marker="o", c="blue", s=10)
ax.set_title(r"$t=$"+str(int(T_new[i]))+"\t"+r"$\tau=$"+str(round(Tau_new[i].real,2)))
ax.set_xlim(-L, L)
ax.set_ylim(-L, L)
ax.set_aspect('equal')
ani = animation.FuncAnimation(fig, update, frames=len(T_new), interval=10)
%matplotlib inline
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax1.plot(T_new,Tau_new.real, label=r"$\tau$")
ax2.plot(T_new, R_new, c="orange", label=r"$r$")
ax1.set_xlabel(r"$t$")
ax1.set_ylabel(r"$\tau$")
ax2.set_ylabel(r"$r$")
handler1, label1 = ax1.get_legend_handles_labels()
handler2, label2 = ax2.get_legend_handles_labels()
ax1.legend(handler1 + handler2, label1 + label2, loc=2, borderaxespad=0.)
```
# The case of the Kerr metric
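For reference, the matrix built below encodes the Kerr metric in Boyer–Lindquist coordinates. Note that this section uses the $(+,−,−,−)$ sign convention, opposite to the Schwarzschild section above, which is why `dtau_kerr` below takes $\sqrt{+ds^2}$:
$$
ds^2 = \left(1-\frac{2Mr}{\rho^2}\right)dt^2 + \frac{4aMr\sin^2\theta}{\rho^2}\,dt\,d\phi - \frac{\rho^2}{\Delta}dr^2 - \rho^2 d\theta^2 - \sin^2\theta\left(r^2+a^2+\frac{2a^2Mr\sin^2\theta}{\rho^2}\right)d\phi^2,
$$
with $\rho^2 = r^2 + a^2\cos^2\theta$ and $\Delta = r^2 - 2Mr + a^2$.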
```
t, r, theta, phi, M, a, rhosq, Delta = symbols('t, r, theta, phi, M, a, rhosq, Delta')
chi = Coordinates('\chi', [t, r, theta, phi])
rhosq = r**2+(a**2)*cos(theta)**2
Delta = r**2-2*M*r+a**2
Metric_kerr = Matrix([[(1-(2*M*r)/rhosq),0,0,(2*a*M*r*sin(theta)**2)/rhosq], \
[0,-rhosq/Delta,0,0], [0,0,-rhosq,0], [(2*a*M*r*sin(theta)**2)/rhosq,0,0, \
-(sin(theta)**2)*((r**2+a**2)+(2*(a**2)*M*r*sin(theta)**2)/rhosq)]])
# this takes quite a while
g_kerr = MetricTensor('g_kerr', chi, Metric_kerr)
Ga_kerr = Christoffel('Ga_kerr', g_kerr)
var("v_0, v_1, v_2, v_3")
var("a_0, a_1, a_2, a_3")
a_list = [a_0, a_1, a_2, a_3]
v_list = [v_0, v_1, v_2, v_3]
for i in range(4):
a_list[i] = 0
#縮約を取る
for i, j, k in product(range(4), repeat=3):
a_list[i] -= Ga_kerr( -i-1, j + 1, k + 1)*v_list[j]*v_list[k]
a_kerr_func= lambdify((t, r, theta, phi, a, M, v_0, v_1, v_2, v_3), a_list)
# compute with a = 0.8, M = 1
a_kerr = lambda x, v: np.array(a_kerr_func(x[0], x[1], x[2], x[3], 0.8, 1, v[0], v[1], v[2], v[3]))
var("ds2_kerr, dx_0, dx_1, dx_2, dx_3")
dx_list = [dx_0, dx_1, dx_2, dx_3]
ds2_kerr = 0
for i, j in product(range(4), repeat=2):
ds2_kerr += g_kerr(i+1,j+1)*dx_list[i]*dx_list[j]
ds2_kerr_func = lambdify((t, r, theta, phi, a, M, dx_0, dx_1, dx_2, dx_3), ds2_kerr)
dtau_kerr = lambda x, dx: np.sqrt(ds2_kerr_func(x[0], x[1], x[2], x[3], 0.8, 1, dx[0], dx[1], dx[2], dx[3]) + 0j)
N = 10**4
x = np.array([0.0, 17.32050808, 0.95531662, -0.78539816])
v = np.array([1, -0.02886728, -0.00824957, 0.01750001])
dlam = 0.5
R = []
Theta = []
Phi = []
T = []
tau = 0 # proper time
Tau = []
for _ in range(N):
Tau.append(tau)
T.append(x[0])
R.append(x[1])
Theta.append(x[2])
Phi.append(x[3])
k1v = a_kerr(x, v)*dlam
k1x = v*dlam
k2v = a_kerr(x+k1x/2, v+k1v/2)*dlam
k2x = (v+k1v/2)*dlam
k3v = a_kerr(x+k2x/2, v+k2v/2)*dlam
k3x = (v+k2v/2)*dlam
k4v = a_kerr(x+k3x, v+k3v)*dlam
k4x = (v+k3v)*dlam
v = v + (1/6)*(k1v+2*k2v+2*k3v+k4v)
x = x + (1/6)*(k1x+2*k2x+2*k3x+k4x)
tau = tau + dtau_kerr(x, (1/6)*(k1x+2*k2x+2*k3x+k4x))
X = R*np.cos(Phi)*np.sin(Theta)
Y = R*np.sin(Phi)*np.sin(Theta)
Z = R*np.cos(Theta)
%matplotlib nbagg
dt = 10 # time step
T_new = np.arange(0, T[-1], dt)
X_new = np.interp(T_new, T, X)
Y_new = np.interp(T_new, T, Y)
Z_new = np.interp(T_new, T, Z)
Tau_new = np.interp(T_new, T, Tau)
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
L = 50 # size of the plotted region
def update(i):
ax.clear()
ax.scatter(0, 0, 0, marker="o", c="orange", s=100)
ax.plot(X_new[:i], Y_new[:i], Z_new[:i], c="black", alpha = 0.4)
ax.scatter(X_new[i], Y_new[i], Z_new[i], marker="o", c="blue", s=10)
ax.set_title(r"$t=$"+str(int(T_new[i]))+"\t"+r"$\tau=$"+str(int(Tau_new[i].real)))
ax.view_init(elev=30, azim=225)
ax.set_xlim(-L, L)
ax.set_ylim(-L, L)
ax.set_zlim(-L, L)
ani = animation.FuncAnimation(fig, update, frames=len(T_new), interval=1)
```
|
github_jupyter
|
from gravipy.tensorial import *
from sympy import *
t, r, theta, phi, M = symbols('t, r, theta, phi, M')
chi = Coordinates('\chi', [t, r, theta, phi])
Metric = diag(-(1-2*M/r), 1/(1-2*M/r), r**2, r**2*sin(theta)**2) # Schwarzschild metric
g = MetricTensor('g', chi, Metric)
Ga = Christoffel('Ga', g)
# metric tensor
g(All, All)
# Christoffel symbols
Ga(-All, All, All)
from itertools import product
var("v_0, v_1, v_2, v_3")
var("a_0, a_1, a_2, a_3")
a_list = [a_0, a_1, a_2, a_3]
v_list = [v_0, v_1, v_2, v_3]
for i in range(4):
a_list[i] = 0
# contract the indices
for i, j, k in product(range(4), repeat=3):
a_list[i] -= Ga( -i-1, j + 1, k + 1)*v_list[j]*v_list[k]
for i in range(4):
display(a_list[i])
from sympy.utilities.lambdify import lambdify
a_func = lambdify((t, r, theta, phi, M, v_0, v_1, v_2, v_3), a_list)
import numpy as np
a = lambda x, v: np.array(a_func(x[0], x[1], x[2], x[3], 1, v[0], v[1], v[2], v[3]))
N = 10**5 # number of integration steps
x = np.array([0.0, 17.32050808, 0.95531662, -0.78539816]) # initial position
# give the theta component (x[2]) a nonzero value; theta = 0 would cause a 0/0 computation
v = np.array([1, -0.02886728, -0.00824957, 0.01750001]) # initial velocity
# choosing \lambda = t near t = 0 gives dt/d\lambda = 1, so the time component of the velocity is set to 1
# the spatial components of the velocity are chosen so that the orbit is an ellipse
dlam = 0.1 # \lambda step per iteration
R = []
Theta = []
Phi = []
T = []
for _ in range(N):
T.append(x[0])
R.append(x[1])
Theta.append(x[2])
Phi.append(x[3])
k1v = a(x, v)*dlam
k1x = v*dlam
k2v = a(x+k1x/2, v+k1v/2)*dlam
k2x = (v+k1v/2)*dlam
k3v = a(x+k2x/2, v+k2v/2)*dlam
k3x = (v+k2v/2)*dlam
k4v = a(x+k3x, v+k3v)*dlam
k4x = (v+k3v)*dlam
v = v + (1/6)*(k1v+2*k2v+2*k3v+k4v)
x = x + (1/6)*(k1x+2*k2x+2*k3x+k4x)
X = R*np.cos(Phi)*np.sin(Theta)
Y = R*np.sin(Phi)*np.sin(Theta)
Z = R*np.cos(Theta)
dt = 10 # time step
T_new = np.arange(0, T[-1], dt)
X_new = np.interp(T_new, T, X)
Y_new = np.interp(T_new, T, Y)
Z_new = np.interp(T_new, T, Z)
%matplotlib nbagg
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.animation as animation
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
L = 50 # size of the plotted region
def update(i):
ax.clear()
ax.scatter(0, 0, 0, marker="o", c="orange", s=100)
ax.plot(X_new[:i], Y_new[:i], Z_new[:i], c="black", alpha = 0.4)
ax.scatter(X_new[i], Y_new[i], Z_new[i], marker="o", c="blue", s=10)
ax.set_title(r"$t=$"+str(int(T_new[i])))
ax.view_init(elev=30, azim=225)
ax.set_xlim(-L, L)
ax.set_ylim(-L, L)
ax.set_zlim(-L, L)
ani = animation.FuncAnimation(fig, update, frames=len(T_new), interval=1)
var("ds2, dx_0, dx_1, dx_2, dx_3")
dx_list = [dx_0, dx_1, dx_2, dx_3]
ds2 = 0
for i, j in product(range(4), repeat=2):
ds2 += g(i+1,j+1)*dx_list[i]*dx_list[j]
ds2_func = lambdify((t, r, theta, phi, M, dx_0, dx_1, dx_2, dx_3), ds2)
dtau = lambda x, dx: np.sqrt(-ds2_func(x[0], x[1], x[2], x[3], 1, dx[0], dx[1], dx[2], dx[3]) + 0j)
N = 10**5
x = np.array([0.0, 17.32050808, 0.95531662, -0.78539816])
v = np.array([1, -0.02886728, -0.00824957, 0.01750001])
dlam = 0.1
R = []
Theta = []
Phi = []
T = []
tau = 0 # proper time
Tau = []
for _ in range(N):
Tau.append(tau)
T.append(x[0])
R.append(x[1])
Theta.append(x[2])
Phi.append(x[3])
k1v = a(x, v)*dlam
k1x = v*dlam
k2v = a(x+k1x/2, v+k1v/2)*dlam
k2x = (v+k1v/2)*dlam
k3v = a(x+k2x/2, v+k2v/2)*dlam
k3x = (v+k2v/2)*dlam
k4v = a(x+k3x, v+k3v)*dlam
k4x = (v+k3v)*dlam
v = v + (1/6)*(k1v+2*k2v+2*k3v+k4v)
x = x + (1/6)*(k1x+2*k2x+2*k3x+k4x)
tau = tau + dtau(x, (1/6)*(k1x+2*k2x+2*k3x+k4x))
X = R*np.cos(Phi)*np.sin(Theta)
Y = R*np.sin(Phi)*np.sin(Theta)
Z = R*np.cos(Theta)
dt = 10 # time step
T_new = np.arange(0, T[-1], dt)
X_new = np.interp(T_new, T, X)
Y_new = np.interp(T_new, T, Y)
Z_new = np.interp(T_new, T, Z)
R_new = np.interp(T_new, T, R)
Tau_new = np.interp(T_new, T, Tau)
Dtau_new = np.diff(Tau_new) # d\tau per time step dt
%matplotlib inline
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax1.plot(T_new[:-1], Dtau_new.real, label=r"$d\tau$")
ax2.plot(T_new[:-1], R_new[:-1], c="orange", label=r"$r$")
ax1.set_xlabel(r"$t$")
ax1.set_ylabel(r"$d\tau$")
ax2.set_ylabel(r"$r$")
handler1, label1 = ax1.get_legend_handles_labels()
handler2, label2 = ax2.get_legend_handles_labels()
ax1.legend(handler1 + handler2, label1 + label2, loc=2, borderaxespad=0.)
N = 10**5
x = np.array([0, 4, np.pi/2, 0])
# give the theta component a nonzero value;
# theta = 0 would cause a 0/0 computation
v = np.array([1, 0, 0, 0.11])
R = []
Theta = []
Phi = []
T = []
tau = 0 # proper time
Tau = []
np.seterr(all="raise") # stop the calculation when a floating-point error occurs
for _ in range(N):
try:
        # shrink d\lambda as we approach the event horizon
dlam = 0.01*(np.abs(x[1] - 2))
Tau.append(tau)
T.append(x[0])
R.append(x[1])
Theta.append(x[2])
Phi.append(x[3])
k1v = a(x, v)*dlam
k1x = v*dlam
k2v = a(x+k1x/2, v+k1v/2)*dlam
k2x = (v+k1v/2)*dlam
k3v = a(x+k2x/2, v+k2v/2)*dlam
k3x = (v+k2v/2)*dlam
k4v = a(x+k3x, v+k3v)*dlam
k4x = (v+k3v)*dlam
v = v + (1/6)*(k1v+2*k2v+2*k3v+k4v)
x = x + (1/6)*(k1x+2*k2x+2*k3x+k4x)
tau = tau + dtau(x, (1/6)*(k1x+2*k2x+2*k3x+k4x))
except FloatingPointError:
break
X = R*np.cos(Phi)*np.sin(Theta)
Y = R*np.sin(Phi)*np.sin(Theta)
Z = R*np.cos(Theta)
dt = 1 # time step
T_new = np.arange(0, T[-1], dt)
X_new = np.interp(T_new, T, X)
Y_new = np.interp(T_new, T, Y)
Z_new = np.interp(T_new, T, Z)
R_new = np.interp(T_new, T, R)
Tau_new = np.interp(T_new, T, Tau)
Dtau_new = np.diff(Tau_new) # d\tau per time step dt
%matplotlib nbagg
fig = plt.figure()
ax = fig.add_subplot(111)
circle_phi = np.linspace(0, 2*np.pi, 100)
circle_x = 2*np.cos(circle_phi)
circle_y = 2*np.sin(circle_phi)
L = 6 # size of the plotted region
def update(i):
ax.clear()
ax.plot(circle_x, circle_y, c="black")
ax.plot(X_new[:i], Y_new[:i], c="black", alpha=0.6)
ax.scatter(X_new[i], Y_new[i], marker="o", c="blue", s=10)
ax.set_title(r"$t=$"+str(int(T_new[i]))+"\t"+r"$\tau=$"+str(round(Tau_new[i].real,2)))
ax.set_xlim(-L, L)
ax.set_ylim(-L, L)
ax.set_aspect('equal')
ani = animation.FuncAnimation(fig, update, frames=len(T_new), interval=10)
%matplotlib inline
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax1.plot(T_new,Tau_new.real, label=r"$\tau$")
ax2.plot(T_new, R_new, c="orange", label=r"$r$")
ax1.set_xlabel(r"$t$")
ax1.set_ylabel(r"$\tau$")
ax2.set_ylabel(r"$r$")
handler1, label1 = ax1.get_legend_handles_labels()
handler2, label2 = ax2.get_legend_handles_labels()
ax1.legend(handler1 + handler2, label1 + label2, loc=2, borderaxespad=0.)
t, r, theta, phi, M, a, rhosq, Delta = symbols('t, r, theta, phi, M, a, rhosq, Delta')
chi = Coordinates('\chi', [t, r, theta, phi])
rhosq = r**2+(a**2)*cos(theta)**2
Delta = r**2-2*M*r+a**2
Metric_kerr = Matrix([[(1-(2*M*r)/rhosq),0,0,(2*a*M*r*sin(theta)**2)/rhosq], \
[0,-rhosq/Delta,0,0], [0,0,-rhosq,0], [(2*a*M*r*sin(theta)**2)/rhosq,0,0, \
-(sin(theta)**2)*((r**2+a**2)+(2*(a**2)*M*r*sin(theta)**2)/rhosq)]])
# this takes quite a while
g_kerr = MetricTensor('g_kerr', chi, Metric_kerr)
Ga_kerr = Christoffel('Ga_kerr', g_kerr)
var("v_0, v_1, v_2, v_3")
var("a_0, a_1, a_2, a_3")
a_list = [a_0, a_1, a_2, a_3]
v_list = [v_0, v_1, v_2, v_3]
for i in range(4):
a_list[i] = 0
#縮約を取る
for i, j, k in product(range(4), repeat=3):
a_list[i] -= Ga_kerr( -i-1, j + 1, k + 1)*v_list[j]*v_list[k]
a_kerr_func= lambdify((t, r, theta, phi, a, M, v_0, v_1, v_2, v_3), a_list)
# compute with a = 0.8, M = 1
a_kerr = lambda x, v: np.array(a_kerr_func(x[0], x[1], x[2], x[3], 0.8, 1, v[0], v[1], v[2], v[3]))
var("ds2_kerr, dx_0, dx_1, dx_2, dx_3")
dx_list = [dx_0, dx_1, dx_2, dx_3]
ds2_kerr = 0
for i, j in product(range(4), repeat=2):
ds2_kerr += g_kerr(i+1,j+1)*dx_list[i]*dx_list[j]
ds2_kerr_func = lambdify((t, r, theta, phi, a, M, dx_0, dx_1, dx_2, dx_3), ds2_kerr)
dtau_kerr = lambda x, dx: np.sqrt(ds2_kerr_func(x[0], x[1], x[2], x[3], 0.8, 1, dx[0], dx[1], dx[2], dx[3]) + 0j)
N = 10**4
x = np.array([0.0, 17.32050808, 0.95531662, -0.78539816])
v = np.array([1, -0.02886728, -0.00824957, 0.01750001])
dlam = 0.5
R = []
Theta = []
Phi = []
T = []
tau = 0 # proper time
Tau = []
for _ in range(N):
Tau.append(tau)
T.append(x[0])
R.append(x[1])
Theta.append(x[2])
Phi.append(x[3])
k1v = a_kerr(x, v)*dlam
k1x = v*dlam
k2v = a_kerr(x+k1x/2, v+k1v/2)*dlam
k2x = (v+k1v/2)*dlam
k3v = a_kerr(x+k2x/2, v+k2v/2)*dlam
k3x = (v+k2v/2)*dlam
k4v = a_kerr(x+k3x, v+k3v)*dlam
k4x = (v+k3v)*dlam
v = v + (1/6)*(k1v+2*k2v+2*k3v+k4v)
x = x + (1/6)*(k1x+2*k2x+2*k3x+k4x)
tau = tau + dtau_kerr(x, (1/6)*(k1x+2*k2x+2*k3x+k4x))
X = R*np.cos(Phi)*np.sin(Theta)
Y = R*np.sin(Phi)*np.sin(Theta)
Z = R*np.cos(Theta)
%matplotlib nbagg
dt = 10 # time step
T_new = np.arange(0, T[-1], dt)
X_new = np.interp(T_new, T, X)
Y_new = np.interp(T_new, T, Y)
Z_new = np.interp(T_new, T, Z)
Tau_new = np.interp(T_new, T, Tau)
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
L = 50 # size of the plotted region
def update(i):
ax.clear()
ax.scatter(0, 0, 0, marker="o", c="orange", s=100)
ax.plot(X_new[:i], Y_new[:i], Z_new[:i], c="black", alpha = 0.4)
ax.scatter(X_new[i], Y_new[i], Z_new[i], marker="o", c="blue", s=10)
ax.set_title(r"$t=$"+str(int(T_new[i]))+"\t"+r"$\tau=$"+str(int(Tau_new[i].real)))
ax.view_init(elev=30, azim=225)
ax.set_xlim(-L, L)
ax.set_ylim(-L, L)
ax.set_zlim(-L, L)
ani = animation.FuncAnimation(fig, update, frames=len(T_new), interval=1)
| 0.2522 | 0.922203 |
```
import logging
logging.basicConfig()
log = logging.getLogger('test')
expected = [1,2,3,4,5,6,7,8,9]
def validate(arr, sortmethod=False):
if arr is None or len(arr) != 9:
return False
if sortmethod:
s = sorted(arr)
for i in range(9):
if i+1 != s[i]:
return False
return True
else:
s = sorted(arr)
return s == expected
tc = [1, 2, 3, 4, 5, 9, 7, 6, 8]
tc2 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
def benchmark(n, sm=True):
for _ in range(n):
assert validate(tc, sortmethod=sm)
assert not validate(tc2, sortmethod=sm)
assert not validate(None, sortmethod=sm)
assert not validate([], sortmethod=sm)
%timeit benchmark(1, True)
%timeit benchmark(1, False)
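# A possible further optimization (a sketch, not part of the original notebook):
# comparing against a set avoids sorting the row entirely.
expected_set = set(range(1, 10))
def validate_set(arr):
    return arr is not None and len(arr) == 9 and set(arr) == expected_set
assert validate_set(tc) and not validate_set(tc2) and not validate_set(None) and not validate_set([])
%timeit validate_set(tc)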
### Part 3
# Given a 9 x 9 (2-dimensional array) board, ensure that it's a valid solved sudoku board. Optimize it
# Valid matrix
v1 = [ [5, 3, 4, 6, 7, 8, 9, 1, 2],
[6, 7, 2, 1, 9, 5, 3, 4, 8],
[1, 9, 8, 3, 4, 2, 5, 6, 7],
[8, 5, 9, 7, 6, 1, 4, 2, 3],
[4, 2, 6, 8, 5, 3, 7, 9, 1],
[7, 1, 3, 9, 2, 4, 8, 5, 6],
[9, 6, 1, 5, 3, 7, 2, 8, 4],
[2, 8, 7, 4, 1, 9, 6, 3, 5],
[3, 4, 5, 2, 8, 6, 1, 7, 9]]
inv1= [ [1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9]]
inv2 = [ [5, 3, 4, 6, 7, 8, 9, 1, 2],
[6, 7, 2, 1, 9, 5, 3, 4, 8],
[1, 9, 8, 3, 4, 2, 5, 6, 7],
[8, 5, 9, 7, 6, 1, 4, 2, 3],
[4, 2, 6, 8, 5, 3, 7, 9, 1],
[7, 1, 3, 9, 2, 4, 8, 5, 6],
[9, 6, 1, 5, 3, 7, 2, 8, 4],
[2, 8, 7, 4, 1, 9, 6, 3, 5],
[3, 4, 5, 2, 8, 6, 1, 9, 7]
]
inv3=[[5, 3, 4, 6, 7, 8, 9, 1, 2],
[6, 7, 2, 1, 9, 5, 3, 4, 8],
[1, 9, 8, 3, 4, 2, 5, 6, 7],
[8, 5, 9, 7, 6, 1, 4, 2, 3],
[4, 2, 6, 8, 5, 3, 7, 9, 1],
[7, 1, 3, 9, 2, 4, 8, 5, 6],
[9, 6, 1, 5, 3, 7, 3, 8, 4],
[2, 8, 7, 4, 1, 9, 6, 3, 5],
[3, 4, 5, 2, 8, 6, 1, 7, 9]]
def validateBoard(m, validateRowsAndColumns=True):
if validateRowsAndColumns:
if m is None or len(m) != 9:
return False
for inner in m:
if not validate(inner):
return False
for i in range(9):
col = [r[i] for r in m]
if not validate(col):
return False
for i in range(0,9,3):
for j in range(0,9,3):
box = [r[j:j+3] for r in m[i:i+3]]
line = [b for r in box for b in r]
log.debug('line: %s', line)
if not validate(line):
return False
return True
def check(testcase=None):
assert validateBoard(v1)
assert not validateBoard(inv1)
assert not validateBoard(inv2)
assert not validateBoard(inv3, False)
log.setLevel(logging.DEBUG)
check()
i = 3
# rows = v1[i:i+3]
# print(rows)
j = 3
box = [r[j:j+3] for r in v1[i:i+3]]
line = []
for r in box:
line.extend(r)
line
```
|
github_jupyter
|
import logging
logging.basicConfig()
log = logging.getLogger('test')
expected = [1,2,3,4,5,6,7,8,9]
def validate(arr, sortmethod=False):
if arr is None or len(arr) != 9:
return False
if sortmethod:
s = sorted(arr)
for i in range(9):
if i+1 != s[i]:
return False
return True
else:
s = sorted(arr)
return s == expected
tc = [1, 2, 3, 4, 5, 9, 7, 6, 8]
tc2 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
def benchmark(n, sm=True):
for _ in range(n):
assert validate(tc, sortmethod=sm)
assert not validate(tc2, sortmethod=sm)
assert not validate(None, sortmethod=sm)
assert not validate([], sortmethod=sm)
%timeit benchmark(1, True)
%timeit benchmark(1, False)
### Part 3
# Given a 9 x 9 (2-dimensional array) board, ensure that it's a valid solved sudoku board. Optimize it
# Valid matrix
v1 = [ [5, 3, 4, 6, 7, 8, 9, 1, 2],
[6, 7, 2, 1, 9, 5, 3, 4, 8],
[1, 9, 8, 3, 4, 2, 5, 6, 7],
[8, 5, 9, 7, 6, 1, 4, 2, 3],
[4, 2, 6, 8, 5, 3, 7, 9, 1],
[7, 1, 3, 9, 2, 4, 8, 5, 6],
[9, 6, 1, 5, 3, 7, 2, 8, 4],
[2, 8, 7, 4, 1, 9, 6, 3, 5],
[3, 4, 5, 2, 8, 6, 1, 7, 9]]
inv1= [ [1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 5, 6, 7, 8, 9]]
inv2 = [ [5, 3, 4, 6, 7, 8, 9, 1, 2],
[6, 7, 2, 1, 9, 5, 3, 4, 8],
[1, 9, 8, 3, 4, 2, 5, 6, 7],
[8, 5, 9, 7, 6, 1, 4, 2, 3],
[4, 2, 6, 8, 5, 3, 7, 9, 1],
[7, 1, 3, 9, 2, 4, 8, 5, 6],
[9, 6, 1, 5, 3, 7, 2, 8, 4],
[2, 8, 7, 4, 1, 9, 6, 3, 5],
[3, 4, 5, 2, 8, 6, 1, 9, 7]
]
inv3=[[5, 3, 4, 6, 7, 8, 9, 1, 2],
[6, 7, 2, 1, 9, 5, 3, 4, 8],
[1, 9, 8, 3, 4, 2, 5, 6, 7],
[8, 5, 9, 7, 6, 1, 4, 2, 3],
[4, 2, 6, 8, 5, 3, 7, 9, 1],
[7, 1, 3, 9, 2, 4, 8, 5, 6],
[9, 6, 1, 5, 3, 7, 3, 8, 4],
[2, 8, 7, 4, 1, 9, 6, 3, 5],
[3, 4, 5, 2, 8, 6, 1, 7, 9]]
def validateBoard(m, validateRowsAndColumns=True):
if validateRowsAndColumns:
if m is None or len(m) != 9:
return False
for inner in m:
if not validate(inner):
return False
for i in range(9):
col = [r[i] for r in m]
if not validate(col):
return False
for i in range(0,9,3):
for j in range(0,9,3):
box = [r[j:j+3] for r in m[i:i+3]]
line = [b for r in box for b in r]
log.debug('line: %s', line)
if not validate(line):
return False
return True
def check(testcase=None):
assert validateBoard(v1)
assert not validateBoard(inv1)
assert not validateBoard(inv2)
assert not validateBoard(inv3, False)
log.setLevel(logging.DEBUG)
check()
i = 3
# rows = v1[i:i+3]
# print(rows)
j = 3
box = [r[j:j+3] for r in v1[i:i+3]]
line = []
for r in box:
line.extend(r)
line
| 0.390825 | 0.646976 |
```
# This notebook finds all the raw CSV and JSON data files, reads and combines them, then separates them into check-ins and check-outs and selects the metro records
# The notebook outputs two CSVs that contain the check-ins and check-outs for the metro stations
import pandas as pd
file_names = []
import os
'''For the given path, get the List of all files in the directory tree '''
def getListOfFiles(dirName):
# create a list of file and sub directories
# names in the given directory
listOfFile = os.listdir(dirName)
allFiles = list()
# Iterate over all the entries
for entry in listOfFile:
# Create full path
fullPath = os.path.join(dirName, entry)
# If entry is a directory then get the list of files in this directory
if os.path.isdir(fullPath):
allFiles = allFiles + getListOfFiles(fullPath)
else:
allFiles.append(fullPath)
return allFiles
def main():
dirName = '/Users/annalorincz/Predicting Eventually/Raw_data_folders';
# Get the list of all files in directory tree at given path
listOfFiles = getListOfFiles(dirName)
# Print the files
for elem in listOfFiles:
print(elem)
print ("****************")
# Get the list of all files in directory tree at given path
listOfFiles = list()
for (dirpath, dirnames, filenames) in os.walk(dirName):
listOfFiles += [os.path.join(dirpath, file) for file in filenames]
# Print the files
for elem in listOfFiles:
file_names.append(elem)
if __name__ == '__main__':
main()
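# Alternative sketch (not used by the pipeline below): a single pathlib pass collects
# the same recursive file list that getListOfFiles() and the os.walk() loop build above.
from pathlib import Path
file_names_alt = [str(p) for p in Path('/Users/annalorincz/Predicting Eventually/Raw_data_folders').rglob("*") if p.is_file()]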
file_names_new = []
file_names_csv = []
file_names_new_hourly = []
file_names_json = []
for i in file_names:
if ".DS_Store" not in i:
file_names_new.append(i)
for i in file_names_new:
if ".json." in i:
file_names_json.append(i)
for i in file_names_new:
if ".csv" in i:
file_names_csv.append(i)
for i in file_names_csv:
if "_Uur_" in i:
file_names_new_hourly.append(i)
#combine all files in the list for CSV
combined_csv_hourly = pd.concat([pd.read_csv(f, sep=";") for f in file_names_new_hourly])
#separating check-ins and check-outs
check_out_csv = combined_csv_hourly[combined_csv_hourly["UurgroepOmschrijving (van vertrek)"].isnull()]
check_in_csv = combined_csv_hourly[combined_csv_hourly["UurgroepOmschrijving (van aankomst)"].isnull()]
#combine all files in the list for JSON
combined_json_all = pd.concat([pd.read_json(f, lines=True) for f in file_names_json])
#separating check-ins and check-outs
check_out_json = combined_json_all[combined_json_all["UurgroepOmschrijving (van vertrek)"].isnull()]
check_in_json = combined_json_all[combined_json_all["UurgroepOmschrijving (van aankomst)"].isnull()]
print(len(file_names_new_hourly))
print(len(file_names_json))
print(len(file_names_csv))
#combining CSVs and JSONs
check_in = pd.concat([check_in_json,check_in_csv])
check_out = pd.concat([check_out_json,check_out_csv])
#Selecting the records for the metro stations
check_out = check_out[check_out['AankomstHalteCode'].str.contains('[A-Za-z]', na = False)]
check_out = check_out[check_out['AankomstHalteNaam'] != '[[ Onbekend ]]'] #take out unknown
#Selecting the records for the metro stations
check_in = check_in[check_in['VertrekHalteCode'].str.contains('[A-Za-z]', na = False)]
check_in = check_in[check_in['VertrekHalteNaam'] != '[[ Onbekend ]]'] #take out unknown
check_out = check_out[["Datum", "AantalReizen","UurgroepOmschrijving (van aankomst)", "AankomstHalteNaam", "AankomstHalteCode" ]]
check_out = check_out[check_out["AantalReizen"].notnull()]
check_in = check_in[["Datum", "AantalReizen","UurgroepOmschrijving (van vertrek)", "VertrekHalteNaam", "VertrekHalteCode" ]]
check_in = check_in[check_in["AantalReizen"].notnull()]
check_out_new = pd.DataFrame(columns = ['HalteNaam',"AantalReizen", 'datetime', 'hour', 'week', "month",'year', 'weekday'])
splitted = check_out["Datum"].str.split("/",expand=True)
months = splitted[0]
days = splitted[1]
years = splitted[2].str.split(" ",expand=True)[0]
hours = check_out["UurgroepOmschrijving (van aankomst)"].str.split(" ",expand=True)[0].str.split(":",expand=True)[0]
check_out_new["HalteNaam"] = check_out["AankomstHalteNaam"]
check_out_new["AantalReizen"] = check_out["AantalReizen"]
check_out_new["datetime"] = check_out["Datum"]
check_out_new["hour"] = hours
check_out_new["month"] = months
check_out_new["year"] = years
check_in_new = pd.DataFrame(columns = ['HalteNaam',"AantalReizen", 'datetime', 'hour', 'week', "month",'year', 'weekday'])
splitted = check_in["Datum"].str.split("/",expand=True)
months = splitted[0]
days = splitted[1]
years = splitted[2].str.split(" ",expand=True)[0]
hours = check_in["UurgroepOmschrijving (van vertrek)"].str.split(" ",expand=True)[0].str.split(":",expand=True)[0]
check_in_new["HalteNaam"] = check_in["VertrekHalteNaam"]
check_in_new["AantalReizen"] = check_in["AantalReizen"]
check_in_new["datetime"] = check_in["Datum"]
check_in_new["hour"] = hours
check_in_new["month"] = months
check_in_new["year"] = years
check_out_dates = check_out_new["datetime"]
check_out_dates = pd.to_datetime(check_out_dates)
check_out_week = []
check_out_weekday = []
check_out_date = []
for i in check_out_dates:
check_out_week.append(i.week)
check_out_weekday.append(i.weekday())
check_out_date.append(i.date())
check_out_new["week"] = check_out_week
check_out_new["weekday"] = check_out_weekday
check_out_new["datetime"] = check_out_date
check_in_dates = check_in_new["datetime"]
check_in_dates = pd.to_datetime(check_in_dates)
check_in_week = []
check_in_weekday = []
check_in_date = []
for i in check_in_dates:
check_in_week.append(i.week)
check_in_weekday.append(i.weekday())
check_in_date.append(i.date())
check_in_new["week"] = check_in_week
check_in_new["weekday"] = check_in_weekday
check_in_new["datetime"] = check_in_date
check_in_new.reset_index(inplace = True)
check_out_new.reset_index(inplace = True)
check_in_new.drop(columns= ["index"], inplace = True)
check_out_new.drop(columns= ["index"], inplace = True)
check_in_new["datetime"] = pd.to_datetime(check_in_new["datetime"])
check_in_new.to_csv("check_ins_metro_preprocessed.csv")
check_out_new.to_csv("check_out_metro_preprocessed.csv")
```
|
github_jupyter
|
# This notebook finds all the raw CSV and JSON data files, reads and combines them, then separates them into check-ins and check-outs and selects the metro records
# The notebook outputs two CSVs that contain the check-ins and check-outs for the metro stations
import pandas as pd
file_names = []
import os
'''For the given path, get the List of all files in the directory tree '''
def getListOfFiles(dirName):
# create a list of file and sub directories
# names in the given directory
listOfFile = os.listdir(dirName)
allFiles = list()
# Iterate over all the entries
for entry in listOfFile:
# Create full path
fullPath = os.path.join(dirName, entry)
# If entry is a directory then get the list of files in this directory
if os.path.isdir(fullPath):
allFiles = allFiles + getListOfFiles(fullPath)
else:
allFiles.append(fullPath)
return allFiles
def main():
dirName = '/Users/annalorincz/Predicting Eventually/Raw_data_folders';
# Get the list of all files in directory tree at given path
listOfFiles = getListOfFiles(dirName)
# Print the files
for elem in listOfFiles:
print(elem)
print ("****************")
# Get the list of all files in directory tree at given path
listOfFiles = list()
for (dirpath, dirnames, filenames) in os.walk(dirName):
listOfFiles += [os.path.join(dirpath, file) for file in filenames]
# Print the files
for elem in listOfFiles:
file_names.append(elem)
if __name__ == '__main__':
main()
file_names_new = []
file_names_csv = []
file_names_new_hourly = []
file_names_json = []
for i in file_names:
if ".DS_Store" not in i:
file_names_new.append(i)
for i in file_names_new:
if ".json." in i:
file_names_json.append(i)
for i in file_names_new:
if ".csv" in i:
file_names_csv.append(i)
for i in file_names_csv:
if "_Uur_" in i:
file_names_new_hourly.append(i)
#combine all files in the list for CSV
combined_csv_hourly = pd.concat([pd.read_csv(f, sep=";") for f in file_names_new_hourly])
#separating check-ins and check-outs
check_out_csv = combined_csv_hourly[combined_csv_hourly["UurgroepOmschrijving (van vertrek)"].isnull()]
check_in_csv = combined_csv_hourly[combined_csv_hourly["UurgroepOmschrijving (van aankomst)"].isnull()]
#combine all files in the list for JSON
combined_json_all = pd.concat([pd.read_json(f, lines=True) for f in file_names_json])
#separating check-ins and check-outs
check_out_json = combined_json_all[combined_json_all["UurgroepOmschrijving (van vertrek)"].isnull()]
check_in_json = combined_json_all[combined_json_all["UurgroepOmschrijving (van aankomst)"].isnull()]
print(len(file_names_new_hourly))
print(len(file_names_json))
print(len(file_names_csv))
#combining CSVs and JSONs
check_in = pd.concat([check_in_json,check_in_csv])
check_out = pd.concat([check_out_json,check_out_csv])
#Selecting the records for the metro stations
check_out = check_out[check_out['AankomstHalteCode'].str.contains('[A-Za-z]', na = False)]
check_out = check_out[check_out['AankomstHalteNaam'] != '[[ Onbekend ]]'] #take out unknown
#Selecting the records for the metro stations
check_in = check_in[check_in['VertrekHalteCode'].str.contains('[A-Za-z]', na = False)]
check_in = check_in[check_in['VertrekHalteNaam'] != '[[ Onbekend ]]'] #take out unknown
check_out = check_out[["Datum", "AantalReizen","UurgroepOmschrijving (van aankomst)", "AankomstHalteNaam", "AankomstHalteCode" ]]
check_out = check_out[check_out["AantalReizen"].notnull()]
check_in = check_in[["Datum", "AantalReizen","UurgroepOmschrijving (van vertrek)", "VertrekHalteNaam", "VertrekHalteCode" ]]
check_in = check_in[check_in["AantalReizen"].notnull()]
check_out_new = pd.DataFrame(columns = ['HalteNaam',"AantalReizen", 'datetime', 'hour', 'week', "month",'year', 'weekday'])
splitted = check_out["Datum"].str.split("/",expand=True)
months = splitted[0]
days = splitted[1]
years = splitted[2].str.split(" ",expand=True)[0]
hours = check_out["UurgroepOmschrijving (van aankomst)"].str.split(" ",expand=True)[0].str.split(":",expand=True)[0]
check_out_new["HalteNaam"] = check_out["AankomstHalteNaam"]
check_out_new["AantalReizen"] = check_out["AantalReizen"]
check_out_new["datetime"] = check_out["Datum"]
check_out_new["hour"] = hours
check_out_new["month"] = months
check_out_new["year"] = years
check_in_new = pd.DataFrame(columns = ['HalteNaam',"AantalReizen", 'datetime', 'hour', 'week', "month",'year', 'weekday'])
splitted = check_in["Datum"].str.split("/",expand=True)
months = splitted[0]
days = splitted[1]
years = splitted[2].str.split(" ",expand=True)[0]
hours = check_in["UurgroepOmschrijving (van vertrek)"].str.split(" ",expand=True)[0].str.split(":",expand=True)[0]
check_in_new["HalteNaam"] = check_in["VertrekHalteNaam"]
check_in_new["AantalReizen"] = check_in["AantalReizen"]
check_in_new["datetime"] = check_in["Datum"]
check_in_new["hour"] = hours
check_in_new["month"] = months
check_in_new["year"] = years
check_out_dates = check_out_new["datetime"]
check_out_dates = pd.to_datetime(check_out_dates)
check_out_week = []
check_out_weekday = []
check_out_date = []
for i in check_out_dates:
check_out_week.append(i.week)
check_out_weekday.append(i.weekday())
check_out_date.append(i.date())
check_out_new["week"] = check_out_week
check_out_new["weekday"] = check_out_weekday
check_out_new["datetime"] = check_out_date
check_in_dates = check_in_new["datetime"]
check_in_dates = pd.to_datetime(check_in_dates)
check_in_week = []
check_in_weekday = []
check_in_date = []
for i in check_in_dates:
check_in_week.append(i.week)
check_in_weekday.append(i.weekday())
check_in_date.append(i.date())
check_in_new["week"] = check_in_week
check_in_new["weekday"] = check_in_weekday
check_in_new["datetime"] = check_in_date
check_in_new.reset_index(inplace = True)
check_out_new.reset_index(inplace = True)
check_in_new.drop(columns= ["index"], inplace = True)
check_out_new.drop(columns= ["index"], inplace = True)
check_in_new["datetime"] = pd.to_datetime(check_in_new["datetime"])
check_in_new.to_csv("check_ins_metro_preprocessed.csv")
check_out_new.to_csv("check_out_metro_preprocessed.csv")
| 0.296247 | 0.38856 |
# Download Data, Preprocess It, and Upload to a Kaggle Dataset
1. In this notebook you are going to download the data from Zindi
2. Extract all the .zip files
3. Convert all train & test .wav files to a 32k sample rate
4. Upload the dataset to Kaggle for easy download in every notebook
### 1. Download data from zindi
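The five cells below repeat the same request-and-write pattern; as a compact sketch (the endpoint URLs are the ones used below, and `auth_token` is your own), the pattern could be wrapped in a small helper like this:
```
import os
import requests

def download_zindi_file(url, target_path, auth_token):
    # stream a competition file from Zindi to disk in 512-byte chunks
    os.makedirs(os.path.dirname(target_path) or ".", exist_ok=True)
    response = requests.post(url, data={"auth_token": auth_token}, stream=True)
    with open(target_path, "wb") as handle:
        for chunk in response.iter_content(chunk_size=512):
            if chunk:  # filter out keep-alive new chunks
                handle.write(chunk)
```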
```
%%time
import requests, os
import requests, zipfile
os.makedirs("input")
#the url and auth_value from the website
url = 'https://api.zindi.africa/v1/competitions/giz-nlp-agricultural-keyword-spotter/files/Train.csv'
myobj = {'auth_token': '############################'} #use your own
x = requests.post(url, data = myobj,stream=True)
target_path = 'input/Train.csv'
handle = open(target_path, "wb")
for chunk in x.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
handle.write(chunk)
handle.close()
#the url and auth_value from the website
url = 'https://api.zindi.africa/v1/competitions/giz-nlp-agricultural-keyword-spotter/files/SampleSubmission.csv'
myobj = {'auth_token': '############################'} #use your own
x = requests.post(url, data = myobj,stream=True)
target_path = 'input/SampleSubmission.csv'
handle = open(target_path, "wb")
for chunk in x.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
handle.write(chunk)
handle.close()
#the url and auth_value from the website
url = 'https://api.zindi.africa/v1/competitions/giz-nlp-agricultural-keyword-spotter/files/audio_files.zip'
myobj = {'auth_token': '###########################'} #use your own
x = requests.post(url, data = myobj,stream=True)
target_path = 'input/audio_files.zip'
handle = open(target_path, "wb")
for chunk in x.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
handle.write(chunk)
handle.close()
#the url and auth_value from the website
url = 'https://api.zindi.africa/v1/competitions/giz-nlp-agricultural-keyword-spotter/files/AdditionalUtterances.zip'
myobj = {'auth_token': '##########################'} #use your own
x = requests.post(url, data = myobj,stream=True)
target_path = 'input/AdditionalUtterances.zip'
handle = open(target_path, "wb")
for chunk in x.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
handle.write(chunk)
handle.close()
#the url and auth_value from the website
url = 'https://api.zindi.africa/v1/competitions/giz-nlp-agricultural-keyword-spotter/files/nlp_keywords_29Oct2020.zip'
myobj = {'auth_token': '###########################'} #use your own
x = requests.post(url, data = myobj,stream=True)
target_path = 'input/nlp_keywords_29Oct2020.zip'
handle = open(target_path, "wb")
for chunk in x.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
handle.write(chunk)
handle.close()
```
### 2. Extract all zip files
```
!unzip -q "input/audio_files.zip" -d "input"
!unzip -q "input/AdditionalUtterances.zip" -d "input"
!unzip -q "input/nlp_keywords_29Oct2020.zip" -d "input"
```
### 3. Convert audio files to a 32k sample rate
```
!pip -q install soundfile
import os, random, glob
import numpy as np, pandas as pd
import librosa
import soundfile as sf
from tqdm.auto import tqdm
train = pd.read_csv("input/Train.csv")
sample_submission = pd.read_csv("input/SampleSubmission.csv")
# move all training data to "input/audio_train"
# move all test data to "input/audio_test"
os.makedirs("input/audio_train", exist_ok=True)
os.makedirs("input/audio_test", exist_ok=True)
# convert all train data to 32k sample rate
target_sr = 32000
for f, label in tqdm(zip(train.fn.values, train.label.values), total=len(train.fn.values)):
os.makedirs(f"input/audio_train/{label}", exist_ok=True)
f_name = f.split("/")[-1]
y, sr = librosa.load(f"input/{f}", sr=target_sr)
sf.write(f"input/audio_train/{label}/{f_name}", y, samplerate=target_sr)
# convert all test data to 32k sample rate
target_sr = 32000
for f in tqdm(sample_submission.fn.values):
f_name = f.split("/")[-1]
y, sr = librosa.load(f"input/{f}", sr=target_sr)
sf.write(f"input/audio_test/{f_name}", y, samplerate=target_sr)
## latest_keywords
latest_keywords = glob.glob("input/latest_keywords/*/*.wav")
for f in tqdm(latest_keywords):
label = f.split("/")[-2]
f_name = f.split("/")[-1]
os.makedirs(f"input/audio_train/{label}", exist_ok=True)
y, sr = librosa.load(f, sr=target_sr)
sf.write(f"input/audio_train/{label}/{f_name}", y, samplerate=target_sr)
## nlp_keywords
nlp_keywords = glob.glob("input/nlp_keywords/*/*.wav")
for f in tqdm(nlp_keywords):
label = f.split("/")[-2]
f_name = f.split("/")[-1]
os.makedirs(f"input/audio_train/{label}", exist_ok=True)
y, sr = librosa.load(f, sr=target_sr)
sf.write(f"input/audio_train/{label}/{f_name}", y, samplerate=target_sr)
train_wav = glob.glob("input/audio_train/*/*.wav")
len(train_wav)
# create train dataframe
train_df = pd.DataFrame({
"fn" : train_wav,
})
train_df["label"] = train_df.fn.apply(lambda x: x.split("/")[-2])
train_df.head()
```
### 4. Upload those files to a Kaggle dataset so it's easy to download every time
- for uploading to Kaggle datasets we need to have a Kaggle account
- for more details see https://www.kaggle.com/docs/api
- I made this dataset public after the end of this competition
```
# make sure your kaggle api "kaggle.json" file in your drive
! mkdir /root/.kaggle
! cp '/content/drive/My Drive/kaggle.json' /root/.kaggle # <---- path for kaggle.json file
! chmod 400 /root/.kaggle/kaggle.json
!pip uninstall -y kaggle >> quit
!pip install --upgrade pip >> quit
!pip install kaggle==1.5.6 >> quit
!kaggle -v >> quit
!mkdir "dataset"
!zip -r "audio_train.zip" "input/audio_train" >> quit
!zip -r "dataset/audio_test.zip" "input/audio_test" >> quit
!cp "input/SampleSubmission.csv" "dataset"
data = '''{
"title": "giz-nlp-agricultural-keyword-spotter",
"id": "gopidurgaprasad/giz-nlp-agricultural-keyword-spotter",
"licenses": [
{
"name": "CC0-1.0"
}
]
}
'''
text_file = open("dataset/dataset-metadata.json", 'w+')
n = text_file.write(data)
text_file.close()
!kaggle datasets create -p "dataset"
```
|
github_jupyter
|
%%time
import requests, os
import requests, zipfile
os.makedirs("input")
#the url and auth_value from the website
url = 'https://api.zindi.africa/v1/competitions/giz-nlp-agricultural-keyword-spotter/files/Train.csv'
myobj = {'auth_token': '############################'} #use your own
x = requests.post(url, data = myobj,stream=True)
target_path = 'input/Train.csv'
handle = open(target_path, "wb")
for chunk in x.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
handle.write(chunk)
handle.close()
#the url and auth_value from the website
url = 'https://api.zindi.africa/v1/competitions/giz-nlp-agricultural-keyword-spotter/files/SampleSubmission.csv'
myobj = {'auth_token': '############################'} #use your own
x = requests.post(url, data = myobj,stream=True)
target_path = 'input/SampleSubmission.csv'
handle = open(target_path, "wb")
for chunk in x.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
handle.write(chunk)
handle.close()
#the url and auth_value from the website
url = 'https://api.zindi.africa/v1/competitions/giz-nlp-agricultural-keyword-spotter/files/audio_files.zip'
myobj = {'auth_token': '###########################'} #use your own
x = requests.post(url, data = myobj,stream=True)
target_path = 'input/audio_files.zip'
handle = open(target_path, "wb")
for chunk in x.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
handle.write(chunk)
handle.close()
#the url and auth_value from the website
url = 'https://api.zindi.africa/v1/competitions/giz-nlp-agricultural-keyword-spotter/files/AdditionalUtterances.zip'
myobj = {'auth_token': '##########################'} #use your own
x = requests.post(url, data = myobj,stream=True)
target_path = 'input/AdditionalUtterances.zip'
handle = open(target_path, "wb")
for chunk in x.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
handle.write(chunk)
handle.close()
#the url and auth_value from the website
url = 'https://api.zindi.africa/v1/competitions/giz-nlp-agricultural-keyword-spotter/files/nlp_keywords_29Oct2020.zip'
myobj = {'auth_token': '###########################'} #use your own
x = requests.post(url, data = myobj,stream=True)
target_path = 'input/nlp_keywords_29Oct2020.zip'
handle = open(target_path, "wb")
for chunk in x.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
handle.write(chunk)
handle.close()
!unzip -q "input/audio_files.zip" -d "input"
!unzip -q "input/AdditionalUtterances.zip" -d "input"
!unzip -q "input/nlp_keywords_29Oct2020.zip" -d "input"
!pip -q install soundfile
import os, random, glob
import numpy as np, pandas as pd
import librosa
import soundfile as sf
from tqdm.auto import tqdm
train = pd.read_csv("input/Train.csv")
sample_submission = pd.read_csv("input/SampleSubmission.csv")
# move all training data to "input/audio_train"
# move all test data to "input/audio_test"
os.makedirs("input/audio_train", exist_ok=True)
os.makedirs("input/audio_test", exist_ok=True)
# convert all train data to 32k sample rate
target_sr = 32000
for f, label in tqdm(zip(train.fn.values, train.label.values), total=len(train.fn.values)):
os.makedirs(f"input/audio_train/{label}", exist_ok=True)
f_name = f.split("/")[-1]
y, sr = librosa.load(f"input/{f}", sr=target_sr)
sf.write(f"input/audio_train/{label}/{f_name}", y, samplerate=target_sr)
# convert all test data to 32k sample rate
target_sr = 32000
for f in tqdm(sample_submission.fn.values):
f_name = f.split("/")[-1]
y, sr = librosa.load(f"input/{f}", sr=target_sr)
sf.write(f"input/audio_test/{f_name}", y, samplerate=target_sr)
## latest_keywords
latest_keywords = glob.glob("input/latest_keywords/*/*.wav")
for f in tqdm(latest_keywords):
label = f.split("/")[-2]
f_name = f.split("/")[-1]
os.makedirs(f"input/audio_train/{label}", exist_ok=True)
y, sr = librosa.load(f, sr=target_sr)
sf.write(f"input/audio_train/{label}/{f_name}", y, samplerate=target_sr)
## nlp_keywords
nlp_keywords = glob.glob("input/nlp_keywords/*/*.wav")
for f in tqdm(nlp_keywords):
label = f.split("/")[-2]
f_name = f.split("/")[-1]
os.makedirs(f"input/audio_train/{label}", exist_ok=True)
y, sr = librosa.load(f, sr=target_sr)
sf.write(f"input/audio_train/{label}/{f_name}", y, samplerate=target_sr)
train_wav = glob.glob("input/audio_train/*/*.wav")
len(train_wav)
# create train dataframe
train_df = pd.DataFrame({
"fn" : train_wav,
})
train_df["label"] = train_df.fn.apply(lambda x: x.split("/")[-2])
train_df.head()
# make sure your kaggle api "kaggle.json" file in your drive
! mkdir /root/.kaggle
! cp '/content/drive/My Drive/kaggle.json' /root/.kaggle # <---- path for kaggle.json file
! chmod 400 /root/.kaggle/kaggle.json
!pip uninstall -y kaggle >> quit
!pip install --upgrade pip >> quit
!pip install kaggle==1.5.6 >> quit
!kaggle -v >> quit
!mkdir "dataset"
!zip -r "audio_train.zip" "input/audio_train" >> quit
!zip -r "dataset/audio_test.zip" "input/audio_test" >> quit
!cp "input/SampleSubmission.csv" "dataset"
data = '''{
"title": "giz-nlp-agricultural-keyword-spotter",
"id": "gopidurgaprasad/giz-nlp-agricultural-keyword-spotter",
"licenses": [
{
"name": "CC0-1.0"
}
]
}
'''
text_file = open("dataset/dataset-metadata.json", 'w+')
n = text_file.write(data)
text_file.close()
!kaggle datasets create -p "dataset"
| 0.160562 | 0.628464 |
# Cat and Dog Image Classifier Using Convolutional Neural Networks
This notebook is meant to show you how you can use a pretrained Convolutional Neural Network for image classification.
The neural network we are using is the [VGG16](https://gist.github.com/baraldilorenzo/07d7802847aaad0a35d3) image classifier, which was a top-ranking image classifier in the 2014 [ImageNet competition](http://www.image-net.org/challenges/LSVRC/2014/).
The pretrained hidden layers were frozen, and a final single-neuron layer with a **sigmoid activation** was added to the end of the network, which was then fine-tuned using training images of cats and dogs. The single-neuron **sigmoid** layer allows us to pick between **two** classes, due to the functional form of the **sigmoid** plotted below

An output from the network near **1** signifies that the input image is likely a **dog**; an output near **0** signifies that it is likely a **cat**.
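As a quick numerical reminder (a minimal sketch; the helper below is purely illustrative and not part of the classifier), the sigmoid squashes any real number into the interval (0, 1):
```
import numpy as np

# sigmoid(z) = 1 / (1 + exp(-z)) maps any real z into (0, 1)
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

print(sigmoid(-5), sigmoid(0), sigmoid(5))  # ~0.007, 0.5, ~0.993
```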
```
from keras.models import load_model
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from matplotlib import pyplot as plt
%matplotlib inline
model = load_model("cnn_image_classifier.h5")
URL = "https://i.ytimg.com/vi/SfLV8hD7zX4/maxresdefault.jpg"
test_image = imread(URL)
def show_image(image, prediction=None, confidence=None, show_confidence=False):
plt.imshow(image)
plt.axis('off')
if prediction is not None:
plt.rc('axes', titlesize=16)
if show_confidence:
plt.title("I predict this is a " + str(prediction) + "! (with confidence " + str(confidence) + ")")
else:
plt.title("I predict this is a " + str(prediction) + "!")
plt.show()
show_image(test_image)
def fix_image(image):
return resize(image, (150, 150, 3), mode='constant')
image_in = fix_image(test_image)
show_image(image_in)
def get_prediction(image):
images_to_predict = np.asarray([image])
predictions = model.predict(images_to_predict)
prediction = predictions[0][0]
return prediction
prediction = get_prediction(image_in)
print("I predict " + str(prediction) + "!")
def cat_or_dog(image):
images_to_predict = np.asarray([image])
predictions = model.predict(images_to_predict)
confidence = predictions[0][0]
if confidence > 0.5:
return "dog", confidence
else:
return "cat", 1 - confidence
prediction, confidence = cat_or_dog(image_in)
print("I predict " + prediction + "!")
print("My confidence is {:.3f}".format(confidence))
def cat_or_dog_url(URL, show_confidence=False):
test_image = imread(URL)
image_in = fix_image(test_image)
prediction, confidence = cat_or_dog(image_in)
show_image(test_image, prediction=prediction, confidence=confidence, show_confidence=show_confidence)
cat_or_dog_url("https://s7d2.scene7.com/is/image/PetSmart/PB1201_STORY_CARO-Authority-HealthyOutside-DOG-20160818?$PB1201$")
cat_or_dog_url("https://www.what-dog.net/Images/faces2/scroll001.jpg")
cat_or_dog_url("https://cdn.pixabay.com/photo/2016/08/10/02/55/kitten-1582384_960_720.jpg")
cat_or_dog_url("https://www.cats.org.uk/uploads/images/featurebox_sidebar_kids/grief-and-loss.jpg")
cat_or_dog_url("http://www.petsworld.in/blog/wp-content/uploads/2014/09/adorable-cat.jpg", show_confidence=True)
```
What else can we input into the network to see if we can trick it...?
```
# Put the url of an image here
my_url = "https://www.thewrap.com/wp-content/uploads/2015/11/Donald-Trump.jpg"
cat_or_dog_url(my_url, show_confidence=True)
```
```
import keras.backend as K
import tensorflow as tf
from keras.layers import Input
from keras.activations import softplus
from keras.models import Model
from keras.optimizers import Adam
import numpy as np
import matplotlib.pyplot as plt
import argparse
import keras.backend as K
import tensorflow as tf
import numpy as np
from keras.layers import Input, Reshape, Flatten,Dense,BatchNormalization,PReLU
from keras.layers import Activation, Add,Lambda,AveragePooling2D,LeakyReLU,GlobalAvgPool2D
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Model
from keras.initializers import RandomNormal,glorot_uniform
import pandas as pd
from keras.preprocessing.image import load_img
from tqdm import tqdm_notebook
conv_init = RandomNormal(0, 0.02)
def Resblock_generator(layer_input,channels):
# h1 = BatchNormalization(momentum=0.9)(layer_input)
h1 = UpSampling2D(size=2)(layer_input)
h1 = Conv2D(channels,3,strides=1,padding="same",kernel_initializer=conv_init)(h1)
# h1 = SubpixelConv2D(h1)(h1)
h1 = BatchNormalization(momentum=0.9)(h1)
# h1 = Activation("relu")(h1)
h2 = UpSampling2D(size=2)(layer_input)
h2 = Conv2D(channels,1,strides=1,padding="valid",kernel_initializer=conv_init)(h2)
# h2 = SubpixelConv2D(layer_input)(h2)
# h2 = Activation("relu")(h2)
return Add()([h2,h1])
def Generator(z_dim,base=64):
input = Input(shape=(z_dim,))
h = Dense(128*8*8)(input)
h = Reshape((8,8,128))(h)
# h = Resblock_generator(h,base*4)
# h = Resblock_generator(h,base*4)
h = Resblock_generator(h,base*4)
h = Resblock_generator(h,base*2)
h = Resblock_generator(h,base)
h = BatchNormalization(momentum=0.9)(h)
h = PReLU()(h)
h = Conv2D(3,3,strides=1,padding='same')(h)
output = Activation('tanh')(h)
model = Model(inputs=input,outputs=output)
model.summary()
return model
def Resblock_discriminator(layer_input,channels):
h1 = Conv2D(channels,3,strides=1,padding='same',kernel_initializer=conv_init)(layer_input)
h1 = LeakyReLU(alpha=0.2)(h1)
h1 = Conv2D(channels,3,strides=1,padding='same',kernel_initializer=conv_init)(h1)
h1 = AveragePooling2D(pool_size=(2, 2))(h1)
h2 = Conv2D(channels,1,strides=1,padding="valid",kernel_initializer=conv_init)(layer_input)
h2 = LeakyReLU(alpha=0.2)(h2)
h2 = AveragePooling2D(pool_size=(2, 2))(h2)
return Add()([h2,h1])
def Discriminator(input_shape,base=64):
input = Input(shape=input_shape)
h = Resblock_discriminator(input,base)
h = Resblock_discriminator(h,base*2)
h = Resblock_discriminator(h,base*4)
# h = Resblock_discriminator(h,base*4)
h = Resblock_discriminator(h,base*8)
h = LeakyReLU(alpha=0.2)(h)
# h = GlobalAvgPool2D()(h)
h = Flatten()(h)
# h = Dense(1024)(h)
output = Dense(1)(h)
model = Model(inputs=input,outputs=output)
model.summary()
return model
def combine_images(generated_images,row,col):
num = generated_images.shape[0]
width = col
height = row
shape = generated_images.shape[1:3]
image = np.zeros((height*shape[0], width*shape[1],3),
dtype=generated_images.dtype)
for index, img in enumerate(generated_images):
i = int(index/width)
j = index % width
image[i*shape[0]:(i+1)*shape[0], j*shape[1]:(j+1)*shape[1],:] = img[:, :, :]
return image
def linspace_two(v1,v2,sample_nums):
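# Linearly interpolate between latent vectors v1 and v2, returning sample_nums vectors (one per row)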
ans = [[0 for _ in range(sample_nums)] for i in range(np.shape(v1)[0])]
for i in range(len(ans)):
ans[i][:] = np.linspace(v1[i],v2[i],sample_nums)
return np.array(ans).T
def linspace_four(lu,ru,ld,rd,sample_nums):
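# Bilinearly interpolate across the grid spanned by four corner latent vectors (lu, ru, ld, rd),
# returning sample_nums * sample_nums interpolated vectors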
lcol = linspace_two(lu,ld,sample_nums)
rcol = linspace_two(ru,rd,sample_nums)
ans = []
for i in range(len(lcol)):
ans.append(linspace_two(lcol[i],rcol[i],sample_nums))
res = [_ for i in ans for _ in i]
return np.array(res)
epochs = 50
save_interval = 1
model_interval = 1
batch_size = 32
_lambda = 0.5
z_dim = 100
img_shape = (64, 64, 3)
image_size = 64
channels = 3
lr_D = 2e-4
lr_G = 2e-4
b1 = 0.5
b2 = 0.99
gen = Generator(z_dim)
dis = Discriminator(img_shape)
gen.load_weights("./DRAGAN_model/model-40-epoch.h5")
noise = np.random.normal(size=(16, z_dim))
imgs = gen.predict(noise)
imgs = imgs/2 + 0.5
trial = 4
imgs = combine_images(imgs,4,4)
plt.imshow(imgs)
plt.imsave('./DRAGAN_result/Predict_{}.png'.format(trial),imgs)
plt.show()
v1 = noise[0]
v2 = noise[4]
v3 = noise[8]
v4 = noise[10]
noise_interpret = linspace_four(v1,v2,v3,v4,8)
imgs = gen.predict(noise_interpret)
imgs = imgs/2 + 0.5
imgs = combine_images(imgs,8,8)
trial = 4
plt.imshow(imgs)
plt.imsave('./DRAGAN_result/inter_{}.png'.format(trial),imgs)
plt.show()
```
```
# Check GPU
import tensorflow as tf
tf.test.gpu_device_name()
# Install a Drive FUSE wrapper.
# https://github.com/astrada/google-drive-ocamlfuse
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
# Generate auth tokens for Colab
from google.colab import auth
auth.authenticate_user()
# Generate creds for the Drive FUSE library.
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
!mkdir -p my_drive
!google-drive-ocamlfuse my_drive
!python3 /content/my_drive/Fashion/mnist_reader.py
import sys
sys.path.append('/content/my_drive/Fashion')
```
----------Everything above this is for Google Colab----------
## Part 1: Fashion MNIST
```
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
import mnist_reader
X_train, y_train = mnist_reader.load_mnist('/content/my_drive/Fashion', kind='train')
X_train_set = X_train[0:40001]
y_train_set = y_train[0:40001]
X_validation_set = X_train[40001:]
y_validation_set = y_train[40001:]
X_test, y_test = mnist_reader.load_mnist('/content/my_drive/Fashion', kind='t10k')
X_train.shape
y_train.shape
```
### Labels
Each training and test example is assigned to one of the following labels:
Label Description
- 0 T-shirt/top
- 1 Trouser
- 2 Pullover
- 3 Dress
- 4 Coat
- 5 Sandal
- 6 Shirt
- 7 Sneaker
- 8 Bag
- 9 Ankle boot
# Voting Classifier
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.neural_network import MLPClassifier
forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
forest_clf.fit(X_train_set, y_train_set)
et_clf = ExtraTreesClassifier(n_estimators=100, random_state=42)
et_clf.fit(X_train_set, y_train_set)
mlp_clf = MLPClassifier(random_state=42)
mlp_clf.fit(X_train_set, y_train_set)
from sklearn.ensemble import VotingClassifier
voting_clf = VotingClassifier(
estimators=[('f', forest_clf), ('et', et_clf), ('mlp', mlp_clf)],
voting='hard')
voting_clf.fit(X_train_set, y_train_set)
from sklearn.metrics import accuracy_score
voting_pred = voting_clf.predict(X_validation_set)
accuracy_score(voting_pred, y_validation_set)
forest_pred = forest_clf.predict(X_validation_set)
accuracy_score(forest_pred, y_validation_set)
et_pred = et_clf.predict(X_validation_set)
accuracy_score(et_pred, y_validation_set)
mlp_pred = mlp_clf.predict(X_validation_set)
accuracy_score(mlp_pred, y_validation_set)
voting_test_pred = voting_clf.predict(X_test)
accuracy_score(voting_test_pred, y_test)
```
# Part 2
# Letter Data Reading
```
# load csv file
import pandas as pd
data = pd.read_csv("/content/my_drive/Fashion/letter-recognition.data.csv", header=None)
data.head()
data.shape
# Making training and test sets
X_train = data[:16000].drop(0, axis=1)
X_test = data[16000:].drop(0, axis=1)
y_train = data[:16000][0]
y_test = data[16000:][0]
X_train.shape
y_train.shape
```
# Letter Classification
```
forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
forest_clf.fit(X_train, y_train)
forest_pred_letter = forest_clf.predict(X_test)
accuracy_score(forest_pred_letter, y_test)
```
# Improving letter classification
```
et_clf = ExtraTreesClassifier(n_estimators=100, random_state=42)
et_clf.fit(X_train, y_train)
mlp_clf = MLPClassifier(random_state=42)
mlp_clf.fit(X_train, y_train)
voting_clf = VotingClassifier(
estimators=[('f', forest_clf), ('et', et_clf), ('mlp', mlp_clf)],
voting='soft')
voting_clf.fit(X_train, y_train)
pred_et = et_clf.predict(X_test)
pred_mlp = mlp_clf.predict(X_test)
accuracy_score(pred_et, y_test)
accuracy_score(pred_mlp, y_test)
accuracy_score(voting_clf.predict(X_test), y_test)
```

# Oracle Objects and Collections
Documentation reference link: [Fetching Oracle Database Objects and Collections](https://cx-oracle.readthedocs.io/en/latest/user_guide/sql_execution.html#fetching-oracle-database-objects-and-collections)
```
import cx_Oracle
import os
import platform
import time
if platform.system() == 'Darwin':
cx_Oracle.init_oracle_client(lib_dir = os.environ.get("HOME")+"/instantclient_19_8")
elif platform.system() == 'Windows':
cx_Oracle.init_oracle_client(lib_dir = r"C:\oracle\instantclient_19_14")
user = "pythondemo"
password = "welcome"
connect_string = "localhost/orclpdb1"
connection = cx_Oracle.connect(user=user, password=password, dsn=connect_string)
```
# Binding Named Objects
Create a demonstration table. This table uses the predefined SDO_GEOMETRY object which stores spatial information:
```
with connection.cursor() as cursor:
try:
cursor.execute("drop table TestGeometry")
except cx_Oracle.DatabaseError:
pass  # ignore the error if the table does not exist yet
cursor.execute("""create table TestGeometry (
IntCol number(9) not null,
Geometry sdo_geometry not null)""")
print("Done")
```
Using cx_Oracle functions like `gettype()` and `extend()` you can create a Python representation of the database object:
```
with connection.cursor() as cursor:
typeObj = connection.gettype("SDO_GEOMETRY")
elementInfoTypeObj = connection.gettype("SDO_ELEM_INFO_ARRAY")
ordinateTypeObj = connection.gettype("SDO_ORDINATE_ARRAY")
obj = typeObj()  # Alternatively use 'obj = typeObj.newobject()'
obj.SDO_GTYPE = 2003
obj.SDO_ELEM_INFO = elementInfoTypeObj()
obj.SDO_ELEM_INFO.extend([1, 1003, 3])
obj.SDO_ORDINATES = ordinateTypeObj()
obj.SDO_ORDINATES.extend([1, 1, 5, 7])
```
Calling `gettype()` requires multiple round-trips to the database, so avoid calling it unnecessarily.
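For example, the type objects can be looked up once and reused when building many objects. This is a minimal sketch of that idea; `make_rectangle` is a hypothetical helper written for illustration, not part of cx_Oracle:
```
# Look up each object type once (every gettype() call costs database round-trips)...
geom_type = connection.gettype("SDO_GEOMETRY")
elem_type = connection.gettype("SDO_ELEM_INFO_ARRAY")
ord_type = connection.gettype("SDO_ORDINATE_ARRAY")

def make_rectangle(x1, y1, x2, y2):
    # ...and reuse the cached types to build as many objects as needed
    geom = geom_type.newobject()
    geom.SDO_GTYPE = 2003
    geom.SDO_ELEM_INFO = elem_type.newobject()
    geom.SDO_ELEM_INFO.extend([1, 1003, 3])
    geom.SDO_ORDINATES = ord_type.newobject()
    geom.SDO_ORDINATES.extend([x1, y1, x2, y2])
    return geom
```
The cells below continue with the `obj` instance created in the previous cell.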
The new object can be bound directly for insertion:
```
with connection.cursor() as cursor:
cursor.execute("insert into TestGeometry values (1, :objbv)", {"objbv": obj})
print("Done")
```
And then fetched back:
```
with connection.cursor() as cursor:
for (id, obj) in cursor.execute("select IntCol, Geometry from testgeometry"):
print(id, obj)
```
Simple attribute access is easy:
```
with connection.cursor() as cursor:
for (id, obj) in cursor.execute("select IntCol, Geometry from testgeometry"):
print("SDO_GTYPE is", obj.SDO_GTYPE)
```
To display all attributes, create a helper function:
```
# Oracle Database object dumper
def dumpobject(obj, prefix = " "):
if obj.type.iscollection:
print(prefix, "[")
for value in obj.aslist():
if isinstance(value, cx_Oracle.Object):
dumpobject(value, prefix + " ")
else:
print(prefix + " ", repr(value))
print(prefix, "]")
else:
print(prefix, "{")
for attr in obj.type.attributes:
value = getattr(obj, attr.name)
if isinstance(value, cx_Oracle.Object):
print(prefix + " " + attr.name + " :")
dumpobject(value, prefix + " ")
else:
print(prefix + " " + attr.name + " :", repr(value))
print(prefix, "}")
```
Using the helper function shows the full object structure:
```
with connection.cursor() as cursor:
for (id, obj) in cursor.execute("select IntCol, Geometry from testgeometry"):
print("Id: ", id)
dumpobject(obj)
```
# PL/SQL Collections
The sample schema uses PL/SQL collections:
```
cursor = connection.cursor()
cursor.execute("select dbms_metadata.get_ddl('PACKAGE', 'PKG_DEMO') from dual")
ddl, = cursor.fetchone()
print(ddl.read())
```
To get a collection, create a Python variable with the database object type:
```
typeObj = connection.gettype("PKG_DEMO.UDT_STRINGLIST")
obj = typeObj()
# call the stored procedure which will populate the object
cursor = connection.cursor()
cursor.callproc("pkg_Demo.DemoCollectionOut", (obj,))
```
To show the collection indexes and values:
```
ix = obj.first()
while ix is not None:
print(ix, "->", obj.getelement(ix))
ix = obj.next(ix)
print()
```
Show the values as a simple list:
```
print(obj.aslist())
```
Show the values as a simple dictionary:
```
print(obj.asdict())
```
# Binding PL/SQL Records
Create a new Python object of the correct type and set attribute values:
```
import datetime
typeObj = connection.gettype("PKG_DEMO.UDT_DEMORECORD")
obj = typeObj()
obj.NUMBERVALUE = 6
obj.STRINGVALUE = "Test String"
obj.DATEVALUE = datetime.datetime(2016, 5, 28)
obj.BOOLEANVALUE = False
```
Call the stored procedure which will modify the object:
```
with connection.cursor() as cursor:
cursor.callproc("pkg_Demo.DemoRecordsInOut", (obj,))
```
Show the modified values:
```
print("NUMBERVALUE ->", obj.NUMBERVALUE)
print("STRINGVALUE ->", obj.STRINGVALUE)
print("DATEVALUE ->", obj.DATEVALUE)
print("BOOLEANVALUE ->", obj.BOOLEANVALUE)
```
## Noise
For simulation, it is useful to have `Gate` objects that enact noisy quantum evolution.
Cirq supports modeling noise via *operator sum* representations of
noise (these evolutions are also known as quantum operations, quantum
dynamical maps, or superoperators).
This formalism models evolution of the
density matrix via:
$$\rho \rightarrow \sum_k A_k \rho A_k^\dagger$$
Where the A<sub>k</sub> are *Kraus* operators. These operators are not
necessarily unitary and must satisfy the trace-preserving property:
$$\sum_k A_k^\dagger A_k = I$$
The Kraus representation of a channel is not unique. For more details on these operators, see [John Preskill's notes](http://www.theory.caltech.edu/people/preskill/ph219/chap3_15.pdf).
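As a quick numerical sanity check (a minimal sketch, assuming the `cirq` API described in this guide, where `cirq.channel` returns the Kraus matrices), we can verify the trace-preserving property for a built-in channel:
```
import numpy as np
import cirq

# Kraus operators of a single-qubit depolarizing channel
kraus_ops = cirq.channel(cirq.depolarize(0.2))

# sum_k A_k^dagger A_k should equal the identity
total = sum(A.conj().T @ A for A in kraus_ops)
print(np.allclose(total, np.eye(2)))  # True
```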
### Magic methods
A `Gate` can represent an operator sum representation by supporting the
`channel` protocol. Alternatively, for channels that represent probabilistic
mixtures of unitaries, one can implement the `mixture` protocol.
#### cirq.channel and def _channel_
To represent an operator sum evolution, a `Gate` should implement the
`SupportsChannel` protocol. To do this, the `Gate` should implement the
`_channel_(self) -> Sequence[np.ndarray]:` method.
This method should return the sequence of `numpy` matrices corresponding to the Krauss operators. The basis in which this matrix is expressed is always implicit with respect to the object being called.
For example, in `GateOperations`, these matrices must be ordered with respect to the list of qubits that the channel is applied to. The qubit-to-amplitude order mapping matches the ordering of `numpy.kron(A, B)`, where `A` is a qubit earlier in the list than the qubit `B`.
If one has defined `_channel_`, then that `Gate` and any `GateOperation`
that uses that gate can be used as an argument to `cirq.channel` and
`cirq.channel` will return this sequence of matrices.
Besides objects that support `_channel_`, `cirq.channel` will also fall
back to other objects that can be interpreted as channels. For example, if a channel is a probabilistic mixture of unitary gates (see below), then `cirq.channel` will fall back to seeing if the object supports `_mixture_`. If `_mixture_` is not supported, then `cirq.channel` checks to see if `_unitary_` is supported.
In addition to supporting `_channel_`, objects that are channels should also implement `_has_channel_(self) -> bool` to return `True`. This method is used to determine whether an object has a `_channel_` or not without having to do the potentially expensive creation of the matrices for the channel.
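As an illustration, here is a minimal sketch of an object supporting the channel protocol. For brevity it is a plain class with made-up Kraus operators rather than a full `cirq.Gate` subclass; the protocol lookup works the same way:
```
import numpy as np
import cirq

class SimpleBitFlip:
    """Hypothetical single-qubit channel: apply X with probability p, else do nothing."""

    def __init__(self, p):
        self._p = p

    def _channel_(self):
        # Kraus operators: identity with weight sqrt(1-p), X with weight sqrt(p)
        return (np.sqrt(1 - self._p) * np.eye(2),
                np.sqrt(self._p) * np.array([[0.0, 1.0], [1.0, 0.0]]))

    def _has_channel_(self):
        return True

print(cirq.channel(SimpleBitFlip(0.1)))
```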
#### cirq.mixture, cirq.mixture_channel, and def _mixture_
Some channels can be interpreted as probabilistically selecting between
different unitary evolutions:
$$\rho \rightarrow \sum_k p_k U_k \rho U_k^\dagger {\rm ~where~} \sum_k p_k = 1 {\rm ~and~} U_k U_k^\dagger = I$$
In this case, it is possible to perform **Monte Carlo simulations** of these gates using a wave function based simulator (and not a density matrix based simulator).
Instead of implementing the `SupportsChannel` protocol, one should implement the `SupportsMixture` protocol. To do this, one should implement the `_mixture_(self) -> Sequence[Tuple[float, np.ndarray]]` protocol. This returns a sequence of tuples.
The first element of each tuple is the probability of the unitary, and the second element is the unitary. Like the `_channel_` method described above, the basis for these matrices is implicit with respect to the object being called. One should also make `_has_mixture_` return `True` to indicate to callers that the object supports the mixture protocol.
If one wants to get the mixture channel directly, one can call `cirq.mixture_channel`.
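Continuing in the same sketch style (again a plain class with made-up probabilities, just to show the protocol), a probabilistic mixture of unitaries could look like:
```
import numpy as np
import cirq

class RandomZ:
    """Hypothetical channel: apply Z with probability 0.25, else do nothing."""

    def _mixture_(self):
        return ((0.75, np.eye(2)),             # identity with probability 0.75
                (0.25, np.diag([1.0, -1.0])))  # Z with probability 0.25

    def _has_mixture_(self):
        return True

print(cirq.mixture(RandomZ()))   # the (probability, unitary) pairs
print(cirq.channel(RandomZ()))   # falls back to the mixture, rescaled into Kraus operators
```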
### Common Channels
Cirq supports many commonly used quantum channels out of the box, see
[`ops/common_channels.py`](https://github.com/quantumlib/Cirq/blob/master/cirq/ops/common_channels.py).
#### AsymmetricDepolarizingChannel, DepolarizingChannel, BitFlipChannel, and PhaseFlipChannel
The asymmetric depolarizing channel represents probabilistically selecting
one of three Pauli gates to apply or doing nothing to the state. This is
implemented via a `_mixture_` method so that a Monte Carlo simulation with a
wave function simulator can be used.
This channel implements the evolution:
$$
\rho \rightarrow (1-p_x-p_y-p_z) \rho + p_x X \rho X + p_y Y \rho Y + p_z Z \rho Z
$$
Here p<sub>x</sub> is the probability that the X Pauli gate is applied and
no other gate is applied, and similarly for p<sub>y</sub> and p<sub>z</sub>.
A special case of the asymmetric depolarizing channel is when each of the different Paulis occurs with the same probability. This is
encapsulated in the `DepolarizingChannel` gate, which takes a probability `p`
such that each Pauli gate occurs with probability `p/3`.
To construct these channels, the helpers `cirq.asymmetric_depolarize`
and `cirq.depolarize` are provided.
Another common case is when only a Pauli X (bit flip) can occur, or
when only a Pauli Z (phase flip) can occur. These correspond to
`BitFlipChannel` and `PhaseFlipChannel` with helpers `cirq.bit_flip` and
`cirq.phase_flip`.
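For example (a minimal usage sketch with arbitrary probabilities), these helper-built channels can be appended to a circuit like any other gate:
```
import cirq

q = cirq.LineQubit(0)
circuit = cirq.Circuit([
    cirq.H(q),
    cirq.depolarize(0.01).on(q),   # symmetric depolarizing noise
    cirq.bit_flip(0.05).on(q),     # Pauli X with probability 0.05
    cirq.phase_flip(0.05).on(q),   # Pauli Z with probability 0.05
    cirq.measure(q),
])
print(circuit)
```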
#### GeneralizedAmplitudeDampingChannel and AmplitudeDampingChannel
The generalized amplitude damping channel models the effect of energy
dissipation to a surrounding environment as well as dephasing that
does not exchange energy. The amplitude damping channel only models dissipation of energy to a surrounding environment.
Cirq has implementations of both of these channels. The generalized amplitude damping channel corresponds to:
$$
\begin{aligned}
\rho \rightarrow& \sum_{k=0}^3 M_k \rho M_k^\dagger \newline
M_0 =& \sqrt{p} \begin{bmatrix} 1 & 0 \cr 0 & \sqrt{1 - \gamma} \end{bmatrix} \newline
M_1 =& \sqrt{p} \begin{bmatrix} 0 & \sqrt{\gamma} \cr 0 & 0 \end{bmatrix} \newline
M_2 =& \sqrt{1-p} \begin{bmatrix} \sqrt{1-\gamma} & 0 \cr 0 & 1 \end{bmatrix} \newline
M_3 =& \sqrt{1-p} \begin{bmatrix} 0 & 0 \cr \sqrt{\gamma} & 0 \end{bmatrix}
\end{aligned}
$$
Where γ is the probability of the interaction being dissipative, and
`p` is the probability that the qubit and environment exchange energy. The amplitude damping channel corresponds to `p=1`.
Cirq provides the helpers `cirq.generalized_amplitude_damp` and
`cirq.amplitude_damp` to construct these noisy gates.
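For instance (another minimal sketch with arbitrary parameter values, assuming the helper signatures `amplitude_damp(gamma)` and `generalized_amplitude_damp(p, gamma)`), a density-matrix simulation with damping noise might look like:
```
import cirq

q = cirq.LineQubit(0)
circuit = cirq.Circuit([
    cirq.X(q),                                        # prepare |1>
    cirq.amplitude_damp(0.3).on(q),                   # dissipate towards |0>
    cirq.generalized_amplitude_damp(0.9, 0.3).on(q),  # generalized variant with p=0.9
])
result = cirq.DensityMatrixSimulator().simulate(circuit)
print(result.final_density_matrix)
```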
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
```
The 2D Burgers' equations are coupled and read as follows:
$$
\dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + v \dfrac{\partial u}{\partial y} = \nu \left( \dfrac{\partial^2 u}{\partial x^2} + \dfrac{\partial^2 u}{\partial y^2} \right)
$$
$$
\dfrac{\partial v}{\partial t} + u \dfrac{\partial v}{\partial x} + v \dfrac{\partial v}{\partial y} = \nu \left( \dfrac{\partial^2 v}{\partial x^2} + \dfrac{\partial^2 v}{\partial y^2} \right)
$$
Discretizing the equations as before:
$$
\dfrac{u_{i,j}^{n+1}-u_{i,j}^n}{\Delta t} + u_{i,j}^{n} \dfrac{u_{i,j}^{n}-u_{i-1,j}^n}{\Delta x}+ v_{i,j}^{n} \dfrac{u_{i,j}^{n}-u_{i,j-1}^{n}}{\Delta y} = \nu \left( \dfrac{u_{i+1,j}^n-2u^n_{i,j}+u^n_{i-1,j}}{\Delta x^2} + \dfrac{u_{i,j+1}^n-2u^n_{i,j}+u^n_{i,j-1}}{\Delta y^2} \right)
$$
$$
\dfrac{v_{i,j}^{n+1}-v_{i,j}^n}{\Delta t} + u_{i,j}^{n} \dfrac{v_{i,j}^{n}-v_{i-1,j}^n}{\Delta x}+ v_{i,j}^{n} \dfrac{v_{i,j}^{n}-v_{i,j-1}^{n}}{\Delta y} = \nu \left( \dfrac{v_{i+1,j}^n-2v^n_{i,j}+v^n_{i-1,j}}{\Delta x^2} + \dfrac{v_{i,j+1}^n-2v^n_{i,j}+v^n_{i,j-1}}{\Delta y^2} \right)
$$
And solving for the unknown components of the velocity $u,\ v$, the solution at the time $n+1$ will be:
$$
u^{n+1}_{i,j} = u_{i,j}^n - u_{i,j}^{n} \dfrac{\Delta t}{\Delta x} \left( u_{i,j}^{n}-u_{i-1,j}^n \right) - v_{i,j}^{n} \dfrac{\Delta t}{\Delta y} \left( u_{i,j}^{n}-u_{i,j-1}^n \right) + \nu \dfrac{\Delta t}{\Delta x^2} \left( u_{i+1,j}^n-2u^n_{i,j}+u^n_{i-1,j} \right) + \nu \dfrac{\Delta t}{\Delta y^2} \left( u_{i,j+1}^n-2u^n_{i,j}+u^n_{i,j-1} \right)
$$
$$
v^{n+1}_{i,j} = v_{i,j}^n - u_{i,j}^{n} \dfrac{\Delta t}{\Delta x} \left( v_{i,j}^{n}-v_{i-1,j}^n \right) - v_{i,j}^{n} \dfrac{\Delta t}{\Delta y} \left( v_{i,j}^{n}-v_{i,j-1}^n \right) + \nu \dfrac{\Delta t}{\Delta x^2} \left( v_{i+1,j}^n-2v^n_{i,j}+v^n_{i-1,j} \right) + \nu \dfrac{\Delta t}{\Delta y^2} \left( v_{i,j+1}^n-2v^n_{i,j}+v^n_{i,j-1} \right)
$$
```
nx = 41
ny = 41
nt = 120
c = 1
dx = 2 / (nx - 1)
dy = 2 / (ny - 1)
CFL = 0.0009
nu = 0.01
dt = CFL*dx*dy / nu
x = np.linspace(0,2,nx)
y = np.linspace(0,2,ny)
X, Y = np.meshgrid(x,y)
u = np.ones((ny, nx))
v = np.ones((ny, nx))
u[int(0.5/dy):int(1/dy+1),int(0.5/dx):int(1/dx+1)] = 2
v[int(0.5/dy):int(1/dy+1),int(0.5/dx):int(1/dx+1)] = 2
fig = plt.figure(figsize=(11,7), dpi=100)
ax1 = fig.add_subplot(1, 2, 1, projection='3d')
ax1.plot_surface(X, Y, u, cmap=cm.viridis)
ax1.set_xlabel('$x$')
ax1.set_ylabel('$y$')
ax1.set_zlabel('$z$')
ax1.set_title('U Velocity field map')
ax1.grid(True)
ax2 = fig.add_subplot(1, 2, 2, projection='3d')
ax2.plot_surface(X, Y, v, cmap=cm.viridis)
ax2.set_xlabel('$x$')
ax2.set_ylabel('$y$')
ax2.set_zlabel('$z$')
ax2.set_title('V Velocity field map')
ax2.grid(True)
for n in range(nt+1):
un = u.copy()
vn = v.copy()
u[1:-1,1:-1] = (un[1:-1, 1:-1] - dt / dx * un[1:-1,1:-1] * (un[1:-1, 1:-1] - un[1:-1, 0:-2]) - dt / dy * vn[1:-1, 1:-1] *
(un[1:-1, 1:-1] - un[0:-2, 1:-1]) + nu * dt / dx**2 * (un[1:-1,2:] - 2 * un[1:-1, 1:-1] + un[1:-1, 0:-2]) +
nu * dt / dy**2 * (un[2:,1:-1] - 2 * un[1:-1,1:-1] + un[0:-2, 1:-1]))
v[1:-1,1:-1] = (vn[1:-1, 1:-1] - dt / dx * un[1:-1,1:-1] * (vn[1:-1, 1:-1] - vn[1:-1, 0:-2]) - dt / dy * vn[1:-1, 1:-1] *
(vn[1:-1, 1:-1] - vn[0:-2, 1:-1]) + nu * dt / dx**2 * (vn[1:-1,2:] - 2 * vn[1:-1, 1:-1] + vn[1:-1, 0:-2]) +
nu * dt / dy**2 * (vn[2:,1:-1] - 2 * vn[1:-1,1:-1] + vn[0:-2, 1:-1]))
u[0, :] = 1
u[-1, :] = 1
u[:, 0] = 1
u[:, -1] = 1
v[0, :] = 1
v[-1, :] = 1
v[:, 0] = 1
v[:, -1] = 1
fig = plt.figure(figsize=(11,7), dpi=100)
ax1 = fig.add_subplot(1, 2, 1, projection='3d')
ax1.plot_surface(X, Y, u, cmap=cm.viridis)
ax1.set_xlabel('$x$')
ax1.set_ylabel('$y$')
ax1.set_zlabel('$z$')
ax1.set_title('U Velocity field map')
ax1.grid(True)
ax2 = fig.add_subplot(1, 2, 2, projection='3d')
ax2.plot_surface(X, Y, v, cmap=cm.viridis)
ax2.set_xlabel('$x$')
ax2.set_ylabel('$y$')
ax2.set_zlabel('$z$')
ax2.set_title('V Velocity field map')
ax2.grid(True)
```

<div style="font-family:monospace,courier;
background-color:#a1b8c9;
text-align:center;
padding:15px;
border:2px black solid;
border-radius:5px;">
<h1 style="font-size:45px">Object Oriented Programming in Python</h1>
<h2>Live Learning Session</h2>
</div>
<div style="background-color:#e6ebee;
padding:15px;
border:2px black solid;
border-radius:5px;">
# Table of Contents
1. <a style="text-decoration:none" href="#LO">Prerequisites and Learning Objectives</a><br>
1.1. <a style="text-decoration:none" href="#LO1">Prerequisites</a><br>
1.2. <a style="text-decoration:none" href="#LO2">Learning Objectives</a><br>
1.3. <a style="text-decoration:none" href="#LO3">A Bit on Strings</a><br>
2. <a style="text-decoration:none" href="#OOP">Object Oriented Programming</a><br>
2.1. <a style="text-decoration:none" href="#OOP1">Attributes, Methods, and More</a><br>
2.2. <a style="text-decoration:none" href="#OOP2">A Glance at Common Objects in Python</a><br>
2.3. <a style="text-decoration:none" href="#OOP3">BYOC: Build Your Own Class</a><br>
</div>
# 1. Prerequisites and Learning Objectives <a id="LO"/>
## 1.1. Prerequisites <a id="LO1"/>
This lesson assumes the audience is not a complete stranger to Python. Rather, the ideal audience member has at least some experience writing basic code: playing with basic objects and data types such as lists, strings, and numeric types; assigning values to variables; and defining a basic function.
## 1.2. Learning Objectives <a id="LO2"/>
- Describe why Python is considered an Object Oriented Programming (OOP) language.
- Distinguish between the abstract object class and a specific instance of that object.
- Analyze the attributes within an object.
- Identify several common objects used in Python.
- Apply the object oriented programming paradigm within Python to create classes while connecting linked abstract concepts.
## 1.3. A Bit on Strings <a id="LO3"/>
You have already seen **strings** as the type of objects which we consider to be text-based. We bring special focus to them here as we use them often, particularly in printing ourselves nice messages within this notebook. We want to make sure your focus is in the right place, which is typically not the string formatting!
There are several "standard" ways to incorporate an object's value into a string. For example, suppose the variable `x` is assigned the number 2, so `x=2`, and we want to include the actual value of `x` in the string `s = "the value of x is 2"`, with the `2` varying whenever `x` does. We can do this with f-string formatting. (Here, "f" stands for *formatted*: these are formally called formatted string literals, and they also happen to be faster than the older formatting methods.) F-string formatting is available from Python 3.6 onward, so it is relatively recent. The formatting is very simple: we just use curly braces `{` and `}` to house the object whose value we want.
```
# Example of f-string formatting
x = 2
s1 = f"The value of x is {x}"
print(s1)
```
For reference, here is how we might use the other formatting techniques that existed before f-strings were introduced.
```
# Original formatting in Python
print("The value of x is %s" % x)
# Missing link between Original and f-strings
print("The value of x is {val}".format(val = x))
```
This was a very brief introduction to incorporating the value of another object into a string. Each formatting method offers many further options (e.g., how many decimal places to show for a floating-point value), but that is more than we touch on here.
Regardless of the formatting being used, we often print nice messages to describe our output, yet the real focus is still the actual code or object being printed within the string. So, **keep your eyes inside the brackets** of our f-strings!
# 2. Object Oriented Programming <a id="OOP"/>
Python is a _class-based_ Object Oriented Programming (OOP) language. What this means is that anything we can assign to a variable is created from a "blueprint", which we call a **class**. What is created by this blueprint (class) is called an **instance**. Collectively, we refer to either an instance of a class or the class itself as an **object**, hence **Object Oriented Programming**. When we code in Python, we are generally dealing with either objects themselves, operations on objects, or assignments of objects.
Let's first begin with some analogies to build up our class/object/instance vocabulary.
#### Analogy: 1995 Honda Civic
(The author of these notes used to drive a 1995 Honda Civic). You can imagine that the engineers and designers at Honda created a technical blueprint which defined the Civic model in 1995. The blueprint outlined exactly how to build each one of the Civics produced once a few individual attributes of the car were decided, such as color of the interior, color of the exterior, type of stereo etc etc. In OOP, the blueprint would be called the "1995 Honda Civic **class**" -- the abstract blueprint which is then used to create each specific 1995 Honda Civic. However, each individual 1995 Honda Civic created based on the blueprint would be called an **instance** of that class. If the author of these notes drove his 1995 Honda Civic into a tree, that was an instance-specific operation that doesn't affect other instances nor the whole class.
#### Analogy: Humankind
Although maybe it's a bit too much to write on paper, we all have some abstract idea of how to identify a human. Roughly, we can identify a human because there is an abstract blueprint of how humans are defined which we recognize as we differentiate between a human and, say, a fire hydrant. In this analogy, the abstract blueprint defining a human would be the "human **class**." However, each specific human (designed by the "human class blueprint") is an individual **instance** of the human class. So, the author is one instance of the human class, you are a different instance of the human class (probably, unless AI has taken over). Of course, each instance (person) is unique based on certain specific attributes (hair color, eye color, height, ...), but we are all designed roughly from the same blueprint.
#### Example: The Python calculator through the OOP lens
Moving from analogy to use of Python, let's discuss even basic calculator operations in Python in the OOP context.
```
# Example 1, addition
2 + 3
```
- Above, the values 2 and 3 are distinct **instances** of the integer **class**, which we operated on through addition (`+`). This returns a new integer **instance** with value 5, but that value is left unassigned.
```
# Example 2, exponentiation
x = 2.4**2
x
```
- Here we take two numeric-type objects, an **instance** of the float **class** with value 2.4 and an **instance** of the integer **class** with value 2. We operate on them by exponentiation `**`, which returns another **instance** of the float **class** with value 5.76. This returned float object is **assigned** to the variable `x`, which allows us to reuse it later!
**Note**: There are many operations on objects, depending on the object type, which you will get familiar with as you practice with Python. What we will see below is that sometimes we can even consider named operations (functions and methods) as objects. However, the abstract idea of an operation is not really considered an object, nor are some of the operator symbols such as `+` or `**`. Assignment is most commonly done via `=`.
#### Identifying the Instance
Every time we create an instance, that instance is given a specific identifier, so the code knows which specific instance is being referred to. We use the `id` function to see that identifier.
**Note**: We rarely use `id` in our data analysis workflow, but it's important for building up our mental model of what's happening behind the scenes.
**Aside**: A synonym to "create an instance" is "instantiate".
```
my_name = "Emmy Noether" # Instantiate a new string object
my_birth_year = 1882
print( f"- The id of my_name is {id(my_name)}." )
print( f"- The id of my_birth_year is {id(my_birth_year)}.")
```
#### Takeaways:
- Each instance is given an id, and the variables we assign are "user-friendly" ways to reference that id.
- Your assignment of an instance to a variable is the only connection the variable name has to that instance. If you later use that same variable name for a new or different instance, it will forget all about the previous reference.
- It happens that multiple variables can reference the same instance (you'll see this in the example below), and that can have funny consequences when the object is _mutable_, something we discuss in more depth later.
### Exercise 2.0.1
- Create the list `list1 = [1,2,3]`. Next, set `list2 = list1`; does `list2` represent the same instance as `list1`? Finally, set `list3 = [1,2,3]`; does `list3` represent the same instance as `list1`? Print `list1`, `list2`, and `list3`.
- Reassign the first element of `list1` to `"hello"` with `list1[0] = "hello"`. Again, print `list1`, `list2`, and `list3`.
**Double click here for solution**
<!--
# Create the lists as prescribed
list1 = [1,2,3]
list2 = list1
list3 = [1,2,3]
print(list1, list2, list3)
# Reassign the first element in list1 and print
list1[0] = "hello"
print(list1, list2, list3)
-->
## 2.1. Attributes, Methods, and More <a id="OOP1"/>
Objects are richer than they seem! In fact, the richness of an object is what makes OOP so attractive. In Python, every object is comprised of pieces called **attributes**.
**The attributes (defined within a class) are a suite of parameters and tools associated with each instance of that class**. Attributes typically fall into two piles: a **static attribute** (also ambiguously called just an _attribute_), which acts as an "inner variable" for the instance; and a **method**, which acts as an "inner function" for the instance.
#### Analogy: 1995 Honda Civic
We picture an instance of a 1995 Honda Civic as its own entity, and indeed it is. However, that 1995 Honda Civic has many pieces which comprise it, such as the body, frame, seats, windshield, etc. Each of those pieces is specific to that instance, but created and placed according to the blueprint (class). In OOP, those pieces are like the attributes.
To stretch this already-thinly-stretched example even further, you can imagine that a **static attribute** would be something like `body_paint_color`, which is fixed, whereas a **method** would be something like `push_gas_pedal()`, an action performed by that instance.
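To make the analogy concrete, here is a minimal, purely illustrative sketch in Python; the class name, attribute, and method are invented for this example, and the `class` syntax itself is covered in Section 2.3 below.
```
# Illustrative sketch of the car analogy (class syntax is explained in Section 2.3)
class HondaCivic1995:
    def __init__(self, body_paint_color):
        self.body_paint_color = body_paint_color  # static attribute: a fixed "inner variable"

    def push_gas_pedal(self):  # method: an "inner function" tied to the instance
        return f"The {self.body_paint_color} Civic accelerates!"

my_civic = HondaCivic1995("green")  # an instance built from the blueprint
print(my_civic.body_paint_color)
print(my_civic.push_gas_pedal())
```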
#### Accessing Attributes in Python
Looking at a few coding examples, we note that the function `dir` (for *directory*) in Python returns a list of the names of the attributes associated with an object.
```
# String object
my_name = 'Emmy Noether'
print(f"The attributes of an this string object are:\n{dir(my_name)}")
```
Each attribute is accessible via **period notation**. That is, to get to an attribute, we have the general form `<instance-reference>.<attribute-reference>`.
```
# The upper attribute of a string type object is a method
# (this will return a new string object with the text capitalized)
print(f"my_name.upper() => {my_name.upper()}")
```
- **Note**: Unlike `upper`, some methods take other parameters as inputs too.
### Exercise 2.1.1
Create a new string object `my_name` with your name. From that, assign a new variable `my_name_in_caps` to the object returned by using the `upper` method on `my_name`. Finally, check that `my_name` and `my_name_in_caps` are indeed different instances of string objects.
```
# Your Solution Here
```
**Double click here for solution**
<!--
my_name = 'Emmy Noether'
my_name_in_caps = my_name.upper()
print(f"My name is: {my_name}. In all caps: {my_name_in_caps}.")
print(f"The id of my_name is: {id(my_name)}. The id of my_name_in_caps is: {id(my_name_in_caps)}.")
-->
### Exercise 2.1.2
The `split` method of string objects is quite useful when trying to clean text. It splits a string into a list of sub-strings, where the splitting happens on some character you specify (or, if you don't specify one, it defaults to splitting on the space character). So, for example, if we have `some_text = 'Hello world. This is Jane!'`, then `some_text.split()` will return `['Hello', 'world.', 'This', 'is', 'Jane!']`, whereas `some_text.split('o')` will return `['Hell', ' w', 'rld. This is Jane!']`. (See below.) For this exercise, you are to split the string `csv_line = '1.2,2,3.45,P'` at the commas.
```
# Example of split
some_text = 'Hello world. This is Jane!'
print(some_text.split())
print(some_text.split('o'))
csv_line = '1.2,2,3.45,P'
# Your solution here: split along the commas
```
**Double click here for solution**
<!--
csv_line = '1.2,2,3.45,P'
csv_line.split(',')
-->
## 2.2. A Glance at Common Objects in Python <a id="OOP2"/>
While there are effectively infinitely many object classes that have been created within the Python community, more than anyone could ever master in a lifelong career, the foundational ["built-in" types](https://docs.python.org/3/library/stdtypes.html) are essential for a Python programmer. We give an overview of some of those foundational types here (we will focus on those with an asterisk \*), and introduce several other important packages and objects throughout this curriculum. We will use the `type` function to confirm the object types.
These types are:
**Numeric Type**:
- float*
- int*
- complex (we will not see these much in our work, so we will ignore it for the most part)
**Sequence Type**:
- list*
- tuple*
- set
- iterator
- range
- string (Yes! Strings are considered a textual sequence type)
**Mapping Type**:
- dict (this is really the only "mapping" type, which maps a key to a value)
### Numeric Type
#### Type: float
These are the decimal-precision numbers often arising from calculations or from observations drawn from a continuous distribution (such as height). Note that even a calculation involving integers will produce a float when any of the pieces are floats, or whenever true division `/` is used (in Python 3, `/` always returns a float).
```
# Example
avg_num_students = (50+35+25+25)/4
print(avg_num_students)
print(type(avg_num_students))
```
#### Type: int
These are the integers which often arise in indexing and slicing sequence objects (more on this later), addition and subtraction of other integers, and also from observations with discrete levels.
```
# Example
num_students = 50+35+25+25
print(num_students)
print(type(num_students))
```
#### Numeric Type Coercion
We can _coerce_ floats to ints and ints to floats using the `float` and `int` functions.
```
# Example
print("int -> float")
print(3, type(3))
print(float(3), type(float(3)))
print("")
# Example
print("float -> int: Notice coercing to int floors (rounds down) the number.")
print(3.5, type(3.5))
print(int(3.5), type(int(3.5)))
```
#### Integer Division and Remainder
We saw above that division of ints (or floats) will result in a float. However, in Python 3, we can force the division to result in an int by using the floor-division operator `//`, which gives the floor (rounded-down) integer part of the division.
On the other hand, the percent sign `%` produces the remainder when dividing two numeric values, most commonly integers. For example, `17 % 5` returns `2`, since `17 = 3*5 + 2` leaves `2` as the remainder after subtracting off the largest multiple of `5` that fits into `17`. This operation is common in _modular arithmetic_, and we read `n % m` as "`n` mod `m`".
```
# Example
print(17/5, type(17/5))
print(17//5, type(17//5))
print(17%5, type(17%5)) # 17 mod 5
```
### Exercise 2.1.1
- Using the modular operation `%`, how would you quickly determine if an integer value `i` is even or odd?
- True or False: for any integers `n` and `m` (with `m` nonzero), we must have `n = (n//m)*m + n%m`. Hint: remember `17 = 3*5 + 17%5`.
**Double click here for solution**
<!--
- For an integer `n`, we need only to look at `n%2`, which will be 0 if `n` is even, and 1 if `n` is odd.
- True. For any integers `n` and `m`, we have the identity `n = (n//m)*m + n%m`; this is known in math circles as Euclidean division.
-->
### Sequence Type
#### Type: list
Lists will be one of the most important sequence-type objects you use in your Python career. Roughly, a list is an indexed collection of other objects of any type. Lists are denoted and typically created with square brackets `[]`, with entries separated by commas. The indexing is always via integer values, starting at `0` and increasing consecutively, with the terminal index being the number of objects in the collection minus 1 (because the index starts at 0).
```
# Example: List of names
students_list = ['Emmy','Carl','Maryam','Euphemia','John']
print(students_list)
print(students_list[0])
print(students_list[4])
print(type(students_list))
# Example: Nested List of Names and Birth year
## Note that the inner lists have mixed object types: that's totally allowed!
students_birth = [['Emmy',1882], ['Carl',1777],['Maryam',1977],['Euphemia',1890],['John',1903]]
print(students_birth)
print(students_birth[0])
print(students_birth[4])
```
#### Type: tuple
Tuples can be considered "immutable" lists. Tuples are denoted and typically created with parentheses `()`, with entries separated by commas. Indexing and many operations on tuples work the same as with lists. The immutability of tuples (which we discuss more later) makes them a good option for creating "user-protected" lists, where you don't want the user to alter the entries within the collection.
```
# Example: Tuple of names
students_tuple = ('Emmy','Carl','Maryam','Euphemia','John')
print(students_tuple)
print(students_tuple[0])
print(students_tuple[4])
print(type(students_tuple))
# Example: List of tuples of Names and Birth year
## Note that the inner tuples have mixed object types: that's totally allowed!
## Note also we could have made a tuple of tuples, tuple of lists.... etc
students_birth = [('Emmy',1882), ('Carl',1777),('Maryam',1977),('Euphemia',1890),('John',1903)]
print(students_birth)
print(students_birth[0])
print(students_birth[4])
```
#### List/Tuple Coercion
As we did with int/float types, we can move between lists and tuples using the `list()` and `tuple()` functions; a small sketch is shown below. We will play with this more in a future lecture when we also introduce other sequence types.
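A minimal sketch of this coercion (illustrative values only, not from the original notes):
```
# Coercing between list and tuple
students_list = ['Emmy', 'Carl', 'Maryam']
students_tuple = tuple(students_list)  # list -> tuple
back_to_list = list(students_tuple)    # tuple -> list
print(students_tuple, type(students_tuple))
print(back_to_list, type(back_to_list))
```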
### Mutability Discussion
Above we called lists mutable but tuples immutable. An object in Python is called **mutable** when a property or attribute of an instance can be changed after the instance is created. Strictly speaking, almost any object can be altered if you work hard enough at it, but we call objects in which such changes are difficult **immutable**. A reasonable working definition is that mutable objects contain one or more methods that change the instance in place; immutable objects do not have such methods.
Lists have several built-in methods for mutating themselves; some of the most useful are `append`, `insert`, `extend`, `pop`, and item assignment (which is carried out behind the scenes via `__setitem__`).
Numeric types and tuples do not have such methods, and are hence immutable.
### Exercise 2.2.1:
Create the empty list `my_list = []` and print your list to confirm it is indeed empty. (You could also instantiate an empty list via `my_list = list()`.) Print the `id` of your empty `my_list` to see the instance identifier. Next, append your name to `my_list` via `my_list.append('..Your name here..')`, and print out the contents of `my_list` to see what appending has done. Finally, print out the `id` of `my_list` after your change to confirm that the instance identifier is the same.
**Double click here for solution**
<!--
# Create empty list and print it
my_list = []
print(my_list)
# Store and print the id of the empty list
my_list_id = id(my_list)
print(my_list_id)
# Append name to my_list and print it
my_list.append('Emmy')
print(my_list)
# Check that even though we changed the list, it still has the same id
print(id(my_list))
-->
### Exercise 2.2.2:
With `my_list` above, append a new value which is your birth year as an int, so that `my_list` appears as `['..Your name..', birth_year]`; print it out to confirm. Then, via indexing of the list and the `upper` method in strings, reassign your name as all capital letters; i.e., fill in the blanks:
my_list[----] = ----[----].upper()
Finally, print out your updated list, and confirm that the instance identifier is still the same.
**Bonus Q**: What happens if you keep running this same cell several times? Why?
**Double click here for solution**
<!--
# Append birth year and print
my_list.append(1882)
print(my_list)
# Re-assign capital version of name and print
my_list[0] = my_list[0].upper()
print(my_list)
# Look at the id and confirm it hasn't changed from the first exercise
print(id(my_list))
## BONUS Q: Re-running this cell keeps appending the birth year time and again!
print("""Note: For a list L, setting the value of an element via L[index-num] = value is a
syntactically nice way to use the method L.__setitem__(index-num, value).
""")
-->
### Exercise 2.2.3:
Create the tuple `my_tuple` whose first element is your name and second is the year of your birth, then print it. Next, look at the directory of `my_tuple` and `my_list` to confirm none of `append`, `insert`, `extend`, `pop`, nor `__setitem__` are methods available in `my_tuple`, but are in `my_list`. See what happens when you try to reassign your birth year to the current year.
**Double click here for solution**
<!--
# Create my_tuple and print it
my_tuple = ('Emmy',1882)
print(my_tuple)
# Print out the dir. of my_tuple and my_list
print(f"TUPLE:\n{dir(my_tuple)}")
print()
print(f"LIST:\n{dir(my_list)}")
# Try to reassign a tuple value
my_tuple[1] = 1999 # Causes Error (because no __setitem__ method)
-->
### Exercise 2.2.4:
Define the integer object `idx` to have value `0`. Commonly we increment an integer value with the `+=` notation; for example, `idx += 1` adds the value `1` to the current value of `idx`. In other symbols, `idx += 1` is the same as `idx = idx + 1`, just more convenient. (There are many other such augmented-assignment operators, like `-=`, `*=`, and `/=`, whose uses you can likely guess.) Other than just defining `idx`, your task for this exercise is to figure out whether `idx += 1` is a mutating operation on your original `idx` instance.
**Double click here for solution**
<!--
# Define idx and print out the id
idx = 0
print(idx, id(idx))
# Increment by 1 and print out id
idx += 1
print(idx, id(idx))
print("""The point here is that idx += 1 is not mutating. Instead it takes your original instance
and creates a new instance with value incremented by 1; then, this new instance is
assigned to the variable you had previously created, in this case idx. The old idx is gone,
and the memory it used is released for reuse by the computer.
""")
-->
### Quick Documentation ?
At this point your head may be spinning with the concern, "how will I ever remember all the attributes of so many objects?" Just remember: the ones you end up using a lot will sink in by rote, and the others have documentation! (NOBODY MEMORIZES EVERY ATTRIBUTE.) If the code behind the class/object is well prepared and documented, then information about each instance and attribute is readily available through use of `?`. And no, that is not a question!
```
# Example: Run this code
two = 2
two?
# Example: Run this code
my_name = 'Emmy Noether'
my_name.lower?
# Example: Run this code
my_list = ["hello", "world"]
my_list.extend?
# Example: Run this code
list?
```
## 2.3. BYOC: Build Your Own Class <a id="OOP3"/>
Keep in mind that a **class** is the _coded_ blueprint of how an object instance is designed. We will see, discuss, and design classes several times within these lectures.
To create a class, we use the `class` keyword, followed by the name we would like to give our class and a colon indicating that the code for the class is about to start, followed by the code defining all the attributes and behaviours of that class, keeping in mind the standard indentation rules of Python. If you are familiar with defining functions, you'll notice the syntax is somewhat similar.
For this section we will continually develop our own class. The class will be called `DescrStats`; it will eventually be initialized with a list of numeric values and contain methods to report some of the descriptive statistics about that list.
```
# Update 0: We create a new class called DescrStats
class DescrStats:
""" This is a doc-string. It is important for users."""
pass
dstats = DescrStats()
dstats?
# Update 1: Doc-string update
class DescrStats:
"""Class to produce descriptive stats from a numeric list."""
pass
dstats = DescrStats()
dstats?
# Update 2: First glance at __init__, scoping, and self
class DescrStats:
"""Class to produce descriptive stats from a numeric list."""
att1 = 'attribute 1'
def __init__(self):
att2 = 'attribute 2'
self.att3 = 'attribute 3'
dstats = DescrStats()
print(dir(dstats)) # Where is att2?
# Update 3: Better use of __init__ and our own calc_mean method
class DescrStats:
"""Class to produce descriptive stats from a numeric list."""
def __init__(self, numeric_list):
self.numeric_list = numeric_list
# Error: def calc_mean():
def calc_mean(self):
# Error: n = len(numeric_list)
n = len(self.numeric_list)
s = sum(self.numeric_list)
return s/n
# Error: dstats = DescrStats()
dstats = DescrStats([1,2,3,2,1])
print(dir(dstats))
# Try: dstats?
# Try: dstats.calc_mean? ... No doc string!
print(f"The mean of {dstats.numeric_list} is {dstats.calc_mean()}")
# Update 4: Add calc_var method and don't forget Doc strings!
class DescrStats:
"""Class to produce descriptive stats from a numeric list."""
def __init__(self, numeric_list):
self.numeric_list = numeric_list
def calc_mean(self):
"""Method to calculate the mean of numeric_list"""
n = len(self.numeric_list)
s = sum(self.numeric_list)
return s/n
def calc_var(self):
"""Method to calculate the variance of numeric_list"""
mean = self.calc_mean() # Can call on other methods!
n = len(self.numeric_list)
ss = 0
for el in self.numeric_list:
ss += (el-mean)**2
return ss/n
dstats = DescrStats([1,2,3,2,1])
print(dir(dstats))
# Try dstats.calc_mean? or dstats.calc_var?
print(f"From {dstats.numeric_list}: Mean = {dstats.calc_mean()} & Var = {dstats.calc_var()}")
# Update 5: Make efficient with mean and var attributes pre-calculated
class DescrStats:
"""Class to produce descriptive stats from a numeric list."""
def __init__(self, numeric_list):
self.numeric_list = numeric_list
self.mean = self.calc_mean()
self.var = self.calc_var()
def calc_mean(self):
"""Method to calculate the mean of numeric_list"""
n = len(self.numeric_list)
s = sum(self.numeric_list)
return s/n
def calc_var(self):
"""Method to calculate the variance of numeric_list"""
mean = self.calc_mean() # Can call on other methods!
n = len(self.numeric_list)
ss = 0
for el in self.numeric_list:
ss += (el-mean)**2
return ss/n
dstats = DescrStats([1,2,3,2,1])
print(dir(dstats))
# Try dstats.calc_mean? or dstats.calc_var?
print(f"From {dstats.numeric_list}: Mean = {dstats.mean} & Var = {dstats.mean}")
%%timeit
dstats.calc_mean()
%%timeit
dstats.mean
```
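As a further, purely illustrative extension (a hypothetical "Update 6", not part of the original notes), one could add a `calc_std` method that reuses `calc_var`:
```
# Hypothetical Update 6: reuse calc_var to add a standard-deviation method
class DescrStats:
    """Class to produce descriptive stats from a numeric list."""
    def __init__(self, numeric_list):
        self.numeric_list = numeric_list
        self.mean = self.calc_mean()
        self.var = self.calc_var()
        self.std = self.calc_std()

    def calc_mean(self):
        """Method to calculate the mean of numeric_list"""
        return sum(self.numeric_list) / len(self.numeric_list)

    def calc_var(self):
        """Method to calculate the variance of numeric_list"""
        mean = self.calc_mean()
        return sum((el - mean) ** 2 for el in self.numeric_list) / len(self.numeric_list)

    def calc_std(self):
        """Method to calculate the standard deviation of numeric_list"""
        return self.calc_var() ** 0.5

dstats = DescrStats([1, 2, 3, 2, 1])
print(f"Mean = {dstats.mean}, Var = {dstats.var}, Std = {dstats.std}")
```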
# Final Note. Portability and Readability <a id="STY"/>
We've learned some about OOP and classes, so what? As you become familiar with coding in any language, you will begin to appreciate the benefit of creating "reusable" code which is easily understood and well-documented. We've already spent some time on "doc strings," but there are other points to consider as well, such as giving meaningful names to your classes, variables, functions, etc. Moreover, wisely using classes, functions, and modules (Python script files which house the code you want to "carry around") goes a long way toward both portability and readability.
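For instance, here is a minimal sketch of what reusing a class from a module might look like; the file name `descr_stats.py` is purely hypothetical and not part of these notes.
```
# Hypothetical example: suppose the DescrStats class above is saved in a file
# named descr_stats.py sitting next to this notebook. Any other script or
# notebook can then reuse it with a simple import.
from descr_stats import DescrStats

dstats = DescrStats([1, 2, 3, 2, 1])
print(dstats.mean, dstats.var)
```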
```
import pandas as pd
from IPython.core.display import HTML, display
from ipywidgets import interact
pd.set_option("display.max_rows", None)
pd.set_option("display.max_columns", None)
pd.set_option("display.width", None)
pd.set_option("display.max_colwidth", None)
cols_to_print = ["number", "title", "repository"]
df = pd.read_csv("github-activity.csv")
cond = [column.startswith("Unnamed:") for column in df.columns]
for column in df.columns[cond].tolist():
df.drop(column, axis=1, inplace=True)
month_cond = ["last_month" in filter_name for filter_name in df["filter"]]
week_cond = ["last_week" in filter_name for filter_name in df["filter"]]
monthly_df = df[month_cond]
weekly_df = df[week_cond]
sort_columns = ["updated_at", "closed_at", "repo_name"]
```
# 🚀 Activity Summary
## ✅ Activity in the Last Week
```
repo_names = ["All"] + sorted(weekly_df["repo_name"].unique().tolist())
@interact
def view_weekly_summary(
repo=repo_names,
sort=sort_columns,
sort_ascending=False,
assigned=False,
created=False,
issues=False,
pull_requests=False,
):
if sort != "repo_name":
columns_to_print = cols_to_print + [sort]
else:
columns_to_print = cols_to_print
if repo == "All":
filtered = weekly_df.sort_values(sort, ascending=sort_ascending)
else:
        filtered = weekly_df.loc[weekly_df["repo_name"] == repo].sort_values(
sort, ascending=sort_ascending
)
if assigned and (not created):
# Only show tickets that I was assigned to
cond = [
filter_name.startswith("assigned") for filter_name in weekly_df["filter"]
]
filtered = filtered[cond]
elif (not assigned) and created:
# Only show tickets that I created
cond = [
filter_name.startswith("created") for filter_name in weekly_df["filter"]
]
filtered = filtered[cond]
else:
# No change. Show everything.
pass
if issues and (not pull_requests):
filtered = filtered[filtered["pull_request"] == False]
elif (not issues) and pull_requests:
filtered = filtered[filtered["pull_request"] == True]
else:
# No change. Show everything.
pass
print(f"Total: {len(filtered[columns_to_print])}")
display(HTML(filtered[columns_to_print].to_html(escape=False)))
```
## 📮 Closed in the Last Month
```
repo_names = ["All"] + sorted(monthly_df["repo_name"].unique().tolist())
@interact
def view_monthly_summary(
repo=repo_names,
sort=sort_columns,
sort_ascending=False,
assigned=False,
created=False,
issues=False,
pull_requests=False,
):
if sort != "repo_name":
columns_to_print = cols_to_print + [sort]
else:
columns_to_print = cols_to_print
if repo == "All":
filtered = monthly_df.sort_values(sort, ascending=sort_ascending)
else:
filtered = monthly_df.loc[monthly_df["repo_name"] == repo].sort_values(
sort, ascending=sort_ascending
)
if assigned and (not created):
# Only show tickets that I was assigned to
cond = [
filter_name.startswith("assigned") for filter_name in monthly_df["filter"]
]
filtered = filtered[cond]
elif (not assigned) and created:
# Only show tickets that I created
cond = [
filter_name.startswith("created") for filter_name in monthly_df["filter"]
]
filtered = filtered[cond]
else:
# No change. Show everything.
pass
if issues and (not pull_requests):
filtered = filtered[filtered["pull_request"] == False]
elif (not issues) and pull_requests:
filtered = filtered[filtered["pull_request"] == True]
else:
# No change. Show everything.
pass
print(f"Total: {len(filtered[columns_to_print])}")
display(HTML(filtered[columns_to_print].to_html(escape=False)))
```
```
! pip install --upgrade tables
!pip install eli5
!pip install xgboost
!pip install hyperopt
import pandas as pd
import numpy as np
import xgboost as xgb
from sklearn.metrics import mean_absolute_error as mae
from sklearn.model_selection import cross_val_score
from hyperopt import hp,fmin,tpe,STATUS_OK
import eli5
from eli5.sklearn import PermutationImportance
cd "/content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_two/dw_matrix_car"
df=pd.read_hdf('data/car.h5')
df.shape
SUFFIX_CAT='__cat'
for feat in df.columns:
if isinstance(df[feat][0],list):continue
factorized_values=df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat]=factorized_values
else:
df[feat + SUFFIX_CAT]=factorized_values
df['param_rok-produkcji']=df['param_rok-produkcji'].map(lambda x: -1 if str(x)=='None' else int(x))
df['param_moc']=df['param_moc'].map(lambda x: -1 if str(x)=='None' else int(x.split(' ')[0]))
df['param_pojemność-skokowa']=df['param_pojemność-skokowa'].map(lambda x: -1 if str(x)=='None' else int(x.split('cm')[0].replace(' ','')))
def run_model(model,feats):
X=df[feats].values
y=df['price_value'].values
#model = DecisionTreeRegressor(max_depth=5)
scores=cross_val_score(model,X,y,cv=3,scoring='neg_mean_absolute_error')
return np.mean(scores),np.std(scores)
feats=[
'param_napęd__cat',
'param_rok-produkcji',
'param_stan__cat',
'param_skrzynia-biegów__cat',
'param_faktura-vat__cat',
'param_moc',
'param_marka-pojazdu__cat',
'feature_kamera-cofania__cat',
'param_typ__cat',
'param_pojemność-skokowa',
'seller_name__cat',
'feature_wspomaganie-kierownicy__cat',
'param_model-pojazdu__cat',
'param_wersja__cat',
'param_kod-silnika__cat',
'feature_system-start-stop__cat',
'feature_asystent-pasa-ruchu__cat',
'feature_czujniki-parkowania-przednie__cat',
'feature_łopatki-zmiany-biegów__cat',
'feature_regulowane-zawieszenie__cat'
]
xgb_params={
'max_depth':5,
'n_estimators':50,
'learning_rate':0.1,
'seed':0
}
run_model(xgb.XGBRegressor(**xgb_params),feats)
```
## Hyperopt
```
def obj_func(params):
print('Training with params:')
print(params)
mean_mae,score_std=run_model(xgb.XGBRegressor(**params),feats)
return{'loss':np.abs(mean_mae),'status':STATUS_OK}
#space
xgb_reg_params={
'learning_rate':hp.choice('learning_rate', np.arange(0.05,0.31,0.05)),
'max_depth':hp.choice('max_depth',np.arange(5,16,1,dtype=int)),
'subsample':hp.quniform('subsample',0.5,1,0.05),
'colsample_bytree':hp.quniform('colsample_bytree',0.5,1,0.05),
'objective':'reg:squarederror',
'n_estimators':100,
'seed':0
}
#run
best=fmin(obj_func,xgb_reg_params,algo=tpe.suggest,max_evals=25)
best
```
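One caveat worth noting (an addition to the original notebook): for dimensions defined with `hp.choice`, `fmin` returns the *index* of the selected option rather than the value itself. A minimal sketch using hyperopt's `space_eval` helper to map the result back to concrete parameter values, assuming the `xgb_reg_params` space and `best` result from the cell above:
```
# Decode hp.choice indices in `best` back into actual parameter values
from hyperopt import space_eval

best_params = space_eval(xgb_reg_params, best)
print(best_params)
# Optionally re-evaluate the tuned configuration:
# run_model(xgb.XGBRegressor(**best_params), feats)
```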
```
#@markdown Setup dependencies (Colab only)
%%capture
try:
import papyrus_scripts
except:
!pip uninstall papyrus-scripts -y
!pip install rdkit-pypi
!pip install https://github.com/OlivierBeq/Papyrus-scripts/tarball/master --no-cache-dir
get_ipython().kernel.do_shutdown(True)
```
# Simple examples: Using Papyrus scripts
[](https://colab.research.google.com/github/OlivierBeq/Papyrus-scripts/blob/master/notebook_examples/simple_examples.ipynb)
Herein it is assumed that the Papyrus <a href="https://doi.org/10.4121/16896406">bioactivity data</a> hosted on 4TU or Google Drive was ***NOT*** downloaded.
```
%%html
<style>
table {align:left;display:block}
</style>
```
## Download the data
One can easily download (part of) the Papyrus data using the *papyrus* command (*download* subcommand).<br/>
```
papyrus download --version latest -s without -S -d mold2 -d unirep
```
It can also be carried out programmatically with the following.
```
from papyrus_scripts.download import download_papyrus
```
The default behaviour is to:
- download the curated 2D data (*nostereo* argument)
- omit the non-curated 3D data (*stereo*)
- omit molecular structures (*structures*)
- download all molecular/sequence descriptors (*descriptors*)
Let's download the 2D curated data with molecular structures and only the descriptors used below (a total of 2.80 GB).
```
download_papyrus(version='latest', structures=True, descriptors=['mold2', 'unirep'])
```
At the time of writing (Apr. 26th, 2022) the latest version available is 05.5.
A custom directory to download the data to can be indicated with the *outdir* argument.
## Reading Papyrus files
Functions can be found under *papyrus_scripts.reader* to facilitate reading the dataset from disk.
```
from papyrus_scripts.reader import read_papyrus, read_protein_set
```
### Bioactivity data
Let's first read the bioactivity data.
We will use the *read_papyrus* function to read the bioactivity data as a pandas dataframe. <br/>
Let us first demonstrate the use of the function on systems with limited RAM (less than 50GB).
We first read the standardized data without stereochemistry in chunks of ten thousand lines.<br/>
Additionally, we make sure that *source_path* matches the *outdir* argument given to the *download_papyrus* function above.
```
sample_data = read_papyrus(is3d=False, chunksize=10000, source_path=None)
```
The return value is an iterator of Pandas dataframes of maximum ten thousand rows each.<br/>
Let's extract the first chunk as a pandas dataframe and have a look at few rows.
```
chunk1 = next(sample_data)
chunk1.head()
```
If you are sure your hardware can handle loading all the data, then you can drop *chunksize*.<br/>
Then the return value is a pandas dataframe.
Below, we will show how to use:
- a pandas dataframe (by calling our methods on *chunk1*)
- an iterator of pandas dataframes (by calling our methods on *sample_data*)
### Protein target data
But for now let's focus on protein data:<br/>
Information about the protein targets is available from a different file and can be loaded as easily as was demonstrated above.<br/>
This file is very limited in size, so chunking is not needed.
```
protein_data = read_protein_set(source_path=None)
protein_data.head()
```
## Filtering Papyrus
The data contained in the dataset can be filtered very easily using functions under *papyrus_scripts.preprocess*.<br/>
All filtering functions start with ***keep_***.
```
from papyrus_scripts.preprocess import (keep_quality, keep_source, keep_type,
keep_organism, keep_accession, keep_protein_class,
keep_match, keep_contains
)
```
**The strength of the Papyrus scripts is that the data can be filtered whether chunked or not.** The only difference:
- when using chunked data, call *consume_chunks* once all filters are applied to reconstitute a pandas dataframe
In addition to the 8 **keep_** functions above, one can filter compounds similar to a reference using the *keep_similar* and *keep_substructures* functions (see [advanced_querying.ipynb](https://github.com/OlivierBeq/Papyrus-scripts/blob/master/notebook_examples/advanced_querying.ipynb)).
### Filtering pandas dataframes
Let's first keep the data with quality 'medium' and above (namely 'high' and 'medium').
```
filter1 = keep_quality(data=chunk1, min_quality='medium')
```
<u>Using <a href="https://www.ebi.ac.uk/chembl/visualise/">ChEMBL's protein target tree</a> is encouraged for this part.</u><br/>
<br/>
We will then filter out any protein not belonging to these two classes:
* Ligand-gated ion channels
* SLC superfamily of solute carriers
For this filter, passing protein information is required (the same applies for *keep_organism* and *keep_accession*).
```
filter2 = keep_protein_class(data=filter1, protein_data=protein_data, classes=[{'l2': 'Ligand-gated ion channels'}, {'l3': 'SLC superfamily of solute carriers'}])
```
We now keep only K<sub>i</sub> and K<sub>D</sub> data.<br/>
Here we will pass filter2 (the output of the previous step) to the next *keep_* function.
```
filter3 = keep_type(data=filter2, activity_types=['Ki', 'KD'], njobs=1)
```
We finally keep only human and rat data (protein information is also required here).
```
filter4 = keep_organism(data=filter3, protein_data=protein_data, organism=['Human', 'Rat'], generic_regex=True)
```
Let us have a look at the filtered data.
```
filter4.head()
print(f'Number of activity points: {filter4.shape[0]}')
```
Remember that this result comes from only the first chunk of the entire dataset.
One can now save this dataframe like any other pandas object.
### Filtering iterators of dataframes
Now that the filtering capacity of the Papyrus scripts has been demonstrated for entire dataframes, we can try with chunked iterators.
Let's first re-instantiate the sample data. This time we will use a chunk size of 1,000,000.
```
sample_data = read_papyrus(is3d=False, chunksize=1000000, source_path=None)
```
For this we will go through the same filters as above, but iterate over the entire dataset.
```
filter1_it = keep_quality(data=sample_data, min_quality='medium')
filter2_it = keep_protein_class(data=filter1_it, protein_data=protein_data, classes=[{'l2': 'Ligand-gated ion channels'}, {'l3': 'SLC superfamily of solute carriers'}])
filter3_it = keep_type(data=filter2_it, activity_types=['Ki', 'KD'])
filter4_it = keep_organism(data=filter3_it, protein_data=protein_data, organism=['Human', 'Rat'], generic_regex=True)
```
The filters are not applied immediately to chunked iterators, and one can easily check that *filter4_it* is not a pandas dataframe.
```
filter4_it
```
To apply the filters on the entire iterator, one needs to call *consume_chunks*.<br/>
This function can be found under *papyrus_scripts.preprocess* just like the *keep_* functions used for filtering.
```
from papyrus_scripts.preprocess import consume_chunks
```
In order to follow progress of the filtering process, one needs to pass the total number of chunks the filters will go through.<br/>
$Total = \displaystyle \Bigl \lceil\frac{Size_{dataset}}{chunksize}\Bigl \rceil $<br/>
In version 05.5 of the Papyrus dataset the number of compound-protein activity points depends on whether stereochemistry is used or not **(remember we discourage its usage)**.<br/>
| Stereochemistry | Size of dataset |
| :--- | :---: |
| Without | 59,775,912 |
| With (strongly discouraged) | 61,097,228 |
In this example $Total = \displaystyle \Bigl \lceil \frac{59,775,912}{1,000,000}\Bigl \rceil = 60 $<br/>
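As a quick, purely illustrative sanity check of that arithmetic:
```
# Illustrative check of the chunk-count arithmetic (not part of the original workflow)
import math
print(math.ceil(59_775_912 / 1_000_000))  # -> 60
```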
```
filtered_data = consume_chunks(filter4_it, progress=True, total=60)
```
Although it may take up to 30 minutes to filter the entire dataset this way, this is the ideal way to work with the dataset on laptops.
```
print(f'Number of activity points: {filtered_data.shape[0]}')
```
We hope these simple examples demonstrated how the Papyrus data can easily be filtered.
Let's now focus on the modelling.
## Modelling the bioactivity data
The Papyrus scripts allow for both quantitative structure-activity relationship (QSAR) and proteochemometrics (PCM) modelling.<br/>
All functions related to modelling can be found under *papyrus_scripts.modelling*.
**Disclaimer:**<br/>
For now, only precomputed molecular descriptors can be used, preventing the use of models outside of Papyrus.<br/>
This major flaw will soon be fixed.
### QSAR models
```
from papyrus_scripts.modelling import qsar
import xgboost
```
Let us first restrict the data that we just extracted from Papyrus to the human serotonin transporter (accession P31645).
```
sample_data = read_papyrus(is3d=False, chunksize=1000000, source_path=None)
filter1_it = keep_accession(sample_data, 'P31645')
filter2_it = keep_quality(data=filter1_it, min_quality='medium')
filter3_it = keep_type(data=filter2_it, activity_types=['Ki', 'KD'])
SLC6A4_data = consume_chunks(filter3_it, total=60)
```
We will first create a regression model predicting the average pActivity values of a compound-target pair (i.e. *pchembl_value_Mean*).<br/>
Let's make sure the version below matches the one given to the *download_papyrus* function above.
```
reg_model = xgboost.XGBRegressor(verbosity=0)
reg_results, trained_reg_model = qsar(data=SLC6A4_data,
version='latest',
endpoint='pchembl_value_Mean',
num_points=30,
delta_activity=2,
descriptors='mold2',
descriptor_chunksize=50000,
activity_threshold=6.5,
model=reg_model,
folds=5,
stratify=False,
split_by='Year',
split_year=2013,
test_set_size=0.30,
cluster_method=None,
custom_groups=None,
random_state=1234,
verbose=True)
reg_results
```
Looking at the average R<sup>2</sup>, performance over cross-validation is decent, but the model shows very little capacity to predict the temporally split test set.
To train a classifier, all that is needed is to change the type of model.
```
cls_model = xgboost.XGBClassifier(verbosity=0)
cls_results, trained_cls_model = qsar(data=SLC6A4_data,
version='latest',
endpoint='pchembl_value_Mean',
num_points=30,
delta_activity=2,
descriptors='mold2',
descriptor_chunksize=50000,
activity_threshold=6.5,
model=cls_model,
folds=5,
stratify=False,
split_by='Year',
split_year=2013,
test_set_size=0.30,
cluster_method=None,
custom_groups=None,
random_state=1234,
verbose=True)
cls_results
```
Looking at the active-to-inactive ratio (i.e. A:N), one can clearly identify the reason for this low prediction performance over the test set.<br/>
Oversampling and/or undersampling techniques could help the model better identify the boundary between actives and inactives in the molecular descriptor space.<br/>
However the use of such techniques is not the focus here.
### PCM models
```
from papyrus_scripts.modelling import pcm
```
Let us see if including Rat data improves the quality of the model.
```
sample_data = read_papyrus(is3d=False, chunksize=1000000, source_path=None)
filter1_it = keep_accession(sample_data, ['P31645', 'P31652'])
filter2_it = keep_quality(data=filter1_it, min_quality='medium')
filter3_it = keep_type(data=filter2_it, activity_types=['Ki', 'KD'])
SLC6A4_human_rat = consume_chunks(filter3_it, total=60)
```
Let's make sure the version below matches the one given to the *download_papyrus* function above.
```
pcm_reg_model = xgboost.XGBRegressor(verbosity=0)
pcm_reg_results, pcm_reg_trained_model = pcm(data=SLC6A4_human_rat,
version='latest',
endpoint='pchembl_value_Mean',
num_points=30,
delta_activity=2,
mol_descriptors='mold2',
mol_descriptor_chunksize=50000,
prot_descriptors='unirep',
prot_descriptor_chunksize=50000,
activity_threshold=6.5,
model=pcm_reg_model,
folds=5,
stratify=False,
split_by='Year',
split_year=2013,
test_set_size=0.30,
cluster_method=None,
custom_groups=None,
random_state=1234,
verbose=True)
pcm_reg_results
```
As with QSAR models, training a classifier is a matter of changing the underlying model to be used.
```
pcm_cls_model = xgboost.XGBClassifier(verbosity=0)
pcm_cls_results, pcm_cls_trained_model = pcm(data=SLC6A4_human_rat,
version='latest',
endpoint='pchembl_value_Mean',
num_points=30,
delta_activity=2,
mol_descriptors='mold2',
mol_descriptor_chunksize=50000,
prot_descriptors='unirep',
prot_descriptor_chunksize=50000,
activity_threshold=6.5,
model=pcm_cls_model,
folds=5,
stratify=False,
split_by='Year',
split_year=2013,
test_set_size=0.30,
cluster_method=None,
custom_groups=None,
random_state=1234,
verbose=True)
pcm_cls_results
```
Copyright (c) 2019 [윤기태]
https://github.com/yoonkt200/python-data-analysis
[MIT License](https://github.com/yoonkt200/python-data-analysis/blob/master/LICENSE.txt)
# (Working Title) Python Data Analysis
-----
# 4.1) Identifying the Survivors of the Titanic
### Quick Links
- [<Step1. Exploration> : Exploring the Titanic Data](#<Step1.-Exploration>-:-Exploring-the-Titanic-Data)
    - [Basic Information about the Titanic Dataset]
    - [Exploratory Data Analysis]
- [<Step2. Classification> : Building a Survivor Classification Model](#<Step2.-Classification>-:-Building-a-Survivor-Classification-Model)
    - [Preprocessing for the Classification Model]
    - [Classification Modeling]
- [<Step3. Model Improvement> : First Steps in Feature Engineering](#<Step3.-Model-Improvement>-:-First-Steps-in-Feature-Engineering)
    - [Extracting New Meaning from Features]
    - [Feature Scaling]
    - [Examining Feature Importance]
- [<Step4. Evaluation> : Validating the Model](#<Step4.-Evaluation>-:-Validating-the-Model)
    - [Performing K-fold Cross-Validation]
    - [Analyzing Learning Curves]
-----
```
# -*- coding: utf-8 -*-
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
```
# <Step1. Exploration> : Exploring the Titanic Data
### [Basic Information about the Titanic Dataset]
##### Description of the Data Features
- pclass : Passenger class
- survived : Survival status
- name : Passenger name
- sex : Passenger sex
- age : Passenger age
- sibsp : Number of siblings/spouses aboard
- parch : Number of parents/children aboard
- ticket : Ticket number
- fare : Fare paid by the passenger
- cabin : Cabin name
- embarked : Port of embarkation (C = Cherbourg, Q = Queenstown, S = Southampton)
- body : Body identification number (deceased passengers only)
- home.dest : Home/destination
```
df_train = pd.read_csv("../data/titanic_train.csv")
df_test = pd.read_csv("../data/titanic_test.csv")
df_train.head(5)
print(df_train.info())
print("-----------------")
print(df_test.info())
```
##### Removing Unnecessary Features
```
# Drop the name, ticket, body, cabin, and home.dest features from the dataset.
df_train = df_train.drop(['name', 'ticket', 'body', 'cabin', 'home.dest'], axis=1)
df_test = df_test.drop(['name', 'ticket', 'body', 'cabin', 'home.dest'], axis=1)
```
-----
### [Exploratory Data Analysis]
```
print(df_train['survived'].value_counts())
df_train['survived'].value_counts().plot.bar()
# Group the data by the survived feature and examine the distribution of the pclass feature within each group.
print(df_train['pclass'].value_counts())
ax = sns.countplot(x='pclass', hue = 'survived', data = df_train)
from scipy import stats
# Define a function that automates the exploration by comparing a feature between the two groups.
def valid_features(df, col_name, distribution_check=True):
    # Plot the distributions of the feature for the two groups (survived=1, survived=0).
g = sns.FacetGrid(df, col='survived')
g.map(plt.hist, col_name, bins=30)
    # Print the standard deviation of the feature for each group (survived=1, survived=0).
titanic_survived = df[df['survived']==1]
titanic_survived_static = np.array(titanic_survived[col_name])
print("data std is", '%.2f' % np.std(titanic_survived_static))
titanic_n_survived = df[df['survived']==0]
titanic_n_survived_static = np.array(titanic_n_survived[col_name])
print("data std is", '%.2f' % np.std(titanic_n_survived_static))
# T-test로 두 집단의 평균 차이를 검정합니다.
tTestResult = stats.ttest_ind(titanic_survived[col_name], titanic_n_survived[col_name])
tTestResultDiffVar = stats.ttest_ind(titanic_survived[col_name], titanic_n_survived[col_name], equal_var=False)
print("The t-statistic and p-value assuming equal variances is %.3f and %.3f." % tTestResult)
print("The t-statistic and p-value not assuming equal variances is %.3f and %.3f" % tTestResultDiffVar)
if distribution_check:
# Shapiro-Wilk 검정 : 분포의 정규성 정도를 검증합니다.
print("The w-statistic and p-value in Survived %.3f and %.3f" % stats.shapiro(titanic_survived[col_name]))
print("The w-statistic and p-value in Non-Survived %.3f and %.3f" % stats.shapiro(titanic_n_survived[col_name]))
# 앞서 정의한 valid_features 함수를 실행합니다. age 피처를 탐색합니다.
valid_features(df_train[df_train['age'] > 0], 'age', distribution_check=True)
# 앞서 정의한 valid_features 함수를 실행합니다. sibsp 피처를 탐색합니다.
valid_features(df_train, 'sibsp', distribution_check=False)
```
-----
### `[Mini Quiz - 4.1]`
- `Explore the differences between the (survivor / non-survivor) groups for the parch, fare, sex, and embarked features.`
- Using the same approach as above, let's look at how the means and distributions of the survivor and non-survivor groups differ, and how statistically significant those differences are.
- sex : the survivor / non-survivor ratio differs greatly between men and women.
- embarked : the survivor / non-survivor ratio differs partially across the three ports of embarkation.
- parch : the standard deviations differ slightly, and the t-test shows a slight difference between the two group means.
- fare : the standard deviations differ considerably, and the t-test lets us say with confidence that the two group means are different.
```
ax = sns.countplot(x='sex', hue = 'survived', data = df_train)
ax = sns.countplot(x='embarked', hue = 'survived', data = df_train)
valid_features(df_train, 'parch', distribution_check=False)
valid_features(df_train, 'fare', distribution_check=False)
```
-----
# <Step2. Classification> : Building a survivor classification model
### [Preprocessing for the classification model]
```
# age의 결측값을 평균값으로 대체합니다.
replace_mean = df_train[df_train['age'] > 0]['age'].mean()
df_train['age'] = df_train['age'].fillna(replace_mean)
df_test['age'] = df_test['age'].fillna(replace_mean)
# embark : 2개의 결측값을 최빈값으로 대체합니다.
embarked_mode = df_train['embarked'].value_counts().index[0]
df_train['embarked'] = df_train['embarked'].fillna(embarked_mode)
df_test['embarked'] = df_test['embarked'].fillna(embarked_mode)
# one-hot encoding을 위한 통합 데이터 프레임(whole_df)을 생성합니다.
whole_df = df_train.append(df_test)
train_idx_num = len(df_train)
# pandas 패키지를 이용한 one-hot 인코딩을 수행합니다.
whole_df_encoded = pd.get_dummies(whole_df)
df_train = whole_df_encoded[:train_idx_num]
df_test = whole_df_encoded[train_idx_num:]
df_train.head()
# 데이터를 학습 데이터셋, 테스트 데이터셋으로 분리합니다.
x_train, y_train = df_train.loc[:, df_train.columns != 'survived'].values, df_train['survived'].values
x_test, y_test = df_test.loc[:, df_test.columns != 'survived'].values, df_test['survived'].values
```
-----
### [Classification modeling]
##### Logistic Regression
```
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
# 로지스틱 회귀 모델을 학습합니다.
lr = LogisticRegression(random_state=0)
lr.fit(x_train, y_train)
# 학습한 모델의 테스트 데이터셋에 대한 예측 결과를 반환합니다.
y_pred = lr.predict(x_test)
y_pred_probability = lr.predict_proba(x_test)[:,1]
```
##### Evaluating the classification model
```
# 테스트 데이터셋에 대한 accuracy, precision, recall, f1 평가 지표를 각각 출력합니다.
print("accuracy: %.2f" % accuracy_score(y_test, y_pred))
print("Precision : %.3f" % precision_score(y_test, y_pred))
print("Recall : %.3f" % recall_score(y_test, y_pred))
print("F1 : %.3f" % f1_score(y_test, y_pred))
```
-----
##### Various ways to evaluate a classification model
- Confusion Matrix-based
- Accuracy
- Precision
- Recall
- F1 score
- AUC (Area Under the Curve) & ROC (Receiver Operating Characteristic) curve
```
from sklearn.metrics import confusion_matrix
# Confusion Matrix를 출력합니다.
confmat = confusion_matrix(y_true=y_test, y_pred=y_pred)
print(confmat)
```
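For reference, the Confusion-Matrix-based metrics listed above can all be read straight off the matrix. A toy worked example with made-up numbers (not the Titanic output), using scikit-learn's `[[TN, FP], [FN, TP]]` layout:
```
# Suppose the matrix is [[TN, FP], [FN, TP]] = [[50, 10], [5, 35]]
TN, FP, FN, TP = 50, 10, 5, 35
accuracy  = (TP + TN) / (TP + TN + FP + FN)          # 0.85
precision = TP / (TP + FP)                           # 0.778
recall    = TP / (TP + FN)                           # 0.875
f1 = 2 * precision * recall / (precision + recall)   # 0.824
print(accuracy, precision, recall, f1)
```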
-----
##### Logistic Regression model AUC
```
from sklearn.metrics import roc_curve, roc_auc_score
# AUC (Area Under the Curve)를 계산하여 출력합니다.
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred_probability)
roc_auc = roc_auc_score(y_test, y_pred_probability)
print("AUC : %.3f" % roc_auc)
# ROC curve를 그래프로 출력합니다.
plt.rcParams['figure.figsize'] = [5, 4]
plt.plot(false_positive_rate, true_positive_rate, label='ROC curve (area = %0.3f)' % roc_auc,
color='red', linewidth=4.0)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve of Logistic regression')
plt.legend(loc="lower right")
```
-----
##### Decision Tree
```
from sklearn.tree import DecisionTreeClassifier
# 의사결정나무를 학습하고, 학습한 모델로 테스트 데이터셋에 대한 예측값을 반환합니다.
dtc = DecisionTreeClassifier()
dtc.fit(x_train, y_train)
y_pred = dtc.predict(x_test)
y_pred_probability = dtc.predict_proba(x_test)[:,1]
# 학습한 모델의 성능을 계산하여 출력합니다.
print("accuracy: %.2f" % accuracy_score(y_test, y_pred))
print("Precision : %.3f" % precision_score(y_test, y_pred))
print("Recall : %.3f" % recall_score(y_test, y_pred))
print("F1 : %.3f" % f1_score(y_test, y_pred))
# 학습한 모델의 AUC를 계산하여 출력합니다.
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred_probability)
roc_auc = roc_auc_score(y_test, y_pred_probability)
print("AUC : %.3f" % roc_auc)
# ROC curve를 그래프로 출력합니다.
plt.rcParams['figure.figsize'] = [5, 4]
plt.plot(false_positive_rate, true_positive_rate, label='ROC curve (area = %0.3f)' % roc_auc,
color='red', linewidth=4.0)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve of Logistic regression')
plt.legend(loc="lower right")
```
-----
# <Step3. Model Improvement> : First steps in feature engineering
### [Extracting new meaning from features]
```
# 데이터를 다시 불러옵니다.
df_train = pd.read_csv("../data/titanic_train.csv")
df_test = pd.read_csv("../data/titanic_test.csv")
df_train = df_train.drop(['ticket', 'body', 'home.dest'], axis=1)
df_test = df_test.drop(['ticket', 'body', 'home.dest'], axis=1)
# age의 결측값을 평균값으로 대체합니다.
replace_mean = df_train[df_train['age'] > 0]['age'].mean()
df_train['age'] = df_train['age'].fillna(replace_mean)
df_test['age'] = df_test['age'].fillna(replace_mean)
# embark : 2개의 결측값을 최빈값으로 대체합니다.
embarked_mode = df_train['embarked'].value_counts().index[0]
df_train['embarked'] = df_train['embarked'].fillna(embarked_mode)
df_test['embarked'] = df_test['embarked'].fillna(embarked_mode)
# one-hot encoding을 위한 통합 데이터 프레임(whole_df)을 생성합니다.
whole_df = df_train.append(df_test)
train_idx_num = len(df_train)
```
##### Using the cabin feature
```
print(whole_df['cabin'].value_counts()[:10])
# 결측 데이터의 경우는 ‘X’로 대체합니다.
whole_df['cabin'] = whole_df['cabin'].fillna('X')
# cabin 피처의 첫 번째 문자를 추출합니다.
whole_df['cabin'] = whole_df['cabin'].apply(lambda x: x[0])
# 추출한 문자 중, G와 T는 수가 너무 작기 때문에, 마찬가지로 ‘X’로 대체합니다.
whole_df['cabin'] = whole_df['cabin'].replace({"G":"X", "T":"X"})
ax = sns.countplot(x='cabin', hue = 'survived', data = whole_df)
plt.show()
```
-----
##### Using the name feature
```
# 이름에서 호칭을 추출합니다.
name_grade = whole_df['name'].apply(lambda x : x.split(", ",1)[1].split(".")[0])
name_grade = name_grade.unique().tolist()
print(name_grade)
# 호칭에 따라 사회적 지위(1910년대 기준)를 정의합니다.
grade_dict = {'A': ['Rev', 'Col', 'Major', 'Dr', 'Capt', 'Sir'], # 명예직을 나타냅니다.
'B': ['Ms', 'Mme', 'Mrs', 'Dona'], # 여성을 나타냅니다.
'C': ['Jonkheer', 'the Countess'], # 귀족이나 작위를 나타냅니다.
'D': ['Mr', 'Don'], # 남성을 나타냅니다.
'E': ['Master'], # 젊은남성을 나타냅니다.
'F': ['Miss', 'Mlle', 'Lady']} # 젊은 여성을 나타냅니다.
# 정의한 호칭의 기준에 따라, A~F의 문자로 name 피처를 다시 정의하는 함수입니다.
def give_grade(x):
grade = x.split(", ", 1)[1].split(".")[0]
for key, value in grade_dict.items():
for title in value:
if grade == title:
return key
return 'G'
# 위의 함수를 적용하여 name 피처를 새롭게 정의합니다.
whole_df['name'] = whole_df['name'].apply(lambda x: give_grade(x))
print(whole_df['name'].value_counts())
```
------
### `[Mini Quiz - 4.2]`
- `As with the 'cabin' feature, explore the differences between the (survivor / non-survivor) groups for the 'name' feature.`
- Using the same approach as above, visually explore how the distributions of the survivor and non-survivor groups differ.
- The same kind of countplot gives the visualization shown below.
- Just as the t-test checks for a difference in the 'means' of two groups, the 'chi-squared test' checks for a difference in their 'distributions'.
- Running the chi-squared test gives a p-value of 0.000, so the two distributions differ in a statistically significant way.
- `That result is also included below.`
```
ax = sns.countplot(x='name', hue = 'survived', data = whole_df)
plt.show()
from scipy import stats
chis = stats.chisquare(whole_df[whole_df['survived']==1]['cabin'].value_counts().sort_index(),
whole_df[whole_df['survived']==0]['cabin'].value_counts().sort_index())
print("statistic = %.3f, pvalue = %.3f" % chis)
```
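The cell above runs the test on `cabin`. A similar check for the `name` feature that the quiz actually asks about could use a contingency table; this is only a sketch, and it swaps in SciPy's independence test (`chi2_contingency`) rather than the goodness-of-fit call used above:
```
from scipy.stats import chi2_contingency
# Rows: name grade (A~G), columns: survived (0/1), cells: counts
name_table = pd.crosstab(whole_df['name'], whole_df['survived'])
chi2, p, dof, expected = chi2_contingency(name_table)
print("statistic = %.3f, pvalue = %.3f, dof = %d" % (chi2, p, dof))
```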
-----
##### one-hot encoding
```
# pandas 패키지를 이용한 one-hot 인코딩을 수행합니다.
whole_df_encoded = pd.get_dummies(whole_df)
df_train = whole_df_encoded[:train_idx_num]
df_test = whole_df_encoded[train_idx_num:]
df_train.head()
```
-----
##### Training on the feature-engineered dataset
```
# 데이터를 학습 데이터셋, 테스트 데이터셋으로 분리합니다.
x_train, y_train = df_train.loc[:, df_train.columns != 'survived'].values, df_train['survived'].values
x_test, y_test = df_test.loc[:, df_test.columns != 'survived'].values, df_test['survived'].values
# 로지스틱 회귀 모델을 학습합니다.
lr = LogisticRegression(random_state=0)
lr.fit(x_train, y_train)
# 학습한 모델의 테스트 데이터셋에 대한 예측 결과를 반환합니다.
y_pred = lr.predict(x_test)
y_pred_probability = lr.predict_proba(x_test)[:,1]
# 테스트 데이터셋에 대한 accuracy, precision, recall, f1 평가 지표를 각각 출력합니다.
print("accuracy: %.2f" % accuracy_score(y_test, y_pred))
print("Precision : %.3f" % precision_score(y_test, y_pred))
print("Recall : %.3f" % recall_score(y_test, y_pred))
print("F1 : %.3f" % f1_score(y_test, y_pred)) # AUC (Area Under the Curve) & ROC curve
# AUC (Area Under the Curve)를 계산하여 출력합니다.
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred_probability)
roc_auc = roc_auc_score(y_test, y_pred_probability)
print("AUC : %.3f" % roc_auc)
# ROC curve를 그래프로 출력합니다.
plt.rcParams['figure.figsize'] = [5, 4]
plt.plot(false_positive_rate, true_positive_rate, label='ROC curve (area = %0.3f)' % roc_auc,
color='red', linewidth=4.0)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve of Logistic regression')
plt.legend(loc="lower right")
```
-----
### [Examining feature importance]
```
# 예측 대상인 survived 피처를 제외한 모든 피처를 리스트로 반환합니다. (그래프의 y축)
cols = df_train.columns.tolist()
cols.remove('survived')
y_pos = np.arange(len(cols))
# 각 피처별 회귀 분석 계수를 그래프의 x축으로 하여, 피처 영향력 그래프를 출력합니다.
plt.rcParams['figure.figsize'] = [5, 4]
fig, ax = plt.subplots()
ax.barh(y_pos, lr.coef_[0], align='center', color='green', ecolor='black')
ax.set_yticks(y_pos)
ax.set_yticklabels(cols)
ax.invert_yaxis()
ax.set_xlabel('Coef')
ax.set_title("Each Feature's Coef")
plt.show()
```
-----
# <Step4. Evaluation> : Validating the model
### [Running K-fold cross-validation]
- Checking for overfitting with K-fold cross-validation
```
from sklearn.model_selection import KFold
# K-fold 교차 검증의 k를 5로 설정합니다.
k = 5
cv = KFold(k, shuffle=True, random_state=0)
acc_history = []
# K-fold를 5번의 분할 학습으로 반복합니다.
for i, (train_data_row, test_data_row) in enumerate(cv.split(whole_df_encoded)):
# 5개로 분할된 fold 중 4개를 학습 데이터셋, 1개를 테스트 데이터셋으로 지정합니다. 매 반복시마다, 테스트 데이터셋은 변경됩니다.
df_train = whole_df_encoded.iloc[train_data_row]
df_test = whole_df_encoded.iloc[test_data_row]
# survived 피처를 y, 나머지 피처들을 x 데이터로 지정합니다.
splited_x_train, splited_y_train = df_train.loc[:, df_train.columns != 'survived'].values, df_train['survived'].values
splited_x_test, splited_y_test = df_test.loc[:, df_test.columns != 'survived'].values, df_test['survived'].values
# 주어진 데이터로 로지스틱 회귀 모델을 학습합니다.
lr = LogisticRegression(random_state=0)
lr.fit(splited_x_train, splited_y_train)
y_pred = lr.predict(splited_x_test)
# 테스트 데이터셋의 Accuracy를 계산하여 acc_history에 저장합니다.
splited_acc = accuracy_score(splited_y_test, y_pred)
acc_history.append(splited_acc)
# acc_history에 저장된 5번의 학습 결과(Accuracy)를 그래프로 출력합니다.
plt.xlabel("Each K-fold")
plt.ylabel("Acc of splited test data")
plt.plot(range(1, k+1), acc_history)
```
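The same 5-fold accuracy estimate can be written more compactly with scikit-learn's `cross_val_score`. A sketch reusing `whole_df_encoded` from above:
```
from sklearn.model_selection import cross_val_score
X = whole_df_encoded.loc[:, whole_df_encoded.columns != 'survived'].values
y = whole_df_encoded['survived'].values
scores = cross_val_score(LogisticRegression(random_state=0),
                         X, y, cv=KFold(5, shuffle=True, random_state=0),
                         scoring='accuracy')
print(scores, scores.mean())
```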
-----
### [Analyzing the learning curve]
- To run the code below, install the following package from an Anaconda Prompt or terminal.
- (env_name) $ `pip install scikit-plot`
```
import scikitplot as skplt
skplt.estimators.plot_learning_curve(lr, x_train, y_train)
plt.show()
```
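If you would rather not add a dependency, scikit-learn's own `learning_curve` returns the same underlying numbers, which can then be plotted by hand (a sketch):
```
from sklearn.model_selection import learning_curve
train_sizes, train_scores, val_scores = learning_curve(lr, x_train, y_train, cv=5)
plt.plot(train_sizes, train_scores.mean(axis=1), label='training score')
plt.plot(train_sizes, val_scores.mean(axis=1), label='cross-validation score')
plt.xlabel('Training examples')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
```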
|
github_jupyter
|
# -*- coding: utf-8 -*-
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
df_train = pd.read_csv("../data/titanic_train.csv")
df_test = pd.read_csv("../data/titanic_test.csv")
df_train.head(5)
print(df_train.info())
print("-----------------")
print(df_test.info())
# 데이터셋에서 name, ticket, body, cabin, home.dest 피처를 제거합니다.
df_train = df_train.drop(['name', 'ticket', 'body', 'cabin', 'home.dest'], axis=1)
df_test = df_test.drop(['name', 'ticket', 'body', 'cabin', 'home.dest'], axis=1)
print(df_train['survived'].value_counts())
df_train['survived'].value_counts().plot.bar()
# survived 피처를 기준으로 그룹을 나누어, 그룹별 pclass 피처의 분포를 살펴봅니다.
print(df_train['pclass'].value_counts())
ax = sns.countplot(x='pclass', hue = 'survived', data = df_train)
from scipy import stats
# 두 집단의 피처를 비교해주며 탐색작업을 자동화하는 함수를 정의합니다.
def valid_features(df, col_name, distribution_check=True):
# 두 집단 (survived=1, survived=0)의 분포 그래프를 출력합니다.
g = sns.FacetGrid(df, col='survived')
g.map(plt.hist, col_name, bins=30)
# 두 집단 (survived=1, survived=0)의 표준편차를 각각 출력합니다.
titanic_survived = df[df['survived']==1]
titanic_survived_static = np.array(titanic_survived[col_name])
print("data std is", '%.2f' % np.std(titanic_survived_static))
titanic_n_survived = df[df['survived']==0]
titanic_n_survived_static = np.array(titanic_n_survived[col_name])
print("data std is", '%.2f' % np.std(titanic_n_survived_static))
# T-test로 두 집단의 평균 차이를 검정합니다.
tTestResult = stats.ttest_ind(titanic_survived[col_name], titanic_n_survived[col_name])
tTestResultDiffVar = stats.ttest_ind(titanic_survived[col_name], titanic_n_survived[col_name], equal_var=False)
print("The t-statistic and p-value assuming equal variances is %.3f and %.3f." % tTestResult)
print("The t-statistic and p-value not assuming equal variances is %.3f and %.3f" % tTestResultDiffVar)
if distribution_check:
# Shapiro-Wilk 검정 : 분포의 정규성 정도를 검증합니다.
print("The w-statistic and p-value in Survived %.3f and %.3f" % stats.shapiro(titanic_survived[col_name]))
print("The w-statistic and p-value in Non-Survived %.3f and %.3f" % stats.shapiro(titanic_n_survived[col_name]))
# 앞서 정의한 valid_features 함수를 실행합니다. age 피처를 탐색합니다.
valid_features(df_train[df_train['age'] > 0], 'age', distribution_check=True)
# 앞서 정의한 valid_features 함수를 실행합니다. sibsp 피처를 탐색합니다.
valid_features(df_train, 'sibsp', distribution_check=False)
ax = sns.countplot(x='sex', hue = 'survived', data = df_train)
ax = sns.countplot(x='embarked', hue = 'survived', data = df_train)
valid_features(df_train, 'parch', distribution_check=False)
valid_features(df_train, 'fare', distribution_check=False)
# age의 결측값을 평균값으로 대체합니다.
replace_mean = df_train[df_train['age'] > 0]['age'].mean()
df_train['age'] = df_train['age'].fillna(replace_mean)
df_test['age'] = df_test['age'].fillna(replace_mean)
# embark : 2개의 결측값을 최빈값으로 대체합니다.
embarked_mode = df_train['embarked'].value_counts().index[0]
df_train['embarked'] = df_train['embarked'].fillna(embarked_mode)
df_test['embarked'] = df_test['embarked'].fillna(embarked_mode)
# one-hot encoding을 위한 통합 데이터 프레임(whole_df)을 생성합니다.
whole_df = df_train.append(df_test)
train_idx_num = len(df_train)
# pandas 패키지를 이용한 one-hot 인코딩을 수행합니다.
whole_df_encoded = pd.get_dummies(whole_df)
df_train = whole_df_encoded[:train_idx_num]
df_test = whole_df_encoded[train_idx_num:]
df_train.head()
# 데이터를 학습 데이터셋, 테스트 데이터셋으로 분리합니다.
x_train, y_train = df_train.loc[:, df_train.columns != 'survived'].values, df_train['survived'].values
x_test, y_test = df_test.loc[:, df_test.columns != 'survived'].values, df_test['survived'].values
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
# 로지스틱 회귀 모델을 학습합니다.
lr = LogisticRegression(random_state=0)
lr.fit(x_train, y_train)
# 학습한 모델의 테스트 데이터셋에 대한 예측 결과를 반환합니다.
y_pred = lr.predict(x_test)
y_pred_probability = lr.predict_proba(x_test)[:,1]
# 테스트 데이터셋에 대한 accuracy, precision, recall, f1 평가 지표를 각각 출력합니다.
print("accuracy: %.2f" % accuracy_score(y_test, y_pred))
print("Precision : %.3f" % precision_score(y_test, y_pred))
print("Recall : %.3f" % recall_score(y_test, y_pred))
print("F1 : %.3f" % f1_score(y_test, y_pred))
from sklearn.metrics import confusion_matrix
# Confusion Matrix를 출력합니다.
confmat = confusion_matrix(y_true=y_test, y_pred=y_pred)
print(confmat)
from sklearn.metrics import roc_curve, roc_auc_score
# AUC (Area Under the Curve)를 계산하여 출력합니다.
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred_probability)
roc_auc = roc_auc_score(y_test, y_pred_probability)
print("AUC : %.3f" % roc_auc)
# ROC curve를 그래프로 출력합니다.
plt.rcParams['figure.figsize'] = [5, 4]
plt.plot(false_positive_rate, true_positive_rate, label='ROC curve (area = %0.3f)' % roc_auc,
color='red', linewidth=4.0)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve of Logistic regression')
plt.legend(loc="lower right")
from sklearn.tree import DecisionTreeClassifier
# 의사결정나무를 학습하고, 학습한 모델로 테스트 데이터셋에 대한 예측값을 반환합니다.
dtc = DecisionTreeClassifier()
dtc.fit(x_train, y_train)
y_pred = dtc.predict(x_test)
y_pred_probability = dtc.predict_proba(x_test)[:,1]
# 학습한 모델의 성능을 계산하여 출력합니다.
print("accuracy: %.2f" % accuracy_score(y_test, y_pred))
print("Precision : %.3f" % precision_score(y_test, y_pred))
print("Recall : %.3f" % recall_score(y_test, y_pred))
print("F1 : %.3f" % f1_score(y_test, y_pred))
# 학습한 모델의 AUC를 계산하여 출력합니다.
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred_probability)
roc_auc = roc_auc_score(y_test, y_pred_probability)
print("AUC : %.3f" % roc_auc)
# ROC curve를 그래프로 출력합니다.
plt.rcParams['figure.figsize'] = [5, 4]
plt.plot(false_positive_rate, true_positive_rate, label='ROC curve (area = %0.3f)' % roc_auc,
color='red', linewidth=4.0)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve of Logistic regression')
plt.legend(loc="lower right")
# 데이터를 다시 불러옵니다.
df_train = pd.read_csv("../data/titanic_train.csv")
df_test = pd.read_csv("../data/titanic_test.csv")
df_train = df_train.drop(['ticket', 'body', 'home.dest'], axis=1)
df_test = df_test.drop(['ticket', 'body', 'home.dest'], axis=1)
# age의 결측값을 평균값으로 대체합니다.
replace_mean = df_train[df_train['age'] > 0]['age'].mean()
df_train['age'] = df_train['age'].fillna(replace_mean)
df_test['age'] = df_test['age'].fillna(replace_mean)
# embark : 2개의 결측값을 최빈값으로 대체합니다.
embarked_mode = df_train['embarked'].value_counts().index[0]
df_train['embarked'] = df_train['embarked'].fillna(embarked_mode)
df_test['embarked'] = df_test['embarked'].fillna(embarked_mode)
# one-hot encoding을 위한 통합 데이터 프레임(whole_df)을 생성합니다.
whole_df = df_train.append(df_test)
train_idx_num = len(df_train)
print(whole_df['cabin'].value_counts()[:10])
# 결측 데이터의 경우는 ‘X’로 대체합니다.
whole_df['cabin'] = whole_df['cabin'].fillna('X')
# cabin 피처의 첫 번째 문자를 추출합니다.
whole_df['cabin'] = whole_df['cabin'].apply(lambda x: x[0])
# 추출한 문자 중, G와 T는 수가 너무 작기 때문에, 마찬가지로 ‘X’로 대체합니다.
whole_df['cabin'] = whole_df['cabin'].replace({"G":"X", "T":"X"})
ax = sns.countplot(x='cabin', hue = 'survived', data = whole_df)
plt.show()
# 이름에서 호칭을 추출합니다.
name_grade = whole_df['name'].apply(lambda x : x.split(", ",1)[1].split(".")[0])
name_grade = name_grade.unique().tolist()
print(name_grade)
# 호칭에 따라 사회적 지위(1910년대 기준)를 정의합니다.
grade_dict = {'A': ['Rev', 'Col', 'Major', 'Dr', 'Capt', 'Sir'], # 명예직을 나타냅니다.
'B': ['Ms', 'Mme', 'Mrs', 'Dona'], # 여성을 나타냅니다.
'C': ['Jonkheer', 'the Countess'], # 귀족이나 작위를 나타냅니다.
'D': ['Mr', 'Don'], # 남성을 나타냅니다.
'E': ['Master'], # 젊은남성을 나타냅니다.
'F': ['Miss', 'Mlle', 'Lady']} # 젊은 여성을 나타냅니다.
# 정의한 호칭의 기준에 따라, A~F의 문자로 name 피처를 다시 정의하는 함수입니다.
def give_grade(x):
grade = x.split(", ", 1)[1].split(".")[0]
for key, value in grade_dict.items():
for title in value:
if grade == title:
return key
return 'G'
# 위의 함수를 적용하여 name 피처를 새롭게 정의합니다.
whole_df['name'] = whole_df['name'].apply(lambda x: give_grade(x))
print(whole_df['name'].value_counts())
ax = sns.countplot(x='name', hue = 'survived', data = whole_df)
plt.show()
from scipy import stats
chis = stats.chisquare(whole_df[whole_df['survived']==1]['cabin'].value_counts().sort_index(),
whole_df[whole_df['survived']==0]['cabin'].value_counts().sort_index())
print("statistic = %.3f, pvalue = %.3f" % chis)
# pandas 패키지를 이용한 one-hot 인코딩을 수행합니다.
whole_df_encoded = pd.get_dummies(whole_df)
df_train = whole_df_encoded[:train_idx_num]
df_test = whole_df_encoded[train_idx_num:]
df_train.head()
# 데이터를 학습 데이터셋, 테스트 데이터셋으로 분리합니다.
x_train, y_train = df_train.loc[:, df_train.columns != 'survived'].values, df_train['survived'].values
x_test, y_test = df_test.loc[:, df_test.columns != 'survived'].values, df_test['survived'].values
# 로지스틱 회귀 모델을 학습합니다.
lr = LogisticRegression(random_state=0)
lr.fit(x_train, y_train)
# 학습한 모델의 테스트 데이터셋에 대한 예측 결과를 반환합니다.
y_pred = lr.predict(x_test)
y_pred_probability = lr.predict_proba(x_test)[:,1]
# 테스트 데이터셋에 대한 accuracy, precision, recall, f1 평가 지표를 각각 출력합니다.
print("accuracy: %.2f" % accuracy_score(y_test, y_pred))
print("Precision : %.3f" % precision_score(y_test, y_pred))
print("Recall : %.3f" % recall_score(y_test, y_pred))
print("F1 : %.3f" % f1_score(y_test, y_pred)) # AUC (Area Under the Curve) & ROC curve
# AUC (Area Under the Curve)를 계산하여 출력합니다.
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred_probability)
roc_auc = roc_auc_score(y_test, y_pred_probability)
print("AUC : %.3f" % roc_auc)
# ROC curve를 그래프로 출력합니다.
plt.rcParams['figure.figsize'] = [5, 4]
plt.plot(false_positive_rate, true_positive_rate, label='ROC curve (area = %0.3f)' % roc_auc,
color='red', linewidth=4.0)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve of Logistic regression')
plt.legend(loc="lower right")
# 예측 대상인 survived 피처를 제외한 모든 피처를 리스트로 반환합니다. (그래프의 y축)
cols = df_train.columns.tolist()
cols.remove('survived')
y_pos = np.arange(len(cols))
# 각 피처별 회귀 분석 계수를 그래프의 x축으로 하여, 피처 영향력 그래프를 출력합니다.
plt.rcParams['figure.figsize'] = [5, 4]
fig, ax = plt.subplots()
ax.barh(y_pos, lr.coef_[0], align='center', color='green', ecolor='black')
ax.set_yticks(y_pos)
ax.set_yticklabels(cols)
ax.invert_yaxis()
ax.set_xlabel('Coef')
ax.set_title("Each Feature's Coef")
plt.show()
from sklearn.model_selection import KFold
# K-fold 교차 검증의 k를 5로 설정합니다.
k = 5
cv = KFold(k, shuffle=True, random_state=0)
acc_history = []
# K-fold를 5번의 분할 학습으로 반복합니다.
for i, (train_data_row, test_data_row) in enumerate(cv.split(whole_df_encoded)):
# 5개로 분할된 fold 중 4개를 학습 데이터셋, 1개를 테스트 데이터셋으로 지정합니다. 매 반복시마다, 테스트 데이터셋은 변경됩니다.
df_train = whole_df_encoded.iloc[train_data_row]
df_test = whole_df_encoded.iloc[test_data_row]
# survived 피처를 y, 나머지 피처들을 x 데이터로 지정합니다.
splited_x_train, splited_y_train = df_train.loc[:, df_train.columns != 'survived'].values, df_train['survived'].values
splited_x_test, splited_y_test = df_test.loc[:, df_test.columns != 'survived'].values, df_test['survived'].values
# 주어진 데이터로 로지스틱 회귀 모델을 학습합니다.
lr = LogisticRegression(random_state=0)
lr.fit(splited_x_train, splited_y_train)
y_pred = lr.predict(splited_x_test)
# 테스트 데이터셋의 Accuracy를 계산하여 acc_history에 저장합니다.
splited_acc = accuracy_score(splited_y_test, y_pred)
acc_history.append(splited_acc)
# acc_history에 저장된 5번의 학습 결과(Accuracy)를 그래프로 출력합니다.
plt.xlabel("Each K-fold")
plt.ylabel("Acc of splited test data")
plt.plot(range(1, k+1), acc_history)
import scikitplot as skplt
skplt.estimators.plot_learning_curve(lr, x_train, y_train)
plt.show()
| 0.27048 | 0.913715 |
## Lambda expressions
By a lambda expression we mean processing every element of a list with a single operation. This functional style is well suited in Python for processing large arrays of data and lets you state the processing clearly and concisely in a single line.
```
def more3(l):
result = []
for n in l:
if n > 3:
result.append(n)
return result #[x > 3 for x in l]
list1 = [1, 2, 5, 4, 6, 3, 4]
more3(list1)
def isMoreThan3(n):
return n > 4
list(filter(isMoreThan3, list1))
list4 = filter(lambda x: x > 4, list1)
list(list4)
%timeit more3(list1)
%timeit filter(lambda x: x > 3, list1)
%timeit filter(isMoreThan3, list1)
list5 = map(lambda x: x * 2, list1)
list(list5)
strings = ['a', 'b', 'c']
numbers = [1, 2, 3, 4, 5]
list(zip(strings, numbers))
from functools import reduce
list6 = [1, 2, 3, 4, 5]
reduce(lambda x, y: x * y, list6)
filter(operations)
import matplotlib.pyplot as plt
```
## DataFrames
DataFrames are the usual way to work with data in Python. They make it convenient to handle all kinds of tables, run operations over them, and use some database-style operations.
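For example, a `groupby` aggregation behaves much like SQL's `GROUP BY`. A self-contained sketch with made-up data:
```
import pandas as pd

sales = pd.DataFrame({'city':   ['Moscow', 'Moscow', 'Kazan'],
                      'amount': [10, 20, 5]})
# Roughly: SELECT city, SUM(amount) FROM sales GROUP BY city
sales.groupby('city')['amount'].sum()
```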
```
import pandas as pd
ser = pd.Series([1, 2, 3], index=[3, 2, 1])
ser
ser.reset_index()
```
A `DataFrame` is a convenient representation of tabular data in Python. Each column of data is a special array, a `pd.Series`.
```
df = pd.DataFrame({'a': [1, 3, 4], 'b': [2, 4, 5]})
df
```
Let's check the type of the `df` variable we just created
```
type(df)
```
You can also append values to a DataFrame, as in the example below
```
df = df.append({'a': 5, 'b': 6}, ignore_index=True)
df
```
A DataFrame can also be loaded from popular data-storage formats: CSV (Comma-Separated Values), MS Excel, JSON, and so on.
> **Remember!**
>
> Don't load huge datasets through a DataFrame this way. Why? Because the whole file is read into memory at once, and you may simply run out of memory.
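One common workaround for a really large file is to read it in chunks, so only one piece is in memory at a time. A sketch (summing `Volume` is just for illustration):
```
total_volume = 0
for chunk in pd.read_csv('data/TSLA.csv', chunksize=100_000):
    # each chunk is an ordinary DataFrame with at most 100_000 rows
    total_volume += chunk['Volume'].sum()
print(total_volume)
```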
```
df = pd.read_csv('data/TSLA.csv') # можно считывать разные файлы: csv - read_csv, xlsx - read_excel и тд
```
Simply evaluating the `df` variable shows the first 5 and the last 5 rows
```
df
```
Or we can call the `head()` function, which prints the first 5 rows, or a chosen number of rows, e.g. `head(3)`
```
# head(x) дает взглянуть на x первых строк
df.head()
```
`tail()` does the same as `head`, but from the end
```
df.tail()
```
The `info()` function gives an overview of the dataset.
```
#info() сразу дает информацию о типах данных и количестве NaN значений
df.info()
```
You can access a specific column by giving its name, just like in a dictionary. A `Series` object is returned
```
df["Volume"]
# Чтобы посмотреть на колонку, надо указать ее название в ковычках и квадратных скобках
df['Date']
# либо через точку
df.Date
```
If you want several columns at once, list them inside double square brackets `[[ ]]`.
See the example below
```
df[['Date', 'Volume', "Close"]]
```
You can retrieve not just a single column but a whole row by its label, through the `loc` property
```
# Строка с указанным номером
df.loc[1]
index = [100,101,1231, 2199]
```
Or we can retrieve several rows at once by passing a list
```
# Либо строки
df.loc[index]
```
You can also retrieve a row by its positional index, through the `iloc` property
```
# iloc - строка с указанным индексом
df.iloc[5]
df.iloc[[5, 7]]
df.iloc[20:41]
df[50:100]
```
Lambda expressions or other conditions can be used to pick out the rows you need
```
df.iloc[lambda x: x.index % 5 == 0]
```
`iloc` can also return not just the rows of interest but specific columns, by listing their indices
```
df.iloc[[100, 300], [1, 4]]
```
Instead of indexing with plain integers you can index by dates (and, in principle, by other types such as datetime, str, object, and so on)
```
df.index = df.Date
```
Let's see what we got
```
df.head()
```
Take a row by `df.index` using `loc`
```
df.loc['2010-06-29']
```
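Note that `Date` is still a column of strings here. Converting it with `pd.to_datetime` gives a real `DatetimeIndex`, which also allows partial-date slicing (a sketch):
```
df.index = pd.to_datetime(df['Date'])
df.loc['2010-07']   # every row from July 2010
df.loc['2010']      # every row from 2010
```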
And we can restore the default index by applying the `range` generator
```
df.index = range(len(df))
df
```
A `Series` object has built-in statistical methods
```
df['Close'].mean(), df['Close'].max(),df['Close'].min(), df['Close'].median(),
```
Sometimes values in a column are missing, and it can be easier to drop them than to recover or approximate them
```
df = df.dropna()
```
Building a query with a condition
```
# Поиск строк по условию
df[df['Volume'] > 17187100]['Close']
len(df[df['Open'] > df['Open'].mean()]) # количество дней, когда цена была выше стредней
df.info()
df['Open'].where(lambda x: x < 100).dropna()
df['Open']
```
To modify a whole column, for example multiplying it by 10, you can use the `apply` method, which iterates over the elements and passes each one into the `lambda` expression
```
df['Open'].apply(lambda x: x * 10)
import numpy as np
def average(item):
if item > np.average(df['Close']):
return item
```
Or you can pass each element straight into a function and put more elaborate logic inside it
```
df['Open'].apply(lambda x: average(x)).dropna()
mean = df['Open'].mean()
def is_too_low(x):
if x < mean*0.8:
return True
else:
return False
def replace_too_low(x):
if x < mean*0.9:
return mean*0.9
else:
return x
def my_replace(x):
return x*0.95
df[df['Open'].apply(is_too_low)]['Open'] = mean*0.8
df['Open'].apply(replace_too_low)
df['Open'] = df[df['Open'] < mean*0.95]['Open'].apply(my_replace)
np.mean([5, 4.6])
df.head()
df = df.drop('Adj Close', 1)
df
from matplotlib.pyplot import figure
figure(figsize=(15, 3), dpi=120)
df['Volume'].plot()
import numpy as np
ind = [np.random.randint(0, len(df)) for i in range(50)]
df['Close'][ind] = None
df.info()
df[df.Close.isna()]
plt.plot(df[df.Close.isna() == False]['Close'].index, df[df.Close.isna() == False]['Close'])
# Заполним пропуски средним значением между вчера и завтра
# а как
df['Close'].fillna(df['Close'].mean())
```
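The comment in the last cell asks how to fill each gap with the average of the previous and the next day; `Series.interpolate` does exactly that for an isolated missing value (a sketch):
```
# Linear interpolation: each NaN is replaced by the value on the straight
# line between its nearest known neighbours (for a single missing day,
# that is the mean of yesterday and tomorrow).
df['Close'] = df['Close'].interpolate(method='linear')
df['Close'].isna().sum()
```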
|
github_jupyter
|
def more3(l):
result = []
for n in l:
if n > 3:
result.append(n)
return result #[x > 3 for x in l]
list1 = [1, 2, 5, 4, 6, 3, 4]
more3(list1)
def isMoreThan3(n):
return n > 4
list(filter(isMoreThan3, list1))
list4 = filter(lambda x: x > 4, list1)
list(list4)
%timeit more3(list1)
%timeit filter(lambda x: x > 3, list1)
%timeit filter(isMoreThan3, list1)
list5 = map(lambda x: x * 2, list1)
list(list5)
strings = ['a', 'b', 'c']
numbers = [1, 2, 3, 4, 5]
list(zip(strings, numbers))
from functools import reduce
list6 = [1, 2, 3, 4, 5]
reduce(lambda x, y: x * y, list6)
filter(operations)
import matplotlib.pyplot as plt
import pandas as pd
ser = pd.Series([1, 2, 3], index=[3, 2, 1])
ser
ser.reset_index()
df = pd.DataFrame({'a': [1, 3, 4], 'b': [2, 4, 5]})
df
type(df)
df = df.append({'a': 5, 'b': 6}, ignore_index=True)
df
df = pd.read_csv('data/TSLA.csv') # можно считывать разные файлы: csv - read_csv, xlsx - read_excel и тд
df
# head(x) дает взглянуть на x первых строк
df.head()
df.tail()
#info() сразу дает информацию о типах данных и количестве NaN значений
df.info()
df["Volume"]
# Чтобы посмотреть на колонку, надо указать ее название в ковычках и квадратных скобках
df['Date']
# либо через точку
df.Date
df[['Date', 'Volume', "Close"]]
# Строка с указанным номером
df.loc[1]
index = [100,101,1231, 2199]
# Либо строки
df.loc[index]
# iloc - строка с указанным индексом
df.iloc[5]
df.iloc[[5, 7]]
df.iloc[20:41]
df[50:100]
df.iloc[lambda x: x.index % 5 == 0]
df.iloc[[100, 300], [1, 4]]
df.index = df.Date
df.head()
df.loc['2010-06-29']
df.index = range(len(df))
df
df['Close'].mean(), df['Close'].max(),df['Close'].min(), df['Close'].median(),
df = df.dropna()
# Поиск строк по условию
df[df['Volume'] > 17187100]['Close']
len(df[df['Open'] > df['Open'].mean()]) # количество дней, когда цена была выше стредней
df.info()
df['Open'].where(lambda x: x < 100).dropna()
df['Open']
df['Open'].apply(lambda x: x * 10)
import numpy as np
def average(item):
if item > np.average(df['Close']):
return item
df['Open'].apply(lambda x: average(x)).dropna()
mean = df['Open'].mean()
def is_too_low(x):
if x < mean*0.8:
return True
else:
return False
def replace_too_low(x):
if x < mean*0.9:
return mean*0.9
else:
return x
def my_replace(x):
return x*0.95
df[df['Open'].apply(is_too_low)]['Open'] = mean*0.8
df['Open'].apply(replace_too_low)
df['Open'] = df[df['Open'] < mean*0.95]['Open'].apply(my_replace)
np.mean([5, 4.6])
df.head()
df = df.drop('Adj Close', 1)
df
from matplotlib.pyplot import figure
figure(figsize=(15, 3), dpi=120)
df['Volume'].plot()
import numpy as np
ind = [np.random.randint(0, len(df)) for i in range(50)]
df['Close'][ind] = None
df.info()
df[df.Close.isna()]
plt.plot(df[df.Close.isna() == False]['Close'].index, df[df.Close.isna() == False]['Close'])
# Заполним пропуски средним значением между вчера и завтра
# а как
df['Close'].fillna(df['Close'].mean())
| 0.184437 | 0.902781 |
# Visualizing linear relationships
```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
np.random.seed(sum(map(ord, "regression")))
tips = sns.load_dataset("tips")
```
```
sns.regplot(x="total_bill", y="tip", data=tips);
sns.lmplot(x="total_bill", y="tip", data=tips);
```
```
sns.lmplot(x="size", y="tip", data=tips);
```
```
sns.lmplot(x="size", y="tip", data=tips, x_jitter=.05);
```
```
sns.lmplot(x="size", y="tip", data=tips, x_estimator=np.mean);
```
```
anscombe = sns.load_dataset("anscombe")
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'I'"),
ci=None, scatter_kws={"s": 80});
```
```
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'II'"),
ci=None, scatter_kws={"s": 80});
```
```
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'II'"),
order=2, ci=None, scatter_kws={"s": 80});
```
```
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'III'"),
ci=None, scatter_kws={"s": 80});
```
```
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'III'"),
robust=True, ci=None, scatter_kws={"s": 80});
```
```
tips["big_tip"] = (tips.tip / tips.total_bill) > .15
sns.lmplot(x="total_bill", y="big_tip", data=tips,
y_jitter=.03);
```
```
sns.lmplot(x="total_bill", y="big_tip", data=tips,
logistic=True, y_jitter=.03);
```
```
sns.lmplot(x="total_bill", y="tip", data=tips,
lowess=True);
```
```
sns.residplot(x="x", y="y", data=anscombe.query("dataset == 'I'"),
scatter_kws={"s": 80});
```
```
sns.residplot(x="x", y="y", data=anscombe.query("dataset == 'II'"),
scatter_kws={"s": 80});
```
```
sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips);
```
```
sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips,
markers=["o", "x"], palette="Set1");
```
```
sns.lmplot(x="total_bill", y="tip", hue="smoker", col="time", data=tips);
sns.lmplot(x="total_bill", y="tip", hue="smoker",
col="time", row="sex", data=tips);
```
```
f, ax = plt.subplots(figsize=(5, 6))
sns.regplot(x="total_bill", y="tip", data=tips, ax=ax);
```
```
sns.lmplot(x="total_bill", y="tip", col="day", data=tips,
col_wrap=2, size=3);
sns.lmplot(x="total_bill", y="tip", col="day", data=tips,
aspect=.5);
```
```
sns.jointplot(x="total_bill", y="tip", data=tips, kind="reg");
```
```
sns.pairplot(tips, x_vars=["total_bill", "size"], y_vars=["tip"],
size=5, aspect=.8, kind="reg");
```
```
sns.pairplot(tips, x_vars=["total_bill", "size"], y_vars=["tip"],
hue="smoker", size=5, aspect=.8, kind="reg");
```
|
github_jupyter
|
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
np.random.seed(sum(map(ord, "regression")))
tips = sns.load_dataset("tips")
sns.regplot(x="total_bill", y="tip", data=tips);
sns.lmplot(x="total_bill", y="tip", data=tips);
sns.lmplot(x="size", y="tip", data=tips);
sns.lmplot(x="size", y="tip", data=tips, x_jitter=.05);
sns.lmplot(x="size", y="tip", data=tips, x_estimator=np.mean);
anscombe = sns.load_dataset("anscombe")
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'I'"),
ci=None, scatter_kws={"s": 80});
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'II'"),
ci=None, scatter_kws={"s": 80});
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'II'"),
order=2, ci=None, scatter_kws={"s": 80});
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'III'"),
ci=None, scatter_kws={"s": 80});
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'III'"),
robust=True, ci=None, scatter_kws={"s": 80});
tips["big_tip"] = (tips.tip / tips.total_bill) > .15
sns.lmplot(x="total_bill", y="big_tip", data=tips,
y_jitter=.03);
sns.lmplot(x="total_bill", y="big_tip", data=tips,
logistic=True, y_jitter=.03);
sns.lmplot(x="total_bill", y="tip", data=tips,
lowess=True);
sns.residplot(x="x", y="y", data=anscombe.query("dataset == 'I'"),
scatter_kws={"s": 80});
sns.residplot(x="x", y="y", data=anscombe.query("dataset == 'II'"),
scatter_kws={"s": 80});
sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips);
sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips,
markers=["o", "x"], palette="Set1");
sns.lmplot(x="total_bill", y="tip", hue="smoker", col="time", data=tips);
sns.lmplot(x="total_bill", y="tip", hue="smoker",
col="time", row="sex", data=tips);
f, ax = plt.subplots(figsize=(5, 6))
sns.regplot(x="total_bill", y="tip", data=tips, ax=ax);
sns.lmplot(x="total_bill", y="tip", col="day", data=tips,
col_wrap=2, size=3);
sns.lmplot(x="total_bill", y="tip", col="day", data=tips,
aspect=.5);
sns.jointplot(x="total_bill", y="tip", data=tips, kind="reg");
sns.pairplot(tips, x_vars=["total_bill", "size"], y_vars=["tip"],
size=5, aspect=.8, kind="reg");
sns.pairplot(tips, x_vars=["total_bill", "size"], y_vars=["tip"],
hue="smoker", size=5, aspect=.8, kind="reg");
| 0.4856 | 0.986231 |
# Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
```
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
```
First we'll load the text file and convert it into integers for our network to use.
```
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
```
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the `split_frac` keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
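As a toy illustration of that layout, here is a sketch that mirrors the logic of `split_data` below on 20 fake characters:
```
import numpy as np

toy = np.arange(20)                    # pretend these are 20 encoded characters
batch_size, num_steps = 2, 4
slice_size = batch_size * num_steps
n_batches = len(toy) // slice_size     # 2 full batches; the last 4 characters are dropped

x = toy[:n_batches * slice_size]       # inputs
y = toy[1:n_batches * slice_size + 1]  # targets, shifted one character over
x = np.stack(np.split(x, batch_size))  # shape (2, 8): one long row per batch slot
print(x)
# [[ 0  1  2  3  4  5  6  7]
#  [ 8  9 10 11 12 13 14 15]]
```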
```
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keeping the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
```
I'll write another function to grab batches out of the arrays made by `split_data`. Here each batch will be a sliding window on these arrays with size `batch_size X num_steps`. For example, if we want our network to train on a sequence of 100 characters, `num_steps = 100`. For the next batch, we'll shift this window over to the next sequence of `num_steps` characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
```
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_layers"):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
```
## Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are `lstm_size` and `num_layers`. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
```
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
```
## Write out the graph for TensorBoard
```
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/3', sess.graph)
```
## Training
Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I calculate the validation loss and save a checkpoint.
```
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
```
## Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network predicts the next character. We can use the new one to predict the next one, and we keep doing this to generate all the new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
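As a toy illustration of the idea (self-contained, with made-up probabilities):
```
import numpy as np

preds = np.array([0.50, 0.25, 0.15, 0.07, 0.03])  # fake next-character probabilities
top_n = 2
p = preds.copy()
p[np.argsort(p)[:-top_n]] = 0   # zero out everything but the 2 most likely characters
p = p / np.sum(p)               # renormalize -> [0.667, 0.333, 0, 0, 0]
print(np.random.choice(len(p), p=p))
```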
```
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
prime = "Far"
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
```
|
github_jupyter
|
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keeping the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_layers"):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/3', sess.graph)
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
prime = "Far"
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
| 0.757077 | 0.976265 |
# Merging, Joining, and Concatenating
There are 3 main ways of combining DataFrames together: Merging, Joining and Concatenating. In this lecture we will discuss these 3 methods with examples.
____
### Example DataFrames
```
import pandas as pd
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8, 9, 10, 11])
df1
df2
df3
```
## Concatenation
Concatenation basically glues together DataFrames. Keep in mind that dimensions should match along the axis you are concatenating on. You can use **pd.concat** and pass in a list of DataFrames to concatenate together:
```
pd.concat([df1,df2,df3])
pd.concat([df1,df2,df3],axis=1)
```
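As a side note (not part of the original lecture), `pd.concat` also accepts a `keys` argument that labels each piece of the result with an extra index level:
```
pd.concat([df1,df2,df3],keys=['x','y','z'])
```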
_____
## Example DataFrames
```
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
left
right
```
___
## Merging
The **merge** function allows you to merge DataFrames together using similar logic to merging SQL tables. For example:
```
pd.merge(left,right,how='inner',on='key')
```
Or to show a more complicated example:
```
left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
'key2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
pd.merge(left, right, on=['key1', 'key2'])
pd.merge(left, right, how='outer', on=['key1', 'key2'])
pd.merge(left, right, how='right', on=['key1', 'key2'])
pd.merge(left, right, how='left', on=['key1', 'key2'])
```
## Joining
Joining is a convenient method for combining the columns of two potentially differently-indexed DataFrames into a single result DataFrame.
```
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
left.join(right)
left.join(right, how='outer')
```
# Great Job!
|
github_jupyter
|
import pandas as pd
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8, 9, 10, 11])
df1
df2
df3
pd.concat([df1,df2,df3])
pd.concat([df1,df2,df3],axis=1)
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
left
right
pd.merge(left,right,how='inner',on='key')
left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
'key2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
pd.merge(left, right, on=['key1', 'key2'])
pd.merge(left, right, how='outer', on=['key1', 'key2'])
pd.merge(left, right, how='right', on=['key1', 'key2'])
pd.merge(left, right, how='left', on=['key1', 'key2'])
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
left.join(right)
left.join(right, how='outer')
| 0.31944 | 0.975083 |
** [T. R. Knapp Instances of Simpson's paradox. The College Mathematics Journal, v. 16, p. 209-211, 1985.]:**
We have two baseball players.
Player A got a hit 25.7% of the time and player B got a hit 25.1% of the time. Does this mean player A is better?
No. Take a look at the complete information below:
**Player A**
- nAD: number of at-bats against a right-handed pitcher: 202
- nARD: number of hits against a right-handed pitcher: 45
- nAE: number of at-bats against a left-handed pitcher: 250
- nARE: number of hits against a left-handed pitcher: 71
**Player B**
- nBD: number of at-bats against a right-handed pitcher: 250
- nBRD: number of hits against a right-handed pitcher: 58
- nBE: number of at-bats against a left-handed pitcher: 108
- nBRE: number of hits against a left-handed pitcher: 32
So player A hit 22.3% of the balls thrown by right-handed pitchers and 28.4% of the balls thrown by left-handed pitchers. Player B, on the other hand, hit 23.2% of the balls thrown by right-handed pitchers and 29.6% of the balls thrown by left-handed pitchers.
**How is this possible?**
The explanation is simple. The two players faced right- and left-handed pitchers in very different proportions. While player A faced right-handers about 45% of the time, player B faced them about 70% of the time. The percentages in the two categories are therefore weighted differently for each player, producing a result that seems counter-intuitive. On top of that, player B did worse precisely against the right-handers, whom he faced more often.
**So, what is Simpson's Paradox?**
Simpson's paradox occurs when a trend that appears in the data for a variable when it is split into groups is reversed when the groups are combined.
**Detailed calculations**
(You don't need to go through this to understand the problem - it is just basic statistics)
The total number of pitchers player A faced: nA = nAD + nAE = 452
The probability that player A gets a hit given a right-handed pitcher: p(R|A,D) = nARD/nAD = 0.223
The probability that player A gets a hit given a left-handed pitcher: p(R|A,E) = nARE/nAE = 0.284
The probability that player A faces a right-handed pitcher: p(D|A) = nAD/nA = 0.447
The probability that player A faces a left-handed pitcher: p(E|A) = nAE/nA = 0.553
The probability that player A gets a hit regardless of category (right- or left-handed): p(R|A) = p(R|A,D)*p(D|A) + p(R|A,E)*p(E|A) = 0.257
The total number of pitchers player B faced: nB = nBD + nBE = 358
The probability that player B gets a hit given a right-handed pitcher: p(R|B,D) = nBRD/nBD = 0.232
The probability that player B gets a hit given a left-handed pitcher: p(R|B,E) = nBRE/nBE = 0.296
The probability that player B faces a right-handed pitcher: p(D|B) = nBD/nB = 0.698
The probability that player B faces a left-handed pitcher: p(E|B) = nBE/nB = 0.302
The probability that player B gets a hit regardless of category (right- or left-handed): p(R|B) = p(R|B,D)*p(D|B) + p(R|B,E)*p(E|B) = 0.251
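(Not part of the original text.) The percentages above can be verified with a few lines of Python:
```
# Quick sanity check of the batting averages quoted above
nAD, nARD, nAE, nARE = 202, 45, 250, 71   # player A
nBD, nBRD, nBE, nBRE = 250, 58, 108, 32   # player B

print((nARD + nARE) / (nAD + nAE))  # overall A: ~0.257
print((nBRD + nBRE) / (nBD + nBE))  # overall B: ~0.251
print(nARD / nAD, nBRD / nBD)       # vs right-handers: 0.223 < 0.232
print(nARE / nAE, nBRE / nBE)       # vs left-handers:  0.284 < 0.296
```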
**Are there real examples of this paradox?**
Yes, there are MANY examples reported in the literature. Probably the most interesting one happened at the University of California, Berkeley, where graduate admissions data suggested that men had a better chance of being admitted than women. The case even led to a lawsuit against the university. In fact, looking at each department individually, in most of them exactly the opposite happened. It was later concluded that women applied to more competitive departments than the ones men applied to.
Text taken from this [link](http://prorum.com/?qa=1814/o-que-e-o-paradoxo-de-simpson-em-estatistica)
At the end of that text there are some (Monte Carlo) simulations for those who want to go further.
|
github_jupyter
|
** [T. R. Knapp Instances of Simpson's paradox. The College Mathematics Journal, v. 16, p. 209-211, 1985.]:**
We have two baseball players.
Player A got a hit 25.7% of the time and player B got a hit 25.1% of the time. Does this mean player A is better?
No. Take a look at the complete information below:
**Player A**
- nAD: number of at-bats against a right-handed pitcher: 202
- nARD: number of hits against a right-handed pitcher: 45
- nAE: number of at-bats against a left-handed pitcher: 250
- nARE: number of hits against a left-handed pitcher: 71
**Player B**
- nBD: number of at-bats against a right-handed pitcher: 250
- nBRD: number of hits against a right-handed pitcher: 58
- nBE: number of at-bats against a left-handed pitcher: 108
- nBRE: number of hits against a left-handed pitcher: 32
So player A hit 22.3% of the balls thrown by right-handed pitchers and 28.4% of the balls thrown by left-handed pitchers. Player B, on the other hand, hit 23.2% of the balls thrown by right-handed pitchers and 29.6% of the balls thrown by left-handed pitchers.
**How is this possible?**
The explanation is simple. The two players faced right- and left-handed pitchers in very different proportions. While player A faced right-handers about 45% of the time, player B faced them about 70% of the time. The percentages in the two categories are therefore weighted differently for each player, producing a result that seems counter-intuitive. On top of that, player B did worse precisely against the right-handers, whom he faced more often.
**So, what is Simpson's Paradox?**
Simpson's paradox occurs when a trend that appears in the data for a variable when it is split into groups is reversed when the groups are combined.
**Detailed calculations**
(You don't need to go through this to understand the problem - it is just basic statistics)
The total number of pitchers player A faced: nA = nAD + nAE = 452
The probability that player A gets a hit given a right-handed pitcher: p(R|A,D) = nARD/nAD = 0.223
The probability that player A gets a hit given a left-handed pitcher: p(R|A,E) = nARE/nAE = 0.284
The probability that player A faces a right-handed pitcher: p(D|A) = nAD/nA = 0.447
The probability that player A faces a left-handed pitcher: p(E|A) = nAE/nA = 0.553
The probability that player A gets a hit regardless of category (right- or left-handed): p(R|A) = p(R|A,D)*p(D|A) + p(R|A,E)*p(E|A) = 0.257
The total number of pitchers player B faced: nB = nBD + nBE = 358
The probability that player B gets a hit given a right-handed pitcher: p(R|B,D) = nBRD/nBD = 0.232
The probability that player B gets a hit given a left-handed pitcher: p(R|B,E) = nBRE/nBE = 0.296
The probability that player B faces a right-handed pitcher: p(D|B) = nBD/nB = 0.698
The probability that player B faces a left-handed pitcher: p(E|B) = nBE/nB = 0.302
The probability that player B gets a hit regardless of category (right- or left-handed): p(R|B) = p(R|B,D)*p(D|B) + p(R|B,E)*p(E|B) = 0.251
**Are there real examples of this paradox?**
Yes, there are MANY examples reported in the literature. Probably the most interesting one happened at the University of California, Berkeley, where graduate admissions data suggested that men had a better chance of being admitted than women. The case even led to a lawsuit against the university. In fact, looking at each department individually, in most of them exactly the opposite happened. It was later concluded that women applied to more competitive departments than the ones men applied to.
Text taken from this [link](http://prorum.com/?qa=1814/o-que-e-o-paradoxo-de-simpson-em-estatistica)
At the end of that text there are some (Monte Carlo) simulations for those who want to go further.
| 0.447702 | 0.57329 |
```
from astropy.cosmology import WMAP5
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u
import numpy as np
import matplotlib.pyplot as plt
import sympy as smp
WMAP5.H(0)
WMAP_5 = dict()
WMAP_5['ombh2'] = 0.02273 ## Omega_b * h**2
WMAP_5['omch2'] = 0.1099 ## Omega_c * h**2
WMAP_5['ln1010As'] = 3.0448 ## ln(10**10 * As), scalar amplitude
WMAP_5['ns'] = 0.96305 ## spectral index
WMAP_5['ommh2'] = 0.14314 ## Omega_m * h**2 , total matter
WMAP_5['H0'] = 70.2 ## H0 = 100h
WMAP_5['sigma8'] = 0.796 ## amplitude of density fluctuations
WMAP_5['tau'] = 0.087 ## Optical depth
WMAP_5['age_Gyr'] = 13.69 ## Age of the Universe
WMAP_5['h'] = WMAP_5['H0']/100
WMAP_5['Om'] = WMAP_5['ommh2']/WMAP_5['h']**2
WMAP_5['Ob'] = WMAP_5['ombh2']/WMAP_5['h']**2
WMAP_5['Oc'] = WMAP_5['omch2']/WMAP_5['h']**2
WMAP_5['As'] = np.exp(WMAP_5['ln1010As'])/np.power(10,10) ## As, scalar amplitude
WMAP_5['Om']
WMAP_5['h'] = 0.719 # dimensionless Hubble parameter (H0 = 100 h km/s/Mpc)
WMAP_5['Or'] = 0.0000930479
WMAP_5['Ol'] = 1-np.array([WMAP_5[oo] for oo in ['Oc','Ob','Om']]).sum() ## Ol = Omega_Lambda
WMAP_5['Ol']
cosmo = FlatLambdaCDM(H0=70.2 * u.km / u.s / u.Mpc, Om0=0.3)
def a_of_z(z):
a=1/(1+z)
return a
def Omega_L(Omega_c, Omega_b, Omega_r):
"""
Function for Omega_Lambda, dark energy.
For a flat Universe:
Omega_Lambda = 1-Omega_c-Omega_b-Omega_r
"""
oL = 1 - Omega_c - Omega_b - Omega_r
return oL
def cosmological_parameters(cosmo_pars=dict()):
H0 = cosmo_pars.get('H0', WMAP_5['H0']) # WMAP5 cosmological parameters as default
Oc = cosmo_pars.get('Oc', WMAP_5['Oc'])
Ob = cosmo_pars.get('Ob', WMAP_5['Ob'])
Or = cosmo_pars.get('Or', WMAP_5['Or'])
Om = Ob+Oc
OL = Omega_L(Oc, Ob, Or)
return H0, Oc, Ob, Or, Om, OL
cosmological_parameters()
def Hubble(z, cosmo_pars=dict()):
H0, Oc, Ob, Or, Om, OL = cosmological_parameters(cosmo_pars)
H = H0 * np.sqrt(Om*(1+z)**3 + Or*(1+z)**4 + OL)
return H
z_arr = np.linspace(0.,10, 100)
fig, ax = plt.subplots(1, 1, sharey='row', sharex='col', figsize=(10,8)) #all plots in the same row, share the y-axis.
# once you specify an axis, it is in this instance where plots are performed
ax.semilogx(z_arr, Hubble(z_arr), '-', label='WMAP5', color='orange', lw=3)
ax.legend(fontsize=26)
ax.set_xlabel('redshift $z$', fontsize=26)
ax.set_ylabel(r'$H(z)$ in km/s/Mpc', fontsize=26);
```
----
------
----
```
h = 0.719
[P0, c500, gamma, alpha, beta] = [(8.130*(h/0.7)**(-3/2)),1.156,0.3292,1.0620,5.4807]
Y500,n,r500,r,x,z = smp.symbols('Y500 n r500 r x z')
v = (c500)**(alpha/beta-gamma)
v
w = 1 + (1/x**alpha) #Numerator
w.subs(alpha,1.0620)
y = c500 + (1/x**alpha) #Denominator
y.subs(alpha,1.0620)
v = w/y
v
x = (-0.865051903114187)**(1/alpha)
x
x = ((0.8577720337616777)**2 +(0.15910988366956264)**2 )**(1/2)
x
```
-------
--------
------
```
def E(z):
return ((Cm/L500) * ((M500*h/0.7)/(3*1e14*M0))**delta)**(-3/7)
# Let's get the value of E
L500,Cm,delta =smp.symbols("L500 Cm delta")
L500 = 1e44 # 'erg/s'
delta = 1.61 # alpha(m)
Cm = (smp.exp(0.295) * 1e44) / (h/0.7)**2 #erg/s
Cm
#rho = WMAP5.critical_density(0)
rho = 9.2565426 * 1e-30 # critical density at z = 0 in g/cm^3
M0 = (h * (4/3) * smp.pi * rho * 500 * r500 **3)/(3*1e14) # combining two relations for M500
M500 = (4/3) * smp.pi * rho * 500 * r500**3
M500
P500 = (1.65 * 1e-3) * (E(z)**smp.Rational(8,3) * (h/0.7)**2 * (M500 * (h/0.7)/((3 * 1e14) * M0))**smp.Rational(2,3))
P500
P500.subs([(h,0.719)])
r_s = r500/c500
x = r/r500 * 1.156
x = 0.8724040445716578 # by calculations from the paper2
def P(r):
return (P0 * P500) / ((x**gamma) * (1+ x**alpha)**((beta-gamma)/alpha))
P(r)
P(r).subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463243203779394),(h,0.719)])
from sympy import integrate
f = P(r) * (4 * smp.pi *r **2)
f
a = smp.integrate(f,(r,0,n * r500))
a
b = smp.integrate(f,(r,0,r500))
b
def Y_nr500(a):
return Y500 * (a/b)
Y_nr500(a)
a1 = a.subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463),(n,1),(h,0.719)])
a1
(Y_nr500(a = a1)/Y500).subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463),(n,1),(h,0.719)]) #n=1
a2 = a.subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463),(n,2),(h,0.719)])
a2
(Y_nr500(a = a2)/Y500).subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463),(n,2),(h,0.719)]) #n=2
a3 = a.subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463),(n,3),(h,0.719)])
a3
(Y_nr500(a = a3)/Y500).subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463),(n,3),(h,0.719)]) #n=3
```
|
github_jupyter
|
from astropy.cosmology import WMAP5
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u
import numpy as np
import matplotlib.pyplot as plt
import sympy as smp
WMAP5.H(0)
WMAP_5 = dict()
WMAP_5['ombh2'] = 0.02273 ## Omega_b * h**2
WMAP_5['omch2'] = 0.1099 ## Omega_c * h**2
WMAP_5['ln1010As'] = 3.0448 ## ln(10**10 * As), scalar amplitude
WMAP_5['ns'] = 0.96305 ## spectral index
WMAP_5['ommh2'] = 0.14314 ## Omega_m * h**2 , total matter
WMAP_5['H0'] = 70.2 ## H0 = 100h
WMAP_5['sigma8'] = 0.796 ## amplitude of density fluctuations
WMAP_5['tau'] = 0.087 ## Optical depth
WMAP_5['age_Gyr'] = 13.69 ## Age of the Universe
WMAP_5['h'] = WMAP_5['H0']/100
WMAP_5['Om'] = WMAP_5['ommh2']/WMAP_5['h']**2
WMAP_5['Ob'] = WMAP_5['ombh2']/WMAP_5['h']**2
WMAP_5['Oc'] = WMAP_5['omch2']/WMAP_5['h']**2
WMAP_5['As'] = np.exp(WMAP_5['ln1010As'])/np.power(10,10) ## As, scalar amplitude
WMAP_5['Om']
WMAP_5['h'] = 0.719 # dimensionless Hubble parameter (H0 = 100 h km/s/Mpc)
WMAP_5['Or'] = 0.0000930479
WMAP_5['Ol'] = 1-np.array([WMAP_5[oo] for oo in ['Oc','Ob','Om']]).sum() ## Ol = Omega_Lambda
WMAP_5['Ol']
cosmo = FlatLambdaCDM(H0=70.2 * u.km / u.s / u.Mpc, Om0=0.3)
def a_of_z(z):
a=1/(1+z)
return a
def Omega_L(Omega_c, Omega_b, Omega_r):
"""
Function for Omega_Lambda, dark energy.
For a flat Universe:
Omega_Lambda = 1-Omega_c-Omega_b-Omega_r
"""
oL = 1 - Omega_c - Omega_b - Omega_r
return oL
def cosmological_parameters(cosmo_pars=dict()):
H0 = cosmo_pars.get('H0', WMAP_5['H0']) # WMAP5 cosmological parameters as default
Oc = cosmo_pars.get('Oc', WMAP_5['Oc'])
Ob = cosmo_pars.get('Ob', WMAP_5['Ob'])
Or = cosmo_pars.get('Or', WMAP_5['Or'])
Om = Ob+Oc
OL = Omega_L(Oc, Ob, Or)
return H0, Oc, Ob, Or, Om, OL
cosmological_parameters()
def Hubble(z, cosmo_pars=dict()):
H0, Oc, Ob, Or, Om, OL = cosmological_parameters(cosmo_pars)
H = H0 * np.sqrt(Om*(1+z)**3 + Or*(1+z)**4 + OL)
return H
z_arr = np.linspace(0.,10, 100)
fig, ax = plt.subplots(1, 1, sharey='row', sharex='col', figsize=(10,8)) #all plots in the same row, share the y-axis.
# once you specify an axis, it is in this instance where plots are performed
ax.semilogx(z_arr, Hubble(z_arr), '-', label='WMAP5', color='orange', lw=3)
ax.legend(fontsize=26)
ax.set_xlabel('redshift $z$', fontsize=26)
ax.set_ylabel(r'$H(z)$ in km/s/Mpc', fontsize=26);
h = 0.719
[P0, c500, gamma, alpha, beta] = [(8.130*(h/0.7)**(-3/2)),1.156,0.3292,1.0620,5.4807]
Y500,n,r500,r,x,z = smp.symbols('Y500 n r500 r x z')
v = (c500)**(alpha/beta-gamma)
v
w = 1 + (1/x**alpha) #Numerator
w.subs(alpha,1.0620)
y = c500 + (1/x**alpha) #Denominator
y.subs(alpha,1.0620)
v = w/y
v
x = (-0.865051903114187)**(1/alpha)
x
x = ((0.8577720337616777)**2 +(0.15910988366956264)**2 )**(1/2)
x
def E(z):
return ((Cm/L500) * ((M500*h/0.7)/(3*1e14*M0))**delta)**(-3/7)
# Let's get the value of E
L500,Cm,delta =smp.symbols("L500 Cm delta")
L500 = 1e44 # 'erg/s'
delta = 1.61 # alpha(m)
Cm = (smp.exp(0.295) * 1e44) / (h/0.7)**2 #erg/s
Cm
#rho = WMAP5.critical_density(0)
rho = 9.2565426 * 1e-30 # critical density at z = 0 in g/cm^3
M0 = (h * (4/3) * smp.pi * rho * 500 * r500 **3)/(3*1e14) # combining two relations for M500
M500 = (4/3) * smp.pi * rho * 500 * r500**3
M500
P500 = (1.65 * 1e-3) * (E(z)**smp.Rational(8,3) * (h/0.7)**2 * (M500 * (h/0.7)/((3 * 1e14) * M0))**smp.Rational(2,3))
P500
P500.subs([(h,0.719)])
r_s = r500/c500
x = r/r500 * 1.156
x = 0.8724040445716578 # by calculations from the paper2
def P(r):
return (P0 * P500) / ((x**gamma) * (1+ x**alpha)**((beta-gamma)/alpha))
P(r)
P(r).subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463243203779394),(h,0.719)])
from sympy import integrate
f = P(r) * (4 * smp.pi *r **2)
f
a = smp.integrate(f,(r,0,n * r500))
a
b = smp.integrate(f,(r,0,r500))
b
def Y_nr500(a):
return Y500 * (a/b)
Y_nr500(a)
a1 = a.subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463),(n,1),(h,0.719)])
a1
(Y_nr500(a = a1)/Y500).subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463),(n,1),(h,0.719)]) #n=1
a2 = a.subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463),(n,2),(h,0.719)])
a2
(Y_nr500(a = a2)/Y500).subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463),(n,2),(h,0.719)]) #n=2
a3 = a.subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463),(n,3),(h,0.719)])
a3
(Y_nr500(a = a3)/Y500).subs([(alpha,1.0620), (beta,5.4807), (gamma,0.3292),(P0,8.463),(n,3),(h,0.719)]) #n=3
| 0.670608 | 0.658115 |
# The Robot World
A robot, much like you, perceives the world through its "senses." For example, self-driving cars use video, radar, and Lidar to observe the world around them. As cars gather data, they build up a 3D world of observations that tells the car where it is, where other objects (like trees, pedestrians, and other vehicles) are, and where it should be going!
In this section, we'll be working first with a 1D and then a 2D representation of the world, for simplicity and because two dimensions are often all you'll need to solve a given problem.
* You'll be given a set of quizzes to solve to build up your understanding of robot localization.
* Try your best to solve these quizzes and consult the solution if you get stuck or want to confirm your answer.
<img src="files/images/lidar.png" width="50%" height="50%">
These grid representations of the environment are known as **discrete** representations. Discrete just means a limited number of places a robot can be (ex. in one grid cell). That's because robots, and autonomous vehicles like self-driving cars, use maps to figure out where they are, and maps lend themselves to being divided up into grids and sections.
You'll see **continuous** probability distributions when locating objects that are moving around the robot. Continuous means that these objects can be anywhere around the robot and their movement is smooth.
So, let's start with the 1D case.
### Robot World 1-D
First, imagine you have a robot living in a 1-D world. You can think of a 1D world as a one-lane road.
<img src="images/road_1.png" width="50%" height="50%">
We can treat this road as an array, and break it up into grid cells for a robot to understand. In this case, the road is a 1D grid with 5 different spaces. The robot can only move forwards or backwards. If the robot falls off the grid, it will loop back around to the other side (this is known as a cyclic world).
<img src="images/numbered_grid.png" width="50%" height="50%">
### Uniform Distribution
The robot has a map so that it knows there are only 5 spaces in this 1D world. However, it hasn't sensed anything or moved. For a length of 5 cells (a list of 5 values), what is the probability distribution, `p`, that the robot is in any one of these locations?
Since the robot does not know where it is at first, the probability of being in any space is the same! This is a probability distribution and so the sum of all these probabilities should be equal to 1, so `1/5 spaces = 0.2`. A distribution in which all the probabilities are the same (and we have maximum uncertainty) is called a **uniform distribution**.
```
# importing resources
import matplotlib.pyplot as plt
import numpy as np
# uniform distribution for 5 grid cells
p = [0.2, 0.2, 0.2, 0.2, 0.2]
print(p)
```
I'll also include a helper function for visualizing this distribution. The below function, `display_map` will output a bar chart showing the probability that a robot is in each grid space. The y-axis has a range of 0 to 1 for the range of probabilities. For a uniform distribution, this will look like a flat line. You can choose the width of each bar to be <= 1 should you want to space these out.
```
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
# call function on grid, p, from before
display_map(p)
```
Now, what about if the world was 8 grid cells in length instead of 5?
### QUIZ: Write a function that takes in the number of spaces in the robot's world (in this case 8), and returns the initial probability distribution `p` that the robot is in each space.
This function should store the probabilities in a list. So in this example, there would be a list with 8 probabilities.
**Solution**
We know that all the probabilities in these locations should sum up to 1. So, one solution to this includes dividing 1 by the number of grid cells, then appending that value to a list that is that same passed in number of grid cells in length.
```
# ex. initialize_robot(5) = [0.2, 0.2, 0.2, 0.2, 0.2]
def initialize_robot(grid_length):
''' Takes in a grid length and returns
a uniform distribution of location probabilities'''
p = []
# create a list that has the value of 1/grid_length for each cell
for i in range(grid_length):
p.append(1.0/grid_length)
return p
p = initialize_robot(8)
print(p)
display_map(p)
# Here is what this distribution looks like, with some spacing
# so you can clearly see the probability that a robot is in each grid cell
p = initialize_robot(8)
print(p)
display_map(p, bar_width=0.9)
```
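As an aside (not part of the quiz solution), the same uniform distribution can be built in one line with NumPy:
```
# Equivalent NumPy one-liner: every cell gets probability 1/grid_length
p = list(np.full(8, 1.0/8))
print(p)
```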
Now that you know how a robot initially sees a simple 1D world, let's learn about how it can locate itself by moving around and sensing its environment!
|
github_jupyter
|
# importing resources
import matplotlib.pyplot as plt
import numpy as np
# uniform distribution for 5 grid cells
p = [0.2, 0.2, 0.2, 0.2, 0.2]
print(p)
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
# call function on grid, p, from before
display_map(p)
# ex. initialize_robot(5) = [0.2, 0.2, 0.2, 0.2, 0.2]
def initialize_robot(grid_length):
''' Takes in a grid length and returns
a uniform distribution of location probabilities'''
p = []
# create a list that has the value of 1/grid_length for each cell
for i in range(grid_length):
p.append(1.0/grid_length)
return p
p = initialize_robot(8)
print(p)
display_map(p)
# Here is what this distribution looks like, with some spacing
# so you can clearly see the probability that a robot is in each grid cell
p = initialize_robot(8)
print(p)
display_map(p, bar_width=0.9)
| 0.505127 | 0.995479 |
<h1 style="padding-top: 25px;padding-bottom: 25px;text-align: left; padding-left: 10px; background-color: #DDDDDD;
color: black;"> <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> AC295: Advanced Practical Data Science </h1>
## Lecture 7: Distillation and Compression
**Harvard University**<br/>
**Spring 2020**<br/>
**Instructors**: Pavlos Protopapas <br>
**TF**: Michael Emanuel, Andrea Porelli and Giulia Zerbini <br>
**Author**: Andrea Porelli and Pavlos Protopapas
<hr style='height:2px'>
# Table of Contents
* [Lecture 7: Distillation and Compression](#Lecture-7:-Distillation-and-Compression)
* [Part 1: Knowledge distillation: Teacher student learning](#Part-1:-Knowledge-distillation:-Teacher-student-learning)
* [1.1 Matching logits is a special case of distillation](#1.1-Matching-logits-is-a-special-case-of-distillation)
* [1.2 Temperature](#1.2-Temperature)
* [1.3 Examples from the paper](#1.3-Examples-from-the-paper)
* [Part 2: Use Cases](#Part-2:-Use-Cases)
* [2.1 Transfer learning through Network Distillation](#2.1-Transfer-learning-through-Network-Distillation)
* [2.2 Another use case?](#2.2-Another-use-case?)
## Part 1: Knowledge distillation: Teacher student learning
Geoffrey Hinton's words:
- Many insects have two very different forms:
- a larval form: optimised to extract energy and nutrients from environment
- an adult form: optimized for traveling and reproduction
- ML typically uses the same model for the training stage and the deployment stage, despite very different requirements:
    - Training: should extract structure, does not need to run in real time, and thus can use a huge amount of computation.
- Deployment: large number of users, more stringent requirements on latency and computational resources.
**Question:** is it possible to distill and compress the *knowledge* of the large and complex training model (the teacher) into a small and simple deployment model (the student)?
**This brings us to the question: what is knowledge (in a NN)?**
- The weights of network?
- The mapping from input to output?
**Goal:** train a student model to generalize in the same way as the large model.
### 1.1 Matching logits is a special case of distillation
- Normal training objective is to maximize the average log probability of the correct class.
- Yet Hinton:
- "*Relative probabilities of incorrect answers tell us a lot about how the teacher model tends to generalize.*"
- Ex.: "*An image of a BMW, may only have a very small chance of being mistaken for a garbage truck, but that mistake is still many times more probable than mistaking it for a carrot.*"
<img src="https://i.imgur.com/zvTR1r7.png" alt="https://towardsdatascience.com/knowledge-distillation-simplified-dd4973dbc764" width=60%/>
- **The predictions of the teacher model contain a lot of useful information regarding generalization!**
- **Thus our student network tries to match the teacher network's predictions.**
<img src="https://i.imgur.com/l80RVDT.jpg" alt="https://towardsdatascience.com/knowledge-distillation-simplified-dd4973dbc764" width=80%/>
**The final loss-function of the student network ( $\mathscr{L}_\text{student }$ ) is a combination of:**
1. Standard cross entropy with correct labels ( $\mathscr{L}_\text{correct labels }$ )
    - ex. match label: 100% BMW
2. Cross entropy with the soft targets from the teacher network predictions ( $\mathscr{L}_\text{soft teacher predictions }$ )
    - ex. match teacher prediction: 99.5% BMW, 0.4% garbage truck, ... , 0.000001% carrot
How these two parts of the loss function should be weighted is determined by the hyperparameter $\lambda$:
$$\mathscr{L}_\text{student} = \mathscr{L}_\text{correct labels} + \lambda \mathscr{L}_\text{soft teacher predictions}$$
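To make the combined objective concrete, here is a minimal NumPy sketch (an illustration of the idea above, not code from the paper or this lecture; `student_logits` and `teacher_logits` are assumed to be 2-D arrays of logits, `labels` a 1-D array of integer class ids, and `T` and `lam` placeholder hyperparameters):
```
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, lam=0.5):
    # Hard term: standard cross entropy with the correct labels
    p = softmax(student_logits)
    hard_ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    # Soft term: cross entropy with the temperature-softened teacher predictions
    # (the paper suggests scaling this term by T**2 so its gradient magnitude matches the hard term)
    q_teacher = softmax(teacher_logits, T)
    q_student = softmax(student_logits, T)
    soft_ce = -np.mean(np.sum(q_teacher * np.log(q_student + 1e-12), axis=-1))
    return hard_ce + lam * soft_ce
```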
## **1.2 Temperature**
Much information resides in the ratios of very small probabilities in the predictions:
ex.: one version of a 2 may be given a probability of $10^{-6}$ of being a 3 and $10^{-9}$ of being a 7 , whereas for another version it may be the other way around.
- Since most probabilities are very close to zero we expect very little influence on the cross-entropy cost function.
- **How to fix this?**
- Raise the **"temperature" of the final softmax** until the teacher model produces a soft set of targets ($z_i$ are logits, T is Temperature):
$$q_i = \dfrac{\exp(z_i/T)}{\sum_j \exp(z_j/T)}$$
- Using a higher value for $T$ produces a softer probability distribution over classes. Illustrating:
```
import numpy as np
import matplotlib.pyplot as plt
z_i = np.array([0.5, 8 , 1.5, 3, 6 ,
11 , 2.5, 0.01 , 5, 0.2 ])
# Tested probabilities
Temperatures = [1, 4, 20]
plt.figure(figsize=(20, 4))
for i, T in enumerate(Temperatures):
plt.subplot(1, 4, i+1)
# Temperature adjusted soft probabilities:
q_i = np.exp(z_i/T)/np.sum(np.exp(z_i/T))
# Plotting the barchart
plt.bar(range(0,10), q_i)
plt.title('Temperature = '+ str(T), size=15)
plt.xticks(range(10) , range(10), size=10)
plt.xlabel('Classes', size=12)
plt.ylabel('Class Probabilities', size=12)
plt.axhline(y=1, linestyle = '--', color = 'r')
plt.subplot(1, 4, 4)
plt.bar(range(0,10), z_i/30)
plt.axhline(y=1, linestyle = '--', color = 'r')
plt.ylim(0,1.05)
plt.title('Logits ')
```
## **1.3 Examples from the paper**
- Experiment 1: simple MNIST
    - Large teacher network - 2 hidden layers of **1200 neurons**: **67**/10000 test errors.
    - Original student network - 2 hidden layers of **800 neurons**: **146**/10000 test errors.
    - Distilled student network - 2 hidden layers of **800 neurons**: **74**/10000 test errors.
<br/><br/>
- Experiment 2: Distillation can even teach a student network about classes it has never seen:
- During training all the "3" digits are hidden for the student network.
- So "3" is a mythicial digit the student network never has seen!
- Still using distillation it manages to correctly classify 877 out of 1010 "3"s in the test set!
- After adjusting the bias term 997/1010 3's are correctly classified!
## Part 2: Use Cases
Let's use Transfer Learning to build some applications. It is convenient to run the applications on Google Colab. Check out the links below.
### 2.1 Transfer learning through Network Distillation
- In distillation a small simple (*student*) network tries to extract or distill knowledge from a large and complex (*teacher*) network.
- This is also known as student-teacher networks or compression, as we try to compress a large model into a small model.
- Goal:
- Understand Knowledge Distillation
- Force a small segmentation network (based on Mobilenet) to learn from a large network (deeplab_v3).
    - Find more in the Colab notebook [Lecture 7: Use Case Distillation and Compression](https://colab.research.google.com/drive/1l8qVX9-CsV9oae02Kb9NXDmWUjNd79G6)
|
github_jupyter
|
import numpy as np
import matplotlib.pyplot as plt
z_i = np.array([0.5, 8 , 1.5, 3, 6 ,
11 , 2.5, 0.01 , 5, 0.2 ])
# Tested probabilities
Temperatures = [1, 4, 20]
plt.figure(figsize=(20, 4))
for i, T in enumerate(Temperatures):
plt.subplot(1, 4, i+1)
# Temperature adjusted soft probabilities:
q_i = np.exp(z_i/T)/np.sum(np.exp(z_i/T))
# Plotting the barchart
plt.bar(range(0,10), q_i)
plt.title('Temperature = '+ str(T), size=15)
plt.xticks(range(10) , range(10), size=10)
plt.xlabel('Classes', size=12)
plt.ylabel('Class Probabilities', size=12)
plt.axhline(y=1, linestyle = '--', color = 'r')
plt.subplot(1, 4, 4)
plt.bar(range(0,10), z_i/30)
plt.axhline(y=1, linestyle = '--', color = 'r')
plt.ylim(0,1.05)
plt.title('Logits ')
| 0.709824 | 0.907024 |
# 16. Introduction to Raster Data
This is a very brief introduction to reading raster data and basic manipulations in Python. We'll walk through one of the most commonly used raster python packages, `rasterio`. We'll be using the [National Land Cover Database (NLCD)](https://www.mrlc.gov/data/legends/national-land-cover-database-2016-nlcd2016-legend) from 2011 that was downloaded from [here](https://viewer.nationalmap.gov/basic).
<img src="https://www.mdpi.com/remotesensing/remotesensing-11-02971/article_deploy/html/images/remotesensing-11-02971-g004.png" width="600">
> Note: They also have a [cool online viewer](https://www.mrlc.gov/viewer/) that is free and open access.
```
import pandas as pd
import geopandas as gpd
import matplotlib # base python plotting library
import matplotlib.pyplot as plt # submodule of matplotlib
from matplotlib.patches import Patch
import json
import numpy as np
# To display plots, maps, charts etc in the notebook
%matplotlib inline
```
To use raster data we'll be using the `rasterio` package, which is a popular package that helps you read, write, and manipulate raster data. We'll also be using `rasterstats`.
```
import rasterio
from rasterio.plot import show, plotting_extent
from rasterio.mask import mask
from rasterstats import zonal_stats
```
## 16.1 Import data and plot
To open our NLCD subset data, we'll use the `rasterio.open` function
```
nlcd_2011 = rasterio.open('notebook_data/raster/nlcd2011_sf.tif')
```
Let's check out what we get.
```
nlcd_2011
```
Let's dissect this output here. We can look at the helper documentation for clues.
```
?rasterio.open
```
The documentation tells us that the function returns a ``DatasetReader`` or ``DatasetWriter`` object. Unlike `GeoPandas`, which we've been using a lot, we don't get a directly editable object here. However, `rasterio` does provide functions that let us work with this returned object directly.
For example, we can easily plot our NLCD data using `rasterio.plot.show`.
```
rasterio.plot.show(nlcd_2011)
```
And just like how we formatted our `matplotlib` plots when we were using GeoDataFrames, we can still do that with this raster plotting function.
```
?rasterio.plot.show
fig, ax = plt.subplots(figsize=(8,8))
plt_nlcd = rasterio.plot.show(nlcd_2011, cmap='Pastel2', ax=ax)
```
(Take note of what you think could be improved here... we'll come back to this)
We can also plot a histogram of our data in a very similar way.
```
rasterio.plot.show_hist(nlcd_2011, bins=30)
```
We can see that we have more values on the lower end than on the higher end. To really understand the values that we see here let's [take a look at the legend](https://www.mrlc.gov/data/legends/national-land-cover-database-2016-nlcd2016-legend).
<img src ="assets/images/NLCD_Colour_Classification_Update.jpg" width="200" align="center">
## 16.2 Raster data structure
> *Note:* If you need a refresher on what raster data is and the relevant terminology, check out the first lesson that covers geospatial topics.
Now that we have a basic grasp on how to pull in and plot raster data, we can dig a little deeper to see what information we have.
First let's check the number of bands there are in our dataset.
```
nlcd_2011.count
```
In this case we only have 1 band. If you're pulling in an aerial image, you might have 3 bands (red, green, blue). If you're bringing in remote sensing data like Landsat or MODIS, you might have even more!
Now let's check out what metadata we have.
```
nlcd_2011.meta
```
So we have a lot of good information here. Let's unpack it:
- `driver`: the file type (similar to what we see in `open` and GeoPandas `open`)
- `dtype`: the data type of each of your pixels
- `nodata`: the value that is set for no data pixels
- `width`: the number of pixels wide your dataset is
- `height`: the number of pixels high your dataset is
- `count`: the number of bands in your dataset
- `crs`: the coordinate reference system (CRS) of your data
- `transform`: the affine transform matrix that tells us which pixel locations in each row and column align with spatial locations (longitude, latitude); a short example of using it follows below.
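For example (a quick sketch rather than part of the original lesson, assuming a reasonably recent rasterio), the transform lets you convert a row/column index into map coordinates:
```
# Coordinates of the center of the pixel at row 0, column 0
row, col = 0, 0
x, y = nlcd_2011.transform * (col + 0.5, row + 0.5)
print(x, y)

# The dataset object also has a convenience method for the same lookup
print(nlcd_2011.xy(row, col))
```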
We can also get similar information by calling `profile`.
```
nlcd_2011.profile
nlcd_2011.crs
```
Okay, but now we want to actually access our data. We can read in our data as a Numpy ndarray.
```
nlcd_2011_array = nlcd_2011.read()
nlcd_2011_array
```
And we can call shape and see we have a 3D array.
```
nlcd_2011_array.shape
```
Much like other Numpy arrays, we can look at the min, mean, and max of our data
```
print("Minimum: ", np.nanmin(nlcd_2011_array))
print("Max: ", np.nanmean(nlcd_2011_array))
print("Mean: ", np.nanmax(nlcd_2011_array))
```
And since we have our data in an array form now, we can plot it using not a `rasterio` function, but simply `plt.imshow`.
```
plt.imshow(nlcd_2011_array[0,:,:])
```
Notice that we specified this plotting by making our array 2D. This gives us more flexibility about how we want to create our plots. You can do something like this:
> This definitely looks more scary than it actually is. Essentially we are:
> 1. constructing a full color spectrum with all the colors we want
> 2. If values are outside of this range, we set the color to white
> 3. we set the boundaries for each of these colors so we know which color to assign to what value
> 4. we create legend labels for our legend
>
> This process is only really needed if we want to have a color map for specific values outside of a specific named `matplotlib` named color map.
```
# Define the colors you want
cmap = matplotlib.colors.ListedColormap(['royalblue', #11
'white', #12
'beige', #21
'salmon', #22
'red', #23
'darkred', #24
'grey', #31
'yellowgreen', #41
'darkgreen', #42
'lightgreen', # 43
'darkgoldenrod', #51
'tan', # 52
'wheat', # 71
'darkkhaki', #72
'darkseagreen', #73
'mediumseagreen', #74
'gold', #81
'chocolate', #82
'lightsteelblue', #90
'steelblue', #95
])
cmap.set_under('#FFFFFF')
cmap.set_over('#FFFFFF')
# Define a normalization from values -> colors
norm = matplotlib.colors.BoundaryNorm([10.5,
11.5,
12.5,
21.5,
22.5,
23.5,
24.5,
31.5,
41.5,
42.5,
43.5,
51.5,
52.5,
71.5,
72.5,
73.5,
74.5,
81.5,
82.5,
90.5,
95.5,
],20)
legend_labels = { 'royalblue':'Open Water',
'white':'Perennial Ice/Snow',
'beige':'Developed, Open Space',
'salmon':'Developed, Low Intensity',
'red':'Developed, Medium Intensity',
'darkred':'Developed High Intensity',
'grey':'Barren Land (Rock/Sand/Clay)',
'yellowgreen':'Deciduous Forest',
'darkgreen':'Evergreen Forest',
'lightgreen':'Mixed Forest',
'darkgoldenrod':'Dwarf Scrub',
'tan':'Shrub/Scrub',
'wheat':'Grassland/Herbaceous',
'darkkhaki':'Sedge/Herbaceous',
'darkseagreen':'Lichens',
'mediumseagreen':'Moss',
'gold':'Pasture/Hay',
'chocolate':'Cultivated Crops',
'lightsteelblue':'Woody Wetlands',
'steelblue':'Emergent Herbaceous Wetlands'}
fig, ax = plt.subplots(figsize=(8, 8))
plt_nlcd = ax.imshow(nlcd_2011_array[0,:,:], cmap=cmap, norm=norm)
ax.set_title('NLCD 2011', fontsize=30)
# Remove axes
ax.set_frame_on(False)
plt.setp(ax.get_xticklabels(), visible=False)
plt.setp(ax.get_yticklabels(), visible=False)
ax.set_xticks([])
ax.set_yticks([])
# Add color bar
patches = [Patch(color=color, label=label)
for color, label in legend_labels.items()]
fig.legend(handles=patches, facecolor="white",bbox_to_anchor=(1.1, 1.05))
```
## 16.3 Mask raster data
*Masking* is a common action that is done with raster data where you "mask" everything outside of a certain geometry.
To do this, let's first bring in the San Francisco county data.
```
# Bring in census tracts
tracts_gdf = gpd.read_file("zip://notebook_data/census/Tracts/cb_2013_06_tract_500k.zip").to_crs('epsg:4326')
# Narrow it down to San Francisco County
tracts_gdf_sf = tracts_gdf[tracts_gdf['COUNTYFP']=='075']
tracts_gdf_sf.plot()
plt.show()
```
We forgot about the Farallon Islands! Let's crop those out.
```
# Crop out Farallon
tracts_gdf_sf = tracts_gdf_sf.cx[-122.8:-122.35, 37.65:37.85].copy().reset_index(drop=True)
tracts_gdf_sf.plot()
plt.show()
```
We'll want to check the crs of our GeoDataFrame
```
tracts_gdf_sf.crs
```
Now we will call the `mask` function from `rasterio`. Let's look at the documentation first.
```
?mask
```
We actually recommend using the `rioxarray` method instead. So we'll import a new package.
```
import rioxarray as rxr
```
Open our same NLCD data...
```
nlcd_2011 = rxr.open_rasterio('notebook_data/raster/nlcd2011_sf.tif',
masked=True).squeeze()
```
Reproject our NLCD to be in the same coordinate reference system as the San Francisco data
```
from rasterio.crs import CRS
!rio --version
# Currently doesn't work
# Issue: https://github.com/mapbox/rasterio/issues/2103
test = nlcd_2011.rio.reproject(tracts_gdf_sf.crs)
```
And clip our data to the San Francisco geometry
```
clipped = test.rio.clip(tracts_gdf_sf.geometry, tracts_gdf_sf.crs, drop=False, invert=False)
```
We can easily plot this using `.plot()`
```
clipped.plot()
```
And we can also make a pretty map like we did before.
```
fig, ax = plt.subplots(figsize=(8, 8))
clipped.plot(cmap=cmap, norm=norm, ax=ax, add_colorbar=False)
ax.set_title('NLCD 2011 (Cropped)', fontsize=30)
# Add color bar
patches = [Patch(color=color, label=label)
for color, label in legend_labels.items()]
fig.legend(handles=patches, facecolor="white",bbox_to_anchor=(1.1, 1.05))
# Remove axes
ax.set_frame_on(False)
plt.setp(ax.get_xticklabels(), visible=False)
plt.setp(ax.get_yticklabels(), visible=False)
ax.set_xticks([])
ax.set_yticks([])
```
and you can save your work out to a new file!
```
clipped.rio.to_raster("outdata/nlcd2011_sf_cropped.tif", tiled=True)
```
## 16.4 Aggregate raster to vector
Another common step we see in a lot of raster workflows is answering questions along the lines of "How do I find the average of my raster within my vector data shapes?"
We can do this by *aggregating* to our vector data. For this example we'll ask the question, "What is the majority class I have in each of the census tracts in San Francisco?"
For this we'll turn to the `rasterstats` package, which has a handy function called `zonal_stats`. By default, the function will give us the minimum, maximum, mean, and count. But there are also a lot more statistics that the function can return beyond this (see the note after the list):
- sum
- std
- median
- majority
- minority
- unique
- range
- nodata
- percentile
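For reference (this is based on the rasterstats documentation rather than this lesson, so treat it as an untested sketch), percentiles are requested with strings of the form `percentile_<q>`:
```
# e.g. mean plus the 90th percentile for each tract, reading the raster by path
zonal_stats(tracts_gdf_sf, 'notebook_data/raster/nlcd2011_sf.tif',
            stats=['mean', 'percentile_90'])
```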
So we'll first bring back the clipped census tracts shapefile we have for San Francisco.
```
tracts_gdf_sf.plot()
plt.show()
```
And we'll check out the `zonal_stats` documentation to get a better sense of how we can customize the arguments to better fit our needs.
```
?zonal_stats
```
That doesn't tell us a ton. Since we don't have `gen_zonal_stats` loaded, we can go look at the documentation online: https://pythonhosted.org/rasterstats/rasterstats.html
After we check that out, let's get rolling and actually compute our zonal stats by census tract.
```
with rasterio.open('notebook_data/raster/nlcd2011_sf.tif') as src:
affine = src.transform
array = src.read(1)
df_zonal_stats = pd.DataFrame(zonal_stats(tracts_gdf_sf, array, affine=affine, stats=['majority', 'unique']))
```
There's a lot going on in the cell above, let's break it down:
- `affine` object grabbed the transform of our raster data
- `array` object read the first band we have in our raster dataset
- `df_zonal_stats` has the results of our `zonal_stats` and then coerced it to be a dataframe.
So from that cell, we get `df_zonal_stats`, which looks like:
```
df_zonal_stats
```
So now, we can merge this back onto our geodataframe so we can add the majority classes and unique number of classes as attributes.
```
tracts_gdf_sf_zs = pd.concat([tracts_gdf_sf, df_zonal_stats[['majority','unique']]], axis=1)
```
And we can make a map that shows, for example, the majority class we have in each census tract.
```
fig, ax = plt.subplots(figsize=(8,8))
tracts_gdf_sf_zs.plot(column='majority', cmap=cmap, norm=norm, ax=ax)
# Add color bar
patches = [Patch(color=color, label=label)
for color, label in legend_labels.items()]
fig.legend(handles=patches, facecolor="white",bbox_to_anchor=(1.1, 1.05))
plt.show()
```
## 16.5 Other resources
We really only scratched the surface here. We've linked a couple of resources that dive deeper into raster data.
- [EarthLab](https://www.earthdatascience.org)
- [Software Carpentry](https://carpentries-incubator.github.io/geospatial-python/aio/index.html)
- [Intro to Python GIS](https://automating-gis-processes.github.io/CSC/index.html)
---
<div style="display:inline-block;vertical-align:middle;">
<a href="https://dlab.berkeley.edu/" target="_blank"><img src ="assets/images/dlab_logo.png" width="75" align="left">
</a>
</div>
<div style="display:inline-block;vertical-align:middle;">
<div style="font-size:larger"> D-Lab @ University of California - Berkeley</div>
<div> Team Geo<div>
</div>
|
github_jupyter
|
import pandas as pd
import geopandas as gpd
import matplotlib # base python plotting library
import matplotlib.pyplot as plt # submodule of matplotlib
from matplotlib.patches import Patch
import json
import numpy as np
# To display plots, maps, charts etc in the notebook
%matplotlib inline
import rasterio
from rasterio.plot import show, plotting_extent
from rasterio.mask import mask
from rasterstats import zonal_stats
nlcd_2011 = rasterio.open('notebook_data/raster/nlcd2011_sf.tif')
nlcd_2011
?rasterio.open
rasterio.plot.show(nlcd_2011)
?rasterio.plot.show
fig, ax = plt.subplots(figsize=(8,8))
plt_nlcd = rasterio.plot.show(nlcd_2011, cmap='Pastel2', ax=ax)
rasterio.plot.show_hist(nlcd_2011, bins=30)
nlcd_2011.count
nlcd_2011.meta
nlcd_2011.profile
nlcd_2011.crs
nlcd_2011_array = nlcd_2011.read()
nlcd_2011_array
nlcd_2011_array.shape
print("Minimum: ", np.nanmin(nlcd_2011_array))
print("Max: ", np.nanmean(nlcd_2011_array))
print("Mean: ", np.nanmax(nlcd_2011_array))
plt.imshow(nlcd_2011_array[0,:,:])
# Define the colors you want
cmap = matplotlib.colors.ListedColormap(['royalblue', #11
'white', #12
'beige', #21
'salmon', #22
'red', #23
'darkred', #24
'grey', #31
'yellowgreen', #41
'darkgreen', #42
'lightgreen', # 43
'darkgoldenrod', #51
'tan', # 52
'wheat', # 71
'darkkhaki', #72
'darkseagreen', #73
'mediumseagreen', #74
'gold', #81
'chocolate', #82
'lightsteelblue', #90
'steelblue', #95
])
cmap.set_under('#FFFFFF')
cmap.set_over('#FFFFFF')
# Define a normalization from values -> colors
norm = matplotlib.colors.BoundaryNorm([10.5,
11.5,
12.5,
21.5,
22.5,
23.5,
24.5,
31.5,
41.5,
42.5,
43.5,
51.5,
52.5,
71.5,
72.5,
73.5,
74.5,
81.5,
82.5,
90.5,
95.5,
],20)
legend_labels = { 'royalblue':'Open Water',
'white':'Perennial Ice/Snow',
'beige':'Developed, Open Space',
'salmon':'Developed, Low Intensity',
'red':'Developed, Medium Intensity',
'darkred':'Developed High Intensity',
'grey':'Barren Land (Rock/Sand/Clay)',
'yellowgreen':'Deciduous Forest',
'darkgreen':'Evergreen Forest',
'lightgreen':'Mixed Forest',
'darkgoldenrod':'Dwarf Scrub',
'tan':'Shrub/Scrub',
'wheat':'Grassland/Herbaceous',
'darkkhaki':'Sedge/Herbaceous',
'darkseagreen':'Lichens',
'mediumseagreen':'Moss',
'gold':'Pasture/Hay',
'chocolate':'Cultivated Crops',
'lightsteelblue':'Woody Wetlands',
'steelblue':'Emergent Herbaceous Wetlands'}
fig, ax = plt.subplots(figsize=(8, 8))
plt_nlcd = ax.imshow(nlcd_2011_array[0,:,:], cmap=cmap, norm=norm)
ax.set_title('NLCD 2011', fontsize=30)
# Remove axes
ax.set_frame_on(False)
plt.setp(ax.get_xticklabels(), visible=False)
plt.setp(ax.get_yticklabels(), visible=False)
ax.set_xticks([])
ax.set_yticks([])
# Add color bar
patches = [Patch(color=color, label=label)
for color, label in legend_labels.items()]
fig.legend(handles=patches, facecolor="white",bbox_to_anchor=(1.1, 1.05))
# Bring in census tracts
tracts_gdf = gpd.read_file("zip://notebook_data/census/Tracts/cb_2013_06_tract_500k.zip").to_crs('epsg:4326')
# Narrow it down to San Francisco County
tracts_gdf_sf = tracts_gdf[tracts_gdf['COUNTYFP']=='075']
tracts_gdf_sf.plot()
plt.show()
# Crop out Farallon
tracts_gdf_sf = tracts_gdf_sf.cx[-122.8:-122.35, 37.65:37.85].copy().reset_index(drop=True)
tracts_gdf_sf.plot()
plt.show()
tracts_gdf_sf.crs
?mask
import rioxarray as rxr
nlcd_2011 = rxr.open_rasterio('notebook_data/raster/nlcd2011_sf.tif',
masked=True).squeeze()
from rasterio.crs import CRS
!rio --version
# Currently doesn't work
# Issue: https://github.com/mapbox/rasterio/issues/2103
test = nlcd_2011.rio.reproject(tracts_gdf_sf.crs)
clipped = test.rio.clip(tracts_gdf_sf.geometry, tracts_gdf_sf.crs, drop=False, invert=False)
clipped.plot()
fig, ax = plt.subplots(figsize=(8, 8))
clipped.plot(cmap=cmap, norm=norm, ax=ax, add_colorbar=False)
ax.set_title('NLCD 2011 (Cropped)', fontsize=30)
# Add color bar
patches = [Patch(color=color, label=label)
for color, label in legend_labels.items()]
fig.legend(handles=patches, facecolor="white",bbox_to_anchor=(1.1, 1.05))
# Remove axes
ax.set_frame_on(False)
plt.setp(ax.get_xticklabels(), visible=False)
plt.setp(ax.get_yticklabels(), visible=False)
ax.set_xticks([])
ax.set_yticks([])
clipped.rio.to_raster("outdata/nlcd2011_sf_cropped.tif", tiled=True)
tracts_gdf_sf.plot()
plt.show()
?zonal_stats
with rasterio.open('notebook_data/raster/nlcd2011_sf.tif') as src:
affine = src.transform
array = src.read(1)
df_zonal_stats = pd.DataFrame(zonal_stats(tracts_gdf_sf, array, affine=affine, stats=['majority', 'unique']))
df_zonal_stats
tracts_gdf_sf_zs = pd.concat([tracts_gdf_sf, df_zonal_stats[['majority','unique']]], axis=1)
fig, ax = plt.subplots(figsize=(8,8))
tracts_gdf_sf_zs.plot(column='majority', cmap=cmap, norm=norm, ax=ax)
# Add color bar
patches = [Patch(color=color, label=label)
for color, label in legend_labels.items()]
fig.legend(handles=patches, facecolor="white",bbox_to_anchor=(1.1, 1.05))
plt.show()
| 0.547222 | 0.990404 |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Reducer/convert_raster_to_vector.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Reducer/convert_raster_to_vector.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Reducer/convert_raster_to_vector.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Reducer/convert_raster_to_vector.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Load a Japan boundary from the Large Scale International Boundary dataset.
japan = ee.FeatureCollection('USDOS/LSIB_SIMPLE/2017') \
.filter(ee.Filter.eq('country_na', 'Japan'))
# Load a 2012 nightlights image, clipped to the Japan border.
nl2012 = ee.Image('NOAA/DMSP-OLS/NIGHTTIME_LIGHTS/F182012') \
.select('stable_lights') \
.clipToCollection(japan)
# Define arbitrary thresholds on the 6-bit nightlights image.
zones = nl2012.gt(30).add(nl2012.gt(55)).add(nl2012.gt(62))
zones = zones.updateMask(zones.neq(0))
# Convert the zones of the thresholded nightlights to vectors.
vectors = zones.addBands(nl2012).reduceToVectors(**{
'geometry': japan,
'crs': nl2012.projection(),
'scale': 1000,
'geometryType': 'polygon',
'eightConnected': False,
'labelProperty': 'zone',
'reducer': ee.Reducer.mean()
})
# Display the thresholds.
Map.setCenter(139.6225, 35.712, 9)
Map.addLayer(zones, {'min': 1, 'max': 3, 'palette': ['0000FF', '00FF00', 'FF0000']}, 'raster')
# Make a display image for the vectors, add it to the map.
display = ee.Image(0).updateMask(0).paint(vectors, '000000', 3)
Map.addLayer(display, {'palette': '000000'}, 'vectors')
```
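`reduceToVectors()` returns an `ee.FeatureCollection` in which each polygon carries its `zone` label and the mean `stable_lights` value. The following is a minimal sketch (not part of the original script; the export task name and file format are illustrative) of how you could inspect or export that collection:
```
# Inspect the vectorized zones (assumes the cells above have run and Earth Engine is initialized).
print('Number of polygons:', vectors.size().getInfo())
print('First feature properties:', vectors.first().toDictionary().getInfo())
# Optionally export the polygons to Google Drive as GeoJSON (illustrative task name).
task = ee.batch.Export.table.toDrive(
    collection=vectors,
    description='japan_nightlight_zones',
    fileFormat='GeoJSON')
task.start()
```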
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
```
import time
from termcolor import colored
import torch
import torch.autograd.profiler as profiler
from modules.Swc2d import Swc2d
from modules.Dcls2dFull import Dcls2dFull
assert torch.cuda.is_available()
cuda_device = torch.device("cuda") # device object representing GPU
in_channels = 1
out_channels = 1
kernel_size = (2,2)
dilation = (2,2)
stride = (1,1)
padding = (0,0)
groups = 1
bias = False
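# Build a reference dense torch.nn.Conv2d (m) and the custom Swc2d layer (n) with identical
# hyperparameters so that their outputs and gradients can be compared below.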
m = torch.nn.Conv2d(in_channels=in_channels,
out_channels=out_channels,
kernel_size=kernel_size,
dilation=dilation,
stride=stride,
padding=padding,
groups=groups,
bias=bias).to(cuda_device)
n = Swc2d(in_channels=in_channels,
out_channels=out_channels,
kernel_size=kernel_size,
dilation=dilation,
stride=stride,
padding=padding,
groups=groups,
bias=bias).to(cuda_device)
X1 = torch.nn.Parameter(
torch.tensor([[[[1., 2., 3., 4.],
[5., 6., 7., 8.],
[9., 10., 11., 12.],
[13., 14., 15., 16.]]]],device=cuda_device),
requires_grad = True)
X2 = torch.nn.Parameter(
torch.tensor([[[[1., 2., 3., 4.],
[5., 6., 7., 8.],
[9., 10., 11., 12.],
[13., 14., 15., 16.]]]],device=cuda_device),
requires_grad = True)
m.weight = torch.nn.Parameter(
torch.tensor([[[[20., 40.],
[60., 80.]]]],device=cuda_device),
requires_grad = True)
n.weight = torch.nn.Parameter(
torch.tensor([[[[20., 40.],
[60., 80.]]]],device=cuda_device),
requires_grad = True)
back_truth = torch.nn.Parameter(
torch.tensor([[[[1., 2.],
[4., 5.]]]],device=cuda_device),
requires_grad = True)
with torch.autograd.profiler.profile(use_cuda=True, profile_memory=True) as prof:
var2 = (n(X2) - back_truth).norm()
var1 = (m(X1) - back_truth).norm()
var1.backward();
var2.backward();
print(X1.size())
print(m.weight.size())
print(n.weight.size())
print(m(X1).size())
print(m(X1))
print(n(X2).size())
print(n(X2))
print(m.weight.grad)
print(n.weight.grad)
print(X1.grad)
print(X2.grad)
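# Percentage of nonzero entries in the Swc2d weight tensor (a rough sparsity measure).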
n.weight.nonzero().size(0)*100/n.weight.numel()
batch = 16
in_channels = 2**9
out_channels = 2**10
kernel_size = (3,3)
dilation = (8,8)
stride = (1,1)
padding = (0,0)
groups = 1
bias = False
h = 200
w = 200
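# Output spatial size of a convolution: floor((in + 2*pad - dilation*(kernel-1) - 1) / stride) + 1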
h_o = int((h + 2 * padding[0] - (dilation[0] * (kernel_size[0] - 1) + 1)) / stride[0] + 1)
w_o = int((w + 2 * padding[1] - (dilation[1] * (kernel_size[1] - 1) + 1)) / stride[1] + 1)
n = Swc2d(in_channels=in_channels,
out_channels=out_channels,
kernel_size=kernel_size,
dilation=dilation,
stride=stride,
padding=padding,
groups=groups,
bias=bias).to(cuda_device)
X2 = torch.nn.Parameter(torch.rand(batch,in_channels,h,w,device=cuda_device), requires_grad = True)
back_truth = torch.nn.Parameter(torch.rand(batch,out_channels,h_o,w_o,device=cuda_device), requires_grad = True)
with torch.autograd.profiler.profile(use_cuda=True, profile_memory=True) as prof:
var2 = (n(X2) - back_truth).norm()
var2.backward();
print(torch.cuda.memory_summary(device=cuda_device, abbreviated=True))
print(prof.key_averages().table( row_limit=1000))
#prof.export_chrome_trace("trace.json")
-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg CPU Mem Self CPU Mem CUDA Mem Self CUDA Mem # of Calls
-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
swc2d 54.55% 42.442ms 97.30% 75.707ms 75.707ms 42.461ms 34.57% 75.707ms 75.707ms 0 b 0 b 2.07 Gb -6.71 Gb 1
aten::view 0.06% 49.061us 0.06% 49.061us 8.177us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 0 b 0 b 6
aten::empty 5.61% 4.362ms 5.61% 4.362ms 218.108us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 11.62 Gb 11.62 Gb 20
aten::rand 0.01% 8.924us 5.50% 4.277ms 4.277ms 4.244ms 3.46% 11.145ms 11.145ms 0 b 0 b 3.35 Gb 0 b 1
aten::uniform_ 0.04% 31.211us 0.04% 31.211us 31.211us 6.900ms 5.62% 6.900ms 6.900ms 0 b 0 b 0 b 0 b 1
aten::eye 0.03% 21.628us 7.02% 5.463ms 2.731ms 9.345us 0.01% 11.951ms 5.976ms 0 b 0 b 6.71 Gb 0 b 2
aten::resize_ 3.52% 2.739ms 3.52% 2.739ms 249.006us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 5.42 Gb 5.42 Gb 11
aten::zero_ 0.08% 60.815us 0.20% 159.104us 39.776us 10.878us 0.01% 8.392ms 2.098ms 0 b 0 b 0 b 0 b 4
aten::fill_ 0.21% 162.000us 0.21% 162.000us 23.143us 8.400ms 6.84% 8.400ms 1.200ms 0 b 0 b 0 b 0 b 7
aten::stride 0.00% 1.372us 0.00% 1.372us 0.343us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 0 b 0 b 4
aten::as_strided 0.01% 10.736us 0.01% 10.736us 0.767us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 0 b 0 b 14
aten::to_sparse 0.05% 40.877us 33.59% 26.135ms 26.135ms 34.236us 0.03% 16.034ms 16.034ms 0 b 0 b 586.50 Kb -586.50 Kb 1
aten::nonzero 33.14% 25.790ms 33.17% 25.814ms 25.814ms 15.707ms 12.79% 15.718ms 15.718ms 0 b 0 b 469.00 Kb 0 b 1
aten::contiguous 0.00% 2.933us 0.00% 2.933us 2.933us 1.342us 0.00% 1.342us 1.342us 0 b 0 b 0 b 0 b 1
aten::t 0.00% 1.752us 0.01% 4.635us 4.635us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 0 b 0 b 1
aten::transpose 0.00% 2.686us 0.01% 4.451us 2.225us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 0 b 0 b 2
aten::set_ 0.03% 20.847us 0.03% 20.847us 10.424us 11.805us 0.01% 11.805us 5.902us 0 b 0 b 0 b 0 b 2
aten::clone 0.02% 15.979us 0.06% 50.166us 25.083us 19.070us 0.02% 44.832us 22.416us 0 b 0 b 586.50 Kb 0 b 2
aten::empty_strided 0.14% 110.099us 0.14% 110.099us 13.762us 0.000us 0.00% 0.000us 0.000us 32 b 32 b 1.24 Gb 1.24 Gb 8
aten::copy_ 0.09% 72.067us 0.09% 72.067us 14.413us 57.223us 0.05% 57.223us 11.445us 0 b 0 b 0 b 0 b 5
aten::chunk 0.01% 7.593us 0.03% 26.939us 26.939us 8.004us 0.01% 26.816us 26.816us 0 b 0 b 0 b 0 b 1
aten::split 0.01% 8.789us 0.02% 19.346us 19.346us 8.152us 0.01% 18.812us 18.812us 0 b 0 b 0 b 0 b 1
aten::narrow 0.01% 7.089us 0.01% 10.557us 5.279us 10.660us 0.01% 10.660us 5.330us 0 b 0 b 0 b 0 b 2
aten::slice 0.00% 2.324us 0.00% 3.468us 1.734us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 0 b 0 b 2
aten::index 0.03% 23.917us 0.05% 38.904us 38.904us 39.645us 0.03% 49.312us 49.312us 0 b 0 b 117.50 Kb 0 b 1
aten::reshape 0.01% 7.349us 0.01% 9.322us 4.661us 9.668us 0.01% 9.668us 4.834us 0 b 0 b 0 b 0 b 2
aten::squeeze 0.00% 1.897us 0.00% 2.481us 2.481us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 0 b 0 b 1
aten::sparse_coo_tensor 0.05% 39.419us 0.20% 157.886us 157.886us 32.449us 0.03% 157.695us 157.695us 0 b -32 b 0 b -2.00 Kb 1
aten::min 0.03% 19.995us 0.04% 30.820us 30.820us 37.312us 0.03% 37.312us 37.312us 0 b 0 b 1.00 Kb 0 b 1
aten::max 0.02% 15.761us 0.03% 25.731us 25.731us 28.672us 0.02% 28.672us 28.672us 0 b 0 b 1.00 Kb 0 b 1
aten::to 0.04% 27.329us 0.08% 62.993us 20.998us 21.406us 0.02% 46.977us 15.659us 32 b 0 b 0 b 0 b 3
aten::_sparse_coo_tensor_with_dims_and_tensors 0.01% 10.487us 0.02% 13.047us 13.047us 12.957us 0.01% 12.957us 12.957us 0 b 0 b 0 b 0 b 1
aten::_coalesced_ 0.00% 2.744us 0.00% 2.744us 2.744us 2.980us 0.00% 2.980us 2.980us 0 b 0 b 0 b 0 b 1
aten::_nnz 0.01% 10.091us 0.01% 10.091us 10.091us 10.910us 0.01% 10.910us 10.910us 0 b 0 b 0 b 0 b 1
aten::indices 0.04% 29.029us 0.05% 40.048us 20.024us 28.102us 0.02% 39.270us 19.635us 0 b 0 b 0 b 0 b 2
aten::is_coalesced 0.01% 6.990us 0.01% 6.990us 2.330us 7.742us 0.01% 7.742us 2.581us 0 b 0 b 0 b 0 b 3
aten::alias 0.01% 9.044us 0.01% 9.044us 3.015us 8.031us 0.01% 8.031us 2.677us 0 b 0 b 0 b 0 b 3
aten::select 0.02% 18.383us 0.03% 19.495us 9.748us 18.625us 0.02% 18.625us 9.312us 0 b 0 b 0 b 0 b 2
aten::values 0.02% 13.997us 0.02% 19.012us 19.012us 14.047us 0.01% 18.652us 18.652us 0 b 0 b 0 b 0 b 1
aten::sub 0.05% 35.924us 0.05% 42.716us 42.716us 12.824ms 10.44% 12.824ms 12.824ms 0 b 0 b 2.07 Gb 0 b 1
aten::frobenius_norm 0.02% 13.852us 0.15% 115.578us 115.578us 6.031us 0.00% 4.655ms 4.655ms 0 b 0 b 1.00 Kb 0 b 1
aten::norm 0.08% 64.747us 0.09% 71.100us 71.100us 4.643ms 3.78% 4.643ms 4.643ms 0 b 0 b 512 b 0 b 1
aten::ones_like 0.02% 12.675us 0.04% 32.108us 32.108us 5.211us 0.00% 8.188us 8.188us 0 b 0 b 512 b 0 b 1
aten::empty_like 0.03% 26.705us 0.15% 116.908us 29.227us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 1.24 Gb 0 b 4
torch::autograd::GraphRoot 0.00% 3.138us 0.00% 3.138us 3.138us 0.703us 0.00% 0.703us 0.703us 0 b 0 b 0 b 0 b 1
torch::autograd::CopyBackwards 0.09% 73.814us 0.11% 87.938us 87.938us 3.297us 0.00% 3.969us 3.969us 0 b 0 b 0 b 0 b 1
NormBackward1 0.21% 163.425us 0.84% 654.880us 654.880us 9.414us 0.01% 9.145ms 9.145ms 0 b 0 b 2.07 Gb -1.00 Kb 1
aten::div 0.12% 96.114us 0.15% 120.260us 120.260us 7.938us 0.01% 7.938us 7.938us 0 b 0 b 512 b 0 b 1
aten::eq 0.20% 152.497us 0.32% 246.627us 123.314us 11.938us 0.01% 16.070us 8.035us 0 b 0 b 1.00 Kb 0 b 2
aten::masked_fill_ 0.09% 72.085us 0.09% 72.085us 72.085us 6.141us 0.00% 6.141us 6.141us 0 b 0 b 0 b 0 b 1
aten::mul 0.28% 214.059us 0.32% 251.052us 125.526us 18.102ms 14.74% 18.102ms 9.051ms 0 b 0 b 4.13 Gb 0 b 2
SubBackward0 0.06% 42.875us 0.40% 311.953us 311.953us 5.820us 0.00% 17.995ms 17.995ms 0 b 0 b 2.07 Gb -2.07 Gb 1
aten::neg 0.15% 117.979us 0.27% 212.790us 106.395us 8.997ms 7.32% 17.988ms 8.994ms 0 b 0 b 4.13 Gb 0 b 2
torch::autograd::AccumulateGrad 0.14% 108.556us 0.28% 220.850us 55.213us 11.344us 0.01% 27.469us 6.867us 0 b 0 b 0 b 0 b 4
aten::detach 0.09% 71.968us 0.14% 112.294us 28.073us 12.117us 0.01% 16.125us 4.031us 0 b 0 b 0 b 0 b 4
detach 0.05% 40.326us 0.05% 40.326us 10.082us 4.008us 0.00% 4.008us 1.002us 0 b 0 b 0 b 0 b 4
swc2dBackward 0.25% 195.908us 0.82% 634.351us 634.351us 9.906us 0.01% 2.464ms 2.464ms 0 b 0 b 1.24 Gb -132.50 Kb 1
aten::zeros_like 0.09% 69.403us 0.40% 310.665us 103.555us 10.000us 0.01% 2.444ms 814.724us 0 b 0 b 1.24 Gb 0 b 3
aten::ones 0.03% 25.867us 0.11% 87.405us 87.405us 3.367us 0.00% 9.477us 9.477us 0 b 0 b 132.50 Kb 0 b 1
-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 77.811ms
CUDA time total: 122.831ms
----------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg CPU Mem Self CPU Mem CUDA Mem Self CUDA Mem # of Calls
----------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
swc2d 1.29% 108.750us 72.35% 6.114ms 6.114ms 79.931us 0.00% 4.477s 4.477s 0 b 0 b 2.07 Gb -10.06 Gb 1
aten::view 0.60% 50.973us 0.60% 50.973us 12.743us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 0 b 0 b 4
aten::empty 34.44% 2.911ms 34.44% 2.911ms 223.887us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 14.97 Gb 14.97 Gb 13
aten::rand 0.11% 9.586us 0.54% 45.252us 45.252us 13.824us 0.00% 6.928ms 6.928ms 0 b 0 b 3.35 Gb 0 b 1
aten::uniform_ 0.35% 29.214us 0.35% 29.214us 29.214us 6.914ms 0.15% 6.914ms 6.914ms 0 b 0 b 0 b 0 b 1
aten::eye 0.38% 32.294us 71.68% 6.058ms 3.029ms 7.393us 0.00% 11.999ms 6.000ms 0 b 0 b 6.71 Gb 0 b 2
aten::resize_ 35.23% 2.977ms 35.23% 2.977ms 744.353us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 5.42 Gb 5.42 Gb 4
aten::zero_ 0.89% 75.369us 2.41% 204.009us 51.002us 11.763us 0.00% 8.201ms 2.050ms 0 b 0 b 0 b 0 b 4
aten::fill_ 2.49% 210.491us 2.49% 210.491us 30.070us 8.208ms 0.18% 8.208ms 1.173ms 0 b 0 b 0 b 0 b 7
aten::stride 0.02% 1.881us 0.02% 1.881us 0.376us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 0 b 0 b 5
aten::as_strided 0.05% 4.637us 0.05% 4.637us 2.319us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 0 b 0 b 2
aten::mm 1.79% 150.980us 34.44% 2.910ms 2.910ms 4.464s 98.79% 4.464s 4.464s 0 b 0 b 3.35 Gb 0 b 1
aten::sub 0.46% 38.492us 0.55% 46.634us 46.634us 11.613ms 0.26% 11.613ms 11.613ms 0 b 0 b 2.07 Gb 0 b 1
aten::frobenius_norm 0.21% 17.544us 1.60% 135.011us 135.011us 5.500us 0.00% 3.684ms 3.684ms 0 b 0 b 1.00 Kb 0 b 1
aten::norm 0.90% 75.887us 0.98% 82.475us 82.475us 3.672ms 0.08% 3.672ms 3.672ms 0 b 0 b 512 b 0 b 1
aten::copy_ 0.32% 26.860us 0.32% 26.860us 26.860us 6.500us 0.00% 6.500us 6.500us 0 b 0 b 0 b 0 b 1
aten::ones_like 0.17% 14.191us 0.45% 38.136us 38.136us 5.500us 0.00% 8.000us 8.000us 0 b 0 b 512 b 0 b 1
aten::empty_like 0.41% 34.608us 1.49% 125.759us 31.440us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 1.24 Gb 0 b 4
aten::empty_strided 1.08% 91.151us 1.08% 91.151us 22.788us 0.000us 0.00% 0.000us 0.000us 0 b 0 b 1.24 Gb 1.24 Gb 4
torch::autograd::GraphRoot 0.04% 3.497us 0.04% 3.497us 3.497us 1.000us 0.00% 1.000us 1.000us 0 b 0 b 0 b 0 b 1
torch::autograd::CopyBackwards 1.02% 86.409us 1.32% 111.341us 111.341us 1.500us 0.00% 3.500us 3.500us 0 b 0 b 0 b 0 b 1
aten::to 0.30% 24.932us 0.30% 24.932us 24.932us 2.000us 0.00% 2.000us 2.000us 0 b 0 b 0 b 0 b 1
NormBackward1 2.04% 172.593us 8.68% 733.331us 733.331us 8.000us 0.00% 8.044ms 8.044ms 0 b 0 b 2.07 Gb -1.00 Kb 1
aten::div 1.34% 113.339us 1.73% 146.214us 146.214us 8.000us 0.00% 8.000us 8.000us 0 b 0 b 512 b 0 b 1
aten::eq 1.83% 154.649us 3.18% 268.835us 134.418us 14.000us 0.00% 19.000us 9.500us 0 b 0 b 1.00 Kb 0 b 2
aten::masked_fill_ 1.17% 99.092us 1.17% 99.092us 99.092us 6.000us 0.00% 6.000us 6.000us 0 b 0 b 0 b 0 b 1
aten::mul 2.57% 216.812us 3.14% 265.106us 132.553us 15.995ms 0.35% 15.995ms 7.997ms 0 b 0 b 4.13 Gb 0 b 2
SubBackward0 0.64% 54.098us 4.16% 351.768us 351.768us 6.000us 0.00% 15.970ms 15.970ms 0 b 0 b 2.07 Gb -2.07 Gb 1
aten::neg 1.60% 135.001us 2.80% 236.451us 118.226us 7.978ms 0.18% 15.951ms 7.976ms 0 b 0 b 4.13 Gb 0 b 2
torch::autograd::AccumulateGrad 1.30% 110.242us 2.57% 217.009us 54.252us 10.500us 0.00% 26.000us 6.500us 0 b 0 b 0 b 0 b 4
aten::detach 0.79% 67.093us 1.26% 106.767us 26.692us 10.500us 0.00% 15.500us 3.875us 0 b 0 b 0 b 0 b 4
detach 0.47% 39.674us 0.47% 39.674us 9.918us 5.000us 0.00% 5.000us 1.250us 0 b 0 b 0 b 0 b 4
swc2dBackward 2.45% 206.893us 8.28% 700.168us 700.168us 7.000us 0.00% 2.247ms 2.247ms 0 b 0 b 1.24 Gb -132.50 Kb 1
aten::zeros_like 0.87% 73.824us 4.01% 339.139us 113.046us 11.500us 0.00% 2.231ms 743.500us 0 b 0 b 1.24 Gb 0 b 3
aten::ones 0.38% 32.183us 1.31% 110.534us 110.534us 3.500us 0.00% 9.000us 9.000us 0 b 0 b 132.50 Kb 0 b 1
----------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 8.451ms
CUDA time total: 4.518s
```
# PASCAL VOC Translation Experiments
Make a synthetic "video" dataset of a translating viewpoint from PASCAL VOC images and ground truth to evaluate the pipeline and clockwork methods.
```
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from PIL import Image
from collections import namedtuple
import caffe
from lib import run_net
from lib import score_util
from datasets.pascal_voc import pascal
PV = pascal('/x/PASCAL/VOC2011')
valset = PV.get_dset()
plt.rcParams['image.cmap'] = 'gray'
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['figure.figsize'] = (12, 12)
```
Configure Caffe
```
caffe.set_device(0)
caffe.set_mode_gpu()
```
Check translated frames and boundaries.
```
idx = valset[0]
im, label = PV.load_image(idx), PV.load_label(idx)
im_t, label_t = PV.make_translated_frames(im, label, shift=32, num_frames=6)
plt.figure()
for i, im in enumerate(im_t):
plt.subplot(1, len(im_t), i + 1)
plt.imshow(im)
plt.axis('off')
plt.tight_layout()
plt.figure()
for i, label in enumerate(label_t):
plt.subplot(1, len(label_t), i + 1)
plt.imshow(PV.palette(label))
plt.axis('off')
plt.tight_layout()
idx = valset[0]
label = PV.load_label(idx)
bdry = PV.make_boundaries(label, thickness=2)
plt.figure()
plt.imshow(PV.palette(label))
plt.figure()
plt.imshow(bdry)
```
## Evaluation
Configure the evaluation: number of classes, length of the translation "video" in frames, boundary thickness, and shift amounts, plus the catalogue of methods.
```
n_cl = len(PV.classes)
num_frames = 6
thickness = 5
shifts = (16, 32)
# catalogue methods: the full FCN, truncated FCNs for pool3 and pool4, and the 2 and 3 stage pipelines
# instantiate the nets that will be needed
Method = namedtuple('Method', 'method arch weights infer_func input_offset')
fcn = Method('fcn', '../nets/voc-fcn8s.prototxt', '../nets/voc-fcn8s-heavy.caffemodel', run_net.segrun, 2)
baseline_3stage = Method('baseline_3stage', '../nets/voc-fcn-pool3.prototxt', '../nets/voc-fcn-pool3.caffemodel', run_net.segrun, 2)
baseline_2stage = Method('baseline_2stage', '../nets/voc-fcn-pool4.prototxt', '../nets/voc-fcn-pool4.caffemodel', run_net.segrun, 2)
pipeline_3stage = Method('pipeline_3stage', '../nets/stage-voc-fcn8s.prototxt', '../nets/voc-fcn8s-heavy.caffemodel', run_net.pipeline_3stage_forward, 0)
pipeline_2stage = Method('pipeline_2stage', '../nets/stage-voc-fcn8s.prototxt', '../nets/voc-fcn8s-heavy.caffemodel', run_net.pipeline_2stage_forward, 1)
def score_translations(method, shift, arch, weights, infer, offset):
"""
Score the translated "video" of PASCAL VOC seg11valid images
    taking care of the net architecture and weights, the particular inference method,
    and the input offset needed to align the per-frame and pipeline methods.
"""
net = caffe.Net(arch, weights, caffe.TEST)
hist, hist_b = np.zeros((n_cl, n_cl)), np.zeros((n_cl, n_cl))
for idx in valset:
sys.stdout.flush()
im, label = PV.load_image(idx), PV.load_label(idx)
im_frames, label_frames = PV.make_translated_frames(im, label, shift=shift, num_frames=num_frames)
im_frames, label_frames = im_frames[offset:], label_frames[offset:]
# prepare pipelines: feed initial inputs then skip accordingly
if method == 'pipeline_3stage':
run_net.pipeline_fill_3stage(net, PV.preprocess(im_frames[0]), PV.preprocess(im_frames[1]))
im_frames, label_frames = im_frames[2:], label_frames[2:]
elif method == 'pipeline_2stage':
run_net.pipeline_fill_2stage(net, PV.preprocess(im_frames[0]))
im_frames, label_frames = im_frames[1:], label_frames[1:]
for im_t, label_t in zip(im_frames, label_frames):
out = infer(net, PV.preprocess(im_t))
hist += score_util.score_out_gt(out, label_t, n_cl=n_cl)
bdry = PV.make_boundaries(label_t, thickness=thickness)
hist_b += score_util.score_out_gt_bdry(out, label_t, bdry, n_cl=n_cl)
for name, h in zip(('seg', 'bdry'), (hist, hist_b)):
accP, cl_accP, mean_iuP, fw_iuP = score_util.get_scores(h)
        print('{}: {}, shift {}'.format(method, name, shift))
        print('acc\t\t cl acc\t\t mIU\t\t fwIU')
        print('{:f}\t {:f}\t {:f}\t {:f}\t'.format(100*accP, 100*cl_accP, 100*mean_iuP, 100*fw_iuP))
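# Evaluate every method at each shift amount.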
for shift in shifts:
for m in (fcn, baseline_3stage, pipeline_3stage, baseline_2stage, pipeline_2stage):
score_translations(m.method, shift, m.arch, m.weights, m.infer_func, m.input_offset)
```
```
import matplotlib.pyplot as plt
with open("out_g.txt", "r") as f:
lines = f.readlines()
clocks = []
addrs = []
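# Keep only '<clock> <address>' lines: the clock token's trailing character is stripped before
# parsing, and the address is parsed as hexadecimal.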
for line in lines:
log = line.split()
if len(log) == 2 and log[0].isnumeric():
clocks.append(int(log[0][:-1]))
addrs.append(int(log[1], 16))
plt.plot(clocks, addrs, marker='+', markersize=2, linestyle="None", label="read")
plt.legend()
plt.show()
import matplotlib.pyplot as plt
with open("out3.txt", "r") as f:
lines = f.readlines()
read_clocks = []
write_clocks = []
read_addrs = []
write_addrs = []
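# Column 2 of each log line holds the access type (Read/Write) and column 10 the hex address;
# addresses are plotted relative to the 0x140000000 base.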
for line in lines:
log = line.split()
if len(log) == 14:
if log[2] == "Read":
read_clocks.append(int(log[0][:-1]))
read_addrs.append(int(log[10], 16)-0x140000000)
elif log[2] == "Write":
write_clocks.append(int(log[0][:-1]))
write_addrs.append(int(log[10], 16)-0x140000000)
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(20, 10))
ax1.plot(read_clocks, read_addrs, marker='+', markersize=2, linestyle="None", label="read")
ax1.plot(write_clocks, write_addrs, marker='+', markersize=2, linestyle="None", label="write")
ax2.plot(read_clocks, read_addrs, marker='+', markersize=2, linestyle="None", label="read")
ax2.plot(write_clocks, write_addrs, marker='+', markersize=2, linestyle="None", label="write")
plt.legend()
ax1.set_ylim(0xffe00000, 0x100000000)
ax2.set_ylim(-10000, 500000)
plt.show()
import matplotlib.pyplot as plt
with open("out4.txt", "r") as f:
lines = f.readlines()
read_clocks = []
write_clocks = []
read_addrs = []
write_addrs = []
for line in lines:
log = line.split()
if len(log) >= 13:
if log[2] == "Read":
read_clocks.append(int(log[0][:-1]))
read_addrs.append(int(log[10], 16)-0x140000000)
elif log[2] == "Write":
write_clocks.append(int(log[0][:-1]))
write_addrs.append(int(log[10], 16)-0x140000000)
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(20, 10))
ax1.plot(read_clocks, read_addrs, marker='+', markersize=2, linestyle="None", label="read")
ax1.plot(write_clocks, write_addrs, marker='+', markersize=2, linestyle="None", label="write")
ax2.plot(read_clocks, read_addrs, marker='+', markersize=2, linestyle="None", label="read")
ax2.plot(write_clocks, write_addrs, marker='+', markersize=2, linestyle="None", label="write")
plt.legend()
ax1.set_ylim(0xffe00000, 0x100000000)
ax2.set_ylim(-10000, 500000)
plt.show()
import matplotlib.pyplot as plt
with open("out.txt", "r") as f:
lines = f.readlines()
read_clocks = []
write_clocks = []
read_addrs = []
write_addrs = []
for line in lines:
log = line.split()
if len(log) >= 13:
if log[2] == "Read":
read_clocks.append(int(log[0][:-1]))
read_addrs.append(int(log[10], 16)-0x140000000)
elif log[2] == "Write":
write_clocks.append(int(log[0][:-1]))
write_addrs.append(int(log[10], 16)-0x140000000)
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(20, 10))
ax1.plot(read_clocks, read_addrs, marker='+', markersize=2, linestyle="None", label="read")
ax1.plot(write_clocks, write_addrs, marker='+', markersize=2, linestyle="None", label="write")
ax2.plot(read_clocks, read_addrs, marker='+', markersize=2, linestyle="None", label="read")
ax2.plot(write_clocks, write_addrs, marker='+', markersize=2, linestyle="None", label="write")
plt.legend()
ax1.set_ylim(0xffe00000, 0x100000000)
ax2.set_ylim(-10000, 500000)
plt.show()
```
# Churn Rate Trend
```
from io import StringIO
import dsx_core_utils
import requests
import os
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
# Insert churn rate visualization (churn_rate_visualization.csv) dataset here, as pandas dataframe
import os, pandas as pd
# Add asset from file system
df_data_1 = pd.read_csv(os.environ['DSX_PROJECT_DIR']+'/datasets/churn_rate_visualization.csv')
df_data_1.head()
import brunel
churnDataRateDF=df_data_1
%brunel data('churnDataRateDF') x(QUARTER_YEAR) y(CHURN_RATE) bar tooltip(#all) axes(x:'Time', y:'Churn Rate':grid) sort(YEAR:ascending) :: width=800, height=500
# Insert customer summary visualization (cust_summary_visualization.csv) dataset here, as pandas dataframe
df_data_2 = pd.read_csv(os.environ['DSX_PROJECT_DIR']+'/datasets/cust_summary_visualization.csv')
df_data_2.head()
import numpy
churnData = df_data_2
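# Average customer income per state, used for the choropleth map below.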
churnData['mean_income'] = churnData['INCOME']
churnData_State_Income = churnData[['STATE','mean_income']]
groupedByState = churnData_State_Income.groupby("STATE").agg(numpy.mean)
groupedByState.head(5)
```
# Income by state
```
%brunel data('groupedByState') map key(STATE) x(STATE) color(mean_income) label(STATE) tooltip(#all) :: width=800, height=500
```
# Distribution by churn
```
%brunel data('churnData') x(AGE) y(#count:linear) color(CHURN_LABEL) bin(AGE) interaction(select) stack bar tooltip(#all) filter(CHURN_LABEL) legends(none) | x(AVG_DAILY_TX) y(#count:linear) color(CHURN_LABEL) opacity(#selection) bin(AVG_DAILY_TX) stack bar tooltip(#all) axes(x:10:'AVG DAILY TX', y)| x(AVG_TX_AMT) y(#count:linear) color(CHURN_LABEL) opacity(#selection) bin(AVG_TX_AMT) stack bar tooltip(#all) axes(y) legends(none) | x(INCOME) y(#count:linear) color(CHURN_LABEL) opacity(#selection) bin(INCOME) stack bar tooltip(#all) tooltip(#all) axes(y) legends(none) :: width=800, height=600
%brunel data('churnData') x(SEX) y(#count:linear) color(CHURN_LABEL) stack bar tooltip(#all) sort(SEX) interaction(select) filter(CHURN_LABEL) axes(x:'GENDER', y) legends(none) | x(EDUCATION_GROUP) y(#count:linear) color(CHURN_LABEL) stack bar tooltip(#all) sort(#count) opacity(#selection) axes(x:'EDUCATION', y) | x(ACTIVITY) y(#count:linear) color(CHURN_LABEL) stack bar tooltip(#all) sort(ACTIVITY) opacity(#selection) legends(none) :: width=800, height=600
churnData['mean_churn'] = churnData['CHURN']
churnData_State_Churn = churnData[['STATE','mean_churn']]
groupedchurnByState = churnData_State_Churn.groupby('STATE').agg(numpy.mean)
groupedchurnByState.head(5)
%brunel data('groupedchurnByState') map key(STATE) x(STATE) color(mean_churn) label(STATE) tooltip(#all) :: width=800, height=500
```
Developed/Updated by Aleksandr Petrov, Tim Bohn, Matt Walli, and Anup Nair, Data Science Elite Team, IBM Analytics
Copyright © IBM Corp. 2017,2018. IBM All Rights Reserved.
```
%matplotlib inline
# Import dependencies
import matplotlib.pyplot as plt
# Set the x-axis to a list of strings for each month.
x_axis = ["Jan", "Feb", "Mar", "April", "May", "June", "July", "Aug", "Sept", "Oct", "Nov", "Dec"]
# Set the y-axis to a list of floats as the total fare in US dollars accumulated for each month.
y_axis = [10.02, 23.24, 39.20, 35.42, 32.34, 27.04, 43.82, 10.56, 11.85, 27.90, 20.71, 20.09]
# Create the plot
plt.plot(x_axis, y_axis)
# Create the plot with ax.plot() (object-oriented approach)
fig, ax = plt.subplots() #returns a tuple that contains a figure and axes object(s).
ax.plot(x_axis, y_axis)
# Create a figure and add axes with fig.add_subplot()
fig = plt.figure()
ax = fig.add_subplot()
# Create the plot with fig.add_subplot() and ax.plot()
fig = plt.figure()
ax = fig.add_subplot()
ax.plot(x_axis, y_axis)
# Create the plot with plt.axes() and ax.plot()
ax = plt.axes()
ax.plot(x_axis, y_axis)
# Create the plot.
plt.plot(x_axis, y_axis)
plt.show()
# 5.1.4 Annotate Charts
# Create the plot and add a label for the legend.
plt.plot(x_axis, y_axis, label='Boston')
# Create labels for the x and y axes.
plt.xlabel("Date")
plt.ylabel("Fare($)")
# Set the y limit between 0 and 45.
plt.ylim(0, 45)
# Create a title.
plt.title("PyBer Fare by Month")
# Add the legend.
plt.legend()
# Create the plot.
plt.plot(x_axis, y_axis, marker="*", color="blue", linewidth=2, label='Boston')
# Create labels for the x and y axes.
plt.xlabel("Date")
plt.ylabel("Fare($)")
# Set the y limit between 0 and 45.
plt.ylim(0, 45)
# Create a title.
plt.title("PyBer Fare by Month")
# Add a grid.
plt.grid()
# Add the legend.
plt.legend()
# Create the plot.
plt.plot(x_axis, y_axis, marker="d", color="green", linewidth=2, label='Boston')
# Create labels for the x and y axes.
plt.xlabel("Date")
plt.ylabel("Fare($)")
# Set the y limit between 0 and 45.
plt.ylim(0, 45)
# Create a title.
plt.title("PyBer Fare by Month")
# Add a grid.
plt.grid()
# Add the legend.
plt.legend()
```
# 5.1.5 Create Bar Charts Using the MATLAB Approach
```
# Set the x-axis to a list of strings for each month.
x_axis = ["Jan", "Feb", "Mar", "April", "May", "June", "July", "Aug", "Sept", "Oct", "Nov", "Dec"]
# Set the y-axis to a list of floats as the total fare in US dollars accumulated for each month.
y_axis = [10.02, 23.24, 39.20, 35.42, 32.34, 27.04, 43.82, 10.56, 11.85, 27.90, 20.71, 20.09]
# Create the plot
plt.bar(x_axis, y_axis)
# Create the plot.
plt.bar(x_axis, y_axis, color="green", label='Boston')
# Create labels for the x and y axes.
plt.xlabel("Date")
plt.ylabel("Fare($)")
# Create a title.
plt.title("PyBer Fare by Month")
# Add the legend.
plt.legend()
# Create the plot
plt.barh(x_axis, y_axis)
# If you want the data on opposite axes, switch the arguments in the barh() function and run the cell again:
# Create the plot
plt.barh(y_axis, x_axis)
# To invert the y-axis to have the months in ascending order, use the gca() method.
# The gca() method means "get current axes." We can chain the gca() method to the
# invert_yaxis() method by using gca().invert_yaxis(), as shown here:
# Create the plot.
plt.barh(x_axis, y_axis)
plt.gca().invert_yaxis()
# Skill Drill
# Create the plot.
plt.barh(x_axis, y_axis, color="magenta", label='Boston')
# Create labels for the x and y axes.
plt.xlabel("Date")
plt.ylabel("Fare($)")
# Create a title.
plt.title("PyBer Fare by Month")
# Add the legend.
plt.legend()
```
# 5.1.6 Create Bar Charts Using the Object-Oriented Approach
```
# Set the x-axis to a list of strings for each month.
x_axis = ["Jan", "Feb", "Mar", "April", "May", "June", "July", "Aug", "Sept", "Oct", "Nov", "Dec"]
# Set the y-axis to a list of floats as the total fare in US dollars accumulated for each month.
y_axis = [10.02, 23.24, 39.20, 35.42, 32.34, 27.04, 43.82, 10.56, 11.85, 27.90, 20.71, 20.09]
# Create the plot with ax.plt()
fig, ax = plt.subplots()
ax.bar(x_axis, y_axis)
# Skill Drill
# Create the plot.
plt.barh(x_axis, y_axis, color="cyan", label='Chicago')
# Create labels for the x and y axes.
plt.xlabel("Date")
plt.ylabel("Fare($)")
# Create a title.
plt.title("PyBer Fare by Month")
# Add the legend.
plt.legend()
plt.gca().invert_yaxis()
```
# 5.1.7 Create Scatter Plots and Bubble Charts
```
import matplotlib.pyplot as plt
x = [50, 42, 39, 34, 32, 48, 22, 17, 8, 13]
y = [17, 19, 21, 22, 24, 23, 34, 31, 43, 35]
plt.scatter(x,y, c='green', marker = 'x', s = 100)
plt.xlabel ("total Number of Rides (Per City)")
plt.ylabel ('Total Fare ($)')
plt.show()
# change s for varied marker size COOL
plt.scatter(x,y, c='green', marker = 'x', s = x)
plt.xlabel ("total Number of Rides (Per City)")
plt.ylabel ('Total Fare ($)')
plt.show()
# Set the x-axis to a list of strings for each month.
x_axis = ["Jan", "Feb", "Mar", "April", "May", "June", "July", "Aug", "Sept", "Oct", "Nov", "Dec"]
# Set the y-axis to a list of floats as the total fare in US dollars accumulated for each month.
y_axis = [10.02, 23.24, 39.20, 35.42, 32.34, 27.04, 43.82, 10.56, 11.85, 27.90, 20.71, 20.09]
plt.plot(x_axis, y_axis, 'o')
plt.scatter(x_axis, y_axis)
# skill drill
plt.scatter(y_axis,x_axis,c='red', label = 'Chicago')
plt.xlabel ("Fare ($)")
plt.ylabel ('Date')
plt.legend ()
plt.gca().invert_yaxis()
plt.show()
```
# bubble charts
```
plt.scatter(x_axis, y_axis, s=y_axis)
# multiply each data point in the y-axis by 3
y_axis_larger = []
for data in y_axis:
y_axis_larger.append(data*3)
plt.scatter(x_axis, y_axis, s=y_axis_larger)
```
# You can use list comprehension to replace many for and while loops.
```
# new_list = [expression for item in list if conditional] e.g., [i * 3 for i in y_axis])
plt.scatter(x_axis, y_axis, s = [i * 3 for i in y_axis])
```
# Create a Scatter Plot Using the Object-Oriented Interface
```
fig, ax = plt.subplots()
ax.scatter(x_axis, y_axis)
fig, ax = plt.subplots()
ax.scatter(x_axis, y_axis, s=y_axis)
# skill drill
fig, ax = plt.subplots()
ax.scatter(y_axis, x_axis, c='skyblue', alpha =0.2, s=[i * 5 for i in y_axis], label = 'Boston', edgecolors='black', linewidth=2)
plt.legend()
plt.xlim(0, 45)
plt.title("Bullshit")
plt.gca().invert_yaxis()
plt.show()
```
# 5.1.8 Create Pie Charts
```
plt.pie(y_axis, labels=x_axis)
plt.show()
plt.subplots(figsize=(8, 8))
explode_values = (0, 0, 0, 0, 0, 0, 0.2, 0, 0, 0, 0, 0)
plt.pie(y_axis, explode=explode_values, labels=x_axis, autopct='%.1f%%')
plt.show()
# Assign 12 colors, one for each month.
colors = ["slateblue", "magenta", "lightblue", "green", "yellowgreen", "greenyellow", "yellow", "orange", "gold", "indianred", "tomato", "mistyrose"]
explode_values = (0, 0, 0, 0, 0, 0, 0.2, 0, 0, 0, 0, 0)
plt.subplots(figsize=(8, 8))
plt.pie(y_axis,
explode=explode_values,
colors=colors,
labels=x_axis,
autopct='%.1f%%')
plt.show()
```
# Create a Pie Chart Using the Object-Oriented Interface
```
fig, ax = plt.subplots()
ax.pie(y_axis,labels=x_axis)
plt.show()
# skill drill
colors = ["slateblue", "magenta", "lightblue", "green", "yellowgreen", "greenyellow", "yellow", "orange", "gold", "indianred", "tomato", "mistyrose"]
explode_values = (0, 0, 0.2, 0, 0, 0, 0.2, 0, 0, 0, 0, 0)
fig, ax = plt.subplots(figsize=(8, 8))
ax.pie(y_axis,labels=x_axis,
explode=explode_values,
colors=colors,
shadow=True,
counterclock=False,
startangle=90,
autopct='%.1f%%')
plt.show()
```
|
github_jupyter
|
%matplotlib inline
# Import dependencies
import matplotlib.pyplot as plt
# Set the x-axis to a list of strings for each month.
x_axis = ["Jan", "Feb", "Mar", "April", "May", "June", "July", "Aug", "Sept", "Oct", "Nov", "Dec"]
# Set the y-axis to a list of floats as the total fare in US dollars accumulated for each month.
y_axis = [10.02, 23.24, 39.20, 35.42, 32.34, 27.04, 43.82, 10.56, 11.85, 27.90, 20.71, 20.09]
# Create the plot
plt.plot(x_axis, y_axis)
# Create the plot with ax.plt() OBJECT ORIENTED
fig, ax = plt.subplots() #returns a tuple that contains a figure and axes object(s).
ax.plot(x_axis, y_axis)
# Create the plot with ax.plt()
fig = plt.figure()
ax = fig.add_subplot()
# Create the plot with ax.plt()
fig = plt.figure()
ax = fig.add_subplot()
ax.plot(x_axis, y_axis)
# Create the plot with ax.plt()
ax = plt.axes()
ax.plot(x_axis, y_axis)
# Create the plot.
plt.plot(x_axis, y_axis)
plt.show()
# 5.1.4 Annotate Charts
# Create the plot and add a label for the legend.
plt.plot(x_axis, y_axis, label='Boston')
# Create labels for the x and y axes.
plt.xlabel("Date")
plt.ylabel("Fare($)")
# Set the y limit between 0 and 45.
plt.ylim(0, 45)
# Create a title.
plt.title("PyBer Fare by Month")
# Add the legend.
plt.legend()
# Create the plot.
plt.plot(x_axis, y_axis, marker="*", color="blue", linewidth=2, label='Boston')
# Create labels for the x and y axes.
plt.xlabel("Date")
plt.ylabel("Fare($)")
# Set the y limit between 0 and 45.
plt.ylim(0, 45)
# Create a title.
plt.title("PyBer Fare by Month")
# Add a grid.
plt.grid()
# Add the legend.
plt.legend()
# Create the plot.
plt.plot(x_axis, y_axis, marker="d", color="green", linewidth=2, label='Boston')
# Create labels for the x and y axes.
plt.xlabel("Date")
plt.ylabel("Fare($)")
# Set the y limit between 0 and 45.
plt.ylim(0, 45)
# Create a title.
plt.title("PyBer Fare by Month")
# Add a grid.
plt.grid()
# Add the legend.
plt.legend()
# Set the x-axis to a list of strings for each month.
x_axis = ["Jan", "Feb", "Mar", "April", "May", "June", "July", "Aug", "Sept", "Oct", "Nov", "Dec"]
# Set the y-axis to a list of floats as the total fare in US dollars accumulated for each month.
y_axis = [10.02, 23.24, 39.20, 35.42, 32.34, 27.04, 43.82, 10.56, 11.85, 27.90, 20.71, 20.09]
# Create the plot
plt.bar(x_axis, y_axis)
# Create the plot.
plt.bar(x_axis, y_axis, color="green", label='Boston')
# Create labels for the x and y axes.
plt.xlabel("Date")
plt.ylabel("Fare($)")
# Create a title.
plt.title("PyBer Fare by Month")
# Add the legend.
plt.legend()
# Create the plot
plt.barh(x_axis, y_axis)
# If you want the data on opposite axes, switch the arguments in the barh() function and run the cell again:
# Create the plot
plt.barh(y_axis, x_axis)
# To invert the y-axis to have the months in ascending order, use the gca() method.
# The gca() method means "get current axes." We can chain the gca() method to the
# invert_yaxis() method by using gca().invert_yaxis(), as shown here:
# Create the plot.
plt.barh(x_axis, y_axis)
plt.gca().invert_yaxis()
# Skill Drill
# Create the plot.
plt.barh(x_axis, y_axis, color="magenta", label='Boston')
# Create labels for the x and y axes.
plt.xlabel("Date")
plt.ylabel("Fare($)")
# Create a title.
plt.title("PyBer Fare by Month")
# Add the legend.
plt.legend()
# Set the x-axis to a list of strings for each month.
x_axis = ["Jan", "Feb", "Mar", "April", "May", "June", "July", "Aug", "Sept", "Oct", "Nov", "Dec"]
# Set the y-axis to a list of floats as the total fare in US dollars accumulated for each month.
y_axis = [10.02, 23.24, 39.20, 35.42, 32.34, 27.04, 43.82, 10.56, 11.85, 27.90, 20.71, 20.09]
# Create the plot with ax.plt()
fig, ax = plt.subplots()
ax.bar(x_axis, y_axis)
# Skill Drill
# Create the plot.
plt.barh(x_axis, y_axis, color="cyan", label='Chicago')
# Create labels for the x and y axes.
plt.xlabel("Date")
plt.ylabel("Fare($)")
# Create a title.
plt.title("PyBer Fare by Month")
# Add the legend.
plt.legend()
plt.gca().invert_yaxis()
import matplotlib.pyplot as plt
x = [50, 42, 39, 34, 32, 48, 22, 17, 8, 13]
y = [17, 19, 21, 22, 24, 23, 34, 31, 43, 35]
plt.scatter(x,y, c='green', marker = 'x', s = 100)
plt.xlabel ("total Number of Rides (Per City)")
plt.ylabel ('Total Fare ($)')
plt.show()
# change s for varied marker size COOL
plt.scatter(x,y, c='green', marker = 'x', s = x)
plt.xlabel ("total Number of Rides (Per City)")
plt.ylabel ('Total Fare ($)')
plt.show()
# Set the x-axis to a list of strings for each month.
x_axis = ["Jan", "Feb", "Mar", "April", "May", "June", "July", "Aug", "Sept", "Oct", "Nov", "Dec"]
# Set the y-axis to a list of floats as the total fare in US dollars accumulated for each month.
y_axis = [10.02, 23.24, 39.20, 35.42, 32.34, 27.04, 43.82, 10.56, 11.85, 27.90, 20.71, 20.09]
plt.plot(x_axis, y_axis, 'o')
plt.scatter(x_axis, y_axis)
# skill drill
plt.scatter(y_axis,x_axis,c='red', label = 'Chicago')
plt.xlabel ("Fare ($)")
plt.ylabel ('Date')
plt.legend ()
plt.gca().invert_yaxis()
plt.show()
plt.scatter(x_axis, y_axis, s=y_axis)
# multiply each data point in the y-axis by 3
y_axis_larger = []
for data in y_axis:
y_axis_larger.append(data*3)
plt.scatter(x_axis, y_axis, s=y_axis_larger)
# new_list = [expression for item in list if conditional] e.g., [i * 3 for i in y_axis])
plt.scatter(x_axis, y_axis, s = [i * 3 for i in y_axis])
fig, ax = plt.subplots()
ax.scatter(x_axis, y_axis)
fig, ax = plt.subplots()
ax.scatter(x_axis, y_axis, s=y_axis)
# skill drill
fig, ax = plt.subplots()
ax.scatter(y_axis, x_axis, c='skyblue', alpha =0.2, s=[i * 5 for i in y_axis], label = 'Boston', edgecolors='black', linewidth=2)
plt.legend()
plt.xlim(0, 45)
plt.title("Bullshit")
plt.gca().invert_yaxis()
plt.show()
plt.pie(y_axis, labels=x_axis)
plt.show()
plt.subplots(figsize=(8, 8))
explode_values = (0, 0, 0, 0, 0, 0, 0.2, 0, 0, 0, 0, 0)
plt.pie(y_axis, explode=explode_values, labels=x_axis, autopct='%.1f%%')
plt.show()
# Assign 12 colors, one for each month.
colors = ["slateblue", "magenta", "lightblue", "green", "yellowgreen", "greenyellow", "yellow", "orange", "gold", "indianred", "tomato", "mistyrose"]
explode_values = (0, 0, 0, 0, 0, 0, 0.2, 0, 0, 0, 0, 0)
plt.subplots(figsize=(8, 8))
plt.pie(y_axis,
explode=explode_values,
colors=colors,
labels=x_axis,
autopct='%.1f%%')
plt.show()
fig, ax = plt.subplots()
ax.pie(y_axis,labels=x_axis)
plt.show()
# skill drill
colors = ["slateblue", "magenta", "lightblue", "green", "yellowgreen", "greenyellow", "yellow", "orange", "gold", "indianred", "tomato", "mistyrose"]
explode_values = (0, 0, 0.2, 0, 0, 0, 0.2, 0, 0, 0, 0, 0)
fig, ax = plt.subplots(figsize=(8, 8))
ax.pie(y_axis,labels=x_axis,
explode=explode_values,
colors=colors,
shadow=True,
counterclock=False,
startangle=90,
autopct='%.1f%%')
plt.show()
| 0.776369 | 0.965771 |
## Application: Exploring Hand-written Digits
To demonstrate these principles on a more interesting problem, let's consider one piece of the optical character recognition problem: the identification of hand-written digits.
In the wild, this problem involves both locating and identifying characters in an image. Here we'll take a shortcut and use Scikit-Learn's set of pre-formatted digits, which is built into the library.
Loading our usual libraries:
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(rc={'figure.figsize':(10,8)}) # Figure size
```
### Loading and visualizing the digits data
We'll use Scikit-Learn's data access interface and take a look at this data:
```
from sklearn.datasets import load_digits
digits = load_digits()
digits.images.shape
```
The images data is a three-dimensional array: 1,797 samples each consisting of an 8 × 8 grid of pixels.
Let's visualize the first hundred of these:
```
import matplotlib.pyplot as plt
fig, axes = plt.subplots(10, 10, figsize=(8, 8),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
ax.imshow(digits.images[i], cmap='binary', interpolation='nearest')
ax.text(0.05, 0.05, str(digits.target[i]),
transform=ax.transAxes, color='green')
```
In order to work with this data within Scikit-Learn, we need a two-dimensional, ``[n_samples, n_features]`` representation.
We can accomplish this by treating each pixel in the image as a feature: that is, by flattening out the pixel arrays so that we have a length-64 array of pixel values representing each digit.
Additionally, we need the target array, which gives the previously determined label for each digit.
These two quantities are built into the digits dataset under the ``data`` and ``target`` attributes, respectively:
```
X = digits.data
X.shape
y = digits.target
y.shape
```
We see here that there are 1,797 samples and 64 features.
### Unsupervised learning: Dimensionality reduction
We'd like to visualize our points within the 64-dimensional parameter space, but it's difficult to effectively visualize points in such a high-dimensional space.
Instead we'll reduce the dimensions to 2, using an unsupervised method.
Here, we'll make use of a manifold learning algorithm called *Isomap* and transform the data to two dimensions:
```
from sklearn.manifold import Isomap
iso = Isomap(n_components=2)
iso.fit(digits.data)
data_projected = iso.transform(digits.data)
data_projected.shape
```
We see that the projected data is now two-dimensional.
Let's plot this data to see if we can learn anything from its structure:
```
pkmn_type_colors = ['#FF3333', '#FF8333', '#FFC733', '#F9FF33', '#B8FF33', '#33FFA5', '#33FFF0', '#3371FF', '#8333FF', '#FF33E6']
sns.scatterplot(x=data_projected[:, 0], y=data_projected[:, 1], hue=digits.target, palette=pkmn_type_colors);
```
This plot gives us some good intuition into how well various numbers are separated in the larger 64-dimensional space. For example, zeros (in red) and ones (in orange) have very little overlap in parameter space.
Intuitively, this makes sense: a zero is empty in the middle of the image, while a one will generally have ink in the middle.
On the other hand, there seems to be a more or less continuous spectrum between ones and fours: we can understand this by realizing that some people draw ones with "hats" on them, which cause them to look similar to fours.
Overall, however, the different groups appear to be fairly well separated in the parameter space: this tells us that even a very straightforward supervised classification algorithm should perform suitably on this data.
Let's give it a try.
### Classification on digits
Let's apply a classification algorithm to the digits.
As with the Iris data previously, we will split the data into a training and testing set, and fit a Gaussian naive Bayes model:
```
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=833)
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
model.fit(Xtrain, ytrain)
y_model = model.predict(Xtest)
```
Now that we have predicted our model, we can gauge its accuracy by comparing the true values of the test set to the predictions:
```
from sklearn.metrics import accuracy_score
accuracy_score(ytest, y_model)
```
With even this extremely simple model, we find about 80% accuracy for classification of the digits!
However, this single number doesn't tell us *where* we've gone wrong—one nice way to do this is to use the *confusion matrix*, which we can compute with Scikit-Learn and plot with Seaborn:
```
from sklearn.metrics import confusion_matrix
mat = confusion_matrix(ytest, y_model)
sns.heatmap(mat, square=True, annot=True, cbar=False)
plt.xlabel('predicted value')
plt.ylabel('true value');
```
This shows us where the mis-labeled points tend to be: for example, a large number of twos here are mis-classified as either ones or eights.
Another way to gain intuition into the characteristics of the model is to plot the inputs again, with their predicted labels.
We'll use green for correct labels, and red for incorrect labels:
```
fig, axes = plt.subplots(10, 10, figsize=(8, 8),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
test_images = Xtest.reshape(-1, 8, 8)
for i, ax in enumerate(axes.flat):
ax.imshow(test_images[i], cmap='binary', interpolation='nearest')
ax.text(0.05, 0.05, str(y_model[i]),
transform=ax.transAxes,
color='green' if (ytest[i] == y_model[i]) else 'red')
```
Examining this subset of the data, we can gain insight into where the algorithm might not be performing optimally.
To go beyond our 80% classification rate, we might move to a more sophisticated algorithm such as convolutional neural networks (CNNs), random forests, or another classification approach.
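For example, here is a minimal sketch of the random-forest option, reusing the train/test split above (the `rf_model` name and the `n_estimators=100` setting are illustrative choices, and the resulting accuracy will depend on the split):
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Fit a random forest on the same training split used for the naive Bayes model.
rf_model = RandomForestClassifier(n_estimators=100, random_state=833)
rf_model.fit(Xtrain, ytrain)
y_rf = rf_model.predict(Xtest)
print(accuracy_score(ytest, y_rf))
```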
|
github_jupyter
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(rc={'figure.figsize':(10,8)}) # Figure size
from sklearn.datasets import load_digits
digits = load_digits()
digits.images.shape
import matplotlib.pyplot as plt
fig, axes = plt.subplots(10, 10, figsize=(8, 8),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
ax.imshow(digits.images[i], cmap='binary', interpolation='nearest')
ax.text(0.05, 0.05, str(digits.target[i]),
transform=ax.transAxes, color='green')
X = digits.data
X.shape
y = digits.target
y.shape
from sklearn.manifold import Isomap
iso = Isomap(n_components=2)
iso.fit(digits.data)
data_projected = iso.transform(digits.data)
data_projected.shape
pkmn_type_colors = ['#FF3333', '#FF8333', '#FFC733', '#F9FF33', '#B8FF33', '#33FFA5', '#33FFF0', '#3371FF', '#8333FF', '#FF33E6']
sns.scatterplot(x=data_projected[:, 0], y=data_projected[:, 1], hue=digits.target, palette=pkmn_type_colors);
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=833)
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
model.fit(Xtrain, ytrain)
y_model = model.predict(Xtest)
from sklearn.metrics import accuracy_score
accuracy_score(ytest, y_model)
from sklearn.metrics import confusion_matrix
mat = confusion_matrix(ytest, y_model)
sns.heatmap(mat, square=True, annot=True, cbar=False)
plt.xlabel('predicted value')
plt.ylabel('true value');
fig, axes = plt.subplots(10, 10, figsize=(8, 8),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
test_images = Xtest.reshape(-1, 8, 8)
for i, ax in enumerate(axes.flat):
ax.imshow(test_images[i], cmap='binary', interpolation='nearest')
ax.text(0.05, 0.05, str(y_model[i]),
transform=ax.transAxes,
color='green' if (ytest[i] == y_model[i]) else 'red')
| 0.634996 | 0.99259 |
```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import deepchem as dc
import tensorflow as tf
from deepchem.models.tensorgraph.layers import Layer, Input, Reshape, Flatten
from deepchem.models.tensorgraph.layers import Dense, SoftMaxCrossEntropy, ReduceMean, SoftMax
train = dc.data.NumpyDataset(mnist.train.images, mnist.train.labels)
valid = dc.data.NumpyDataset(mnist.validation.images, mnist.validation.labels)
tg = dc.models.TensorGraph(tensorboard=True, model_dir='/tmp/mnist')
feature = Input(shape=(None, 784))
tg.add_layer(feature)
tg.add_feature(feature)
# Images are square 28x28 (batch, height, width, channel)
make_image = Reshape(shape=(-1, 28, 28, 1))
tg.add_layer(make_image, parents=[feature])
class Conv2d(Layer):
def __init__(self, num_outputs, kernel_size=5, **kwargs):
self.num_outputs = num_outputs
self.kernel_size = kernel_size
super(Conv2d, self).__init__(**kwargs)
def __call__(self, *parents):
parent_tensor = parents[0].out_tensor
out_tensor = tf.contrib.layers.conv2d(parent_tensor,
num_outputs=self.num_outputs,
kernel_size = self.kernel_size,
padding="SAME",
activation_fn=tf.nn.relu,
normalizer_fn=tf.contrib.layers.batch_norm)
self.out_tensor = tf.nn.max_pool(out_tensor,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
return self.out_tensor
conv2d_1 = Conv2d(num_outputs=32)
tg.add_layer(conv2d_1, parents=[make_image])
conv2d_2 = Conv2d(num_outputs=64)
tg.add_layer(conv2d_2, parents=[conv2d_1])
flatten = Flatten()
tg.add_layer(flatten, parents=[conv2d_2])
dense1 = Dense(out_channels=1024, activation_fn=tf.nn.relu)
tg.add_layer(dense1, parents=[flatten])
dense2 = Dense(out_channels=10)
tg.add_layer(dense2, parents=[dense1])
label = Input(shape=(None, 10))
tg.add_layer(label, parents=list())
tg.add_label(label)
smce = SoftMaxCrossEntropy()
tg.add_layer(smce, parents=[label, dense2])
loss = ReduceMean()
tg.add_layer(loss, parents=[smce])
tg.set_loss(loss)
output = SoftMax()
tg.add_layer(output, parents=[dense2])
tg.add_output(output)
tg.fit(train, nb_epoch=10)
tg.save()
from sklearn.metrics import roc_curve, auc
import numpy as np
print("Validation")
prediction = np.squeeze(tg.predict_on_batch(valid.X))
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(10):
fpr[i], tpr[i], thresh = roc_curve(valid.y[:, i], prediction[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
print("class %s:auc=%s" % (i, roc_auc[i]))
```
|
github_jupyter
|
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import deepchem as dc
import tensorflow as tf
from deepchem.models.tensorgraph.layers import Layer, Input, Reshape, Flatten
from deepchem.models.tensorgraph.layers import Dense, SoftMaxCrossEntropy, ReduceMean, SoftMax
train = dc.data.NumpyDataset(mnist.train.images, mnist.train.labels)
valid = dc.data.NumpyDataset(mnist.validation.images, mnist.validation.labels)
tg = dc.models.TensorGraph(tensorboard=True, model_dir='/tmp/mnist')
feature = Input(shape=(None, 784))
tg.add_layer(feature)
tg.add_feature(feature)
# Images are square 28x28 (batch, height, width, channel)
make_image = Reshape(shape=(-1, 28, 28, 1))
tg.add_layer(make_image, parents=[feature])
class Conv2d(Layer):
def __init__(self, num_outputs, kernel_size=5, **kwargs):
self.num_outputs = num_outputs
self.kernel_size = kernel_size
super(Conv2d, self).__init__(**kwargs)
def __call__(self, *parents):
parent_tensor = parents[0].out_tensor
out_tensor = tf.contrib.layers.conv2d(parent_tensor,
num_outputs=self.num_outputs,
kernel_size = self.kernel_size,
padding="SAME",
activation_fn=tf.nn.relu,
normalizer_fn=tf.contrib.layers.batch_norm)
self.out_tensor = tf.nn.max_pool(out_tensor,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
return self.out_tensor
conv2d_1 = Conv2d(num_outputs=32)
tg.add_layer(conv2d_1, parents=[make_image])
conv2d_2 = Conv2d(num_outputs=64)
tg.add_layer(conv2d_2, parents=[conv2d_1])
flatten = Flatten()
tg.add_layer(flatten, parents=[conv2d_2])
dense1 = Dense(out_channels=1024, activation_fn=tf.nn.relu)
tg.add_layer(dense1, parents=[flatten])
dense2 = Dense(out_channels=10)
tg.add_layer(dense2, parents=[dense1])
label = Input(shape=(None, 10))
tg.add_layer(label, parents=list())
tg.add_label(label)
smce = SoftMaxCrossEntropy()
tg.add_layer(smce, parents=[label, dense2])
loss = ReduceMean()
tg.add_layer(loss, parents=[smce])
tg.set_loss(loss)
output = SoftMax()
tg.add_layer(output, parents=[dense2])
tg.add_output(output)
tg.fit(train, nb_epoch=10)
tg.save()
from sklearn.metrics import roc_curve, auc
import numpy as np
print("Validation")
prediction = np.squeeze(tg.predict_on_batch(valid.X))
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(10):
fpr[i], tpr[i], thresh = roc_curve(valid.y[:, i], prediction[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
print("class %s:auc=%s" % (i, roc_auc[i]))
| 0.896608 | 0.811003 |
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import DBSCAN
%matplotlib inline
```
## Read data
```
#data = np.array([[1, 2], [2, 2], [2, 3],[8, 7], [8, 8], [25, 80]])
data = np.genfromtxt("./data/sample_dataset.csv", delimiter=',')
```
## Plot the data
```
plt.figure(figsize=(15, 8))
plt.scatter(data[:, 0], data[:, 1])
```
## DBSCAN
- [sklearn.cluster.DBSCAN](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html)
```
dbscan = DBSCAN(eps=0.9, min_samples=4).fit(data)
clusters_labels = dbscan.labels_
plt.figure(figsize=(15, 8))
plt.scatter(data[:,0], data[:,1], s=40, c=clusters_labels)
plt.axis('equal')
plt.title('DBSCAN')
```
## [Option] Implementing the `DBSCAN` algorithm
```
# Compute the Euclidean distance between two vectors
def cal_euclidean_distance(x1, x2):
return np.sqrt(np.sum(np.square(x1 - x2), axis=0))
# Get the ε-neighborhood of a point (store indices)
def get_neighbor_points(cur_point, data, eps=0.5):
all_points = []
for i in range(len(data)):
if cal_euclidean_distance(cur_point, data[i]) <= eps:
all_points.append(i)
return all_points
def K_DBSCAN(data, eps=0.5, min_samples=5):
NOISE = 0
UNASSIGNED = 0
core=-1
edge=-2
# Find all core points, border points and noise
point_neignbors = []
point_label = [UNASSIGNED] * len(data)
core_points = []
noncore_points = []
for i in range(len(data)):
points = get_neighbor_points(data[i], data, eps)
point_neignbors.append(points)
if len(points) >= min_samples:
core_points.append(i)
point_label[i] = core
else:
noncore_points.append(i)
for point in noncore_points:
for neighbor in point_neignbors[point]:
if neighbor in core_points:
point_label[point] = edge
break
# Start assigning point to cluster
clusters = 1
visited = set()
while core_points:
## Randomly pick a core point
index = np.random.randint(0, len(core_points))
core_point = core_points[index]
## Using a Queue to put core point and find all neighbor point
queue = [core_point]
core_points.remove(core_point)
visited.add(core_point)
while queue:
cur_point = queue.pop(0)
point_label[cur_point] = clusters
for x in point_neignbors[cur_point]:
if x not in visited:
visited.add(x)
if point_label[x] == core:
point_label[x] = clusters
queue.append(x)
core_points.remove(x)
elif point_label[x] == edge :
point_label[x] = clusters
clusters+=1 # move to next cluster
return point_label, clusters
point_label, clusters = K_DBSCAN(data, eps=0.9, min_samples=4)
plt.figure(figsize=(15, 8))
plt.scatter(data[:,0], data[:,1], s=40, c=point_label)
plt.axis('equal')
plt.title('DBSCAN')
```
|
github_jupyter
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import DBSCAN
%matplotlib inline
#data = np.array([[1, 2], [2, 2], [2, 3],[8, 7], [8, 8], [25, 80]])
data = np.genfromtxt("./data/sample_dataset.csv", delimiter=',')
plt.figure(figsize=(15, 8))
plt.scatter(data[:, 0], data[:, 1])
dbscan = DBSCAN(eps=0.9, min_samples=4).fit(data)
clusters_labels = dbscan.labels_
plt.figure(figsize=(15, 8))
plt.scatter(data[:,0], data[:,1], s=40, c=clusters_labels)
plt.axis('equal')
plt.title('DBSCAN')
# Compute the Euclidean distance between two vectors
def cal_euclidean_distance(x1, x2):
return np.sqrt(np.sum(np.square(x1 - x2), axis=0))
# Get the ε-neighborhood of a point (store indices)
def get_neighbor_points(cur_point, data, eps=0.5):
all_points = []
for i in range(len(data)):
if cal_euclidean_distance(cur_point, data[i]) <= eps:
all_points.append(i)
return all_points
def K_DBSCAN(data, eps=0.5, min_samples=5):
NOISE = 0
UNASSIGNED = 0
core=-1
edge=-2
# Find all core points, border points and noise
point_neignbors = []
point_label = [UNASSIGNED] * len(data)
core_points = []
noncore_points = []
for i in range(len(data)):
points = get_neighbor_points(data[i], data, eps)
point_neignbors.append(points)
if len(points) >= min_samples:
core_points.append(i)
point_label[i] = core
else:
noncore_points.append(i)
for point in noncore_points:
for neighbor in point_neignbors[point]:
if neighbor in core_points:
point_label[point] = edge
break
# Start assigning point to cluster
clusters = 1
visited = set()
while core_points:
## Randomly pick a core point
index = np.random.randint(0, len(core_points))
core_point = core_points[index]
## Using a Queue to put core point and find all neighbor point
queue = [core_point]
core_points.remove(core_point)
visited.add(core_point)
while queue:
cur_point = queue.pop(0)
point_label[cur_point] = clusters
for x in point_neignbors[cur_point]:
if x not in visited:
visited.add(x)
if point_label[x] == core:
point_label[x] = clusters
queue.append(x)
core_points.remove(x)
elif point_label[x] == edge :
point_label[x] = clusters
clusters+=1 # move to next cluster
return point_label, clusters
point_label, clusters = K_DBSCAN(data, eps=0.9, min_samples=4)
plt.figure(figsize=(15, 8))
plt.scatter(data[:,0], data[:,1], s=40, c=point_label)
plt.axis('equal')
plt.title('DBSCAN')
| 0.340705 | 0.873161 |
<a href="https://colab.research.google.com/github/reevutrprog/TRPROG/blob/master/exercise2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Create the most suitable data structures to the data in the next cell.
a = {}# This is an empty dictionary
a[1104]="Universidade do Porto - Faculdade de Economia"
a[904]="Universidade Nova de Lisboa - Faculdade de Economia"
a[1000]="Universidade do Minho"
a[6800]="ISCTE - Instituto Universitário de Lisboa"
a[300]="Universidade de Aveiro"
a[503]="Universidade de Coimbra - Faculdade de Economia"
a[1517]="Universidade de Lisboa - Instituto Superior de Economia e Gestão"
a[1202]="Universidade de Trás-os-Montes e Alto Douro - Escola de Ciências Humanas e Sociais"
a[3117]="Instituto Politécnico de Lisboa - Instituto Superior de Contabilidade e Administra"
a[400]="Universidade da Beira Interior"
a[604]="Universidade de Évora - Escola de Ciências Sociais"
a[3163]="Instituto Politécnico de Viana do Castelo - Escola Superior de Tecnologia e Gestão"
a[1308]="Universidade da Madeira - Faculdade de Ciências Sociais"
a[3102]="Instituto Politécnico de Leiria - Escola Superior de Tecnologia e Gestão"
a[3065]="Instituto Politécnico de Coimbra - Escola Superior de Tecnologia e Gestão de Olive"
a[3082]="Universidade do Algarve - Escola Superior de Gestão, Hotelaria e Turismo"
a[170]="Universidade dos Açores - Faculdade de Economia e Gestão"
a[3043]="Instituto Politécnico de Bragança - Escola Superior de Tecnologia e de Gestão de B"
a[3087]="Universidade do Algarve - Escola Superior de Gestão, Hotelaria e Turismo (Portimão"
a[3092]="Instituto Politécnico da Guarda - Escola Superior de Tecnologia e Gestão"
a[3122]="Instituto Politécnico de Portalegre - Escola Superior de Tecnologia e Gestão"
a[3054]="Instituto Politécnico de Castelo Branco - Escola Superior de Gestão de Idanha-a-No"
grade = {} # Empty Dictionary
grade[1104]=185.0
grade[904]=182.0
grade[1000]=176.4
grade[6800]=173.5
grade[300]=171.8
grade[503]=169.0
grade[1517]=167.5
grade[1202]=157.8
grade[3117]=156.1
grade[400]=152.0
grade[604]=146.4
grade[3163]=144.2
grade[1308]=143.9
grade[3102]=140.0
grade[3065]=138.2
grade[3082]=133.3
grade[170]=133.2
grade[3043]=125.4
grade[3087]=123.5
grade[3092]=112.6
grade[3122]=109.7
a
grade
result = {}
for key in (a.keys() | grade.keys()):
if key in a: result.setdefault(key, []).append(a[key])
if key in grade: result.setdefault(key, []).append(grade[key])
print(result)
#How many schools have the program of Gestao?
result1 = {}
for key,values in a.items():
if 'Gestão' in values:
result1.setdefault(key, []).append(a[key])
result1
len(result1)
#What is the average grade in gestão?
store_grade = []
for key in result1.keys():
if key in grade.keys():
store_grade.append(grade[key])
store_grade
average = sum(store_grade)/len(store_grade)
average
#what is the minimum grade in gestao?
min(store_grade)
#what is the maximum grade in gestao?
max(store_grade)
```
|
github_jupyter
|
# Create the most suitable data structures to the data in the next cell.
a = {}# This is an empty dictionary
a[1104]="Universidade do Porto - Faculdade de Economia"
a[904]="Universidade Nova de Lisboa - Faculdade de Economia"
a[1000]="Universidade do Minho"
a[6800]="ISCTE - Instituto Universitário de Lisboa"
a[300]="Universidade de Aveiro"
a[503]="Universidade de Coimbra - Faculdade de Economia"
a[1517]="Universidade de Lisboa - Instituto Superior de Economia e Gestão"
a[1202]="Universidade de Trás-os-Montes e Alto Douro - Escola de Ciências Humanas e Sociais"
a[3117]="Instituto Politécnico de Lisboa - Instituto Superior de Contabilidade e Administra"
a[400]="Universidade da Beira Interior"
a[604]="Universidade de Évora - Escola de Ciências Sociais"
a[3163]="Instituto Politécnico de Viana do Castelo - Escola Superior de Tecnologia e Gestão"
a[1308]="Universidade da Madeira - Faculdade de Ciências Sociais"
a[3102]="Instituto Politécnico de Leiria - Escola Superior de Tecnologia e Gestão"
a[3065]="Instituto Politécnico de Coimbra - Escola Superior de Tecnologia e Gestão de Olive"
a[3082]="Universidade do Algarve - Escola Superior de Gestão, Hotelaria e Turismo"
a[170]="Universidade dos Açores - Faculdade de Economia e Gestão"
a[3043]="Instituto Politécnico de Bragança - Escola Superior de Tecnologia e de Gestão de B"
a[3087]="Universidade do Algarve - Escola Superior de Gestão, Hotelaria e Turismo (Portimão"
a[3092]="Instituto Politécnico da Guarda - Escola Superior de Tecnologia e Gestão"
a[3122]="Instituto Politécnico de Portalegre - Escola Superior de Tecnologia e Gestão"
a[3054]="Instituto Politécnico de Castelo Branco - Escola Superior de Gestão de Idanha-a-No"
grade = {} # Empty Dictionary
grade[1104]=185.0
grade[904]=182.0
grade[1000]=176.4
grade[6800]=173.5
grade[300]=171.8
grade[503]=169.0
grade[1517]=167.5
grade[1202]=157.8
grade[3117]=156.1
grade[400]=152.0
grade[604]=146.4
grade[3163]=144.2
grade[1308]=143.9
grade[3102]=140.0
grade[3065]=138.2
grade[3082]=133.3
grade[170]=133.2
grade[3043]=125.4
grade[3087]=123.5
grade[3092]=112.6
grade[3122]=109.7
a
grade
result = {}
for key in (a.keys() | grade.keys()):
if key in a: result.setdefault(key, []).append(a[key])
if key in grade: result.setdefault(key, []).append(grade[key])
print(result)
#How many schools have the program of Gestao?
result1 = {}
for key,values in a.items():
if 'Gestão' in values:
result1.setdefault(key, []).append(a[key])
result1
len(result1)
#What is the average grade in gestão?
store_grade = []
for key in result1.keys():
if key in grade.keys():
store_grade.append(grade[key])
store_grade
average = sum(store_grade)/len(store_grade)
average
#what is the minimum grade in gestao?
min(store_grade)
#what is the maximum grade in gestao?
max(store_grade)
| 0.150403 | 0.960473 |
### In this task you will need to predict student performance based on different characteristics with K-NN
```
%pylab inline
%precision 6
import sklearn
import sklearn as skl
import pandas as pd
from pdb import set_trace as bp
np.set_printoptions(linewidth=140,edgeitems=10)
rcParams['figure.figsize'] = (8.0, 5.0)
# you need to download module common from github.com/Apogentus/common and add it to your PythonPath system variable
from common.classes.Struct import Struct
from common.visualize.colors import COLORS
from common.visualize.cross_distributions import cross_distributions, cross_distributions_classification, cross_distributions_regression
```
#### Load and prepare data
```
Z=pd.read_csv('data.csv')
Z.head()
```
#### Make structure with feature groups
```
F=Struct()
F.numeric = 'VisitedResources AnnouncementsView Discussion'.split()
F.categorical = 'Gender Nationality PlaceofBirth StageID GradeID SectionID Topic Semester \
Relation ParentAnsweringSurvey ParentSchoolSatisfaction StudentAbsenceDays'.split()
```
#### Random shuffling
```
random.seed(0)
inds = random.permutation(arange(len(Z)))
Z=Z.loc[inds]
Z.index = arange(len(Z))
Z.index
len(Z)
classes = unique(Z['Class'])
classes
Z['Class'].value_counts()
Z[F.numeric] = Z[F.numeric].astype(float)
```
#### Form array X to consist of only NUMERIC variables and array Y of outputs.
```
X = Z[F.numeric].values
Y = Z['Class'].values
X.dtype
X.shape
```
#### Using *cross_distributions_classification*, plot all pairwise distributions of the features and Y.
#### What relationships between features and Y can you see?
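Since `cross_distributions_classification` comes from the author's custom `common` module, a rough substitute using seaborn's `pairplot` (not the function the task asks for) could be:
```
import seaborn as sns

# Pairwise distributions of the numeric features, colored by the class label.
df = Z[F.numeric + ['Class']]
sns.pairplot(df, hue='Class');
```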
#### Project the *numeric* features onto 2 principal components and plot. Show Y values with color.
* Don't forget to normalize data passed to PCA with sklearn.preprocessing.StandardScaler
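A minimal sketch of this step (one possible solution, not the official one) might look like:
```
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Standardize the numeric features, then project onto the first 2 principal components.
X_std = StandardScaler().fit_transform(X)
X_2d = PCA(n_components=2).fit_transform(X_std)

# Scatter plot of the projection, one color per class.
for cls in classes:
    mask = (Y == cls)
    plt.scatter(X_2d[mask, 0], X_2d[mask, 1], label=cls, alpha=0.6)
plt.xlabel('PC1'); plt.ylabel('PC2'); plt.legend();
```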
#### Can classes be easily separated in this 2D space?
#### Plot variance explained by each of the components
#### Get the best cross-validation score by varying n_neighbors=[1, 3, 5, 7, 11, 16, 25, 51]
* Use sklearn.model_selection.GridSearchCV to find the classifier clf with best parameters
* Print clf.best\_score_, clf.best\_params_
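One way this search could be written (a sketch; `cv=5` and the default accuracy scoring are assumptions):
```
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Cross-validated search over the requested neighbor counts.
param_grid = {'n_neighbors': [1, 3, 5, 7, 11, 16, 25, 51]}
clf = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
clf.fit(X, Y)
print(clf.best_score_, clf.best_params_)
```
For the later steps, `weights` and `p` can simply be added as extra keys of `param_grid`.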
#### Calculate standard deviations of numeric features. Do they have equal spread?
#### Normalize numeric features with sklearn.preprocessing.StandardScaler
#### For NORMALIZED numeric features, get the best cross-validation score by varying n_neighbors=[1, 3, 5, 7, 11, 16, 25, 51]
* Use sklearn.model_selection.GridSearchCV to find the classifier clf with best parameters
* Print clf.best\_score_ clf.best\_params_
#### Try optimizing weights in ['uniform','distance']
#### Try additionally optimizing parameter p in [1, 2, 3, 4, 5, 7]
#### Report overall best model cross-validation score and best model parameters for numeric features
# Adding categorical features
#### Calculate *sklearn.metrics.normalized_mutual_info_score* to find the 3 categorical variables most connected with the output.
#### What are those features? Explain.
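A possible way to compute the scores (a sketch; the `nmi` dictionary is only for illustration):
```
from sklearn.metrics import normalized_mutual_info_score

# Normalized mutual information between each categorical feature and the class label.
nmi = {col: normalized_mutual_info_score(Z[col], Z['Class']) for col in F.categorical}
for col, score in sorted(nmi.items(), key=lambda kv: -kv[1])[:3]:
    print(col, round(score, 3))
```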
#### Add one-hot-encoded categorical features to Z
```
from common.feature_transformations import get_one_hot_encoding
F.categorical_one_hot = []
for col in F.categorical:
print('Making one-hot-encoding of %s'%col)
feature_one_hot = get_one_hot_encoding(Z[col])
Z = pd.concat([Z, feature_one_hot],axis=1)
F.categorical_one_hot += list(feature_one_hot.columns)
Z.head()
Z.columns
```
#### Add the single most significant categorical feature (in one-hot-encoded representation) to X
* don't forget to standardize numeric features to zero mean, unit variance
#### Optimize over n_neighbors, weights, and p. Did the quality improve?
#### Add several most significant categorical features (in one-hot-encoded representation) to X
#### Specify your own distance function for K-NN that calculates an $L_1$-normed distance. Numeric features should enter with weight 1, and one-hot encoded features with a custom constant weight w.
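A sketch of one way to define such a metric (assuming the first `len(F.numeric)` columns of X hold the standardized numeric features and the remaining columns hold the one-hot features; `weighted_l1`, `n_num`, and the example values of `w` and `n_neighbors` are illustrative):
```
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

n_num = len(F.numeric)  # the numeric columns are assumed to come first in X

def weighted_l1(a, b, w=1.0):
    # L1 distance: numeric features enter with weight 1, one-hot features with weight w.
    return np.abs(a[:n_num] - b[:n_num]).sum() + w * np.abs(a[n_num:] - b[n_num:]).sum()

# metric_params passes the extra keyword argument w to the callable metric.
knn = KNeighborsClassifier(n_neighbors=5, metric=weighted_l1, metric_params={'w': 0.5})
```
A callable metric forces a brute-force or ball-tree search, so this is slower than the built-in metrics, which is the usual trade-off for the extra flexibility.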
#### Fine-tune K-NN with your custom distance function by optimizing its major parameters and w. Report CV accuracy and best parameters.
### Free hunt: run whatever additional experiments you like to explore the data or improve the quality, and report interesting findings.
|
github_jupyter
|
%pylab inline
%precision 6
import sklearn
import sklearn as skl
import pandas as pd
from pdb import set_trace as bp
np.set_printoptions(linewidth=140,edgeitems=10)
rcParams['figure.figsize'] = (8.0, 5.0)
# you need to download module common from github.com/Apogentus/common and add it to your PythonPath system variable
from common.classes.Struct import Struct
from common.visualize.colors import COLORS
from common.visualize.cross_distributions import cross_distributions, cross_distributions_classification, cross_distributions_regression
Z=pd.read_csv('data.csv')
Z.head()
F=Struct()
F.numeric = 'VisitedResources AnnouncementsView Discussion'.split()
F.categorical = 'Gender Nationality PlaceofBirth StageID GradeID SectionID Topic Semester \
Relation ParentAnsweringSurvey ParentSchoolSatisfaction StudentAbsenceDays'.split()
random.seed(0)
inds = random.permutation(arange(len(Z)))
Z=Z.loc[inds]
Z.index = arange(len(Z))
Z.index
len(Z)
classes = unique(Z['Class'])
classes
Z['Class'].value_counts()
Z[F.numeric] = Z[F.numeric].astype(float)
X = Z[F.numeric].values
Y = Z['Class'].values
X.dtype
X.shape
from common.feature_transformations import get_one_hot_encoding
F.categorical_one_hot = []
for col in F.categorical:
print('Making one-hot-encoding of %s'%col)
feature_one_hot = get_one_hot_encoding(Z[col])
Z = pd.concat([Z, feature_one_hot],axis=1)
F.categorical_one_hot += list(feature_one_hot.columns)
Z.head()
Z.columns
| 0.353651 | 0.902481 |
```
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import requests
from tensorflow.python.framework import ops
ops.reset_default_graph()
sess = tf.Session()
housing_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data'
housing_header = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
cols_used = ['CRIM', 'INDUS', 'NOX', 'RM', 'AGE', 'DIS', 'TAX', 'PTRATIO', 'B', 'LSTAT']
num_features = len(cols_used)
housing_file = requests.get(housing_url)
housing_data = [[float(x) for x in y.split(' ') if len(x)>=1] for y in housing_file.text.split('\n') if len(y)>=1]
y_vals = np.transpose([np.array([y[13] for y in housing_data])])
x_vals = np.array([[x for i,x in enumerate(y) if housing_header[i] in cols_used] for y in housing_data])
x_vals = (x_vals - x_vals.min(0)) / x_vals.ptp(0)
weight_diagonal = x_vals.std(0)
weight_matrix = tf.cast(tf.diag(weight_diagonal), dtype=tf.float32)
train_indices = np.random.choice(len(x_vals), round(len(x_vals)*0.8), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]
k = 4
batch_size=len(x_vals_test)
x_data_train = tf.placeholder(shape=[None, num_features], dtype=tf.float32)
x_data_test = tf.placeholder(shape=[None, num_features], dtype=tf.float32)
y_target_train = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_target_test = tf.placeholder(shape=[None, 1], dtype=tf.float32)
subtraction_term = tf.subtract(x_data_train, tf.expand_dims(x_data_test,1))
first_product = tf.matmul(subtraction_term, tf.tile(tf.expand_dims(weight_matrix,0), [batch_size,1,1]))
second_product = tf.matmul(first_product, tf.transpose(subtraction_term, perm=[0,2,1]))
distance = tf.sqrt(tf.matrix_diag_part(second_product))
top_k_xvals, top_k_indices = tf.nn.top_k(tf.negative(distance), k=k)
x_sums = tf.expand_dims(tf.reduce_sum(top_k_xvals, 1),1)
x_sums_repeated = tf.matmul(x_sums,tf.ones([1, k], tf.float32))
x_val_weights = tf.expand_dims(tf.div(top_k_xvals,x_sums_repeated), 1)
top_k_yvals = tf.gather(y_target_train, top_k_indices)
prediction = tf.squeeze(tf.matmul(x_val_weights,top_k_yvals), squeeze_dims=[1])
mse = tf.div(tf.reduce_sum(tf.square(tf.subtract(prediction, y_target_test))), batch_size)
num_loops = int(np.ceil(len(x_vals_test)/batch_size))
for i in range(num_loops):
min_index = i*batch_size
max_index = min((i+1)*batch_size,len(x_vals_train))
x_batch = x_vals_test[min_index:max_index]
y_batch = y_vals_test[min_index:max_index]
predictions = sess.run(prediction, feed_dict={x_data_train: x_vals_train, x_data_test: x_batch,
y_target_train: y_vals_train, y_target_test: y_batch})
batch_mse = sess.run(mse, feed_dict={x_data_train: x_vals_train, x_data_test: x_batch,
y_target_train: y_vals_train, y_target_test: y_batch})
print('Batch #' + str(i+1) + ' MSE: ' + str(np.round(batch_mse,3)))
bins = np.linspace(5, 50, 45)
plt.hist(predictions, bins, alpha=0.5, label='Prediction')
plt.hist(y_batch, bins, alpha=0.5, label='Actual')
plt.title('Histogram of Predicted and Actual Values')
plt.xlabel('Med Home Value in $1,000s')
plt.ylabel('Frequency')
plt.legend(loc='upper right')
plt.show()
```
|
github_jupyter
|
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import requests
from tensorflow.python.framework import ops
ops.reset_default_graph()
sess = tf.Session()
housing_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data'
housing_header = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
cols_used = ['CRIM', 'INDUS', 'NOX', 'RM', 'AGE', 'DIS', 'TAX', 'PTRATIO', 'B', 'LSTAT']
num_features = len(cols_used)
housing_file = requests.get(housing_url)
housing_data = [[float(x) for x in y.split(' ') if len(x)>=1] for y in housing_file.text.split('\n') if len(y)>=1]
y_vals = np.transpose([np.array([y[13] for y in housing_data])])
x_vals = np.array([[x for i,x in enumerate(y) if housing_header[i] in cols_used] for y in housing_data])
x_vals = (x_vals - x_vals.min(0)) / x_vals.ptp(0)
weight_diagonal = x_vals.std(0)
weight_matrix = tf.cast(tf.diag(weight_diagonal), dtype=tf.float32)
train_indices = np.random.choice(len(x_vals), round(len(x_vals)*0.8), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]
k = 4
batch_size=len(x_vals_test)
x_data_train = tf.placeholder(shape=[None, num_features], dtype=tf.float32)
x_data_test = tf.placeholder(shape=[None, num_features], dtype=tf.float32)
y_target_train = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_target_test = tf.placeholder(shape=[None, 1], dtype=tf.float32)
subtraction_term = tf.subtract(x_data_train, tf.expand_dims(x_data_test,1))
first_product = tf.matmul(subtraction_term, tf.tile(tf.expand_dims(weight_matrix,0), [batch_size,1,1]))
second_product = tf.matmul(first_product, tf.transpose(subtraction_term, perm=[0,2,1]))
distance = tf.sqrt(tf.matrix_diag_part(second_product))
top_k_xvals, top_k_indices = tf.nn.top_k(tf.negative(distance), k=k)
x_sums = tf.expand_dims(tf.reduce_sum(top_k_xvals, 1),1)
x_sums_repeated = tf.matmul(x_sums,tf.ones([1, k], tf.float32))
x_val_weights = tf.expand_dims(tf.div(top_k_xvals,x_sums_repeated), 1)
top_k_yvals = tf.gather(y_target_train, top_k_indices)
prediction = tf.squeeze(tf.matmul(x_val_weights,top_k_yvals), squeeze_dims=[1])
mse = tf.div(tf.reduce_sum(tf.square(tf.subtract(prediction, y_target_test))), batch_size)
num_loops = int(np.ceil(len(x_vals_test)/batch_size))
for i in range(num_loops):
min_index = i*batch_size
max_index = min((i+1)*batch_size,len(x_vals_train))
x_batch = x_vals_test[min_index:max_index]
y_batch = y_vals_test[min_index:max_index]
predictions = sess.run(prediction, feed_dict={x_data_train: x_vals_train, x_data_test: x_batch,
y_target_train: y_vals_train, y_target_test: y_batch})
batch_mse = sess.run(mse, feed_dict={x_data_train: x_vals_train, x_data_test: x_batch,
y_target_train: y_vals_train, y_target_test: y_batch})
print('Batch #' + str(i+1) + ' MSE: ' + str(np.round(batch_mse,3)))
bins = np.linspace(5, 50, 45)
plt.hist(predictions, bins, alpha=0.5, label='Prediction')
plt.hist(y_batch, bins, alpha=0.5, label='Actual')
plt.title('Histogram of Predicted and Actual Values')
plt.xlabel('Med Home Value in $1,000s')
plt.ylabel('Frequency')
plt.legend(loc='upper right')
plt.show()
| 0.651133 | 0.675731 |
# Robust Principal Component Analysis
This notebook introduces adaptive best subset selection robust principal component analysis (abessRPCA) and shows how it works on an artificial example using the **abess** package.
## PCA
Principal component analysis (PCA) is an important method in the field of data science, which can reduce the dimension of data and simplify our model. It solves an optimization problem like:
$$
\max_{v} v^T\Sigma v,\qquad s.t.\quad v^Tv=1.
$$
where $\Sigma = X^TX/(n-1)$ and $X\in \mathbb{R}^{n\times p}$ is the centered sample matrix with each row containing one observation of $p$ variables.
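The maximizer is the eigenvector of $\Sigma$ with the largest eigenvalue, which can be checked numerically on a toy matrix (this snippet is only an illustration and is not part of the abess example):
```
import numpy as np

np.random.seed(0)
A = np.random.randn(50, 5)
A = A - A.mean(axis=0)                    # centered sample matrix
Sigma = A.T @ A / (A.shape[0] - 1)        # sample covariance

eigvals, eigvecs = np.linalg.eigh(Sigma)  # eigenvalues in ascending order
v = eigvecs[:, -1]                        # eigenvector of the largest eigenvalue
print(v @ Sigma @ v, eigvals[-1])         # v^T Sigma v attains the largest eigenvalue
```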
## Robust-PCA (RPCA)
However, the original PCA is sensitive to outliers, which may be unavoidable in real data:
- A subject shows extreme values by chance, but behaves normally in repeated tests;
- Observations are wrongly recorded or computed, e.g. missing or dead pixels, X-ray spikes.
In these situations, PCA may pay too much attention to the outlying entries.
That's why Robust-PCA (RPCA) is presented, which can be used to recover the (low-rank) sample for subsequent processing.
In mathematics, RPCA manages to divide the sample matrix $X$ into two parts:
$$
X = S + L,
$$
where $S$ is the sparse "outlier" matrix and $L$ is the "information" matrix with a low rank.
Generally, we also suppose $S$ is not low-rank and $L$ is not sparse, in order to get a unique solution.

As a constrained optimization problem,
$$
\min_{S, L}\|X-S-L\|_{F}, \quad s.t. \quad \operatorname{rank}(L)=r,\ \|S\|_{0} \leq s,
$$
where $s$ is the sparsity of $S$.
After RPCA, the information matrix $L$ can be used in further analysis.
> Note that it does NOT deal with "noise", which may stay in $L$ and need further processing.
## Hard Impute
To solve its sub-problem, RPCA with known outlier positions, we follow a procedure called "Hard Impute".
The main idea is to iteratively re-estimate the outlier entries using a $K$-component PCA reconstruction with $K=r$.
Here are the steps:
1. Input $X, outliers, M, \varepsilon$, where $outliers$ records the non-zero positions in $S$;
2. Denote $X_{\text{new}} \leftarrow {\bf 0}$ with the same shape of $X$;
3. For $i = 1,2, \dots, M$:
- $X_{\text{old}} = \begin{cases} X_{\text{new}},&\text{for } outliers\\X,&\text{for others}\end{cases}$;
- Form KPCA on $X_{\text{old}}$ with $K=r$, and denote $v$ as the eigenvectors;
- $X_{\text{new}} = X_{\text{old}}\cdot v\cdot v^T$;
- If $\|X_{\text{new}} - X_{\text{old}}\| < \varepsilon$, break;
End for;
4. Return $X_{\text{new}}$ as $L$;
where $M$ is the maximum number of iterations and $\varepsilon$ is the convergence tolerance.
The final $X_{\text{new}}$ is supposed to be $L$ under given outlier positions.
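A rough NumPy sketch of these steps (an illustration of the idea only, not the implementation inside abess; `hard_impute` and `outlier_mask` are hypothetical names, with `outlier_mask` a boolean matrix marking the positions recorded in $outliers$):
```
import numpy as np

def hard_impute(X, outlier_mask, r, M=100, eps=1e-6):
    """Estimate the low-rank part L of X, given a boolean mask of known outlier positions."""
    X_new = np.zeros_like(X, dtype=float)
    for _ in range(M):
        # Keep the trusted entries of X; replace the outlier entries by the current estimate.
        X_old = np.where(outlier_mask, X_new, X)
        # The top-r right singular vectors of X_old play the role of the KPCA eigenvectors.
        _, _, Vt = np.linalg.svd(X_old, full_matrices=False)
        v = Vt[:r].T                       # p x r
        X_new = X_old @ v @ v.T            # rank-r reconstruction X_old v v^T
        if np.linalg.norm(X_new - X_old) < eps:
            break
    return X_new
```
Here $X_{\text{old}}\cdot v\cdot v^T$ is exactly the rank-$r$ PCA reconstruction, so a truncated SVD is used in place of an explicit eigendecomposition.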
## RPCA Application
Recently, RPCA has been used more widely, for example:
- Video Decomposition:
in a surveillance video, the background may be unchanged for a long time while only a few pixels (e.g. people) update.
In order to improve the efficiency of store and analysis, we need to decomposite the video into background and
foreground. Since the background is unchanged, it can be stored well in a low-rank matrix, while the foreground, which is
usually quite small, can be indicated by a sparse matrix. That is what RPCA does.
- Face recognition:
due to complex lighting conditions, a small part of the facial features may be unrecognized (e.g. shadow).
In face recognition, we need to remove the effects of shadows and focus on the face data. Since the face data is almost unchanged (for one person) and the shadows affect only a small part, this is also a suitable situation for RPCA. Here are some examples:

## Simulated Data Example
### Fitting model
Now we generate an example with $100$ rows and $100$ columns with $200$ outliers.
We aim to recover the low-rank part with rank $10$.
```
import numpy as np
def gen_data(n, p, s, r, seed = 0):
np.random.seed(seed)
outlier = np.random.choice(n*p, s, replace=False)
outlier = np.vstack((outlier//p, outlier%p)).T
L = np.dot(np.random.rand(n, r), np.random.rand(r, n))
S = np.zeros((n, p))
S[outlier[:, 0], outlier[:, 1]] = float(np.random.randn(1)) * 10
X = L + S
return X, S
n = 100 # rows
p = 100 # columns
s = 200 # outliers
r = 10 # rank(L)
X, S = gen_data(n, p, s, r)
print(f'X shape: {X.shape}')
# print(f'outlier: \n{outlier}')
```
In order to use our program, users should call `abessRPCA()` and pass the outlier number to `support_size`. Note that it can be a specific integer or an integer interval. In the latter case, a support size will be chosen adaptively by an information criterion (e.g. GIC).
```
from abess.pca import abessRPCA
model = abessRPCA(support_size = s) # support_size can be an interval like `range(s_min, s_max)`
```
It is quite easy to fit this model with the `abessRPCA.fit` function. Given the original sample matrix $X$ and the $rank(L)$ we want, the program will give a result quickly.
```
model.fit(X, r = r) # r=rank(L)
```
Now the estimated outlier matrix is stored in `model.coef_`.
```
S_est = model.coef_
print(f'estimated sparsity: {np.count_nonzero(S_est)}')
```
### More on the result
To check the performance of the program, we use TPR and FPR as the criteria.
```
def TPR(pred, real):
TP = (pred != 0) & (real != 0)
P = (real != 0)
return sum(sum(TP)) / sum(sum(P))
def FPR(pred, real):
FP = (pred != 0) & (real == 0)
N = (real == 0)
return sum(sum(FP)) / sum(sum(N))
def test_model(pred, real):
tpr = TPR(pred, real)
fpr = FPR(pred, real)
return np.array([tpr, fpr])
print(f'[TPR FPR] = {test_model(S_est, S)}')
```
We can also change the random seed to test more situations:
```
M = 30 # use 30 different seed
res = np.zeros(2)
for seed in range(M):
X, S = gen_data(n, p, s, r, seed)
model = abessRPCA(support_size=s).fit(X, r=r)
res += test_model(model.coef_, S)
print(f'[TPR FPR] = {res/M}')
```
Under all of these situations, `abessRPCA` has good performance.
|
github_jupyter
|
import numpy as np
def gen_data(n, p, s, r, seed = 0):
np.random.seed(seed)
outlier = np.random.choice(n*p, s, replace=False)
outlier = np.vstack((outlier//p, outlier%p)).T
L = np.dot(np.random.rand(n, r), np.random.rand(r, n))
S = np.zeros((n, p))
S[outlier[:, 0], outlier[:, 1]] = float(np.random.randn(1)) * 10
X = L + S
return X, S
n = 100 # rows
p = 100 # columns
s = 200 # outliers
r = 10 # rank(L)
X, S = gen_data(n, p, s, r)
print(f'X shape: {X.shape}')
# print(f'outlier: \n{outlier}')
from abess.pca import abessRPCA
model = abessRPCA(support_size = s) # support_size can be an interval like `range(s_min, s_max)`
model.fit(X, r = r) # r=rank(L)
S_est = model.coef_
print(f'estimated sparsity: {np.count_nonzero(S_est)}')
def TPR(pred, real):
TP = (pred != 0) & (real != 0)
P = (real != 0)
return sum(sum(TP)) / sum(sum(P))
def FPR(pred, real):
FP = (pred != 0) & (real == 0)
N = (real == 0)
return sum(sum(FP)) / sum(sum(N))
def test_model(pred, real):
tpr = TPR(pred, real)
fpr = FPR(pred, real)
return np.array([tpr, fpr])
print(f'[TPR FPR] = {test_model(S_est, S)}')
M = 30 # use 30 different seed
res = np.zeros(2)
for seed in range(M):
X, S = gen_data(n, p, s, r, seed)
model = abessRPCA(support_size=s).fit(X, r=r)
res += test_model(model.coef_, S)
print(f'[TPR FPR] = {res/M}')
| 0.358578 | 0.982356 |
## 1. Introduction
An _exception_ is a signal that a condition has occurred that can’t be easily handled using the normal flow-of-control of a Python program. _Exceptions_ are often defined as being “errors” but this is not always the case. All _errors_ in Python are dealt with using _exceptions_, but not all _exceptions_ are _errors_.
## 2. Raising and Catching Errors
With try/except, you tell the python interpreter:
- Try to execute a block of code, the “try” clause.
- If the whole block of code executes without any run-time errors, just carry on with the rest of the program after the try/except statement.
- If a run-time error does occur during execution of the block of code:
- skip the rest of that block of code (but don’t exit the whole program)
- execute a block of code in the “except” clause
- then carry on with the rest of the program after the try/except statement
The syntax is fairly straightforward. The only tricky part is that after the word except, there can optionally be a specification of the kinds of errors that will be handled. The catchall is the class Exception. If you write `except Exception:` all runtime errors will be handled. If you specify a more restricted class of errors, only those errors will be handled; any other kind of error will still cause the program to stop running and an error message to be printed.
The code below causes an error of type IndexError, by trying to access the third element of a two-element list.
```
items = ['a', 'b']
third = items[2]
```
The code below causes an error of type ZeroDivisionError, or less specifically ArithmeticError.
```
x = 5
y = x/0
```
Let’s see what happens if we wrap some of this problematic code in a try/except statement. Note that `this won't print` doesn’t print: when the error is encountered, the rest of the try block is skipped and the exception block is executed. When the except block is done, it continues on with the next line of code that’s outdented to the same level as the try: `continuing` is printed.
```
try:
items = ['a', 'b']
third = items[2]
print("This won't print")
except Exception:
print("got an error")
print("continuing")
```
If we catch only IndexError, and we actually have a divide by zero error, the program does stop executing.
```
try:
items = ['a', 'b']
third = items[2]
print("This won't print")
except IndexError:
print("error 1")
print("continuing")
try:
x = 5
y = x/0
print("This won't print, either")
except IndexError:
print("error 2")
print("continuing again")
```
There’s one other useful feature. The exception code can access a variable that contains information about exactly what the error was. Thus, for example, in the except clause you could print out the information that would normally be printed as an error message but continue on with execution of the rest of the program. To do that, you specify a variable name after the exception class that’s being handled. The exception clause code can refer to that variable name.
```
try:
items = ['a', 'b']
third = items[2]
print("This won't print")
except Exception as e:
print("got an error")
print(e)
print("continuing")
```
## 3. Different Exception Types (Standard Exceptions)
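A few of Python's built-in (standard) exception types are illustrated below; the list in the comments is not exhaustive, and the snippets are just small examples of catching a specific type:
```
# A few common standard exception types:
#   IndexError        - sequence index out of range
#   KeyError          - dictionary key not found
#   ValueError        - right type but inappropriate value, e.g. int("abc")
#   TypeError         - operation applied to an object of the wrong type
#   ZeroDivisionError - division or modulo by zero (a subclass of ArithmeticError)

d = {'a': 1}
try:
    print(d['b'])
except KeyError as e:
    print("no such key:", e)

try:
    int("abc")
except ValueError as e:
    print("bad value:", e)
```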
## Practice
1. Below, we have provided a list of tuples that consist of student names, final exam scores, and whether or not they will pass the class. For some students, the tuple does not have a third element because it is unknown whether or not they will pass. Currently, the for loop does not work. Add a try/except clause so the code runs without an error - if there is no third element in the tuple, no changes should be made to the dictionary.
```
students = [('Timmy', 95, 'Will pass'), ('Martha', 70), ('Betty', 82, 'Will pass'), ('Stewart', 50, 'Will not pass'), ('Ashley', 68), ('Natalie', 99, 'Will pass'), ('Archie', 71), ('Carl', 45, 'Will not pass')]
passing = {'Will pass': 0, 'Will not pass': 0}
for tup in students:
try:
if tup[2] == 'Will pass':
passing['Will pass'] += 1
elif tup[2] == 'Will not pass':
passing['Will not pass'] += 1
except:
continue
```
2. Below, we have provided code that does not run. Add a try/except clause so the code runs without errors. If an element is not able to undergo the addition operation, the string ‘Error’ should be appended to plus_four.
```
nums = [5, 9, '4', 3, 2, 1, 6, 5, '7', 4, 3, 2, 6, 7, 8, '0', 3, 4, \
0, 6, 5, '3', 5, 6, 7, 8, '3', '1', 5, 6, 7, 9, 3, 2, 5, 6, '9', 2, 3, 4, 5, 1]
plus_four = []
for num in nums:
try:
plus_four.append(num+4)
except:
plus_four.append("Error")
```
3. The code below adds up the total number of likes across blog posts, but it currently produces an error. Add a try/except clause so the code runs without errors. If a blog post didn’t get any likes, a ‘Likes’ key should be added to that dictionary with a value of 0.
```
blog_posts = [{'Photos': 3, 'Likes': 21, 'Comments': 2}, {'Likes': 13, 'Comments': 2, 'Shares': 1},\
{'Photos': 5, 'Likes': 33, 'Comments': 8, 'Shares': 3}, {'Comments': 4, 'Shares': 2}, \
{'Photos': 8, 'Comments': 1, 'Shares': 1}, {'Photos': 3, 'Likes': 19, 'Comments': 3}]
total_likes = 0
for post in blog_posts:
try:
total_likes = total_likes + post['Likes']
except:
post['Likes'] = 0
```
4. The code below assigns the 5th letter of each word in food to the new list fifth. However, the code currently produces errors. Insert a try/except clause that will allow the code to run and produce a list of the 5th letter in each word. If the word is not long enough, it should not print anything out. Note: The pass statement is a null operation; nothing will happen when it executes.
```
food = ["chocolate", "chicken", "corn", "sandwich", "soup", "potatoes", "beef", "lox", "lemonade"]
fifth = []
for x in food:
try:
fifth.append(x[4])
except:
pass
```
5. The code below iterates over the list country and looks each element up in the dictionary gold, which shows how many gold medals some countries won during the Olympics. However, this code currently does not work. Correctly add a try/except clause so that the list country_gold is populated with either the number of golds won or the string “Did not get gold”.
```
gold = {"US":46, "Fiji":1, "Great Britain":27, "Cuba":5, "Thailand":2, "China":26, "France":10}
country = ["Fiji", "Chile", "Mexico", "France", "Norway", "US"]
country_gold = []
for x in country:
try:
country_gold.append(gold[x])
except:
country_gold.append("Did not get gold")
```
6. Provided is a buggy for loop that tries to accumulate some values out of some dictionaries. Insert a try/except so that the code passes.
```
di = [{"Puppies": 17, 'Kittens': 9, "Birds": 23, 'Fish': 90, "Hamsters": 49}, {"Puppies": 23, "Birds": 29, "Fish": 20, "Mice": 20, "Snakes": 7}, {"Fish": 203, "Hamsters": 93, "Snakes": 25, "Kittens": 89}, {"Birds": 20, "Puppies": 90, "Snakes": 21, "Fish": 10, "Kittens": 67}]
total = 0
for diction in di:
try:
total = total + diction['Puppies']
except:
pass
print("Total number of puppies:", total)
```
7. The list, numb, contains integers. Write code that populates the list remainder with the remainder of 36 divided by each number in numb. For example, the first element should be 0, because 36/6 has no remainder. If there is an error, have the string “Error” appear in the remainder list.
```
numb = [6, 0, 36, 8, 2, 36, 0, 12, 60, 0, 45, 0, 3, 23]
remainder = []
for num in numb:
try:
remainder.append(36%num)
except:
remainder.append("Error")
```
8. Provided is buggy code; insert a try/except so that the code passes.
```
lst = [2,4,10,42,12,0,4,7,21,4,83,8,5,6,8,234,5,6,523,42,34,0,234,1,435,465,56,7,3,43,23]
lst_three = []
for num in lst:
try:
if 3 % num == 0:
lst_three.append(num)
except:
pass
```
9. Write code so that the buggy code provided works using a try/except. When the code in the try block does not work, have it append the string “Error” to the list attempt.
```
full_lst = ["ab", 'cde', 'fgh', 'i', 'jkml', 'nop', 'qr', 's', 'tv', 'wxy', 'z']
attempt = []
for elem in full_lst:
try:
attempt.append(elem[1])
except:
attempt.append("Error")
```
10. The following code tries to append the third element of each list in conts to the new list third_countries. Currently, the code does not work. Add a try/except clause so the code runs without errors, and the string ‘Continent does not have 3 countries’ is appended to third_countries instead of producing an error.
```
conts = [['Spain', 'France', 'Greece', 'Portugal', 'Romania', 'Germany'], ['USA', 'Mexico', 'Canada'], ['Japan', 'China', 'Korea', 'Vietnam', 'Cambodia'], ['Argentina', 'Chile', 'Brazil', 'Ecuador', 'Uruguay', 'Venezuela'], ['Australia'], ['Zimbabwe', 'Morocco', 'Kenya', 'Ethiopa', 'South Africa'], ['Antarctica']]
third_countries = []
for c in conts:
try:
third_countries.append(c[2])
except:
third_countries.append("Continent does not have 3 countries")
```
11. The buggy code below prints out, for each sport in the list sport, the value stored in the dictionary ppl_play. Use try/except so that the code will run properly. If the sport is not in the dictionary ppl_play, add it with a value of 1.
```
sport = ["hockey", "basketball", "soccer", "tennis", "football", "baseball"]
ppl_play = {"hockey":4, "soccer": 10, "football": 15, "tennis": 8}
for x in sport:
try:
print(ppl_play[x])
except:
ppl_play[x] = 1
print(ppl_play[x])
```
12. Provided is a buggy for loop that tries to accumulate some values out of some dictionaries. Insert a try/except so that the code passes. If the key is not there, initialize it in the dictionary and set the value to zero.
```
di = [{"Puppies": 17, 'Kittens': 9, "Birds": 23, 'Fish': 90, "Hamsters": 49}, {"Puppies": 23, "Birds": 29, "Fish": 20, "Mice": 20, "Snakes": 7}, {"Fish": 203, "Hamsters": 93, "Snakes": 25, "Kittens": 89}, {"Birds": 20, "Puppies": 90, "Snakes": 21, "Fish": 10, "Kittens": 67}]
total = 0
for diction in di:
try:
total = total + diction['Puppies']
except:
diction['Puppies']= 0
print("Total number of puppies:", total)
```
```
import sqlite3
import os
if os.path.exists('checkers_cache.db'):
    os.remove('checkers_cache.db')
conn = sqlite3.connect('checkers_cache.db')
c = conn.cursor()
c.execute("CREATE TABLE IF NOT EXISTS cache (hash INT PRIMARY KEY, depth INT, score REAL)")
c.execute("INSERT INTO cache VALUES (12,20,35.14)")
c.executemany("INSERT OR REPLACE INTO cache VALUES(?, ?, ?)", [(1, 1, 1), (2, 2, 2), (3, 3, 3)])
conn.commit()
conn.close()
import sqlite3
conn = sqlite3.connect('checkers_cache.db')
c = conn.cursor()
c.execute("SELECT * FROM cache")
a = c.fetchall()
print(a)
c.fetchone()  # fetchall() above already consumed every row, so this returns None
conn.close()
conn = sqlite3.connect('checkers_cache.db')
c = conn.cursor()
c.execute("DELETE FROM cache")
conn.commit()
conn.close()
# reopen the connection before bulk-inserting rows
conn = sqlite3.connect('checkers_cache.db')
c = conn.cursor()
for i in range(0, 1000000):
    c.execute("INSERT INTO cache VALUES (?, ?, ?)", (i, i, i))
conn.commit()
conn.close()
import sqlite3
import os
class Cache(object):
def __init__(self, commit_every):
self.commit_every = commit_every
self.buffer = {}
#os.remove("checkers_cache.db")
self.conn = sqlite3.connect('checkers_cache.db')
self.c = self.conn.cursor()
self.c.execute("CREATE TABLE IF NOT EXISTS cache (hash INT PRIMARY KEY, depth INT, score REAL)")
self.conn.commit()
def add(self, board, score, depth):
self.buffer[board] = (depth, score)
if len(self.buffer) >= self.commit_every:
self.commit()
    def commit(self):
        # self.buffer maps board -> (depth, score); store (hash, depth, score) rows
        vals = [(board.__hash__(), val[0], val[1]) for board, val in self.buffer.items()]
        self.c.executemany("INSERT OR REPLACE INTO cache VALUES (?, ?, ?)", vals)
        self.buffer = {}
        self.conn.commit()
        print("committed")
    def get(self, b):
        if b in self.buffer:
            return self.buffer[b]
        else:
            self.c.execute("SELECT * FROM cache WHERE hash = ?", (b.__hash__(),))
            result = self.c.fetchone()
            return result if result is None else (result[1], result[2])  # returns None if nothing found
ch = Cache(7)
ch.add("ac", 1, 0)
ch.add("bc", 3, 2)
ch.add("cc", 2, 9)
ch.add("dc", 4, 1)
ch.add("ec", 5, 2)
ch.add("fc", 7, 5)
a = {"1": {"4": {"type": "piece white", "next_moves": []}}, "3": {"4": {"type": "piece white", "next_moves": []}}, "4": {"7": {"type": "piece white", "next_moves": []}}, "5": {"0": {"type": "piece black", "next_moves": []}, "2": {"type": "piece black", "next_moves": []}}, "6": {"1": {"type": "piece black", "next_moves": []}, "3": {"type": "piece white", "next_moves": [[4, 1]]}, "7": {"type": "piece white", "next_moves": []}}, "7": {"2": {"type": "piece white", "next_moves": []}, "4": {"type": "piece white", "next_moves": []}, "6": {"type": "piece white", "next_moves": []}}}
a
import json
from helpers import *
from search import *
jsn = """
{"next_state":{"0":{"1":{"type":"promoted piece white ","next_moves":[]},"5":{"type":"piece black ","next_moves":[]},"7":{"type":"piece black ","next_moves":[]}},"1":{"6":{"type":"piece black ","next_moves":[]}},"2":{"1":{"type":"piece white ","next_moves":[]},"5":{"type":"piece black ","next_moves":[]},"7":{"type":"piece black ","next_moves":[]}},"3":{"2":{"type":"piece white ","next_moves":[]},"6":{"type":"piece black ","next_moves":[]}},"4":{"7":{"type":"piece white ","next_moves":[]}},"5":{"4":{"type":"piece white ","next_moves":[]},"6":{"type":"piece white ","next_moves":[]}},"6":{"1":{"type":"piece white ","next_moves":[]},"3":{"type":"piece white ","next_moves":[]},"7":{"type":"piece white ","next_moves":[]}},"7":{"0":{"type":"piece white ","next_moves":[]},"2":{"type":"piece white ","next_moves":[]},"4":{"type":"piece white ","next_moves":[]},"6":{"type":"piece white ","next_moves":[]}}},"end_coord":[5,6]}
"""
bjson = json.loads('{"next_state":{"0":{"3":{"next_moves":[],"type":"piece black "},"7":{"next_moves":[],"type":"piece black "}},"1":{"2":{"next_moves":[],"type":"piece black "},"6":{"next_moves":[],"type":"piece black "}},"2":{"3":{"next_moves":[],"type":"piece black "},"5":{"next_moves":[],"type":"piece black "},"7":{"next_moves":[],"type":"piece black "}},"3":{"2":{"next_moves":[],"type":"piece black "}},"5":{"0":{"next_moves":[],"type":"piece white "},"2":{"next_moves":[],"type":"piece white "},"4":{"next_moves":[],"type":"piece white "},"6":{"next_moves":[],"type":"piece white "}},"6":{"1":{"next_moves":[],"type":"piece white "},"7":{"next_moves":[],"type":"piece white "}},"7":{"0":{"next_moves":[],"type":"piece white "},"2":{"next_moves":[],"type":"piece white "},"4":{"next_moves":[],"type":"piece white "}}},"end_coord":[5,4]}')
b = jsontoboard(bjson["next_state"], False)
moves = get_moves(b)
b
alphabeta(weird, 4)
weird
alphabeta(weird, 3)
score_board(weird)
score_board(get_moves(weird)[0].get_board())
score_board(get_moves(nonweird)[0].get_board())
```
__Hydrograph Development Notebooks__
__Breach Hydrographs, Lisle, NY__
PYTHON
Overview: This notebook was created to document the development of breach hydrographs using historical flow data for two locations along the levee at [Lisle, NY](https://www.google.com/maps/@42.3449088,-75.9925314,3206m/data=!3m1!1e3).
Updated 1.8.2017
# Develop a discharge hydrograph of the 1% storm for the main flooding source
## Exploratory Analysis
[Notebook](https://github.com/Dewberry-RSG/HydrologyTools/blob/master/nbs/GageExplorer.ipynb) developed to evaluate available gage data in the vicinity, plot available time series & qualitatively assess differences in hydrograph shapes.
## Discharge Hydrograph
Select the timeseries for the [highest recorded peak (2005)](https://nwis.waterdata.usgs.gov/ny/nwis/peak/?site_no=01509000&agency_cd=USGS) where [available instantaneous gage data](https://nwis.waterdata.usgs.gov/ny/nwis/uv?cb_00060=on&format=gif_default&site_no=01509000&period=&begin_date=2005-03-25&end_date=2005-04-15) exists.
## Calculate Peak Discharge
Using Bulletin 17B procedures and the USGS PeakFQ software, the 1% Storm (peak flow) value was determined at the nearest applicable gage.
[Input](https://raw.githubusercontent.com/Dewberry-RSG/HydrologyTools/master/nbs/peakfq/USGS01509520.inp)
[Output](https://raw.githubusercontent.com/Dewberry-RSG/HydrologyTools/master/nbs/peakfq/USGS01509520.PRT)
## Stretch the Hydrograph
Stretch the hydrograph to the calculated peak flow.
*Details on the methodology for this are described in the [Proof of Concepts Document](https://github.com/Dewberry-RSG/HydrologyTools/blob/master/documentation/ProofofConceptHydrologyStudies.pdf). Implementation using Jupyter Notebooks for the proof of concept cases are available in the [Methodology Overview](MethodologyOverview.ipynb).*
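As a minimal sketch of the stretching step (the variable names and toy numbers below are placeholders, not the ones used in the helper script):
```
import pandas as pd

def stretch_hydrograph(base_hydrograph, q_1pct):
    """Scale an observed flow hydrograph so its peak matches the 1% peak flow."""
    return base_hydrograph * (q_1pct / base_hydrograph.max())

# Toy example (cfs): the peak of the scaled series equals q_1pct.
observed = pd.Series([1000.0, 4000.0, 9000.0, 6000.0, 2000.0])
print(stretch_hydrograph(observed, q_1pct=18000.0))
```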
## Develop a breach hydrograph using the flow hydrograph created in Step 1
In order to convert the flow hydrograph to a stage hydrograph at any given location, a hydraulic analysis is necessary to properly account for differences in the cross-sectional area at different locations along the reach. For this study a 1D, Steady State model was used to simulate a Natural Valley scenario in the levee impact area.
The geometry from this model was used to compute flows ranging from 1,000 cfs to 25,000 cfs in increments of 1,000 cfs. The results of these simulations were used to develop a rating curve at each area of interest to translate flow to stage. The image below is an example of the results at a cross section, illustrating how geometric differences at different flow levels may impact the resultant stage for a given reach.
Note that the change in water surface elevation when the flow is constrained by the channel and the levee during overbank flow rises at a greater rate when compared with the unconstrained flow when conveyance occurs on both sides of the levee (natural valley).

### Procedure to create Breach Hydrograph
__A__. Read in HEC-RAS data for the XS of interest & create a stage/discharge rating curve using computed flows.
__B__. Using the data from the rating curve in Part A, create a function (nth degree polynomial interpolation equation) to convert flow to stage.
__C__. Convert the 1% flow hydrograph created in Step 1 to a stage hydrograph using the rating curve function created in Part B.
__D__. Normalize the stage to 'feet above the breach point' using the stage hydrograph created in Part C and the breach elevation (head = 0 at breach point).
__E__. Using the head-above-breach hydrograph created in Part D, calculate weir flow for each timestep (using the Standard Weir Equation, below) & write to file.
__F__. Input weir flow hydrograph created in Part E into HEC-RAS unsteady flow file.
#### The Standard Weir Equation:
#### $\qquad$ $Q = CLH^{3/2}$
Where:
$\qquad$ __Q__ = Discharge (cfs)
$\qquad$ __C__ = Weir coefficient (unitless)
$\qquad$ __L__ = Weir crest length (ft)
$\qquad$ __H__ = Energy head over the weir crest (ft)
*From HEC-RAS Lateral Weir Coefficients, use the default Weir Coefficient of 2.0 (range is 1.5-2.6, given on page 3-50 of the [2D Users Manual](http://www.hec.usace.army.mil/software/hec-ras/documentation/HEC-RAS%205.0%202D%20Modeling%20Users%20Manual.pdf))*
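The implementation used in this notebook lives in the `GetBreachFlow` helper imported from `utils` below; the following is only a rough, self-contained sketch of Parts B through E with made-up rating-curve points and an assumed crest length (every name and number here is a placeholder):
```
import numpy as np

# Hypothetical rating-curve points at the cross section (flow in cfs, stage in ft).
rc_flows = np.arange(1000, 26000, 1000, dtype=float)
rc_stages = 960.0 + 8.0 * np.log10(rc_flows / 1000.0 + 1.0)   # made-up curve shape

# Part B: polynomial fit converting flow to stage.
flow_to_stage = np.poly1d(np.polyfit(rc_flows, rc_stages, deg=3))

# Part C: convert a 1% flow hydrograph (cfs) to a stage hydrograph (ft).
q_hydro = np.array([2000.0, 8000.0, 20000.0, 12000.0, 4000.0])
stage_hydro = flow_to_stage(q_hydro)

# Part D: head above the breach invert; zero where the stage stays below it.
breach_elev = 969.45
head = np.clip(stage_hydro - breach_elev, 0.0, None)

# Part E: standard weir equation Q = C * L * H**1.5 with the default C = 2.0.
weir_c, weir_len = 2.0, 100.0   # assumed coefficient and crest length (ft)
breach_q = weir_c * weir_len * head ** 1.5
print(breach_q)
```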
```
import os
from glob import glob
from importlib import reload
import utils; reload(utils)
from utils import *
import pandas as pd
import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
```
## 1. Flow hydrographs for the 1% chance storm:
#### Read in gage data & develop base hydrograph
- Read in Base Hydrograph from [USGS Gage](https://waterdata.usgs.gov/usa/nwis/uv?site_no=01509000) & Scale to 1-pct using scale factor
- Manually Smooth the curve where needed
*See comment lines in [Helper Script](ny_clean_nb.py) for smoothing procedure.*
*Data for the falling limb of the April 2005 event was missing from the USGS database. To fill the missing data, a third-order polynomial interpolation was used to approximately mirror the rising limb.*
```
printbold('Reading data from')
gage_data, data_dir = initialize()
base_storm_1pct = init_base_hydro(gage_data)
smooth_storm = smooth_base_hydro(base_storm_1pct)
```
# Breach Location # 1:
__Upstream Location:__ The upstream location selected for Lisle lies in the center of the levee. This is because the 1% flow calculated at the upstream section of the levee along Dudley Creek does not exceed the banks, and therefore a breach at this location would not occur. The backwater from the Tioughnioga River does not reach the upper sections of the levee, therefore no breach was created at that location.
As described above, breach locations should be chosen at or very near a cross section (XS), or an XS should be added if none exists in the breach area, to make the stage-discharge curve as accurate as possible.

#### Plots Summary (from top to bottom):
1. Stage/Discharge Rating curve at HEC-RAS Cross section shown above.
2. 1% chance discharge hydrograph on the left, converted to stage on the right. In red is the elevation of the levee toe (invert of the hypothetical breach).
3. 1% chance stage hydrograph on the left (limited to values above breaching threshold), converted to head over breach elevation in the center, final breach hydrograph (computed as described above) in cfs.
NOTE: For this analysis, __*hypothetical breach locations*__ have been selected at 2 locations along the levee. There is no evidence that a breach is likely to occur at this location.
```
rasdata = r'p:\02\NY\Broome_Co_36007C\LAMP2\TECH\Analysis\Modeling\WorkingModels\Lisle_WhitPt\LAMPRAS\Lisle_WhitPt.p05.hdf'
data_dir = r'C:\Users\slawler\Repos\HydrologyTools\sample_data'
community = 'Lisle'
station = 56045.65
breach_point = 1
breach_height = 969.45
GetBreachFlow(smooth_storm, community, rasdata, station, breach_point, breach_height, data_dir, date_int = 12)
```
# Breach Location # 2:
__Downstream Location__

#### Plots Summary (from top to bottom):
1. Stage/Discharge Rating curve at HEC-RAS cross section shown above.
2. 1% chance discharge hydrograph on the left, converted to stage on the right. In red is the elevation of the levee toe (invert of the hypothetical breach).
3. 1% chance stage hydrograph on the left (limited to values above breaching threshold), converted to head over breach elevation in the center, final breach hydrograph (computed as described above) in cfs.
NOTE: For this analysis, __*hypothetical breach locations*__ have been selected at 2 locations along the levee. There is no evidence that a breach is likely to occur at this location.
```
rasdata = r'p:\02\NY\Broome_Co_36007C\LAMP2\TECH\Analysis\Modeling\WorkingModels\Lisle_WhitPt\LAMPRAS\Lisle_WhitPt.p05.hdf'
community="Lisle"
station = 53914.48
breach_point = 2
breach_height = 964.71
GetBreachFlow(smooth_storm,community , rasdata, station, breach_point, breach_height, data_dir, date_int = 12)
```
<a href="https://colab.research.google.com/github/rodrihgh/MerryXmasEU/blob/main/Wikimedia_Image_Downloader.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install lxml aiohttp nest_asyncio aiofiles  # asyncio ships with Python 3, no install needed
import shutil
from lxml import etree
from lxml import html
import aiohttp
import asyncio
import aiofiles
import nest_asyncio
import os
import json
#fix for running in colab
nest_asyncio.apply()
#all categories will be compressed into storeDirectory + download.zip on completion
#ONLY CHANGE THIS
url = 'https://commons.wikimedia.org/wiki/Category:Saint_Nicholas_in_art'
storeDirectory = '/content/'
checkForCategories = False
file_list = []
#DON'T CHANGE
tasks = []
categories = 0
categoryTasks = []
checkedCategories = []
completed = -1
totalImages = 0
completedImages = 0
async def fetch_page(session, url, cat = ''):
try:
async with session.get(url) as resp:
source = await resp.text()
dom = html.fromstring(source)
return [cat, dom]
  except (asyncio.TimeoutError, aiohttp.ClientConnectorError):
#print('Timeout')
return False
async def fetch_images(session, url):
global totalImages
dom = await fetch_page(session, url)
#timeout error
if dom == False:
return
images = dom[1].xpath('*//div[@class="thumb"]//a')
subcategories = dom[1].xpath('*//div[@class="CategoryTreeItem"]//a')
if(len(subcategories) > 0 and checkForCategories):
for category in subcategories:
if(category not in checkedCategories):
categoryTasks.append(asyncio.ensure_future(fetch_images(session, 'https://commons.wikimedia.org' + category.attrib['href'])))
checkedCategories.append(category)
print('Found category', category.attrib['href'])
if (len(images) > 0):
totalImages += len(images)
print("Found", len(images), "images")
#download images for each category
for image in images:
cat = url.split('Category:')[1]
tasks.append(asyncio.ensure_future(fetch_page(session, 'https://commons.wikimedia.org' + image.attrib['href'], cat)))
global completed
completed += 1
async def main(loop):
global url
global completedImages
global file_list
async with aiohttp.ClientSession(loop=loop) as session:
await fetch_images(session, url)
#fix to resolve finding all categories first
while True:
await asyncio.gather(*categoryTasks)
#check if images have been found on all category pages
if(completed == len(categoryTasks)):
break
pages = await asyncio.gather(*tasks)
for page in pages:
#timeout error
if(page == False):
continue
cat = page[0]
source = page[1]
#print(cat, source.xpath('*//div[@class="fullImageLink"]//img')[0].attrib['src'])
imgURL = source.xpath('*//div[@class="fullImageLink"]//img')[0].attrib['src']
filename = imgURL.split('/')[-1]
#TODO: save images into category folders
async with session.get(imgURL) as resp:
if resp.status == 200:
if(os.path.isdir(storeDirectory + cat + '/') == False):
os.mkdir(storeDirectory + cat + '/')
fname = storeDirectory + cat + '/' + filename
try:
f = await aiofiles.open(fname, mode='wb')
await f.write(await resp.read())
await f.close()
completedImages += 1
file_list.append(imgURL)
print(completedImages, '/', totalImages)
except OSError:
pass
#create zip file to download
shutil.make_archive(storeDirectory + 'download', 'zip', storeDirectory)
#main event loop
loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
print(json.dumps(file_list, indent=0))
from google.colab import drive
drive.flush_and_unmount()
```
# Load Packages
```
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
```
# Load Data Points (Do not modify the following block)
```
with open('training_data.npz', 'rb') as f:
data = np.load(f)
x_list = data['x_list']
y_list = data['y_list']
x_data = data['x_data']
y_data = data['y_data']
n_data = len(x_data)
w = data['w']
original_degree = data['order']
# Print information of original function.
print("=================================")
print("We have", n_data, "number of data")
print("=================================")
weight_info_string = ''
for d in range(original_degree):
weight_info_string += 'w'+str(d)+':'+str(round(w[d],ndigits=3))+' '
print("Coefficients of the original polynomial")
print(weight_info_string)
print("=================================")
plt.plot(x_list, y_list, 'b:', linewidth=2, label="Original Function")
plt.scatter(x_data, y_data, s=50, c='r', label="Data Points")
plt.xlim([np.min(x_list),np.max(x_list)])
plt.ylim([np.min(y_data),np.max(y_data)])
plt.legend(prop={'size': 12})
plt.title("Data Plot")
plt.show()
```
# Polynomial Regression (Programming Assignment)
### Variable Explanation (Do not change variable names)
- 'w' is true coefficients of the original polynomial function
- 'original_degree' is the order of the original polynomial function
- 'x_list' is a list of the points at $x$-axis
- 'y_list' is a list of function value $f(x)$ corresponding to 'x_list'. In other words, y_list = $f($x_list$)$
- 'x_data' is an input data
- 'y_data' is an output data
- 'n_data' is the number of data points
### Our goal is to estimate 'w' from data points, 'x_data' and 'y_data'. Answer the following problems.
### 1. Compute a Vandermonde matrix when the degree of polynomial is $4$ (30pt)
- The variable 'degree' is the order of polynomial. In this problem, we set degree=$4$
- Use the variable 'A' for the Vandermonde matrix. Currently, 'A' is initialized as a matrix whose elements are all zero. Fill in the elements of the Vandermonde matrix by using the power operator (\*\*), a for loop, and np.concatenate, as illustrated in the toy sketch below.
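As a hint of the structure only (this toy sketch uses a small made-up array rather than the assignment's x_data, so every name here is a placeholder):
```
import numpy as np

x_toy = np.array([1.0, 2.0, 3.0])
deg_toy = 2

# Column j holds x**j, so each row is [1, x, x**2] for one data point.
cols = [x_toy[:, None] ** j for j in range(deg_toy + 1)]
A_toy = np.concatenate(cols, axis=1)
print(A_toy)
# [[1. 1. 1.]
#  [1. 2. 4.]
#  [1. 3. 9.]]
```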
```
degree = 4
A = np.zeros((n_data, degree+1)) # Dummy initialization
```
### Print results (do not modify the following block)
```
print(A)
```
### 2. Compute the coefficients of polynomial regression using a $4$ degree polynomial (40pt)
- Use the variable 'degree' and the Vandermonde matrix 'A' in Problem 1.
- The variable 'w_est' holds the coefficients of the polynomial regression. Currently, 'w_est' is initialized as a zero vector. Compute 'w_est' from 'A' and 'y_data' (see the toy least-squares sketch below).
- The variable 'y_est' is the estimated function value corresponding to the input points 'x_list'. Currently, it is a zero array; fill it by computing the estimated function values. In other words, y_est = $\hat{f}($x_list$)$
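For the same kind of toy data, the least-squares coefficients can be obtained with np.linalg.lstsq; this is only a sketch of the idea with placeholder names, not the assignment solution:
```
import numpy as np

x_toy = np.array([1.0, 2.0, 3.0])
y_toy = np.array([2.0, 5.0, 10.0])            # happens to equal 1 + x**2
A_toy = np.vander(x_toy, 3, increasing=True)  # columns [1, x, x**2]

# Solve the least-squares problem A_toy @ w = y_toy.
w_toy, *_ = np.linalg.lstsq(A_toy, y_toy, rcond=None)
print(w_toy)                                  # approximately [1., 0., 1.]

# Evaluate the fitted polynomial on a denser grid.
x_grid = np.linspace(1.0, 3.0, 5)
print(np.vander(x_grid, 3, increasing=True) @ w_toy)
```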
```
w_est = np.zeros((degree+1,1))
y_est = np.zeros_like(x_list)
```
### Print results (do not modify the following block)
```
plt.plot(x_list, y_list, 'b:', linewidth=2, label="Original Function")
plt.plot(x_list, y_est, 'm-', linewidth=2, label="Polynomial Regression (d={})".format(degree))
plt.scatter(x_data, y_data, s=50, c='r', label="Data Points")
plt.xlim([np.min(x_list),np.max(x_list)])
plt.ylim([np.min(y_data),np.max(y_data)])
plt.legend(prop={'size': 12})
plt.title("Data Plot")
plt.show()
```
### 3. Compute the polynomial regression with $1$ degree polynomials (15pt)
- Repeat Problem 1 and Problem 2 with degree $1$.
- Use the following variables.
> degree1, A1, w_est1, y_est1
```
degree1 = 1
A1 = np.zeros((n_data, degree1+1))
w_est1 = np.zeros((degree1+1,1))
y_est1 = np.zeros_like(x_list)
```
### Print results (do not modify the following block)
```
plt.plot(x_list, y_list, 'b:', linewidth=2, label="Original Function")
plt.plot(x_list, y_est1, 'g-', linewidth=2, label="Polynomial Regression (d={})".format(degree1))
plt.scatter(x_data, y_data, s=50, c='r', label="Data Points")
plt.xlim([np.min(x_list),np.max(x_list)])
plt.ylim([np.min(y_data),np.max(y_data)])
plt.legend(prop={'size': 12})
plt.title("Data Plot")
plt.show()
```
### 4. Compute the polynomial regression with $10$ degree polynomials (15pt)
- Repeat Problem 1 and Problem 2 with degree $10$.
- Use the following variables.
> degree2, A2, w_est2, y_est2
```
degree2 = 10
A2 = np.zeros((n_data, degree2+1))
w_est2 = np.zeros((degree2+1,1))
y_est2 = np.zeros_like(x_list)
```
### Print results (do not modify the following block)
```
plt.plot(x_list, y_list, 'b:', linewidth=2, label="Original Function")
plt.plot(x_list, y_est2, 'c-', linewidth=2, label="Polynomial Regression (d={})".format(degree2))
plt.scatter(x_data, y_data, s=50, c='r', label="Data Points")
plt.xlim([np.min(x_list),np.max(x_list)])
plt.ylim([np.min(y_data),np.max(y_data)])
plt.legend(prop={'size': 12})
plt.title("Data Plot")
plt.show()
```
### 5. [Challenging Problem] Explain the effect of degree (20pt)
- By solving the above problems, we can observe the behaviors of polynomial regression with different degrees (1, 4, 10)
- Explain pros and cons of high degree polynomial
- Explain pros and cons of low degree polynomial
- What is this phenomenon called in machine learning?
### The following figure shows all regression results with different degrees.
```
plt.plot(x_list, y_list, 'b:', linewidth=2, label="Original Function")
plt.plot(x_list, y_est, 'm-', linewidth=2, label="Polynomial Regression (d={})".format(degree))
plt.plot(x_list, y_est1, 'g-', linewidth=2, label="Polynomial Regression (d={})".format(degree1))
plt.plot(x_list, y_est2, 'c-', linewidth=2, label="Polynomial Regression (d={})".format(degree2))
plt.scatter(x_data, y_data, s=50, c='r', label="Data Points")
plt.xlim([np.min(x_list),np.max(x_list)])
plt.ylim([np.min(y_data),np.max(y_data)])
plt.legend(prop={'size': 12})
plt.title("Data Plot")
plt.show()
```
Write your answer!!!
```
# Import the necessary packages
import matplotlib.pyplot as plt
import numpy as np
import os
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
# Load preprocessed data stored in NumPy arrays
data = np.load('data.npy')
labels = np.load('labels.npy')
# Declare hyperparameter constants
# Initial learning rate
# Epoch number
# Batch size
INIT_LR = 1e-4
EPOCHS = 20
BS = 32
# Apply one-hot encoding on the labels
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)
# Split data to two partitions: training set (80%) and testing set (20%)
(trainX, testX, trainY, testY) = train_test_split(data, labels,
test_size=0.20, stratify=labels, random_state=42)
# Prepare image generator for training data augmentation
aug = ImageDataGenerator(
rotation_range=20,
zoom_range=0.15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.15,
horizontal_flip=True,
fill_mode="nearest")
imageSize=224
# Load the MobileNetV2 network with pre-trained weights (imagenet)
# and excluding head network layer
baseModel = MobileNetV2(weights="imagenet", include_top=False,
input_tensor=Input(shape=(imageSize, imageSize, 3)))
# Build a new head network layer
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(7, 7))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(128, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(2, activation="softmax")(headModel)
# Place the new fully connected head model on top of the base model
model = Model(inputs=baseModel.input, outputs=headModel)
# Loop over layers of base model and make sure they will not be
# trained or updated during the backpropagation process
for layer in baseModel.layers:
layer.trainable = False
# Compile the model
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="binary_crossentropy", optimizer=opt,
metrics=["accuracy"])
# Train the head layer of the model
H = model.fit(
aug.flow(trainX, trainY, batch_size=BS),
steps_per_epoch=len(trainX) // BS,
validation_data=(testX, testY),
validation_steps=len(testX) // BS,
epochs=EPOCHS)
# Make predictions on the testing set
predIdxs = model.predict(testX, batch_size=BS)
# For each image in the testing set, find the index of
# the label with corresponding highest predicted probability
predIdxs = np.argmax(predIdxs, axis=1)
# Display classification report
print(classification_report(testY.argmax(axis=1), predIdxs,
target_names=lb.classes_))
# Serialize the model to file
model.save('mask_detector.model', save_format="h5")
# Plot training and testing accuracy
N = EPOCHS
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig('plot.png')
```
```
# imports
import pandas as pd
import sys
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
# allows animations to be opened in interactive window
%matplotlib tk
# read in files
preds = pd.read_pickle('/Users/Rohil/Documents/iGEM/yemen/y_df_for_feature_selection_new.pkl')
true = pd.read_csv('/Users/Rohil/Documents/iGEM/yemen/cholera_epi_data/yemen_cholera_case_data_differenced.csv')
true.date = pd.to_datetime(true.date, format = '%d-%m-%y')
true = true.set_index('date')
gov_pop_area_data = pd.read_excel('/Users/Rohil/Documents/iGEM/yemen/gov_area_pop_data.xlsx')
gov_pop_area_data = gov_pop_area_data[gov_pop_area_data.iso != 'YE-HD']
for index, row in gov_pop_area_data[['iso', 'population']].iterrows():
true[row.iso] = (true[row.iso] * 10000) / row.population
pred_crosstab_dict = {}
for col in ['week_1_to_2_cases', 'week_2_to_4_cases', 'week_4_to_6_cases', 'week_6_to_8_cases']:
pred_crosstab_dict[col] = preds[['gov_iso', 'date', col]].pivot_table(index = 'date', columns = 'gov_iso', values = col)
week_12_case_crosstab = pred_crosstab_dict['week_1_to_2_cases']
week_24_case_crosstab = pred_crosstab_dict['week_2_to_4_cases']
week_46_case_crosstab = pred_crosstab_dict['week_4_to_6_cases']
week_68_case_crosstab = pred_crosstab_dict['week_6_to_8_cases']
week_12_case_crosstab.columns
true.columns
ylim_df = true.max(axis=0)
y_12_line_dict = {}
y_24_line_dict = {}
y_46_line_dict = {}
y_68_line_dict = {}
true_line_dict = {}
# function that updates plot for each day
def update(i):
# Update the line and the axes (with a new xlabel). Return a tuple of
# "artists" that have to be redrawn for this frame.
true_date = pd.date_range(true.index[0], pred_crosstab_dict['week_1_to_2_cases'].index[i])
dates12 = pd.date_range(pred_crosstab_dict['week_1_to_2_cases'].index[i] + pd.to_timedelta(0, 'W') + pd.to_timedelta(1, 'D'), pred_crosstab_dict['week_1_to_2_cases'].index[i] + pd.to_timedelta(2, 'W'), freq = "D")
dates24 = pd.date_range(pred_crosstab_dict['week_2_to_4_cases'].index[i] + pd.to_timedelta(2, 'W') + pd.to_timedelta(1, 'D'), pred_crosstab_dict['week_2_to_4_cases'].index[i] + pd.to_timedelta(4, 'W'), freq = "D")
dates46 = pd.date_range(pred_crosstab_dict['week_4_to_6_cases'].index[i] + pd.to_timedelta(4, 'W') + pd.to_timedelta(1, 'D'), pred_crosstab_dict['week_4_to_6_cases'].index[i] + pd.to_timedelta(6, 'W'), freq = "D")
dates68 = pd.date_range(pred_crosstab_dict['week_6_to_8_cases'].index[i] + pd.to_timedelta(6, 'W') + pd.to_timedelta(1, 'D'), pred_crosstab_dict['week_6_to_8_cases'].index[i] + pd.to_timedelta(8, 'W'), freq = "D")
for e in range(0,21):
gov = govs[e]
true_val = true.loc[true_date][gov]
true_line_dict[gov].set_data(true_date, true_val)
vals12 = np.repeat(pred_crosstab_dict['week_1_to_2_cases'][gov].iloc[i] / 14, 14)
y_12_line_dict[gov].set_data(dates12, vals12)
vals24 = np.repeat(pred_crosstab_dict['week_2_to_4_cases'][gov].iloc[i] / 14, 14)
y_24_line_dict[gov].set_data(dates24, vals24)
vals46 = np.repeat(pred_crosstab_dict['week_4_to_6_cases'][gov].iloc[i] / 14, 14)
y_46_line_dict[gov].set_data(dates46, vals46)
vals68 = np.repeat(pred_crosstab_dict['week_6_to_8_cases'][gov].iloc[i] / 14, 14)
y_68_line_dict[gov].set_data(dates68, vals68)
label = 'day {0}'.format(i)
ax[-1].set_xlabel(label)
return (ax)
fig, ax = plt.subplots(21,1,figsize = (6,17), sharex=True, sharey = False)
govs = true.columns
for i in range(0,21):
ax[i].set_xlim(true.index.min(), true.index.max())
ax[i].set_ylim(0,10)#ylim_df[govs[i]])
ax[i].legend().set_visible(False)
ax[i].set_ylabel(govs[i])
ax[i].yaxis.set_label_position('right')
ax[i].spines['right'].set_visible(False)
ax[i].spines['top'].set_visible(False)
ax[i].spines['bottom'].set_visible(True)
    true_date_start = pd.date_range(true.index[0], pred_crosstab_dict['week_1_to_2_cases'].index[0])
dates12_start = pd.date_range(pred_crosstab_dict['week_1_to_2_cases'].index[0] + pd.to_timedelta(0, 'W') + pd.to_timedelta(1, 'D'), pred_crosstab_dict['week_1_to_2_cases'].index[0] + pd.to_timedelta(2, 'W'), freq = "D")
dates24_start = pd.date_range(pred_crosstab_dict['week_2_to_4_cases'].index[0] + pd.to_timedelta(2, 'W') + pd.to_timedelta(1, 'D'), pred_crosstab_dict['week_2_to_4_cases'].index[0] + pd.to_timedelta(4, 'W'), freq = "D")
dates46_start = pd.date_range(pred_crosstab_dict['week_4_to_6_cases'].index[0] + pd.to_timedelta(4, 'W') + pd.to_timedelta(1, 'D'), pred_crosstab_dict['week_4_to_6_cases'].index[0] + pd.to_timedelta(6, 'W'), freq = "D")
dates68_start = pd.date_range(pred_crosstab_dict['week_6_to_8_cases'].index[0] + pd.to_timedelta(6, 'W') + pd.to_timedelta(1, 'D'), pred_crosstab_dict['week_6_to_8_cases'].index[0] + pd.to_timedelta(8, 'W'), freq = "D")
for e in range(0,21):
gov = govs[e]
true_val_start = true.loc[true_date_start][gov]
true_line_dict[gov], = ax[e].plot(true_date_start, true_val_start, color = 'red')
vals12_start = np.repeat(pred_crosstab_dict['week_1_to_2_cases'][gov].iloc[0] / 14, 14)
y_12_line_dict[gov], = ax[e].plot(dates12_start, vals12_start, linestyle= '-.', color = 'seagreen')
vals24_start = np.repeat(pred_crosstab_dict['week_2_to_4_cases'][gov].iloc[0] / 14, 14)
y_24_line_dict[gov], = ax[e].plot(dates24_start, vals24_start, linestyle= '-.', color = 'blue')
vals46_start = np.repeat(pred_crosstab_dict['week_4_to_6_cases'][gov].iloc[0] / 14, 14)
y_46_line_dict[gov], = ax[e].plot(dates46_start, vals46_start, linestyle= '-.', color = 'plum')
vals68_start = np.repeat(pred_crosstab_dict['week_6_to_8_cases'][gov].iloc[0] / 14, 14)
y_68_line_dict[gov], = ax[e].plot(dates68_start, vals68_start, linestyle= '-.', color = 'magenta')
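# animate 200 daily frames and save the result as a GIF (requires imagemagick)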
anim = FuncAnimation(fig, update, frames=np.arange(0, 200), interval=10)
anim.save('/Users/Rohil/Documents/iGEM/yemen/gif_with_true_vals.gif', dpi=100, writer='imagemagick')
fig, ax = plt.subplots()
fig.set_tight_layout(True)
# Query the figure's on-screen size and DPI. Note that when saving the figure to
# a file, we need to provide a DPI for that separately.
print('fig size: {0} DPI, size in inches {1}'.format(
fig.get_dpi(), fig.get_size_inches()))
ax.set_xlim(pd.Timestamp('2017-05-23'), pd.Timestamp('2018-02-18'))
ax.set_ylim(0, 50)
line, = ax.plot(pd.date_range(pred_crosstab_dict['week_4_to_6_cases'].index[0] + pd.to_timedelta(4, 'W') + pd.to_timedelta(1, 'D'), pred_crosstab_dict['week_4_to_6_cases'].index[0] + pd.to_timedelta(6, 'W'), freq = "D"), np.repeat(pred_crosstab_dict['week_4_to_6_cases']['YE-AB'].iloc[0] / 14, 14), color = 'g')
anim = FuncAnimation(fig, update, frames=np.arange(0, 200), interval=200)
```
# R Syntax Review Notebook
## Assignment and Basic Arithmetic
```
# You can also use = as the assignment operator,
# but in R the idiomatic choice is <-.
x<-3
x
2-1
2*3
4/2
3^4
3**4
```
### Note: to run several statements at once, join them with semicolons (;).
```
2-1;2*3;4/2;3^4;3**4
x=0
x=x+1
x
# apparently this does not work in R
x+=1
```
# Vectors and Matrices
```
x_vector <- c(1, 2, 3, 4, 5)
x_vector
length(x_vector)
```
## Accessing Elements
```
vec <- c(10,9,8,7,6,5,4,3,2,1)
vec
# note how the index corresponds to the returned element
vec[1]; vec[3]; vec[10]
# access via slicing
vec[4:8]
```
## Arithmetic
```
# addition and subtraction
vec1<-c(1,3,5)
vec2<-c(2,4,6)
vec1+vec2; vec1-vec2
# scalar multiplication
scalar=3
# in R, a period . may be used in variable names
x.vector<-c(2,1,0)
scalar*x.vector
# * and /
# the operation is applied to each element
vec1*vec2;vec1/vec2
```
## Joining Elements
```
# join two vectors
joined<-append(vec1,vec2)
joined
```
## Writing vec <- c(....) is tedious
```
mendokusai<-c(1,2,3,4,5,6,7,8,9,10)
rakuchin<-c(1:10)
mendokusai
rakuchin
all(mendokusai==rakuchin)
vec<-1:10
vec
```
## Documentation
```
help(c)
```
# Matrices
```
elements<-c(1,2,3,4,5,6)
matrix(elements,2,3)
matrix(elements,nrow=2,ncol=3)
# 3 rows x 2 columns; apparently the argument name can be omitted
matrix(elements,nrow=3,2)
# 2 rows x 3 columns; not a desirable style from a readability standpoint
matrix(elements,3,nrow=2)
# specifying only one of the two is fine
matrix(elements,2)
matrix(elements,ncol=3)
```
## Getting Matrix Information
```
mat<-matrix(elements,nrow=2,ncol=3)
ncol(mat)
nrow(mat)
dim(mat)
```
## Data Access
```
mat<-matrix(elements,nrow=2,ncol=3)
mat
# value at row 2, column 3
mat[2,3]
# extract row 1
mat[1,]
# extract column 3
mat[,3]
# extract columns 2 through 3
mat[,2:3]
# extracting everything except a given selection
# everything except row 1
mat[-1,]
# everything except column 3
mat[,-3]
# everything except columns 2 through 3
mat[,-(2:3)]
```
## Arithmetic
```
mat<-matrix(elements,nrow=2,ncol=3)
mat+1
2*mat
elements.1<-c(1,3,5,7,9,11)
elements.2<-c(2,4,6,8,10,12)
mat.1<-matrix(elements.1,2,3)
mat.2<-matrix(elements.2,2,3)
mat.1; mat.2
mat.1+mat.2; mat.1-mat.2
# the operation is broadcast element-wise
mat.1*mat.2;mat.1/mat.2
```
## Transposing a Matrix
```
mat
t(mat)
```
## Adding Labels
```
mat
colnames(mat)<-c("c1","c2","c3")
mat
rownames(mat)<-c("r1","r2")
mat
```
## Documentation
```
help(matrix)
```
# Computing Statistics
```
x<-1:5
x
#max min
max(x); min(x)
# mean
sum(x);sum(x)/length(x);mean(x)
# variance and standard deviation
var(x); sd(x)
# median
median(x)
income.a<-c(100,200,300,400,500)
mean(income.a); median(income.a)
income.b<-c(100,200,300,400,100000)
mean(income.b);median(income.b)
```
## Viewing everything at once
```
x
# 1st Qu. = lower 25% point (first quartile)
# 3rd Qu. = upper 25% point (third quartile)
summary(x)
```
# Statistics for 2-D Arrays
```
mat<-matrix(1:12,nrow=3,ncol=4,byrow=TRUE)
mat
```
## Statistics over All Elements
```
sum(mat);sum(1:12)
mean(mat);mean(1:12)
```
## Row-wise and Column-wise Statistics
```
rowSums(mat); colSums(mat)
rowMeans(mat); colMeans(mat)
```
## apply function
```
apply(X,MARGIN,FUN)
MARGIN
a vector giving the subscripts which the function will be applied over. E.g., for a matrix 1 indicates rows, 2 indicates columns, c(1, 2) indicates rows and columns. Where X has named dimnames, it can be a character vector selecting dimension names.
```
```
help(apply)
apply(mat,1,sum)
apply(mat,2,sum)
summary(mat)
apply(mat,1,summary); apply(mat,2,summary)
```
[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)
# Gaussian Probabilities
```
#format the book
%matplotlib notebook
from __future__ import division, print_function
from book_format import load_style
load_style()
```
## Introduction
The last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.
We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. As you might guess from the chapter name, Gaussian distributions provide all of these features.
## Mean, Variance, and Standard Deviations
### Random Variables
Each time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a 1 about 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6.
This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). *Random* does not mean the process is nondeterministic, only that we lack information. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.
While we are defining things, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.
Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.
Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.
Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable.
In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. In later chapters we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context.
## Probability Distribution
The [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:
|Value|Probability|
|-----|-----------|
|1|1/6|
|2|1/6|
|3|1/6|
|4|1/6|
|5|1/6|
|6|1/6|
Some sources call this the *probability function*. Using ordinary function notation, we would write:
$$P(X{=}4) = f(4) = \frac{1}{6}$$
This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Some texts use $Pr$ or $Prob$ instead of $P$.
Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as
$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$
Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.
The probabilities for all values of a *discrete random variable* are known as the *discrete probability distribution* and the probabilities for all values of a *continuous random variable* are known as the *continuous probability distribution*.
To be a probability distribution the probability of each value $x_i$ must satisfy $P(X{=}x_i) \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formalize this requirement as
$$\sum\limits_u P(X{=}u)= 1$$
for discrete distributions, and as
$$\int P(X{=}u) \,du= 1$$
for continuous distributions.
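As a quick sanity check, here is a small snippet of my own (not part of the original text) that verifies the fair die distribution above satisfies both requirements:
```
import numpy as np

# probabilities for a fair six sided die
die_probs = np.array([1.0 / 6] * 6)

print('all nonnegative:', (die_probs >= 0).all())
print('sums to one:    ', np.isclose(die_probs.sum(), 1.0))
```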
### The Mean, Median, and Mode of a Random Variable
Given a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we will want to know the *average* height of the students. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is
$$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$
we compute the mean as
$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$
It is traditional to use the symbol $\mu$ (mu) to denote the mean.
We can formalize this computation with the equation
$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$
NumPy provides `numpy.mean()` for computing the mean.
```
import numpy as np
x = [1.85, 2.0, 1.7, 1.9, 1.6]
print(np.mean(x))
```
The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency then the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.
Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.
Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.85, 2.0, 1.7, 1.9, 1.6} is 1.85, because 1.85 is the third element of this set after being sorted.
```
print(np.median(x))
```
## Expected Value of a Random Variable
The [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What would we *expect* $x$ to be, on average?
It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.
Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute
$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$
Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.
We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us
$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$
A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:
$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \sum_{i=1}^n \frac{1}{n}x_i = \mu_x$$
If $x$ is continuous we substitute the sum for an integral, like so
$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$
where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.
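As a small illustration of my own (not from the original text), here is the discrete computation from above in NumPy:
```
import numpy as np

x = np.array([1., 3., 5.])
p = np.array([0.8, 0.15, 0.05])

print('weighted:           ', np.sum(p * x))  # 1.5
print('equal probabilities:', np.mean(x))     # 3.0, the mean
```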
### Variance of a Random Variable
The computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights:
```
X = [1.8, 2.0, 1.7, 1.9, 1.6]
Y = [2.2, 1.5, 2.3, 1.7, 1.3]
Z = [1.8, 1.8, 1.8, 1.8, 1.8]
```
Using NumPy we see that the mean height of each class is the same.
```
print(np.mean(X))
print(np.mean(Y))
print(np.mean(Z))
```
The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.
The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students.
Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is
$$\mathit{VAR}(X) = E[(X - \mu)^2]$$
Ignoring the squared terms for a moment, you can see that the variance is the *expected value* for how much the sample space ($X$) varies from the mean (squared, of course). We have the formula for the expected value $E[X] = \sum\limits_{i=1}^n p_ix_i$, and we will assume that any height is equally probable, so we can substitute that into the equation above to get
$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$
Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.
The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute
$$
\begin{aligned}
\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\
&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\
\mathit{VAR}(X)&= 0.02 \, m^2
\end{aligned}$$
NumPy provides the function `var()` to compute the variance:
```
print(np.var(X), "meters squared")
```
This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:
$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$
It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.
For the first class we compute the standard deviation with
$$
\begin{aligned}
\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\
&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\
\sigma_x&= 0.1414
\end{aligned}$$
We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation.
```
print('std {:.4f}'.format(np.std(X)))
print('var {:.4f}'.format(np.std(X)**2))
```
And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.
What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters.
We can view this in a plot:
```
from book_format import set_figsize, figsize
from code.book_plots import interactive_plot
from code.gaussian_internal import plot_height_std
import matplotlib.pyplot as plt
with interactive_plot():
plot_height_std(X)
```
For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.
> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on.
```
from numpy.random import randn
data = [1.8 + .1414*randn() for i in range(100)]
with interactive_plot():
plot_height_std(data, lw=2)
print('mean = {:.3f}'.format(np.mean(data)))
print('std = {:.3f}'.format(np.std(data)))
```
We can see by eye that roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8.
We'll discuss this in greater depth soon. For now let's compute the standard deviation for
$$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$
The mean of $Y$ is $\mu=1.8$ m, so
$$
\begin{aligned}
\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\
&= \sqrt{0.152} = 0.39 \ m
\end{aligned}$$
We will verify that with NumPy with
```
print('std of Y is {:.4f} m'.format(np.std(Y)))
```
This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.
Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. We show this to be true with
$$
\begin{aligned}
\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\
&= \sqrt{\frac{0+0+0+0+0}{5}} \\
\sigma_z&= 0.0 \ m
\end{aligned}$$
```
print(np.std(Z))
```
Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account.
I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school!
It's too early to understand why, but we will not normally be faced with these problems in this book. Consult any standard probability text if you need to learn techniques to deal with these issues.
### Why the Square of the Differences
Why are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$
```
with interactive_plot():
X = [3, -3, 3, -3]
mean = np.average(X)
for i in range(len(X)):
plt.plot([i ,i], [mean, X[i]], color='k')
plt.axhline(mean)
plt.xlim(-1, len(X))
plt.tick_params(axis='x', labelbottom='off')
```
If we didn't take the square of the differences the signs would cancel everything out:
$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$
This is clearly incorrect, as there is more than 0 variance in the data.
Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same result. If we use the correct formula we get a standard deviation of 3.5 for $Y$ versus 3.0 for $X$, which reflects $Y$'s larger variation.
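A few lines of NumPy (my own check, not the book's code) make the comparison explicit:
```
import numpy as np

X = np.array([3., -3., 3., -3.])
Y = np.array([6., -2., -3., 1.])
for name, d in (('X', X), ('Y', Y)):
    mad = np.mean(np.abs(d - d.mean()))   # mean absolute deviation
    print('{}: abs dev {:.1f}, std {:.1f}'.format(name, mad, d.std()))
```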
This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have $X = [1,-1,1,-2,3,2,100]$.
```
X = [1, -1, 1, -2, 3, 2, 100]
print('Variance of X = {:.2f}'.format(np.var(X)))
```
Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3].
## Gaussians
We are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.
> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.
Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about.
```
from filterpy.stats import plot_gaussian_pdf
plt.figure()
ax = plot_gaussian_pdf(mean=1.8, variance=0.1414**2,
xlabel='Student Height', ylabel='pdf')
```
This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.1 m.
> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the
Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].
This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. In fact, this is the curve for the student heights given earlier. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.
This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.
To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter. They were not perfect Gaussian curves, but they were similar, as in the plot below. We will be using Gaussians to replace the discrete probabilities used in that chapter!
```
import code.book_plots as book_plots
belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0]
with interactive_plot():
book_plots.bar_plot(belief)
```
## Nomenclature
A bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between $(-\infty, \infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this:
```
with interactive_plot():
ax = plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)')
```
The y-axis depicts the *probability density* — the relative number of cars that are traveling at the speed shown on the x-axis.
You may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. This is true, but this is a common limitation of mathematical modeling. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives.
You will see these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*.
## Gaussian Distributions
Let's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:
$$
f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]
$$
$\exp[x]$ is notation for $e^x$.
Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is implemented in `stats.py` by the function `gaussian(x, mean, var)`.
> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.
```python
%load -s gaussian stats.py
def gaussian(x, mean, var):
"""returns normal distribution for x given a
gaussian with the specified mean and variance.
"""
return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) /
math.sqrt(2*math.pi*var))
```
We will plot a Gaussian with a mean of 22 $(\mu=22)$ and a variance of 4 $(\sigma^2=4)$, and then discuss what this means.
```
from filterpy.stats import gaussian, norm_cdf
with interactive_plot():
ax = plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$')
```
What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements, the measurements will be normally distributed. When we look at this chart we can "sort of" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C.
Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is exactly at 2? Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of the reading being *exactly* 22°C is 0% because there are an infinite number of values the reading can take.
What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures.
We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve.
How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian
$$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$
I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute
```
print('Probability of range 21.5 to 22.5 is {:.2f}%'.format(
norm_cdf((21.5, 22.5), 22,4)*100))
print('Probability of range 23.5 to 24.5 is {:.2f}%'.format(
norm_cdf((23.5, 24.5), 22,4)*100))
```
The mean ($\mu$) is what it sounds like — the average of all possible values, weighted by their probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean.
The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as
$$\text{temp} \sim \mathcal{N}(22,4)$$
This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.
> Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example.
## The Variance and Belief
Since this is a probability density function it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must integrate to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$.)
```
print(norm_cdf((-1e8, 1e8), mu=0, var=4))
```
This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.
Let's look at that graphically:
```
import numpy as np
import matplotlib.pyplot as plt
xs = np.arange(15, 30, 0.05)
with interactive_plot():
plt.plot(xs, gaussian(xs, 23, 0.05), label='$\sigma^2$=0.05', c='b')
plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':', c='b')
plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--', c='b')
plt.legend()
```
What is this telling us? The Gaussian with $\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$.
If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and the curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.
An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.
I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using.
## The 68-95-99.7 Rule
It is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$).
Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$.
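We can check these percentages numerically; the short sketch below is my own (not from the original text) and uses `filterpy.stats.norm_cdf`, which was introduced earlier in the chapter:
```
from filterpy.stats import norm_cdf

mu, sigma = 22., 0.2
for k in (1, 2, 3):
    p = norm_cdf((mu - k*sigma, mu + k*sigma), mu, sigma**2)
    print('within {} standard deviation(s): {:.1f}%'.format(k, p*100))
```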
The following graph depicts the relationship between the standard deviation and the normal distribution.
```
from code.gaussian_internal import display_stddev_plot
with interactive_plot():
display_stddev_plot()
```
## Interactive Gaussians
For those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner.
```
import math
from ipywidgets import interact, interactive, fixed
set_figsize(y=3)
def plt_g(mu,variance):
plt.figure()
xs = np.arange(2, 8, 0.1)
ys = gaussian(xs, mu, variance)
plt.plot(xs, ys)
plt.ylim((0, 1))
interact (plt_g, mu=(0., 10), variance = (.2, 1.));
```
Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified.
<img src='animations/04_gaussian_animate.gif'>
## Computational Properties of Gaussians
A remarkable property of Gaussians is that the sum of two independent Gaussian random variables is another Gaussian! The product of two Gaussians is not itself a Gaussian, but it is proportional to one, as we derive below.
The discrete Bayes filter works by multiplying and adding probabilities. I'm getting ahead of myself, but the Kalman filter uses Gaussians instead of probabilities, while the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians.
The Gaussian is a nonlinear function, and typically if you multiply a nonlinear equation with itself you end up with a different equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the product of two Gaussians is proportional to yet another Gaussian. This is a fundamental property, and a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice.
The remainder of this section is optional. I will derive the equations for the sum and product of two Gaussians. You will not need to understand this material to understand the rest of the book, so long as you accept the results.
### Product of Gaussians
The product of two independent Gaussians is given by:
$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\
\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}
\end{aligned}$$
You can find this result by multiplying the equation for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and measurement be $z \propto N(z, \sigma_z^2)$. What is the posterior x given the measurement z?
Write the posterior as $P(x \mid z)$. Now we can use Bayes Theorem to state
$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$
$P(z)$ is a normalizing constant, so we can create a proportionality
$$P(x \mid z) \propto P(z|x)P(x)$$
Now we substitute in the equations for the Gaussians, which are
$$P(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$
$$P(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$
We can drop the leading terms, as they are constants, giving us
$$\begin{aligned}
P(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\
&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\
&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2+\sigma_z^2(x-\bar\mu)^2]\Big]
\end{aligned}$$
Now we multiply out the squared terms and group in terms of the posterior $x$.
$$\begin{aligned}
P(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\
&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]
\end{aligned}$$
The last parentheses do not contain the posterior $x$, so it can be treated as a constant and discarded.
$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]
$$
Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get
$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]
$$
Proportionality lets us create or delete constants at will, so we can factor this into
$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]
$$
A Gaussian is
$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$
So we can see that $P(x \mid z)$ has a mean of
$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$
and a variance of
$$
\sigma^2_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}
$$
I've dropped the constants, and so the result is not a normal, but proportional to one. Bayes theorem normalizes with the $P(z)$ divisor, ensuring that the result is normal. We normalize in the update step of our filters, ensuring the filter estimate is Gaussian.
$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$
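To make these equations concrete, here is a small sketch of my own (not the book's code) that applies the product formulas and checks them against a brute-force pointwise multiplication of two Gaussian curves. The helper name `gaussian_product` is mine, not part of FilterPy:
```
import numpy as np
from filterpy.stats import gaussian

def gaussian_product(mu1, var1, mu2, var2):
    # mean and variance of the normalized product of two Gaussians
    mean = (var1*mu2 + var2*mu1) / (var1 + var2)
    var = (var1 * var2) / (var1 + var2)
    return mean, var

mu, var = gaussian_product(10., 0.2**2, 11., 0.5**2)
print('formula:   mean {:.3f}, var {:.4f}'.format(mu, var))

# numerical check: multiply the two pdfs pointwise and renormalize
xs = np.linspace(5., 15., 10001)
dx = xs[1] - xs[0]
prod = gaussian(xs, 10., 0.2**2) * gaussian(xs, 11., 0.5**2)
prod = prod / (prod.sum() * dx)
num_mean = (xs * prod).sum() * dx
num_var = ((xs - num_mean)**2 * prod).sum() * dx
print('numerical: mean {:.3f}, var {:.4f}'.format(num_mean, num_var))
```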
### Sum of Gaussians
The sum of two Gaussians is given by
$$\begin{gathered}\mu = \mu_1 + \mu_2 \\
\sigma^2 = \sigma^2_1 + \sigma^2_2
\end{gathered}$$
There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities.
To find the density function of the sum of two Gaussian random variables we convolve the density functions of each. They are continuous functions, so the convolution is computed with an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with
$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$
This is the equation for a convolution. Now we just do some math:
$p(x) = \int\limits_{-\infty}^\infty
\frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x - z - \mu_p)^2}{2\sigma^2_p}\right]
\frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(z - \mu_z)^2}{2\sigma^2_z}\right] \, dz$
Completing the square in $z$ lets us factor the integrand into a Gaussian in $x$ that does not depend on $z$, times a normalized Gaussian in $z$:
$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{\big(x - (\mu_p + \mu_z)\big)^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty
\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(z - \frac{\sigma_z^2(x-\mu_p) + \sigma_p^2\mu_z}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dz$
The expression inside the integral is a normal distribution over $z$. The integral of a normal distribution is one, hence the integral evaluates to one. This gives us
$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{\big(x - (\mu_p + \mu_z)\big)^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$
This is in the form of a normal, where
$$\begin{gathered}\mu_x = \mu_p + \mu_z \\
\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}$$
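A quick Monte Carlo check of this result (my own sketch, not part of the book):
```
import numpy as np

np.random.seed(3)
p = np.random.normal(10.0, 0.2, 100000)   # N(10, 0.2**2)
z = np.random.normal(1.5, 0.5, 100000)    # N(1.5, 0.5**2)
x = p + z

print('mean {:.3f} (expect 11.500)'.format(x.mean()))
print('var  {:.3f} (expect {:.3f})'.format(x.var(), 0.2**2 + 0.5**2))
```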
## Computing Probabilities with scipy.stats
In this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.
The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy.
```
from scipy.stats import norm
import filterpy.stats
print(norm(2, 3).pdf(1.5))
print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3))
```
The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so:
```
n23 = norm(2, 3)
print('pdf of 1.5 is %.4f' % n23.pdf(1.5))
print('pdf of 2.5 is also %.4f' % n23.pdf(2.5))
print('pdf of 2 is %.4f' % n23.pdf(2))
```
The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function.
```
np.set_printoptions(precision=3, linewidth=50)
print(n23.rvs(size=15))
```
We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$.
```
# probability that a random value is less than the mean 2
print(n23.cdf(2))
```
We can get various properties of the distribution:
```
print('variance is', n23.var())
print('standard deviation is', n23.std())
print('mean is', n23.mean())
```
## Fat Tails
Earlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of independent random variables will be approximately normally distributed, regardless of how the individual variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions.
However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. The resulting distributions are called *fat tailed*. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.
Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability to *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an infinitesimal chance of getting a score of $-10^{300}$ or $10^{32986}$. The *tails* of a Gaussian distribution are infinitely long.
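As a rough check of my own (not from the book), we can ask how much probability the normal model places outside the physically possible range of scores:
```
from scipy.stats import norm

scores = norm(90, 13)   # mean 90, standard deviation 13
outside = scores.cdf(0) + (1 - scores.cdf(100))
print('probability outside [0, 100]: {:.1%}'.format(outside))
```
Under this model more than 20% of the probability mass sits above 100, which a real test cannot produce.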
But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution.
```
xs = np.arange(10,100, 0.05)
ys = [gaussian(x, 90, 30) for x in xs]
with interactive_plot():
    plt.plot(xs, ys, label='var=30')
plt.xlim((0,120))
plt.ylim(0, 0.09);
```
The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish incredibly minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a [*fat tail distribution*](https://en.wikipedia.org/wiki/Fat-tailed_distribution).
Kalman filters use sensors to measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution).
Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with:
```
from numpy.random import randn
def sense():
return 10 + randn()*2
```
Let's plot that signal and see what it looks like.
```
zs = [sense() for i in range(5000)]
with interactive_plot():
plt.plot(zs, lw=1)
```
That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening.
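As a quick check, we can compute those percentages directly with `scipy.stats.norm` rather than taking them on faith:
```python
from scipy.stats import norm

signal = norm(10, 2)                      # mean 10, standard deviation 2
print(signal.cdf(12) - signal.cdf(8))     # P(8 < z < 12), about 0.683
print(signal.cdf(16) - signal.cdf(4))     # P(4 < z < 16), about 0.997
```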
Now let's look at a fat tailed distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it.
```
import random
import math
def rand_student_t(df, mu=0, std=1):
"""return random number distributed by Student's t
distribution with `df` degrees of freedom with the
specified mean and standard deviation.
"""
x = random.gauss(0, std)
y = 2.0*random.gammavariate(0.5*df, 2.0)
return x / (math.sqrt(y / df)) + mu
def sense_t():
return 10 + rand_student_t(7)*2
zs = [sense_t() for i in range(5000)]
with interactive_plot():
plt.plot(zs, lw=1)
```
We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (i.e., well outside the range 4 to 16). This is what causes the 'fat tail'.
It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests.
This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft.
The code for `rand_student_t` is included in `filterpy.stats`. You may use it with
```python
from filterpy.stats import rand_student_t
```
## Summary and Key Points
This chapter is a poor introduction to statistics in general. I've only covered the concepts needed to use Gaussians in the remainder of the book, no more. What I've covered will not get you very far if you intend to read the Kalman filter literature. If this is a new topic to you I suggest reading a statistics textbook. I've always liked the Schaum series for self study, and Allen Downey's *Think Stats* [5] is also very good.
The following points **must** be understood by you before we continue:
* Normals express a continuous probability distribution
* They are completely described by two parameters: the mean ($\mu$) and variance ($\sigma^2$)
* $\mu$ is the average of all possible values
* The variance $\sigma^2$ represents how much our measurements vary from the mean
* The standard deviation ($\sigma$) is the square root of the variance ($\sigma^2$)
* Many things in nature approximate a normal distribution
## References
[1] https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb
[2] http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html
[3] http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html
[4] Huber, Peter J. *Robust Statistical Procedures*, Second Edition. Society for Industrial and Applied Mathematics, 1996.
[5] Downey, Allen B. *Think Stats*, Second Edition. O'Reilly Media, 2014.
https://github.com/AllenDowney/ThinkStats2
http://greenteapress.com/thinkstats/
# Image Analysis with the Computer Vision Service
![A robot holding a picture](./images/computer_vision.jpg)
*Computer vision* is a branch of artificial intelligence (AI) that explores the development of AI systems that can "see" the world, either in real time through a camera or by analyzing images and videos. This is made possible by the fact that digital images are essentially just arrays of numeric pixel values, and we can use those pixel values as *features* to train machine learning models that can classify images, detect discrete objects in an image, or even generate text-based summaries of photographs.
## Using the Computer Vision Cognitive Service
Microsoft Azure includes a number of *Cognitive Services* that encapsulate common AI functions, some of which can help you build computer vision solutions.
The *Computer Vision* cognitive service is an obvious starting point for our exploration of computer vision in Azure. It uses pre-trained machine learning models to analyze images and extract information from them.
For example, suppose Northwind Traders has decided to implement a "smart store," in which AI services monitor the store to identify customers requiring assistance and direct employees to help them. By using the Computer Vision service, images taken by cameras throughout the store can be analyzed to provide meaningful descriptions of what they depict.
### Create a Cognitive Services resource
Let's start by creating a **Cognitive Services** resource in your Azure subscription:
1. In another browser tab, open the Azure portal at https://portal.azure.com, signing in with your Microsoft account.
2. Click the **+ Create a resource** button, search for *Cognitive Services*, and create a **Cognitive Services** resource with the following settings:
    - **Subscription**: *Your Azure subscription*.
    - **Resource group**: *Select or create a resource group with a unique name*.
    - **Region**: *Choose any available region*.
    - **Name**: *Enter a unique name*.
    - **Pricing tier**: S0
    - **I confirm I have read and understood the notices**: Selected.
3. Wait for deployment to complete. Then go to your Cognitive Services resource and, on the **Overview** page, click the link to manage the keys for the service. You will need the endpoint and keys to connect to your Cognitive Services resource from client applications.
### Get the key and endpoint for your Cognitive Services resource
To use your Cognitive Services resource, client applications need its endpoint and authentication key:
1. In the Azure portal, on the **Keys and Endpoint** page for your Cognitive Services resource, copy the **Key1** for your resource and paste it into the code below, replacing the **YOUR_COG_KEY** placeholder.
2. Copy the **endpoint** for your resource and paste it into the code below, replacing the **YOUR_COG_ENDPOINT** placeholder.
3. Run the code below by selecting the cell and clicking the **Run cell** (▷) button to the left of the cell.
```
cog_key = 'YOUR_COG_KEY'
cog_endpoint = 'YOUR_COG_ENDPOINT'
print('Ready to use cognitive services at {} using key {}'.format(cog_endpoint, cog_key))
```
Now that you've set up the key and endpoint, you can use the Computer Vision service to analyze an image.
Run the following cell to get a description of an image in the */data/vision/store_cam1.jpg* file.
```
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials
from python_code import vision
import os
%matplotlib inline
# Get the path to an image file
image_path = os.path.join('data', 'vision', 'store_cam1.jpg')
# Get a client for the computer vision service
computervision_client = ComputerVisionClient(cog_endpoint, CognitiveServicesCredentials(cog_key))
# Get a description from the computer vision service
image_stream = open(image_path, "rb")
description = computervision_client.describe_image_in_stream(image_stream)
# Display image and caption (code in helper_scripts/vision.py)
vision.show_image_caption(image_path, description)
```
That seems reasonably accurate.
Let's try another image.
```
# Get the path to an image file
image_path = os.path.join('data', 'vision', 'store_cam2.jpg')
# Get a description from the computer vision service
image_stream = open(image_path, "rb")
description = computervision_client.describe_image_in_stream(image_stream)
# Display image and caption (code in helper_scripts/vision.py)
vision.show_image_caption(image_path, description)
```
Again, the suggested caption seems reasonably accurate.
## Analyze image features
So far, you've used the Computer Vision service to generate a descriptive caption for a couple of images, but there's much more you can do. The Computer Vision service provides analysis capabilities that can extract detailed information such as:
- The locations of common types of objects detected in the image.
- The location and approximate age of human faces in the image.
- Whether the image contains any 'adult', 'racy', or 'gory' content.
- Relevant tags that could be associated with the image in a database to make it easy to find.
Run the following code to analyze an image of a shopper.
```
# Get the path to an image file
image_path = os.path.join('data', 'vision', 'store_cam1.jpg')
# Specify the features we want to analyze
features = ['Description', 'Tags', 'Adult', 'Objects', 'Faces']
# Get an analysis from the computer vision service
image_stream = open(image_path, "rb")
analysis = computervision_client.analyze_image_in_stream(image_stream, visual_features=features)
# Show the results of analysis (code in helper_scripts/vision.py)
vision.show_image_analysis(image_path, analysis)
```
## Learn more
In addition to the capabilities you've explored in this notebook, the Computer Vision cognitive service can also:
- Identify celebrities in images.
- Detect brand logos in an image (a sketch of this is shown below).
- Perform optical character recognition (OCR) to read text in an image.
To find out more about the Computer Vision cognitive service, see the [Computer Vision documentation](https://docs.microsoft.com/azure/cognitive-services/computer-vision/).
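As an illustration of the brand logo capability mentioned above, a hypothetical sketch using the same client might look like the following. The `'Brands'` visual feature name and the `brands` attribute on the result are assumptions about the Analyze Image operation, so check the SDK documentation before relying on this:
```python
# Hypothetical sketch (not part of the original lab): request brand/logo detection
# for the same store image using the existing computervision_client.
image_path = os.path.join('data', 'vision', 'store_cam1.jpg')
with open(image_path, "rb") as image_stream:
    brand_analysis = computervision_client.analyze_image_in_stream(image_stream,
                                                                    visual_features=['Brands'])
for brand in brand_analysis.brands:      # assumed attribute: list of detected brands
    print(brand.name, brand.confidence)
```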
#1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
```
!pip install git+https://github.com/google/starthinker
```
#2. Get Cloud Project ID
To run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
```
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
```
#3. Get Client Credentials
To read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
```
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
```
#4. Enter Email Fetch Parameters
Import an emailed CM report, DV360 report, CSV, or Excel file into a BigQuery table.
1. The person executing this recipe must be the recipient of the email.
1. Give a regular expression to match the email subject, link or attachment.
1. The data downloaded will overwrite the table specified.
Modify the values below for your use case, can be done multiple times, then click play.
```
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'email_from': '', # Must match from field.
'email_to': '', # Must match to field.
'subject': '', # Regular expression to match subject.
'link': '', # Regular expression to match email.
  'attachment': '', # Regular expression to match attachment.
'dataset': '', # Existing dataset in BigQuery.
'table': '', # Name of table to be written to.
'dbm_schema': '[]', # Schema provided in JSON list format or empty list.
'is_incremental_load': False, # Append report data to table based on date column, de-duplicates.
}
print("Parameters Set To: %s" % FIELDS)
```
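For reference, a purely hypothetical example of a `dbm_schema` value is shown below; it assumes a BigQuery-style JSON list of column definitions, and the field names are made up. Leave the default `'[]'` if you do not need to supply a schema.
```python
# Hypothetical schema example; field names are illustrative only.
FIELDS['dbm_schema'] = '''[
  {"name": "report_date", "type": "DATE", "mode": "NULLABLE"},
  {"name": "impressions", "type": "INTEGER", "mode": "NULLABLE"},
  {"name": "spend", "type": "FLOAT", "mode": "NULLABLE"}
]'''
```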
#5. Execute Email Fetch
This does NOT need to be modified unless you are changing the recipe, click play.
```
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'email': {
'auth': 'user',
'read': {
'from': {'field': {'name': 'email_from','kind': 'string','order': 1,'default': '','description': 'Must match from field.'}},
'to': {'field': {'name': 'email_to','kind': 'string','order': 2,'default': '','description': 'Must match to field.'}},
'subject': {'field': {'name': 'subject','kind': 'string','order': 3,'default': '','description': 'Regular expression to match subject.'}},
'link': {'field': {'name': 'link','kind': 'string','order': 4,'default': '','description': 'Regular expression to match email.'}},
        'attachment': {'field': {'name': 'attachment','kind': 'string','order': 5,'default': '','description': 'Regular expression to match attachment.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'dataset','kind': 'string','order': 6,'default': '','description': 'Existing dataset in BigQuery.'}},
'table': {'field': {'name': 'table','kind': 'string','order': 7,'default': '','description': 'Name of table to be written to.'}},
'schema': {'field': {'name': 'dbm_schema','kind': 'json','order': 8,'default': '[]','description': 'Schema provided in JSON list format or empty list.'}},
'is_incremental_load': {'field': {'name': 'is_incremental_load','kind': 'boolean','order': 9,'default': False,'description': 'Append report data to table based on date column, de-duplicates.'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
```
```
# Import Dependencies
import pandas as pd
# Make a reference to the election_data.csv file path
csv_path = "Resources/election_data.csv"
# Set the name and path for file to output
file_to_output = "election_results.txt"
# Import the election_data.csv file as a DataFrame
election_data_df = pd.read_csv(csv_path, encoding="utf-8")
election_data_df.head()
# Establish the total number of votes cast
total_votes = len(election_data_df)
print (total_votes)
# Establish a complete list of candidates who received votes
candidates = (election_data_df["Candidate"].unique())
print (candidates)
# Place all of the data found into a summary DataFrame
summary_table = pd.DataFrame({"All Candidates": candidates})
summary_table
# Count how many votes have been entered for each candidate
candidate_votes = election_data_df["Candidate"].value_counts()
candidate_votes
# Calculate the percentage of votes each candidate won
percentage_votes = ((candidate_votes / total_votes) * 100).astype(float).map(
"{:,.2f}%".format)
percentage_votes
# The winner of the election based on popular vote
winner = candidate_votes.idxmax()
print(winner)
#### print the results (alternate method)
#### print("Election results")
#### print("--------------------------")
#### print("Total votes: " + str(total_votes))
#### print("--------------------------")
#### print("Khan:" + " " + str(percentage_votes[0]) + " ("+str(candidate_votes[0])+")")
#### print("Correy:" + " " + str(percentage_votes[1]) + " ("+str(candidate_votes[1])+")")
#### print("Li:" + " " + str(percentage_votes[2]) + " ("+str(candidate_votes[2])+")")
#### print("O'Tooley:" + " " + str(percentage_votes[3]) + " ("+str(candidate_votes[3])+")")
#### print("Winner: " + winner)
# Print the election results to terminal
election_results_summary = (
f"Election results\n"
f"-------------------------\n"
f"Total votes: {total_votes}\n"
f"-------------------------\n"
f"Khan: {percentage_votes[0]} {candidate_votes[0]}\n"
f"Correy: {percentage_votes[1]} {candidate_votes[1]}\n"
f"Li: {percentage_votes[2]} {candidate_votes[2]}\n"
f"O'Tooley: {percentage_votes[3]} {candidate_votes[3]}\n"
f"-------------------------\n"
f"Winner: {winner}\n"
f"-------------------------\n")
print(election_results_summary)
# Export the data to our text file
with open(file_to_output, "w") as txt_file:
    # Save the full election summary (including the winner) to the text file
txt_file.write(election_results_summary)
```
# Abalone Age Classification Project Report
This report is for the data analysis project for DSCI 522 (Data Science workflows); a course in the Master of Data Science program at the University of British Columbia. Content includes key exploratory data analysis, statistical summaries and figures.
## Introduction
Abalones are endangered marine snails that are found in the cold coastal water around the world. The price of an abalone is positively associated with its age. However, determining how old an abalone is can be a very complex process. Having a machine learning model that classifies the age of abalones will efficiently accelerate this manual process, and benefit researchers on abalones and add value to the domain.
In this project we are classifying abalone snails into "young" and "old" according to their number of rings based on input features such as abalone's gender, height with meat in shell, weight of the shell etc.
## About the data set
The Abalone data set that was used in this project was sourced from the UC Irvine Machine Learning Repository and published in 1995. It can be found <a href="https://archive-beta.ics.uci.edu/ml/datasets/abalone" >here</a>. Each row in the data set represents the attributes and physical measurements of an abalone, including number of rings, sex, length, diameter, height, weight, etc. The number of rings was counted manually by the researchers using a microscope. The age of an abalone is its number of rings plus 1.5, measured in years.
This dataset was developed in 1995. Despite the age of this dataset, predictive models that can be made from this dataset are likely still relevant for the modern day. It takes thousands to millions of years in order for any meaningful changes to be made to the biological characteristics and features of animals. Darwin's theory of evolution and natural selection applies to all animals, including abalone. Thus, the biological features of the abalone within this dataset are likely still relevant today, and meaningful predictive models can still be created from this dataset.
The sex variable in this dataset includes three categories: female, male and infant. This is a curious component of the dataset since abalone sex is actually binary (male or female). Therefore, infant is not really considered a sex of abalone but instead is in reference to its age. Thus, this could pose a potential limitation in the predictive model which we will discuss later.
Abalones of different sexes have different body compositions and distinct economic value.
Missing values have already been removed from the data set, and the ranges of the continuous values have been scaled for use with an ANN (by dividing by 200).
In the research paper "A Quantitative Comparison of Dystal and Backpropagation" that David Clark, Zoltan Schreter and Anthony Adams submitted to the Australian Conference on Neural Networks (ACNN'96), the original abalone data set was treated as a 3-category classification problem (grouping ring classes 1-8, 9 and 10, and 11 and up). In our project, we treat the data set as a 2-category classification problem (grouping ring classes less than or equal to 11, and more than 11).
Here, we aim to answer one research question with a Logistic Regression classification model:
- **Given the input features including sex, size and weight, is an abalone young or old?**
## Findings and results
Considering that the price of an abalone is positively associated with its age, creating a predictive model that is able to automate the manual process of determining the age of an abalone would be valuable to those wishing to determine the age of an abalone, whether it is researchers or those interested in making a profit in the abalone market. Of note, the number of rings present on the abalone directly determines the age of the abalone. For this project, we are separating the abalone into two classes, young and old, based on a threshold on the rings. Moreover, we are using a threshold whereby abalone that contain more than 11 rings would be placed in the old class and otherwise the abalone would be placed in the young class.
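In code, that thresholding rule might look something like the following sketch; the DataFrame and column names here are illustrative, not the project's actual variables:
```python
import numpy as np
import pandas as pd

# Illustrative only: the ring counts are made up; the rule matches the report
# (more than 11 rings -> "old", otherwise "young").
abalone = pd.DataFrame({"rings": [4, 9, 11, 12, 15]})
abalone["age_class"] = np.where(abalone["rings"] > 11, "old", "young")
print(abalone)
```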
```
from IPython.display import Image, HTML
Image("../results/eda/target_distribution.png")
```
Figure 1. There is an imbalance in the distribution of young and old abalone in the training data.
After looking at the distributions of young and old abalone in the training data, it's quite clear that there is a class imbalance in the age of the abalone (Figure 1). In fact, the number of young abalone is around triple the number of old abalone in the training data. In the model, we will report several metrics, including accuracy, precision, recall, f1 score, ROC AUC, and average precision. We focus primarily on the f1 score and ROC AUC because we want to assess the overall performance of our model rather than weighting one class more heavily than the other.
Next, we looked to elaborate upon the distribution of numerical features in the training data in relation to the target class (Figure 2). The distribution of the numerical features seemed to follow a similar shape for both the old class and the young class. The distribution of the length and diameter features was left-skewed, while the whole weight, viscera weight, shucked weight, and shell weight appeared to have a right-skewed distribution. The height feature did not have a clear skewness to the distribution.
```
Image("../results/eda/histograms.png")
```
Figure 2. The distributions of the numerical features are similar between young and old abalone, but may provide insight into slight differences in the features between young and old abalone.
By looking at these histograms, we can see the pronounced effect of the class imbalance. The majority of the values in each numerical feature histogram has a higher proportion of young abalone examples compared to old abalone examples. It's difficult to say for certain whether there are clear distinctions in these features between the young and old class. However, there are a few areas to be aware of that might help us understand how the model might make predictions. Observing the length feature, we can see that when the length of the abalone is below 0.38, almost all of the examples are from the young class, with very few examples from the old class. Similarly for the diameter feature, when the diameter of the abalone is below 0.25, the majority of examples are from the young class, with hardly any examples from the old class. This aligns with our intuitions about abalone, since we should expect younger abalone to be smaller (i.e smaller diameter and length).
There are rare occasions when there are examples that are predominantly from the old class. For example, when shell weight is above 0.6, the majority of examples are of the old class. Additionally, when whole weight is above 2.2, the old class begins to be the more predominant class. Again, this aligns with our intuitions about abalone. We would expect older abalone to be larger, and thus, have a larger whole weight. In terms of shell weight, perhaps abalone of the old class require a larger shell for their larger bodies compared to young abalone, which could explain why there are more examples of old abalone which have a shell weight above 0.6.
Observing the distribution of sexes in the training data, there appears to be a relatively even spread of Female (F), Male (M) within both the young and old target classes, whereas there are a greater number of Infant (I) examples in the young class compared to the old class (Figure 3). Specifically, there were 354 examples of abalone that were male and 340 examples of abalone that were Female in the old class and there were 882 examples of abalone that were male and 684 examples of abalone that were female in the young class. For the Infant class, there was a greater number of examples of Infant in the young class (1009) compared to the old class (72).
```
Image("../results/eda/sex_dist.png")
```
Figure 3. There is an even distribution of male and female abalone within the young and old classes, but a major imbalance in the infant category between young and old abalone.
As expected, there is roughly no bias toward one particular sex (Male or Female) depending on whether the abalone is old or young. However, the greater number of Infant abalone in the young class does give pause. Our intuitions do indeed tell us that more Infant abalone would be classified as young, but the bigger issue in this dataset could be that we are predicting whether an abalone is young or old after being given information about whether an abalone is an Infant, which creates redundancy in the predictive model. It is curious why the researchers decided to include the category Infant within the sex feature column. Perhaps when an abalone is an infant, it is difficult to classify the abalone as Male or Female. Without speaking to domain experts, it is difficult to determine the significance of having an Infant category within the Sex feature.
Since the target classes, old and young, are directly determined by counting the number of rings, we were able to determine the correlation of numerical features with the number of rings, as well as the correlation among other features (Figure 4). Based on the correlation values, many of the features are highly correlated with other features. As for correlation with rings, the shell weight seemed to have the greatest correlation value (0.69) with rings, while shucked weight appeared to have the lowest correlation with rings (0.54). Based on the correlation heat map, it appears that the numerical features are at least moderately correlated with rings.
```
Image("../results/eda/correlation_map.png")
```
Figure 4. Features are highly correlated with each other and moderately correlated with rings, which is a proxy for the age of the abalone.
These correlation values give us some insight into how the predictive model might make its decision. For example, since shell weight and Rings have a moderately high correlation, the shell weight might be an important feature for predicting the age of the abalone. In the context of abalone, this could mean that older abalone require a heavier shell, whereas younger abalone may only need a lighter shell. With these correlation values, our model might be able to pick up on these types of associations. One trend to note is that many of the explanatory features are correlated with each other. For example, the diameter of an abalone is highly correlated with the length of the abalone. This is quite understandable, considering that as an abalone gets larger in diameter, one might expect the length of the abalone to also get larger. However, this does pose some implications for our model. It begs the question, how essential is it to include every single explanatory feature in this model? If diameter is encapsulating the information provided by length, would it be necessary to include both of these features? Discussing with domain experts can help us to determine which features may be more essential, or in the event that we lack access to domain experts, we could conduct automated feature selection in the future to address the redundancy in explanatory features.
## Model results
An important note about the training data is that there exists a class imbalance between old and young target classes. With more examples of the target class, young, this could have an effect on the accuracy of the model. Because of this fact, we are considering the f1 score and ROC AUC to account for this class imbalance.
```
Image("../results/model/cv_result.png")
```
Figure 5. As the hyperparameter C of logistic regression increases, the validation score increases before leveling off at higher values of C.
We fit a logistic regression on the training data since we are dealing with a binary classification problem. The model encodes the target class old as 0 and young as 1. We first built a preprocessor that transformed the Sex category using one-hot encoding and applied a standard scaler to the other numeric features. We then used grid search cross-validation to determine the best hyperparameter for the logistic regression. As we can see, as the value of C increases, the model's performance on the validation sets increases and plateaus at around $C = 100$. Note that the hyperparameter C of logistic regression is inversely related to the regularization strength (complexity penalty) of the model: larger values of C mean less regularization. Based on the tuning results, the best logistic regression model occurs at $C = 100$ (Figure 5 and Table 1).
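A minimal sketch of the preprocessing and tuning approach described above is shown below; the column names, the scoring metric, and the exact C grid are assumptions for illustration rather than the project's actual code:
```python
# Hedged sketch of the model-fitting approach described in this section.
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_cols = ["length", "diameter", "height", "whole_weight",
                "shucked_weight", "viscera_weight", "shell_weight"]
preprocessor = ColumnTransformer([
    ("onehot", OneHotEncoder(handle_unknown="ignore"), ["sex"]),
    ("scale", StandardScaler(), numeric_cols),
])
pipe = make_pipeline(preprocessor, LogisticRegression(max_iter=1000))
param_grid = {"logisticregression__C": [0.001, 0.01, 0.1, 1, 10, 100, 1000]}
search = GridSearchCV(pipe, param_grid, scoring="f1", n_jobs=-1)
# search.fit(X_train, y_train)   # X_train / y_train are assumed to exist
# search.best_params_            # expected to select C = 100 per the report
```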
```
HTML('../results/model/train_result_table.html')
```
Table 1. A closer look to each parameter and validation score
```
HTML("../results/model/test_result_table.html")
```
Table 2. Test results on different metrics
After fitting the model, we used a test set to assess how the model would perform on novel examples. The evaluation metrics on the test data set are shown in Table 2. Based on the model's performance on the test set, the f1 score is 0.90, where the f1 score is the harmonic mean of the model's recall and precision.
To understand how the features in the dataset are influencing the model's predictions, we calculated the coefficients to demonstrate the importance of each feature in the model (Figure 6 and Table 3).
```
HTML("../results/model/coeff_sorted.html")
```
Table 3. Feature importance in our logistic model
```
Image("../results/model/coeff_bar.png")
```
Figure 6. Shucked weight appears to be an important feature in the predictive model for predicting the young class, while Whole weight is an important feature in the predictive model for predicting the old class.
Based on the coefficients, shucked weight influences the model the most towards predicting that an abalone is young, whereas whole_weight influences the model the most towards predicting that an abalone is old.
It is interesting to observe that the whole weight of an abalone and the shucked weight of an abalone influence the predictions in opposite directions. By observing the distribution of shucked weight (Figure 2), the shapes of the distributions are quite similar between old and young abalone, and at no point in the distribution are there more examples of old abalone than young abalone. In contrast, above a certain threshold there are more examples of old abalone in the whole weight and shell weight distributions. Understanding the distributions of these weight features helps us to understand why the different types of weight influence the prediction in opposite directions. It would be useful to consult a domain expert to see if they have insight into the differences between old and young abalone with regard to the different weight features. However, it's imperative to take these feature importances and model performance with a grain of salt, considering that the dataset is imbalanced and that these statistical models don't necessarily explain how the real world works.
## Summary of Findings
Based on the model results, we can see that the logistic regression model is performing well on new examples of abalone, as described by an f1 score of 0.90 and ROC AUC score of 0.86. We focus on these two metrics because they evaluate the overall performance of the model instead of weighing one class over another. Moreover, given a certain set of biological features of abalone, we're able to predict whether an abalone is old or young fairly accurately while minimizing false negatives and false positives. We were able to obtain these results by testing different values for the model's hyperparameter, C, on various validation sets of the abalone training data in order to obtain an optimal logistic regression model (where $C = 100$). We also obtained the coefficients of the various biological features that helped us understand how the features were influencing the prediction. The weight features (shucked weight, whole weight, and shell weight) specifically had a large influence on the model's predictions. Contrasting the distributions of these weight features between the old and young abalone helped us to investigate why shucked weight was having an opposite predictive effect in comparison with whole weight and shell weight, although consulting with domain experts may help us further understand this opposing effect. Overall, the model's ability to predict whether an abalone is young or old based on specific biological characteristics is good but should be taken with a grain of salt given the imbalance of young and old abalone within the dataset, as well as some of the limitations of the included biological characteristics.
## Limitations and assumptions
One limitation is that we found some of the input features to be highly correlated. For example, the correlation between the whole weight and length of an abalone is 0.97, indicating that these two features are highly positively correlated. This potentially raises multicollinearity concerns. As a result, it can become difficult for the model to estimate the relationship between each independent variable and the dependent variable independently. One method to address correlated features is to use recursive feature elimination to exclude features with little importance so we can fit a more interpretable model; a sketch of this idea is shown below. Additionally, the high correlation between many of the features may suggest that many of the features are redundant and that including all of them may be unnecessary. For example, including both diameter and length conveys very similar information about the biology of the abalone, and may indicate that it is unnecessary to include both of these biological features. Since our primary goal is to classify abalone age (old or young), and we do not need to understand the role of each independent variable such as weight and height, we did not take additional actions to reduce the multicollinearity problem in this project.
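As a rough, self-contained illustration of the recursive feature elimination idea (run on synthetic data rather than the abalone features):
```python
# Illustrative only: RFECV on synthetic data to show the mechanics of
# recursive feature elimination with cross-validation.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=8, n_informative=4, random_state=0)
selector = RFECV(LogisticRegression(max_iter=1000), cv=5, scoring="f1")
selector.fit(X, y)
print(selector.support_)   # boolean mask of the features that were kept
print(selector.ranking_)   # 1 marks selected features; higher numbers were eliminated earlier
```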
We fit a logistic regression and tuned it by using grid search. Other classification models like decision tree or KNN can be used in this project. We chose logistic regression for its good interpretability and its performance. However, with better feature engineering or better model selection, the performance can be improved.
With regard to the sex feature, the infant category is included in this project, which may be unnecessary and may harm the validity of the model. It is interesting that the researchers who collected this data included an Infant category within the sex feature, and it makes us ponder the significance of its inclusion. Perhaps with consultation with domain experts, the significance of this collection method can be elucidated. In future analyses, and after consultation with domain experts, we might consider removing the Infant category or the sex feature altogether, since being an infant inherently indicates that the abalone is young, and therefore makes part of the predictive model redundant.
The lack of domain knowledge to guide feature engineering of the model inputs was a pronounced limitation throughout the project. Because of this limitation, we included all features in the data set in our classification model for predicting age. However, once greater knowledge is gained through consultation with domain experts, we may be able to conduct additional feature engineering and feature selection that would potentially improve the model's performance and reliability.
## Future directions
Future analyses can be performed to improve this classification model. For example, we are interested in adding additional features such as: the geographical location where the abalones are collected, abalone species, color, number of predators and living environment etc. Consultation with domain experts must also be considered for appropriate and accurate analysis directions.
## References
```{bibliography} references.bib
:all:
```
# Hill Climbing
---
In this notebook, we will train an agent using hill climbing with adaptive noise scaling on OpenAI Gym's CartPole environment.
### 1. Import the Necessary Packages
```
import gym
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
```
### 2. Define the Policy
```
env = gym.make('CartPole-v0')
print('observation space:', env.observation_space)
print('action space:', env.action_space)
class Policy():
def __init__(self, s_size=4, a_size=2):
self.w = 1e-4*np.random.rand(s_size, a_size) # weights for simple linear policy: state_space x action_space
def forward(self, state):
x = np.dot(state, self.w)
return np.exp(x)/sum(np.exp(x)) #softmax
def act(self, state):
probs = self.forward(state)
#action = np.random.choice(2, p=probs) # option 1: stochastic policy
action = np.argmax(probs) # option 2: deterministic policy
return action
```
### 3. Train the Agent with Stochastic Policy Search
```
env = gym.make('CartPole-v0')
env.seed(0)
np.random.seed(0)
policy = Policy()
def hill_climbing(n_episodes=1000, max_t=1000, gamma=1.0, print_every=100, noise_scale=1e-2):
"""Implementation of hill climbing with adaptive noise scaling.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
gamma (float): discount rate
print_every (int): how often to print average score (over last 100 episodes)
noise_scale (float): standard deviation of additive noise
"""
scores_deque = deque(maxlen=100)
scores = []
best_R = -np.Inf
best_w = policy.w
for i_episode in range(1, n_episodes+1):
rewards = []
state = env.reset()
for t in range(max_t):
action = policy.act(state)
state, reward, done, _ = env.step(action)
rewards.append(reward)
if done:
break
scores_deque.append(sum(rewards))
scores.append(sum(rewards))
discounts = [gamma**i for i in range(len(rewards)+1)]
R = sum([a*b for a,b in zip(discounts, rewards)])
if R >= best_R: # found better weights
best_R = R
best_w = policy.w
noise_scale = max(1e-3, noise_scale / 2)
policy.w += noise_scale * np.random.rand(*policy.w.shape)
else: # did not find better weights
noise_scale = min(2, noise_scale * 2)
policy.w = best_w + noise_scale * np.random.rand(*policy.w.shape)
if i_episode % print_every == 0:
print('Episode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
if np.mean(scores_deque)>=195.0:
print('Environment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_deque)))
policy.w = best_w
break
return scores
scores = hill_climbing()
```
### 4. Plot the Scores
```
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
```
### 5. Watch a Smart Agent!
```
env = gym.make('CartPole-v0')
state = env.reset()
for t in range(200):
action = policy.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
```
# Bayesian Survival Analysis
Author: Austin Rochford
[Survival analysis](https://en.wikipedia.org/wiki/Survival_analysis) studies the distribution of the time to an event. Its applications span many fields across medicine, biology, engineering, and social science. This tutorial shows how to fit and analyze a Bayesian survival model in Python using PyMC3.
We illustrate these concepts by analyzing a [mastectomy data set](https://vincentarelbundock.github.io/Rdatasets/doc/HSAUR/mastectomy.html) from `R`'s [HSAUR](https://cran.r-project.org/web/packages/HSAUR/index.html) package.
```
import arviz as az
import numpy as np
import pandas as pd
import pymc3 as pm
import theano
%matplotlib inline
from matplotlib import pyplot as plt
from pymc3.distributions.timeseries import GaussianRandomWalk
from theano import tensor as T
RANDOM_SEED = 8927
rng = np.random.default_rng(RANDOM_SEED)
az.style.use("arviz-darkgrid")
try:
df = pd.read_csv("../data/mastectomy.csv")
except FileNotFoundError:
df = pd.read_csv(pm.get_data("mastectomy.csv"))
df.event = df.event.astype(np.int64)
df.metastasized = (df.metastasized == "yes").astype(np.int64)
n_patients = df.shape[0]
patients = np.arange(n_patients)
df.head()
n_patients
```
Each row represents observations from a woman diagnosed with breast cancer who underwent a mastectomy. The column `time` represents the time (in months) post-surgery that the woman was observed. The column `event` indicates whether or not the woman died during the observation period. The column `metastasized` represents whether the cancer had [metastasized](https://en.wikipedia.org/wiki/Metastatic_breast_cancer) prior to surgery.
This tutorial analyzes the relationship between survival time post-mastectomy and whether or not the cancer had metastasized.
#### A crash course in survival analysis
First we introduce a (very little) bit of theory. If the random variable $T$ is the time to the event we are studying, survival analysis is primarily concerned with the survival function
$$S(t) = P(T > t) = 1 - F(t),$$
where $F$ is the [CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function) of $T$. It is mathematically convenient to express the survival function in terms of the [hazard rate](https://en.wikipedia.org/wiki/Survival_analysis#Hazard_function_and_cumulative_hazard_function), $\lambda(t)$. The hazard rate is the instantaneous probability that the event occurs at time $t$ given that it has not yet occurred. That is,
$$\begin{align*}
\lambda(t)
& = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t\ |\ T > t)}{\Delta t} \\
& = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t)}{\Delta t \cdot P(T > t)} \\
& = \frac{1}{S(t)} \cdot \lim_{\Delta t \to 0} \frac{S(t) - S(t + \Delta t)}{\Delta t}
= -\frac{S'(t)}{S(t)}.
\end{align*}$$
Solving this differential equation for the survival function shows that
$$S(t) = \exp\left(-\int_0^t \lambda(s)\ ds\right).$$
This representation of the survival function shows that the cumulative hazard function
$$\Lambda(t) = \int_0^t \lambda(s)\ ds$$
is an important quantity in survival analysis, since we may concisely write $S(t) = \exp(-\Lambda(t)).$
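For example, a constant hazard $\lambda(t) = \lambda$ gives $\Lambda(t) = \lambda t$ and $S(t) = e^{-\lambda t}$, the familiar exponential survival time; the piecewise-constant hazards used later in this tutorial generalize exactly this case.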
An important, but subtle, point in survival analysis is [censoring](https://en.wikipedia.org/wiki/Survival_analysis#Censoring). Even though the quantity we are interested in estimating is the time between surgery and death, we do not observe the death of every subject. At the point in time that we perform our analysis, some of our subjects will thankfully still be alive. In the case of our mastectomy study, `df.event` is one if the subject's death was observed (the observation is not censored) and is zero if the death was not observed (the observation is censored).
```
df.event.mean()
```
Just over 40% of our observations are censored. We visualize the observed durations and indicate which observations are censored below.
```
fig, ax = plt.subplots(figsize=(8, 6))
ax.hlines(
patients[df.event.values == 0], 0, df[df.event.values == 0].time, color="C3", label="Censored"
)
ax.hlines(
patients[df.event.values == 1], 0, df[df.event.values == 1].time, color="C7", label="Uncensored"
)
ax.scatter(
df[df.metastasized.values == 1].time,
patients[df.metastasized.values == 1],
color="k",
zorder=10,
label="Metastasized",
)
ax.set_xlim(left=0)
ax.set_xlabel("Months since mastectomy")
ax.set_yticks([])
ax.set_ylabel("Subject")
ax.set_ylim(-0.25, n_patients + 0.25)
ax.legend(loc="center right");
```
When an observation is censored (`df.event` is zero), `df.time` is not the subject's survival time. All we can conclude from such a censored observation is that the subject's true survival time exceeds `df.time`.
This is enough basic survival analysis theory for the purposes of this tutorial; for a more extensive introduction, consult Aalen et al.^[Aalen, Odd, Ornulf Borgan, and Hakon Gjessing. Survival and event history analysis: a process point of view. Springer Science & Business Media, 2008.]
#### Bayesian proportional hazards model
The two most basic estimators in survival analysis are the [Kaplan-Meier estimator](https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator) of the survival function and the [Nelson-Aalen estimator](https://en.wikipedia.org/wiki/Nelson%E2%80%93Aalen_estimator) of the cumulative hazard function. However, since we want to understand the impact of metastization on survival time, a risk regression model is more appropriate. Perhaps the most commonly used risk regression model is [Cox's proportional hazards model](https://en.wikipedia.org/wiki/Proportional_hazards_model). In this model, if we have covariates $\mathbf{x}$ and regression coefficients $\beta$, the hazard rate is modeled as
$$\lambda(t) = \lambda_0(t) \exp(\mathbf{x} \beta).$$
Here $\lambda_0(t)$ is the baseline hazard, which is independent of the covariates $\mathbf{x}$. In this example, the covariates are the one-dimensional vector `df.metastasized`.
Unlike in many regression situations, $\mathbf{x}$ should not include a constant term corresponding to an intercept. If $\mathbf{x}$ includes a constant term corresponding to an intercept, the model becomes [unidentifiable](https://en.wikipedia.org/wiki/Identifiability). To illustrate this unidentifiability, suppose that
$$\lambda(t) = \lambda_0(t) \exp(\beta_0 + \mathbf{x} \beta) = \lambda_0(t) \exp(\beta_0) \exp(\mathbf{x} \beta).$$
If $\tilde{\beta}_0 = \beta_0 + \delta$ and $\tilde{\lambda}_0(t) = \lambda_0(t) \exp(-\delta)$, then $\lambda(t) = \tilde{\lambda}_0(t) \exp(\tilde{\beta}_0 + \mathbf{x} \beta)$ as well, making the model with $\beta_0$ unidentifiable.
In order to perform Bayesian inference with the Cox model, we must specify priors on $\beta$ and $\lambda_0(t)$. We place a normal prior on $\beta$, $\beta \sim N(\mu_{\beta}, \sigma_{\beta}^2),$ where $\mu_{\beta} \sim N(0, 10^2)$ and $\sigma_{\beta} \sim U(0, 10)$.
A suitable prior on $\lambda_0(t)$ is less obvious. We choose a semiparametric prior, where $\lambda_0(t)$ is a piecewise constant function. This prior requires us to partition the time range in question into intervals with endpoints $0 \leq s_1 < s_2 < \cdots < s_N$. With this partition, $\lambda_0 (t) = \lambda_j$ if $s_j \leq t < s_{j + 1}$. With $\lambda_0(t)$ constrained to have this form, all we need to do is choose priors for the $N - 1$ values $\lambda_j$. We use independent vague priors $\lambda_j \sim \operatorname{Gamma}(10^{-2}, 10^{-2}).$ For our mastectomy example, we make each interval three months long.
```
interval_length = 3
interval_bounds = np.arange(0, df.time.max() + interval_length + 1, interval_length)
n_intervals = interval_bounds.size - 1
intervals = np.arange(n_intervals)
```
We see how deaths and censored observations are distributed in these intervals.
```
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(
df[df.event == 0].time.values,
bins=interval_bounds,
lw=0,
color="C3",
alpha=0.5,
label="Censored",
)
ax.hist(
df[df.event == 1].time.values,
bins=interval_bounds,
lw=0,
color="C7",
alpha=0.5,
label="Uncensored",
)
ax.set_xlim(0, interval_bounds[-1])
ax.set_xlabel("Months since mastectomy")
ax.set_yticks([0, 1, 2, 3])
ax.set_ylabel("Number of observations")
ax.legend();
```
With the prior distributions on $\beta$ and $\lambda_0(t)$ chosen, we now show how the model may be fit using MCMC simulation with `pymc3`. The key observation is that the piecewise-constant proportional hazard model is [closely related](http://data.princeton.edu/wws509/notes/c7s4.html) to a Poisson regression model. (The models are not identical, but their likelihoods differ by a factor that depends only on the observed data and not the parameters $\beta$ and $\lambda_j$. For details, see Germán Rodríguez's WWS 509 [course notes](http://data.princeton.edu/wws509/notes/c7s4.html).)
We define indicator variables based on whether the $i$-th subject died in the $j$-th interval,
$$d_{i, j} = \begin{cases}
1 & \textrm{if subject } i \textrm{ died in interval } j \\
0 & \textrm{otherwise}
\end{cases}.$$
```
last_period = np.floor((df.time - 0.01) / interval_length).astype(int)
death = np.zeros((n_patients, n_intervals))
death[patients, last_period] = df.event
```
We also define $t_{i, j}$ to be the amount of time the $i$-th subject was at risk in the $j$-th interval.
```
exposure = np.greater_equal.outer(df.time.to_numpy(), interval_bounds[:-1]) * interval_length
exposure[patients, last_period] = df.time - interval_bounds[last_period]
```
Finally, denote the risk incurred by the $i$-th subject in the $j$-th interval as $\lambda_{i, j} = \lambda_j \exp(\mathbf{x}_i \beta)$.
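(To see concretely why the Poisson trick mentioned above works: the piecewise-exponential likelihood contribution of subject $i$ in interval $j$ is $\lambda_{i, j}^{d_{i, j}} \exp(-t_{i, j} \lambda_{i, j})$, while a Poisson likelihood with mean $t_{i, j} \lambda_{i, j}$ gives $(t_{i, j} \lambda_{i, j})^{d_{i, j}} \exp(-t_{i, j} \lambda_{i, j}) / d_{i, j}!$; the two differ only by the factor $t_{i, j}^{d_{i, j}} / d_{i, j}!$, which involves neither $\beta$ nor $\lambda_j$.)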
We may approximate $d_{i, j}$ with a Poisson random variable with mean $t_{i, j}\ \lambda_{i, j}$. This approximation leads to the following `pymc3` model.
```
coords = {"intervals": intervals}
with pm.Model(coords=coords) as model:
lambda0 = pm.Gamma("lambda0", 0.01, 0.01, dims="intervals")
beta = pm.Normal("beta", 0, sigma=1000)
lambda_ = pm.Deterministic("lambda_", T.outer(T.exp(beta * df.metastasized), lambda0))
mu = pm.Deterministic("mu", exposure * lambda_)
obs = pm.Poisson("obs", mu, observed=death)
```
We now sample from the model.
```
n_samples = 1000
n_tune = 1000
with model:
idata = pm.sample(
n_samples,
tune=n_tune,
target_accept=0.99,
return_inferencedata=True,
random_seed=RANDOM_SEED,
)
```
We see that the hazard rate for subjects whose cancer has metastasized is about one and a half times the rate of those whose cancer has not metastasized.
```
np.exp(idata.posterior["beta"]).mean()
az.plot_posterior(idata, var_names=["beta"]);
az.plot_autocorr(idata, var_names=["beta"]);
```
We now examine the effect of metastization on both the cumulative hazard and on the survival function.
```
base_hazard = idata.posterior["lambda0"]
met_hazard = idata.posterior["lambda0"] * np.exp(idata.posterior["beta"])
def cum_hazard(hazard):
return (interval_length * hazard).cumsum(axis=-1)
def survival(hazard):
return np.exp(-cum_hazard(hazard))
def get_mean(trace):
return trace.mean(("chain", "draw"))
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
az.plot_hdi(
interval_bounds[:-1],
cum_hazard(base_hazard),
ax=hazard_ax,
smooth=False,
color="C0",
fill_kwargs={"label": "Had not metastasized"},
)
az.plot_hdi(
interval_bounds[:-1],
cum_hazard(met_hazard),
ax=hazard_ax,
smooth=False,
color="C1",
fill_kwargs={"label": "Metastasized"},
)
hazard_ax.plot(interval_bounds[:-1], get_mean(cum_hazard(base_hazard)), color="darkblue")
hazard_ax.plot(interval_bounds[:-1], get_mean(cum_hazard(met_hazard)), color="maroon")
hazard_ax.set_xlim(0, df.time.max())
hazard_ax.set_xlabel("Months since mastectomy")
hazard_ax.set_ylabel(r"Cumulative hazard $\Lambda(t)$")
hazard_ax.legend(loc=2)
az.plot_hdi(interval_bounds[:-1], survival(base_hazard), ax=surv_ax, smooth=False, color="C0")
az.plot_hdi(interval_bounds[:-1], survival(met_hazard), ax=surv_ax, smooth=False, color="C1")
surv_ax.plot(interval_bounds[:-1], get_mean(survival(base_hazard)), color="darkblue")
surv_ax.plot(interval_bounds[:-1], get_mean(survival(met_hazard)), color="maroon")
surv_ax.set_xlim(0, df.time.max())
surv_ax.set_xlabel("Months since mastectomy")
surv_ax.set_ylabel("Survival function $S(t)$")
fig.suptitle("Bayesian survival model");
```
We see that the cumulative hazard for metastasized subjects increases more rapidly initially (through about seventy months), after which it increases roughly in parallel with the baseline cumulative hazard.
These plots also show the pointwise 95% high posterior density interval for each function. One of the distinct advantages of the Bayesian model fit with `pymc3` is the inherent quantification of uncertainty in our estimates.
##### Time varying effects
Another of the advantages of the model we have built is its flexibility. From the plots above, we may reasonably believe that the additional hazard due to metastization varies over time; it seems plausible that cancer that has metastasized increases the hazard rate immediately after the mastectomy, but that the risk due to metastization decreases over time. We can accommodate this mechanism in our model by allowing the regression coefficients to vary over time. In the time-varying coefficient model, if $s_j \leq t < s_{j + 1}$, we let $\lambda(t) = \lambda_j \exp(\mathbf{x} \beta_j).$ The sequence of regression coefficients $\beta_1, \beta_2, \ldots, \beta_{N - 1}$ forms a normal random walk with $\beta_1 \sim N(0, 1)$, $\beta_j\ |\ \beta_{j - 1} \sim N(\beta_{j - 1}, 1)$.
We implement this model in `pymc3` as follows.
```
coords = {"intervals": intervals}
with pm.Model(coords=coords) as time_varying_model:
lambda0 = pm.Gamma("lambda0", 0.01, 0.01, dims="intervals")
beta = GaussianRandomWalk("beta", tau=1.0, dims="intervals")
lambda_ = pm.Deterministic("h", lambda0 * T.exp(T.outer(T.constant(df.metastasized), beta)))
mu = pm.Deterministic("mu", exposure * lambda_)
obs = pm.Poisson("obs", mu, observed=death)
```
We proceed to sample from this model.
```
with time_varying_model:
time_varying_idata = pm.sample(
n_samples,
tune=n_tune,
return_inferencedata=True,
target_accept=0.99,
random_seed=RANDOM_SEED,
)
az.plot_forest(time_varying_idata, var_names=["beta"]);
```
We see from the plot of $\beta_j$ over time below that initially $\beta_j > 0$, indicating an elevated hazard rate due to metastization, but that this risk declines over time, with $\beta_j < 0$ eventually.
```
fig, ax = plt.subplots(figsize=(8, 6))
beta_eti = time_varying_idata.posterior["beta"].quantile((0.025, 0.975), dim=("chain", "draw"))
beta_eti_low = beta_eti.sel(quantile=0.025)
beta_eti_high = beta_eti.sel(quantile=0.975)
ax.fill_between(interval_bounds[:-1], beta_eti_low, beta_eti_high, color="C0", alpha=0.25)
beta_hat = time_varying_idata.posterior["beta"].mean(("chain", "draw"))
ax.step(interval_bounds[:-1], beta_hat, color="C0")
ax.scatter(
interval_bounds[last_period[(df.event.values == 1) & (df.metastasized == 1)]],
beta_hat.isel(intervals=last_period[(df.event.values == 1) & (df.metastasized == 1)]),
color="C1",
zorder=10,
label="Died, cancer metastasized",
)
ax.scatter(
interval_bounds[last_period[(df.event.values == 0) & (df.metastasized == 1)]],
beta_hat.isel(intervals=last_period[(df.event.values == 0) & (df.metastasized == 1)]),
color="C0",
zorder=10,
label="Censored, cancer metastasized",
)
ax.set_xlim(0, df.time.max())
ax.set_xlabel("Months since mastectomy")
ax.set_ylabel(r"$\beta_j$")
ax.legend();
```
The coefficients $\beta_j$ begin declining rapidly around one hundred months post-mastectomy, which seems reasonable, given that only three of the twelve subjects whose cancer had metastasized and who died during the study lived past this point.
The change in our estimate of the cumulative hazard and survival functions due to time-varying effects is also quite apparent in the following plots.
```
tv_base_hazard = time_varying_idata.posterior["lambda0"]
tv_met_hazard = time_varying_idata.posterior["lambda0"] * np.exp(
time_varying_idata.posterior["beta"]
)
fig, ax = plt.subplots(figsize=(8, 6))
ax.step(
interval_bounds[:-1],
cum_hazard(base_hazard.mean(("chain", "draw"))),
color="C0",
label="Had not metastasized",
)
ax.step(
interval_bounds[:-1],
cum_hazard(met_hazard.mean(("chain", "draw"))),
color="C1",
label="Metastasized",
)
ax.step(
interval_bounds[:-1],
cum_hazard(tv_base_hazard.mean(("chain", "draw"))),
color="C0",
linestyle="--",
label="Had not metastasized (time varying effect)",
)
ax.step(
interval_bounds[:-1],
cum_hazard(tv_met_hazard.mean(dim=("chain", "draw"))),
color="C1",
linestyle="--",
label="Metastasized (time varying effect)",
)
ax.set_xlim(0, df.time.max() - 4)
ax.set_xlabel("Months since mastectomy")
ax.set_ylim(0, 2)
ax.set_ylabel(r"Cumulative hazard $\Lambda(t)$")
ax.legend(loc=2);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
az.plot_hdi(
interval_bounds[:-1],
cum_hazard(tv_base_hazard),
ax=hazard_ax,
color="C0",
smooth=False,
fill_kwargs={"label": "Had not metastasized"},
)
az.plot_hdi(
interval_bounds[:-1],
cum_hazard(tv_met_hazard),
ax=hazard_ax,
smooth=False,
color="C1",
fill_kwargs={"label": "Metastasized"},
)
hazard_ax.plot(interval_bounds[:-1], get_mean(cum_hazard(tv_base_hazard)), color="darkblue")
hazard_ax.plot(interval_bounds[:-1], get_mean(cum_hazard(tv_met_hazard)), color="maroon")
hazard_ax.set_xlim(0, df.time.max())
hazard_ax.set_xlabel("Months since mastectomy")
hazard_ax.set_ylim(0, 2)
hazard_ax.set_ylabel(r"Cumulative hazard $\Lambda(t)$")
hazard_ax.legend(loc=2)
az.plot_hdi(interval_bounds[:-1], survival(tv_base_hazard), ax=surv_ax, smooth=False, color="C0")
az.plot_hdi(interval_bounds[:-1], survival(tv_met_hazard), ax=surv_ax, smooth=False, color="C1")
surv_ax.plot(interval_bounds[:-1], get_mean(survival(tv_base_hazard)), color="darkblue")
surv_ax.plot(interval_bounds[:-1], get_mean(survival(tv_met_hazard)), color="maroon")
surv_ax.set_xlim(0, df.time.max())
surv_ax.set_xlabel("Months since mastectomy")
surv_ax.set_ylabel("Survival function $S(t)$")
fig.suptitle("Bayesian survival model with time varying effects");
```
We have really only scratched the surface of both survival analysis and the Bayesian approach to survival analysis. More information on Bayesian survival analysis is available in Ibrahim et al. (2005). (For example, we may want to account for individual frailty in either our original or time-varying models.)
This tutorial is available as an [IPython](http://ipython.org/) notebook [here](https://gist.github.com/AustinRochford/4c6b07e51a2247d678d6). It is adapted from a blog post that first appeared [here](http://austinrochford.com/posts/2015-10-05-bayes-survival.html).
```
%load_ext watermark
%watermark -n -u -v -iv -w -p xarray
```
# H2O Tutorial
Author: Spencer Aiello
Contact: spencer@h2oai.com
This tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce, through a complete example, H2O's capabilities from Python. Also, to help those who are accustomed to scikit-learn and Pandas, the demo includes specific call-outs for differences between H2O and those packages; this is intended to help anyone who needs to do machine learning on really big data make the transition. It is not meant to be a tutorial on machine learning or algorithms.
Detailed documentation about H2O and its Python API is available at http://docs.h2o.ai.
## Setting up your system for this demo
The following code creates two csv files using data from the [Boston Housing dataset](https://archive.ics.uci.edu/ml/datasets/Housing), which is built into scikit-learn, and adds them to the local directory.
```
import pandas as pd
import numpy
from numpy.random import choice
from sklearn.datasets import load_boston
from h2o.estimators.random_forest import H2ORandomForestEstimator
import h2o
h2o.init()
# transfer the boston data from pandas to H2O
boston_data = load_boston()
X = pd.DataFrame(data=boston_data.data, columns=boston_data.feature_names)
X["Median_value"] = boston_data.target
X = h2o.H2OFrame.from_python(X.to_dict("list"))
# select 10% for validation
r = X.runif(seed=123456789)
train = X[r < 0.9,:]
valid = X[r >= 0.9,:]
h2o.export_file(train, "Boston_housing_train.csv", force=True)
h2o.export_file(valid, "Boston_housing_test.csv", force=True)
```
Enable inline plotting in the Jupyter Notebook
```
%matplotlib inline
import matplotlib.pyplot as plt
```
## Intro to H2O Data Munging
Read csv data into H2O. This loads the data into the H2O column compressed, in-memory, key-value store.
```
fr = h2o.import_file("Boston_housing_train.csv")
```
View the top of the H2O frame.
```
fr.head()
```
View the bottom of the H2O Frame
```
fr.tail()
```
Select a column with `fr["VAR_NAME"]`
```
fr["CRIM"].head() # Tab completes
```
Select a few columns
```
columns = ["CRIM", "RM", "RAD"]
fr[columns].head()
```
Select a subset of rows
Unlike in Pandas, columns may be identified by index or column name. **Therefore, when subsetting by rows, you must also pass the column selection.**
```
fr[2:7,:] # explicitly select all columns with :
```
Key attributes:
* columns, names, col_names
* len, shape, dim, nrow, ncol
* types
Note: since the data is _not_ in local Python memory, there is no "values" attribute. If you want to pull all of the data into local Python memory, do so explicitly with h2o.export_file and read the data back into Python from disk (a sketch follows the next cell).
```
# The columns attribute is exactly like Pandas
print("Columns:", fr.columns, "\n")
print("Columns:", fr.names, "\n")
print("Columns:", fr.col_names, "\n")
# There are a number of attributes to get at the shape
print("length:", str( len(fr) ), "\n")
print("shape:", fr.shape, "\n")
print("dim:", fr.dim, "\n")
print("nrow:", fr.nrow, "\n")
print("ncol:", fr.ncol, "\n")
# Use the "types" attribute to list the column types
print("types:", fr.types, "\n")
```
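As the note above says, there is no local `values` attribute; here is a minimal sketch of pulling the frame into local Python memory by round-tripping through a csv file (the file name is arbitrary):
```
# Export the H2O frame to disk, then read it back into local Python memory with pandas
h2o.export_file(fr, "Boston_housing_local_copy.csv", force=True)  # arbitrary file name
local_df = pd.read_csv("Boston_housing_local_copy.csv")
print(local_df.shape)
```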
Select rows based on value
```
fr.shape
```
Boolean masks can be used to subselect rows based on a criterion.
```
mask = fr["CRIM"]>1
fr[mask,:].shape
```
Get summary statistics of the data and additional data distribution information.
```
fr.describe()
```
Set up the predictor and response column names
Using H2O algorithms, it's easier to reference predictor and response columns
by name in a single frame (i.e., don't split up X and y)
```
x = fr.names[:]
y="Median_value"
x.remove(y)
```
## Machine Learning With H2O
H2O is a machine learning library built in Java with interfaces in Python, R, Scala, and Javascript. It is [open source](http://github.com/h2oai) and [well-documented](http://docs.h2o.ai).
Unlike Scikit-learn, H2O allows for categorical and missing data.
The basic work flow is as follows:
* Fit the training data with a machine learning algorithm
* Predict on the testing data
### Simple model
```
# Define and fit first 400 points
model = H2ORandomForestEstimator(seed=42)
model.train(x=x, y=y, training_frame=fr[:400,:])
model.predict(fr[400:fr.nrow,:]) # Predict the rest
```
The performance of the model can be checked using the holdout dataset
```
perf = model.model_performance(fr[400:fr.nrow,:])
perf.r2() # get the r2 on the holdout data
perf.mse() # get the mse on the holdout data
perf # display the performance object
```
### Train-Test Split
Instead of taking the first 400 observations for training, we can use H2O to create a random test train split of the data.
```
r = fr.runif(seed=12345) # build random uniform column over [0,1]
train= fr[r<0.75,:] # perform a 75-25 split
test = fr[r>=0.75,:]
model = H2ORandomForestEstimator(seed=42)
model.train(x=x, y=y, training_frame=train, validation_frame=test)
perf = model.model_performance(test)
perf.r2()
```
There was a massive jump in the R^2 value. This is because the original data is not shuffled.
### Cross validation
H2O's machine learning algorithms take an optional parameter **nfolds** to specify the number of cross-validation folds to build. H2O's cross-validation uses an internal weight vector to build the folds in an efficient manner (instead of physically building the splits).
In conjunction with the **nfolds** parameter, a user may specify the way in which observations are assigned to each fold with the **fold_assignment** parameter, which can be set to either:
* AUTO: Perform random assignment
* Random: Each row has an equal (1/nfolds) chance of being in any fold.
* Modulo: Observations are assigned to folds by taking the row index modulo nfolds
```
model = H2ORandomForestEstimator(nfolds=10) # build a 10-fold cross-validated model
model.train(x=x, y=y, training_frame=fr)
scores = numpy.array([m.r2() for m in model.xvals]) # iterate over the xval models using the xvals attribute
print("Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96))
print("Scores:", scores.round(2))
```
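To control the assignment explicitly, the **fold_assignment** parameter described above can be passed alongside **nfolds**; a minimal sketch, assuming your H2O version exposes this parameter on H2ORandomForestEstimator:
```
# Build a 5-fold cross-validated model with modulo fold assignment (sketch)
model_modulo = H2ORandomForestEstimator(nfolds=5, fold_assignment="Modulo", seed=42)
model_modulo.train(x=x, y=y, training_frame=fr)
```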
However, you can still make use of cross_val_score from scikit-learn.
### Cross validation: H2O and Scikit-Learn
```
from sklearn.model_selection import cross_val_score
from h2o.cross_validation import H2OKFold
from h2o.model.regression import h2o_r2_score
from sklearn.metrics.scorer import make_scorer
```
You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold. Additionally, the H2ORandomForestEstimator is similar to the scikit-learn RandomForestRegressor object with its own ``train`` method.
```
model = H2ORandomForestEstimator(seed=42)
scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer
custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv
scores = cross_val_score(model, fr[x], fr[y], scoring=scorer, cv=custom_cv)
print("Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96))
print("Scores:", scores.round(2))
```
There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage.
Since the progress bar printout gets annoying, let's disable it.
```
h2o.__PROGRESS_BAR__=False
h2o.no_progress()
```
### Grid Search
Grid search in H2O is still under active development and will be available very soon. However, it is possible to make use of scikit-learn's grid search infrastructure (with some performance penalties).
### Randomized grid search: H2O and Scikit-Learn
```
from sklearn import __version__
sklearn_version = __version__
print(sklearn_version)
```
If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon-to-be-released 0.16.2 and older versions).
The steps to perform a randomized grid search:
1. Import model and RandomizedSearchCV
2. Define model
3. Specify parameters to test
4. Define grid search object
5. Fit data to grid search object
6. Collect scores
All the steps will be repeated from above.
Because 0.16.1 is installed, we use scipy to define specific distributions.
ADVANCED TIP:
Turn off reference counting for spawning jobs in parallel (n_jobs=-1, or n_jobs > 1).
We'll turn it back on again in the aftermath of a Parallel job.
If you don't want to run jobs in parallel, don't turn off the reference counting.
Pattern is:
>>> h2o.turn_off_ref_cnts()
>>> .... parallel job ....
>>> h2o.turn_on_ref_cnts()
```
%%time
from sklearn.model_selection import RandomizedSearchCV # Import grid search
from scipy.stats import randint, uniform
model = H2ORandomForestEstimator(seed=42) # Define model
params = {"ntrees": randint(20,30),
"max_depth": randint(1,10),
"min_rows": randint(1,10), # scikit's min_samples_leaf
"mtries": randint(2,fr[x].shape[1]),} # Specify parameters to test
scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer
custom_cv = H2OKFold(fr, n_folds=5, seed=42) # make a cv
random_search = RandomizedSearchCV(model, params,
n_iter=10,
scoring=scorer,
cv=custom_cv,
random_state=42,
n_jobs=1) # Define grid search object
random_search.fit(fr[x], fr[y])
print("Best R^2:", random_search.best_score_, "\n")
print("Best params:", random_search.best_params_)
```
We might be tempted to think that we just had a large improvement; however, we must be cautious. The function below creates a more detailed report.
```
def report_grid_score_detail(random_search, charts=True):
"""Input fit grid search estimator. Returns df of scores with details"""
df_list = []
for line in random_search.grid_scores_:
results_dict = dict(line.parameters)
results_dict["score"] = line.mean_validation_score
results_dict["std"] = line.cv_validation_scores.std()*1.96
df_list.append(results_dict)
result_df = pd.DataFrame(df_list)
result_df = result_df.sort("score", ascending=False)
if charts:
for col in get_numeric(result_df):
if col not in ["score", "std"]:
plt.scatter(result_df[col], result_df.score)
plt.title(col)
plt.show()
for col in list(result_df.columns[result_df.dtypes == "object"]):
cat_plot = result_df.score.groupby(result_df[col]).mean()[0]
cat_plot.sort()
cat_plot.plot(kind="barh", xlim=(.5, None), figsize=(7, cat_plot.shape[0]/2))
plt.show()
return result_df
def get_numeric(X):
"""Return list of numeric dtypes variables"""
return X.dtypes[X.dtypes.apply(lambda x: str(x).startswith(("float", "int", "bool")))].index.tolist()
report_grid_score_detail(random_search).head()
```
Based on the grid search report, we can narrow the parameters to search and rerun the analysis. The parameters below were chosen after a few runs:
```
%%time
params = {"ntrees": randint(30,35),
"max_depth": randint(5,8),
"mtries": randint(4,6),}
custom_cv = H2OKFold(fr, n_folds=5, seed=42) # In small datasets, the fold size can have a big
# impact on the std of the resulting scores. More
random_search = RandomizedSearchCV(model, params, # folds --> Less examples per fold --> higher
n_iter=5, # variation per sample
scoring=scorer,
cv=custom_cv,
random_state=43,
n_jobs=1)
random_search.fit(fr[x], fr[y])
print("Best R^2:", random_search.best_score_, "\n")
print("Best params:", random_search.best_params_)
report_grid_score_detail(random_search)
```
### Transformations
Rule of machine learning: Don't use your testing data to inform your training data. Unfortunately, this happens all the time when preparing a dataset for the final model. But on smaller datasets, you must be especially careful.
At the moment, there are no classes for managing data transformations. On the one hand, this requires the user to tote around some extra state, but on the other, it allows the user to be more explicit about transforming H2OFrames.
Basic steps:
0. Remove the response variable from transformations.
1. Import transformer
2. Define transformer
3. Fit train data to transformer
4. Transform test and train data
5. Re-attach the response variable.
First let's normalize the data using the means and standard deviations of the training data.
Then let's perform a principal component analysis on the training data and select the top 5 components.
Using these components, let's use them to reduce the train and test design matrices.
```
from h2o.transforms.preprocessing import H2OScaler
from h2o.estimators.pca import H2OPrincipalComponentAnalysisEstimator as H2OPCA
```
#### Normalize Data: Use the means and standard deviations from the training data.
```
y_train = train.pop("Median_value")
y_test = test.pop("Median_value")
norm = H2OScaler()
norm.fit(train)
X_train_norm = norm.transform(train)
X_test_norm = norm.transform(test)
print(X_test_norm.shape)
X_test_norm
```
Then, we can apply PCA and keep the top 5 components. A user warning is expected here.
```
pca = H2OPCA(k=5)
pca.fit(X_train_norm)
X_train_norm_pca = pca.transform(X_train_norm)
X_test_norm_pca = pca.transform(X_test_norm)
# prop of variance explained by top 5 components?
print(X_test_norm_pca.shape)
X_test_norm_pca[:5]
model = H2ORandomForestEstimator(seed=42)
model.train(x=X_train_norm_pca.names, y=y_train.names, training_frame=X_train_norm_pca.cbind(y_train))
y_hat = model.predict(X_test_norm_pca)
h2o_r2_score(y_test,y_hat)
```
Although this is MUCH simpler than keeping track of all of these transformations manually, it gets to be somewhat of a burden when you want to chain together multiple transformers.
### Pipelines
"Tranformers unite!"
If your raw data is a mess and you have to perform several transformations before using it, use a pipeline to keep things simple.
Steps:
1. Import Pipeline, transformers, and model
2. Define pipeline. The first and only argument is a *list* of *tuples* where the first element of each tuple is a name you give the step and the second element is a defined transformer. The last step is optionally an estimator class (like a RandomForest).
3. Fit the training data to pipeline
4. Either transform or predict the testing data
```
from h2o.transforms.preprocessing import H2OScaler
from h2o.estimators.pca import H2OPrincipalComponentAnalysisEstimator as H2OPCA
from sklearn.pipeline import Pipeline # Import Pipeline <other imports not shown>
model = H2ORandomForestEstimator(seed=42)
pipe = Pipeline([("standardize", H2OScaler()), # Define pipeline as a series of steps
("pca", H2OPCA(k=5)),
("rf", model)]) # Notice the last step is an estimator
pipe.fit(train, y_train) # Fit training data
y_hat = pipe.predict(test) # Predict testing data (due to last step being an estimator)
h2o_r2_score(y_test, y_hat) # Notice the final score is identical to before
```
This is so much easier!!!
But, wait a second, we did worse after applying these transformations! We might wonder how different hyperparameters for the transformations impact the final score.
### Combining randomized grid search and pipelines
"Yo dawg, I heard you like models, so I put models in your models to model models."
Steps:
1. Import Pipeline, grid search, transformers, and estimators <Not shown below>
2. Define pipeline
3. Define parameters to test in the form: "(Step name)__(argument name)" A double underscore separates the two words.
4. Define grid search
5. Fit to grid search
```
pipe = Pipeline([("standardize", H2OScaler()),
("pca", H2OPCA()),
("rf", H2ORandomForestEstimator(seed=42))])
params = {"standardize__center": [True, False], # Parameters to test
"standardize__scale": [True, False],
"pca__k": randint(2, 6),
"rf__ntrees": randint(10,20),
"rf__max_depth": randint(4,10),
"rf__min_rows": randint(5,10), }
# "rf__mtries": randint(1,4),} # gridding over mtries is
# problematic with pca grid over
# k above
from sklearn.model_selection import RandomizedSearchCV
from h2o.cross_validation import H2OKFold
from h2o.model.regression import h2o_r2_score
from sklearn.metrics.scorer import make_scorer
custom_cv = H2OKFold(fr, n_folds=5, seed=42)
random_search = RandomizedSearchCV(pipe, params,
n_iter=5,
scoring=make_scorer(h2o_r2_score),
cv=custom_cv,
random_state=42,
n_jobs=1)
random_search.fit(fr[x],fr[y])
results = report_grid_score_detail(random_search)
results.head()
```
Currently Under Development (drop-in scikit-learn pieces):
* Richer set of transforms (only PCA and Scale are implemented)
* Richer set of estimators (only RandomForest is available)
* Full H2O Grid Search
### Other Tips: Model Save/Load
It is useful to save constructed models to disk and reload them between H2O sessions. Here's how:
```
best_estimator = random_search.best_estimator_ # fetch the pipeline from the grid search
h2o_model = h2o.get_model(best_estimator._final_estimator._id) # fetch the model from the pipeline
save_path = h2o.save_model(h2o_model, path=".", force=True)
print(save_path)
# assumes new session
my_model = h2o.load_model(path=save_path)
my_model.predict(X_test_norm_pca)
```
# Session 3: Data Types
> Hands-on practice with the data types used in Spark
## Table of Contents
* [1. Literal Types](#1.-Literal-Types)
* [2. Working with Boolean Data Types](#2.-Working-with-Boolean-Data-Types)
* [3. Working with Numeric Data Types](#3.-Working-with-Numeric-Data-Types)
* [4. Working with String Data Types](#4.-Working-with-String-Data-Types)
* [5. Regular Expressions](#5.-Regular-Expressions)
* [6. Working with Date and Timestamp Data Types](#6.-Working-with-Date-and-Timestamp-Data-Types)
* [7. Working with Null Values](#7.-Working-with-Null-Values)
* [References](#References)
```
from pyspark.sql import *
from pyspark.sql.functions import *
from pyspark.sql.types import *
from IPython.display import display, display_pretty, clear_output, JSON
spark = (
SparkSession
.builder
.config("spark.sql.session.timeZone", "Asia/Seoul")
.getOrCreate()
)
# Configure the notebook to display DataFrames as tables
spark.conf.set("spark.sql.repl.eagerEval.enabled", True) # display enabled
spark.conf.set("spark.sql.repl.eagerEval.truncate", 100) # display output columns size
""" DataFrame 생성 """
df = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/retail-data/by-day/2010-12-01.csv")
)
df.printSchema()
df.createOrReplaceTempView("retail")
df.show(5)
```
## 1. Literal Types
```
from pyspark.sql.functions import lit
df.select(lit(5), lit("five"), lit(5.0))
```
## 2. Working with Boolean Data Types
### 2.1 AND conditions
```
from pyspark.sql.functions import col
x1 = df.where(col("InvoiceNO") != 536365).select("InvoiceNO", "Description")
x2 = df.where("InvoiceNO <> 536365").select("InvoiceNO", "Description")
x3 = df.where("InvoiceNO = 536365").select("InvoiceNO", "Description")
x1.show(2)
x2.show(2)
```
### 2.2 OR conditions
```
from pyspark.sql.functions import instr
df.where("UnitPrice > 600 OR instr(Description, 'POSTAGE') >= 1").show()
```
### 2.3 ISIN - checking membership in a provided list
```
# Using a SQL IN clause via Spark SQL
from pyspark.sql.functions import desc
df.select('StockCode').where("StockCode in ('DOT', 'POST', 'C2')").distinct().show()
```
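The same membership check can also be written with the DataFrame API's `isin` column method; a small companion sketch (not part of the original exercise):
```
# Using Column.isin instead of a SQL IN clause
df.select("StockCode").where(col("StockCode").isin("DOT", "POST", "C2")).distinct().show()
```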
### 2.4 INSTR - checking whether a string contains a substring
```
from pyspark.sql.functions import *
""" instr 함수 """
df.withColumn("added", instr(df.Description, "POSTAGE")).where("added > 1").show() # 8번째 글자에 'POSTAGE'가 시작됨
```
### <font color=green>1. [Basic]</font> Read the CSV file stored at "data/retail-data/by-day/2010-12-01.csv" and
#### 1. Print the schema
#### 2. Show 10 rows of the data
#### 3. Filter to rows where the invoice number (InvoiceNo) is '536365',
#### 4. the stock code (StockCode) is one of ('85123A', '84406B', '84029G', '84029E'),
#### 5. and the unit price (UnitPrice) is 2.6 or less, or 3.0 or more, then print the result
<details><summary>[Exercise 1] Check the expected output </summary>
> If your code was written in a way similar to the snippet below, it is correct
```python
df1 = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/retail-data/by-day/2010-12-01.csv")
)
df1.printSchema()
df1.show(10)
answer = df1.where("InvoiceNo = '536365'").where("StockCode in ('85123A', '84406B', '84029G', '84029E')").where("UnitPrice < 2.6 or UnitPrice > 3.0")
answer.show()
```
</details>
```
# Write your exercise code here and run it (Shift+Enter)
```
## 3. Working with Numeric Data Types
### 3.1 Writing various functions as expressions
```
from pyspark.sql.functions import expr, pow
df.selectExpr("CustomerID", "pow(Quantity * UnitPrice, 2) + 5 as realQuantity").show(2)
```
### 3.2 Rounding (round), ceiling (ceil), and floor (floor)
```
from pyspark.sql.functions import *
df.selectExpr("round(2.5, 0)", "ceil(2.4)", "floor(2.6)").show(1)
```
### 3.3 Summary statistics
```
df.describe().show()
df.describe("InvoiceNo").show() # 컬럼을 입력
```
### <font color=blue>2. [Intermediate]</font> Read the CSV file stored at "data/retail-data/by-day/2010-12-01.csv" and
#### 1. Print the schema
#### 2. Show 10 rows of the data
#### 3. For the transactions whose invoice number (InvoiceNo) is '536367',
#### 4. compute the total price (TotalPrice) = Quantity * UnitPrice and add it as a TotalPrice column
#### 5. When computing TotalPrice, drop everything after the decimal point (floor)
<details><summary>[Exercise 2] Check the expected output </summary>
> If your code was written in a way similar to the snippet below, it is correct
```python
df2 = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/retail-data/by-day/2010-12-01.csv")
)
df2.printSchema()
df2.show(10)
answer = df2.where("InvoiceNo = '536367'").withColumn("TotalPrice", expr("floor(Quantity * UnitPrice)"))
display(answer)
```
</details>
```
# Write your exercise code here and run it (Shift+Enter)
```
## 4. Working with String Data Types
### 4.1 Capitalizing the first letter of each word
* initcap capitalizes the first letter of every whitespace-separated word
```
from pyspark.sql.functions import initcap
df.select(initcap(col("Description"))).show(2, False)
```
### 4.2 Uppercase (upper) and lowercase (lower)
```
from pyspark.sql.functions import lower, upper
df.selectExpr("Description", "lower(Description)", "upper(Description)").show(2)
```
### 4.3 Removing whitespace around strings (and padding): lpad/ltrim/rpad/rtrim/trim
```
from pyspark.sql.functions import lit, ltrim, rtrim, rpad, lpad, trim
df.select(
ltrim(lit(" HELLO ")).alias("ltrim"),
rtrim(lit(" HELLO ")).alias("rtrim"),
trim(lit(" HELLO ")).alias("trim"),
lpad(lit("HELLO"), 3, " ").alias("lp"),
rpad(lit("HELLO"), 10, " ").alias("rp")
).show(2)
```
### <font color=blue>3. [Intermediate]</font> Read the CSV file stored at "data/retail-data/by-day/2010-12-01.csv" and
#### 1. Print the schema
#### 2. Show 10 rows of the data
#### 3. For the transactions whose invoice number (InvoiceNo) is '536365',
#### 4. print the stock code (StockCode) as an 8-character string, padding empty leading positions with 0
#### 5. The zero-padded stock code column must keep the column name StockCode
#### 6. The final output should contain only the "InvoiceNo", "StockCode", and "Description" columns
<details><summary>[Exercise 3] Check the expected output </summary>
> If your code was written in a way similar to the snippet below, it is correct
```python
df3 = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/retail-data/by-day/2010-12-01.csv")
)
df3.printSchema()
df3.show(10)
answer = df3.where("InvoiceNo = '536365'").select("InvoiceNo", lpad("StockCode", 8, "0").alias("StockCode"), "Description")
display(answer)
```
</details>
```
# Write your exercise code here and run it (Shift+Enter)
```
## 5. Regular Expressions
### 5.1 Replacing words with regexp_replace
```
from pyspark.sql.functions import regexp_replace
regex_string = "BLACK|WHITE|RED|GREEN|BLUE"
df.select(regexp_replace(col("Description"), regex_string, "COLOR").alias("color_clean"), col("Description")).show(2, truncate=False)
```
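A companion sketch (not in the original notebook): `regexp_extract` pulls out the first match of a pattern instead of replacing it; the capture-group pattern below is only an illustration.
```
from pyspark.sql.functions import regexp_extract
extract_pattern = "(BLACK|WHITE|RED|GREEN|BLUE)"
df.select(
    regexp_extract(col("Description"), extract_pattern, 1).alias("color"),  # first capture group, or "" if no match
    col("Description")
).show(2, truncate=False)
```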
## 6. Working with Date and Timestamp Data Types
> If you need to set the session time zone, you can do so with the Spark SQL configuration property spark.conf.sessionLocalTimeZone <br>
> The TimestampType class only supports second-level precision - if you need finer-than-second precision, the workaround is to convert the data to a long type and handle it yourself <br>
### 6.1 오늘 날짜 구하기
```
from pyspark.sql.functions import current_date, current_timestamp
dateDF = spark.range(10) \
.withColumn("today", current_date()) \
.withColumn("now", current_timestamp())
dateDF.createOrReplaceTempView("dataTable")
dateDF.printSchema()
dateDF.show(3, False)
```
### 6.2 Adding and subtracting dates
```
from pyspark.sql.functions import date_sub, date_add
dateDF.select(
date_sub(col("today"), 5),
date_add(col("today"), 5)
).show(1)
```
### 6.3 Converting strings to dates
```
from pyspark.sql.functions import to_date, lit
spark.range(5) \
.withColumn("date", lit("2017-01-01")) \
.select(to_date(col("date"))) \
.show(1)
""" 파싱오류로 날짜가 null로 반환되는 사례 """
dateDF.select(to_date(lit("2016-20-12")), to_date(lit("2017-12-11"))).show(1) # 월과 일의 순서가 바뀜
```
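A minimal sketch (an addition, not from the original notebook) showing how an explicit format string avoids the null above; the 'yyyy-dd-MM' format is an assumption chosen to match the swapped month/day order of the literal.
```python
# With an explicit format, "2016-20-12" parses as 2016-12-20 instead of returning null
dateFormat = "yyyy-dd-MM"
dateDF.select(
    to_date(lit("2016-20-12"), dateFormat).alias("date"),
    to_date(lit("2017-12-11"), dateFormat).alias("date2")
).show(1)
```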
### <font color=red>4. [Advanced]</font> Read the CSV file stored at "data/retail-data/by-day/2010-12-01.csv" and
#### 1. Print the schema
#### 2. Display 10 rows of data
#### 3. Add a load date (LoadDate) column in 'yyyy-MM-dd' format containing the current date
#### 4. Add a column (InvoiceDiff) for the difference between the invoice date (InvoiceDate) and today, using the expression `LoadDate - to_date(InvoiceDate)` (hint: withColumn("columnName", expression))
#### 5. Print the updated schema
<details><summary>[Exercise 4] Check the expected output</summary>
> Your answer is correct if it is written in a way similar to the code below.
```python
df4 = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/retail-data/by-day/2010-12-01.csv")
)
df4.printSchema()
df4.show(10)
answer = df4.withColumn("LoadDate", current_date()).withColumn("InvoiceDiff", expr("LoadDate - to_date(InvoiceDate)"))
display(answer)
answer.printSchema()
```
</details>
```
# Write your exercise code here and run it (Shift+Enter)
```
## 7. Working with Null Values
+ It is always better to represent missing or empty data with an explicit null than with placeholder values
+ Declaring a column as non-nullable is not actually enforced
+ The nullable attribute simply helps the Spark SQL optimizer handle that column
+ There are two ways to deal with null values:
    + explicitly drop nulls
    + fill nulls with a specific value, globally or per column
### 7-1. Null-handling functions on column values (ifnull, nullif, nvl, nvl2)
+ These are SQL functions, and they can also be used in DataFrame select expressions (see the selectExpr sketch after the SQL example below)
+ ifnull(null, 'return_value') # returns the second value if the first is null, otherwise the first
+ nullif('value', 'value') # returns null if the two values are equal, otherwise the first value
+ nvl(null, 'return_value') # returns the second value if the first is null, otherwise the first
+ nvl2('not_null', 'return_value', 'else_value') # returns the second value if the first is not null, otherwise the third
```
spark.sql("""
SELECT
ifnull(null, 'return_value'),
nullif('value', 'value'),
nvl(null, 'return_value'),
nvl2('not null', 'return_value', 'else_value')
""").show()
```
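A minimal sketch (an addition, not from the original notebook) of the same kind of null handling written as DataFrame select expressions on the retail DataFrame; the replacement values chosen here are arbitrary assumptions.
```python
# Hypothetical example: null-handling functions used through selectExpr
df.selectExpr(
    "ifnull(CustomerID, 0.0) as customer_or_zero",
    "nvl2(Description, 'HAS_DESC', 'NO_DESC') as has_description",
    "coalesce(Description, 'NOT MENTIONED') as description_filled"
).show(3, truncate=False)
```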
### 7-2 Dropping rows based on null values (na.drop)
```
df.na.drop()
df.na.drop("any").show(1) # 로우 컬럼값 중 하나라도 null이면 제거
df.na.drop("all").show(1) # 로우 컬럼값 모두 null이면 제거
# 배열 형태의 컬럼을 인수로 전달하여 지정한 컬럼만 제거합니다
df.na.drop("all", subset=("StockCode", "InvoiceNo")).show(1)
```
### 7.3 Filling null values (na.fill)
```
""" null을 포함한 DataFrame 행성 """
from pyspark.sql import Row
from pyspark.sql.types import StructField, StructType, StringType, DoubleType
myManualSchema = StructType([
StructField("string_null", StringType(), True),
StructField("string2_null", StringType(), True),
StructField("number_null", DoubleType(), True)
])
myRows = []
myRows.append(Row("Hello", None, float(5))) # string 컬럼에 null 포함
myRows.append(Row(None, "World", None)) # number 컬럼에 null 포함
myDf = spark.createDataFrame(myRows, myManualSchema)
myDf.show()
myDf.na.fill( {"number_null": 5.0, "string_null": "not_null"} ).show()
```
### <font color=green>5. [Basic]</font> Read the CSV file stored at "data/retail-data/by-day/2010-12-01.csv" and
#### 1. Print the schema
#### 2. Display 10 rows of data
#### 3. Extract and display the rows where the customer ID (CustomerID) or description (Description) column is null
#### 4. Replace null CustomerID values with 0.0
#### 5. Replace null Description values with "NOT MENTIONED"
#### 6. Print the final schema and data
<details><summary>[Exercise 5] Check the expected output</summary>
> Your answer is correct if it is written in a way similar to the code below.
```python
df5 = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/retail-data/by-day/2010-12-01.csv")
).where(expr("Description is null or CustomerID is null"))
df5.printSchema()
df5.show(10)
desc_custid_fill = {"Description":"NOT MENTIONED", "CustomerID":0.0}
answer = df5.na.fill(desc_custid_fill)
answer.printSchema()
display(answer)
```
</details>
```
# Write your exercise code here and run it (Shift+Enter)
```
## References
#### 1. [Spark Programming Guide](https://spark.apache.org/docs/latest/sql-programming-guide.html)
#### 2. [PySpark SQL Modules Documentation](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html)
#### 3. <a href="https://spark.apache.org/docs/3.0.1/api/sql/" target="_blank">PySpark 3.0.1 Builtin Functions</a>
#### 4. [PySpark Search](https://spark.apache.org/docs/latest/api/python/search.html)
#### 5. [Pyspark Functions](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?#module-pyspark.sql.functions)
|
github_jupyter
|
from pyspark.sql import *
from pyspark.sql.functions import *
from pyspark.sql.types import *
from IPython.display import display, display_pretty, clear_output, JSON
spark = (
SparkSession
.builder
.config("spark.sql.session.timeZone", "Asia/Seoul")
.getOrCreate()
)
# 노트북에서 테이블 형태로 데이터 프레임 출력을 위한 설정을 합니다
spark.conf.set("spark.sql.repl.eagerEval.enabled", True) # display enabled
spark.conf.set("spark.sql.repl.eagerEval.truncate", 100) # display output columns size
""" DataFrame 생성 """
df = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/retail-data/by-day/2010-12-01.csv")
)
df.printSchema()
df.createOrReplaceTempView("retail")
df.show(5)
from pyspark.sql.functions import lit
df.select(lit(5), lit("five"), lit(5.0))
from pyspark.sql.functions import col
x1 = df.where(col("InvoiceNO") != 536365).select("InvoiceNO", "Description")
x2 = df.where("InvoiceNO <> 536365").select("InvoiceNO", "Description")
x3 = df.where("InvoiceNO = 536365").select("InvoiceNO", "Description")
x1.show(2)
x2.show(2)
from pyspark.sql.functions import instr
df.where("UnitPrice > 600 OR instr(Description, 'POSTAGE') >= 1").show()
# SparkSQL 을 이용한 is in 구문 사용
from pyspark.sql.functions import desc
df.select('StockCode').where("StockCode in ('DOT', 'POST', 'C2')").distinct().show()
from pyspark.sql.functions import *
""" instr 함수 """
df.withColumn("added", instr(df.Description, "POSTAGE")).where("added > 1").show() # 8번째 글자에 'POSTAGE'가 시작됨
df1 = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/retail-data/by-day/2010-12-01.csv")
)
df1.printSchema()
df1.show(10)
answer = df1.where("InvoiceNo = '536365'").where("StockCode in ('85123A', '84406B', '84029G', '84029E')").where("UnitPrice < 2.6 or UnitPrice > 3.0")
answer.show()
# 여기에 실습 코드를 작성하고 실행하세요 (Shift+Enter)
from pyspark.sql.functions import expr, pow
df.selectExpr("CustomerID", "pow(Quantity * UnitPrice, 2) + 5 as realQuantity").show(2)
from pyspark.sql.functions import *
df.selectExpr("round(2.5, 0)", "ceil(2.4)", "floor(2.6)").show(1)
df.describe().show()
df.describe("InvoiceNo").show() # 컬럼을 입력
df2 = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/retail-data/by-day/2010-12-01.csv")
)
df2.printSchema()
df2.show(10)
answer = df2.where("InvoiceNo = '536367'").withColumn("TotalPrice", expr("floor(Quantity * UnitPrice)"))
display(answer)
# 여기에 실습 코드를 작성하고 실행하세요 (Shift+Enter)
from pyspark.sql.functions import initcap
df.select(initcap(col("Description"))).show(2, False)
from pyspark.sql.functions import lower, upper
df.selectExpr("Description", "lower(Description)", "upper(Description)").show(2)
from pyspark.sql.functions import lit, ltrim, rtrim, rpad, lpad, trim
df.select(
ltrim(lit(" HELLO ")).alias("ltrim"),
rtrim(lit(" HELLO ")).alias("rtrim"),
trim(lit(" HELLO ")).alias("trim"),
lpad(lit("HELLO"), 3, " ").alias("lp"),
rpad(lit("HELLO"), 10, " ").alias("rp")
).show(2)
df3 = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/retail-data/by-day/2010-12-01.csv")
)
df3.printSchema()
df3.show(10)
answer = df3.where("InvoiceNo = '536365'").select("InvoiceNo", lpad("StockCode", 8, "0").alias("StockCode"), "Description")
display(answer)
# 여기에 실습 코드를 작성하고 실행하세요 (Shift+Enter)
from pyspark.sql.functions import regexp_replace
regex_string = "BLACK|WHITE|RED|GRENN|BLUE"
df.select(regexp_replace(col("Description"), regex_string, "COLOR").alias("color_clean"), col("Description")).show(2, truncate=False)
from pyspark.sql.functions import current_date, current_timestamp
dateDF = spark.range(10) \
.withColumn("today", current_date()) \
.withColumn("now", current_timestamp())
dateDF.createOrReplaceTempView("dataTable")
dateDF.printSchema()
dateDF.show(3, False)
from pyspark.sql.functions import date_sub, date_add
dateDF.select(
date_sub(col("today"), 5),
date_add(col("today"), 5)
).show(1)
from pyspark.sql.functions import to_date, lit
spark.range(5) \
.withColumn("date", lit("2017-01-01")) \
.select(to_date(col("date"))) \
.show(1)
""" 파싱오류로 날짜가 null로 반환되는 사례 """
dateDF.select(to_date(lit("2016-20-12")), to_date(lit("2017-12-11"))).show(1) # 월과 일의 순서가 바뀜
df4 = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/retail-data/by-day/2010-12-01.csv")
)
df4.printSchema()
df4.show(10)
answer = df4.withColumn("LoadDate", current_date()).withColumn("InvoiceDiff", expr("LoadDate - to_date(InvoiceDate)"))
display(answer)
answer.printSchema()
# 여기에 실습 코드를 작성하고 실행하세요 (Shift+Enter)
spark.sql("""
SELECT
ifnull(null, 'return_value'),
nullif('value', 'value'),
nvl(null, 'return_value'),
nvl2('not null', 'return_value', 'else_value')
""").show()
df.na.drop()
df.na.drop("any").show(1) # 로우 컬럼값 중 하나라도 null이면 제거
df.na.drop("all").show(1) # 로우 컬럼값 모두 null이면 제거
# 배열 형태의 컬럼을 인수로 전달하여 지정한 컬럼만 제거합니다
df.na.drop("all", subset=("StockCode", "InvoiceNo")).show(1)
""" null을 포함한 DataFrame 행성 """
from pyspark.sql import Row
from pyspark.sql.types import StructField, StructType, StringType, DoubleType
myManualSchema = StructType([
StructField("string_null", StringType(), True),
StructField("string2_null", StringType(), True),
StructField("number_null", DoubleType(), True)
])
myRows = []
myRows.append(Row("Hello", None, float(5))) # string 컬럼에 null 포함
myRows.append(Row(None, "World", None)) # number 컬럼에 null 포함
myDf = spark.createDataFrame(myRows, myManualSchema)
myDf.show()
myDf.na.fill( {"number_null": 5.0, "string_null": "not_null"} ).show()
df5 = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/retail-data/by-day/2010-12-01.csv")
).where(expr("Description is null or CustomerID is null"))
df5.printSchema()
df5.show(10)
desc_custid_fill = {"Description":"NOT MENTIONED", "CustomerID":0.0}
answer = df5.na.fill(desc_custid_fill)
answer.printSchema()
display(answer)
# 여기에 실습 코드를 작성하고 실행하세요 (Shift+Enter)
| 0.467332 | 0.974239 |
## Import modules
```
import pdb
import glob
import itertools
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.feature_selection import mutual_info_classif
import nhanes as nhanes
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
%matplotlib notebook
import importlib
importlib.reload(nhanes)
```
## Settings
```
DATA_PATH = '/Users/qiwenlyu/Development/NHANES/'
DATASET = 'arthritis'
```
### Note:
The code below loads the dataset into dataset_features and dataset_targets.
All dataset definitions are written out explicitly (see nhanes.py).
```
importlib.reload(nhanes)
ds = nhanes.Dataset(DATA_PATH)
ds.load_arthritis()
n_fe = ds.features.shape[1]
n_classes = 2
indx = np.argwhere(ds.targets != 3)
dataset_features = ds.features[indx.flatten()]
dataset_targets = ds.targets[indx.flatten()]
```
## PCA
```
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(dataset_features)
print(pca.explained_variance_ratio_)
```
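The cell above only reports the explained variance ratio. A minimal visualization sketch (an addition, not in the original notebook) that projects the features onto the two components, assuming `dataset_features` and `dataset_targets` as defined above:
```python
# Hypothetical 2-D scatter of the PCA projection, colored by target class
projected = pca.transform(dataset_features)
plt.figure()
for cls in np.unique(dataset_targets):
    mask = dataset_targets == cls
    plt.scatter(projected[mask, 0], projected[mask, 1], s=4, alpha=0.4, label='class %d' % int(cls))
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend()
plt.show()
```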
## Train/Test Separation
```
#mutual information part
mutualInfo = mutual_info_classif(dataset_features,dataset_targets)
dataset_features = dataset_features.T[mutualInfo > 0].T
print(dataset_features.shape)
perm = np.random.permutation(dataset_targets.shape[0])
dataset_features = dataset_features[perm]
dataset_targets = dataset_targets[perm]
def get_batch(n_size, phase):
# select indices
n_samples = dataset_features.shape[0]
n_classes = int(dataset_targets.max() + 1)
if phase == 'test':
inds_sel = np.arange(0, int(n_samples*0.15), 1)
elif phase == 'validation':
n_samples = dataset_features.shape[0]
inds_sel = np.arange(int(n_samples*0.15), int(n_samples*0.30), 1)
elif phase == 'train':
n_samples = dataset_features.shape[0]
inds_sel = np.arange(int(n_samples*0.30), n_samples, 1)
else:
raise NotImplementedError
inds_sel = np.random.permutation(inds_sel)
batch_inds = []
for cl in range(n_classes):
inds_cl = inds_sel[dataset_targets[inds_sel] == cl]
batch_inds.extend(inds_cl[:n_size//n_classes])
batch_inds = np.random.permutation(batch_inds)
return dataset_features[batch_inds], dataset_targets[batch_inds]
features_trn, targets_trn = get_batch(n_size=5000, phase='train')
features_tst, targets_tst = get_batch(n_size=1000, phase='test')
```
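The `get_batch` helper above also defines a `'validation'` phase that the notebook never calls; a short usage sketch (the batch size of 1000 is an arbitrary assumption):
```python
# Hypothetical: draw a class-balanced validation batch with the same helper
features_val, targets_val = get_batch(n_size=1000, phase='validation')
print(features_val.shape, targets_val.shape)
```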
## Classification
```
def plot_roc(fpr, tpr):
fig, ax = plt.subplots()
roc_auc = auc(fpr,tpr)
ax.plot(fpr, tpr, lw=2, label= 'area under curve = %0.4f' % roc_auc)
ax.grid(color='0.7', linestyle='--', linewidth=1)
ax.set_xlim([-0.1, 1.1])
ax.set_ylim([0.0, 1.05])
ax.set_xlabel('False Positive Rate',fontsize=15)
ax.set_ylabel('True Positive Rate',fontsize=15)
ax.legend(loc="lower right")
for label in ax.get_xticklabels()+ax.get_yticklabels():
label.set_fontsize(15)
plt.show()
def plot_confusion_matrix(cm, classes, normalize=False,
title='Confusion matrix', cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
# print("Normalized confusion matrix")
else:
# print('Confusion matrix, without normalization')
pass
# print(cm)
plt.figure()
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
plt.show()
```
## Function for all the testing
```
def result_print(model, features_trn, targets_trn, features_tst):
    # fits the model and reports test accuracy, ROC curve, confusion matrix and a classification report
    # (targets_tst is read from the notebook's global scope)
    model.fit(features_trn, targets_trn)
    preds_tst = model.predict(features_tst)
    accu = np.mean(preds_tst == targets_tst)
    print('test accuracy:', accu)
    predict_proba = model.predict_proba(features_tst)
    cnf_matrix = confusion_matrix(targets_tst, preds_tst)
    fpr, tpr, _ = roc_curve(targets_tst, predict_proba[:, 1])
    plot_roc(fpr, tpr)
    plot_confusion_matrix(cnf_matrix, classes=["CT", "RA"], normalize=True, title='Normalized confusion matrix')
    print(classification_report(targets_tst, preds_tst))
def result_print_without(model, features_trn, targets_trn, features_tst):
    # same as result_print but without the ROC curve, for models that lack predict_proba (e.g. KMeans)
    model.fit(features_trn, targets_trn)
    preds_tst = model.predict(features_tst)
    accu = np.mean(preds_tst == targets_tst)
    print('test accuracy:', accu)
    cnf_matrix = confusion_matrix(targets_tst, preds_tst)
    plot_confusion_matrix(cnf_matrix, classes=["CT", "RA"], normalize=True, title='Normalized confusion matrix')
    print(classification_report(targets_tst, preds_tst))
```
## Decision Tree
```
from sklearn import tree
clf = tree.DecisionTreeClassifier()
result_print(clf,features_trn,targets_trn,features_tst)
```
## Neural Network
```
#neural network
from sklearn.neural_network import MLPClassifier
clf = MLPClassifier(solver='lbfgs', alpha=1e-5,hidden_layer_sizes=(5, 2), random_state=1)
result_print(clf,features_trn,targets_trn,features_tst)
```
## AdaBoostClassifier
```
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import make_classification
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
result_print(clf,features_trn,targets_trn,features_tst)
from sklearn.ensemble import ExtraTreesClassifier
clf = ExtraTreesClassifier(n_estimators=250,
random_state=0,class_weight='balanced')
result_print(clf,features_trn,targets_trn,features_tst)
from sklearn import ensemble
from sklearn import datasets
original_params = {'n_estimators': 1000, 'max_leaf_nodes': 4, 'max_depth': None, 'random_state': 2,
'min_samples_split': 5}
clf = ensemble.GradientBoostingClassifier(**original_params)
result_print(clf,features_trn,targets_trn,features_tst)
```
## Random Forest
```
clf = RandomForestClassifier(n_estimators=1000,class_weight='balanced')
result_print(clf,features_trn,targets_trn,features_tst)
```
## SVC
```
clf = SVC(gamma='auto',class_weight='balanced',probability=True)
result_print(clf,features_trn,targets_trn,features_tst)
```
## Logistic Regression
```
clf = LogisticRegression(solver='lbfgs', max_iter=200,class_weight='balanced')
result_print(clf,features_trn,targets_trn,features_tst)
```
## KMeans
```
from sklearn.cluster import KMeans
clf = KMeans(n_clusters=2, random_state=0, max_iter=1000, n_init=30)
result_print_without(clf,features_trn,targets_trn,features_tst)
```
|
github_jupyter
|
import pdb
import glob
import itertools
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.feature_selection import mutual_info_classif
import nhanes as nhanes
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
%matplotlib notebook
import importlib
importlib.reload(nhanes)
DATA_PATH = '/Users/qiwenlyu/Development/NHANES/'
DATASET = 'arthritis'
importlib.reload(nhanes)
ds = nhanes.Dataset(DATA_PATH)
ds.load_arthritis()
n_fe = ds.features.shape[1]
n_classes = 2
indx = np.argwhere(ds.targets != 3)
dataset_features = ds.features[indx.flatten()]
dataset_targets = ds.targets[indx.flatten()]
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(dataset_features)
print(pca.explained_variance_ratio_)
#mutual information part
mutualInfo = mutual_info_classif(dataset_features,dataset_targets)
dataset_features = dataset_features.T[mutualInfo > 0].T
print(dataset_features.shape)
perm = np.random.permutation(dataset_targets.shape[0])
dataset_features = dataset_features[perm]
dataset_targets = dataset_targets[perm]
def get_batch(n_size, phase):
# select indices
n_samples = dataset_features.shape[0]
n_classes = int(dataset_targets.max() + 1)
if phase == 'test':
inds_sel = np.arange(0, int(n_samples*0.15), 1)
elif phase == 'validation':
n_samples = dataset_features.shape[0]
inds_sel = np.arange(int(n_samples*0.15), int(n_samples*0.30), 1)
elif phase == 'train':
n_samples = dataset_features.shape[0]
inds_sel = np.arange(int(n_samples*0.30), n_samples, 1)
else:
raise NotImplementedError
inds_sel = np.random.permutation(inds_sel)
batch_inds = []
for cl in range(n_classes):
inds_cl = inds_sel[dataset_targets[inds_sel] == cl]
batch_inds.extend(inds_cl[:n_size//n_classes])
batch_inds = np.random.permutation(batch_inds)
return dataset_features[batch_inds], dataset_targets[batch_inds]
features_trn, targets_trn = get_batch(n_size=5000, phase='train')
features_tst, targets_tst = get_batch(n_size=1000, phase='test')
def plot_roc(fpr, tpr):
fig, ax = plt.subplots()
roc_auc = auc(fpr,tpr)
ax.plot(fpr, tpr, lw=2, label= 'area under curve = %0.4f' % roc_auc)
ax.grid(color='0.7', linestyle='--', linewidth=1)
ax.set_xlim([-0.1, 1.1])
ax.set_ylim([0.0, 1.05])
ax.set_xlabel('False Positive Rate',fontsize=15)
ax.set_ylabel('True Positive Rate',fontsize=15)
ax.legend(loc="lower right")
for label in ax.get_xticklabels()+ax.get_yticklabels():
label.set_fontsize(15)
plt.show()
def plot_confusion_matrix(cm, classes, normalize=False,
title='Confusion matrix', cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
# print("Normalized confusion matrix")
else:
# print('Confusion matrix, without normalization')
pass
# print(cm)
plt.figure()
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
plt.show()
def result_print(model,features_trn,targets_trn,features_tst):
model.fit(features_trn, targets_trn)
preds_tst = clf.predict(features_tst)
accu = np.mean(preds_tst==targets_tst)
print('accu_tst_RFC', accu)
predict_proba = clf.predict_proba(features_tst)
cnf_matrix = confusion_matrix(targets_tst, preds_tst)
fpr, tpr, _ = roc_curve(targets_tst, predict_proba[:,1])
plot_roc(fpr, tpr)
plot_confusion_matrix(cnf_matrix, classes=["CT","RA"], normalize=True, title='Normalized confusion matrix')
print(classification_report(targets_tst, preds_tst))
def result_print_without(model,features_trn,targets_trn,features_tst):
model.fit(features_trn, targets_trn)
preds_tst = clf.predict(features_tst)
accu = np.mean(preds_tst==targets_tst)
print('accu_tst_RFC', accu)
cnf_matrix = confusion_matrix(targets_tst, preds_tst)
plot_confusion_matrix(cnf_matrix, classes=["CT","RA"], normalize=True, title='Normalized confusion matrix')
print(classification_report(targets_tst, preds_tst))
from sklearn import tree
clf = tree.DecisionTreeClassifier()
result_print(clf,features_trn,targets_trn,features_tst)
#neural network
from sklearn.neural_network import MLPClassifier
clf = MLPClassifier(solver='lbfgs', alpha=1e-5,hidden_layer_sizes=(5, 2), random_state=1)
result_print(clf,features_trn,targets_trn,features_tst)
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import make_classification
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
result_print(clf,features_trn,targets_trn,features_tst)
from sklearn.ensemble import ExtraTreesClassifier
clf = ExtraTreesClassifier(n_estimators=250,
random_state=0,class_weight='balanced')
result_print(clf,features_trn,targets_trn,features_tst)
from sklearn import ensemble
from sklearn import datasets
original_params = {'n_estimators': 1000, 'max_leaf_nodes': 4, 'max_depth': None, 'random_state': 2,
'min_samples_split': 5}
clf = ensemble.GradientBoostingClassifier(**original_params)
result_print(clf,features_trn,targets_trn,features_tst)
clf = RandomForestClassifier(n_estimators=1000,class_weight='balanced')
result_print(clf,features_trn,targets_trn,features_tst)
clf = SVC(gamma='auto',class_weight='balanced',probability=True)
result_print(clf,features_trn,targets_trn,features_tst)
clf = LogisticRegression(solver='lbfgs', max_iter=200,class_weight='balanced')
result_print(clf,features_trn,targets_trn,features_tst)
from sklearn.cluster import KMeans
clf = KMeans(n_clusters=2, random_state=0, max_iter=1000, n_init=30)
result_print_without(clf,features_trn,targets_trn,features_tst)
| 0.655557 | 0.745977 |
# 10-6 ResNet for cifar10
## Original code:
https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py
```
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import visdom
vis = visdom.Visdom()
vis.close(env="main")
```
## define value tracker
```
def value_tracker(value_plot, value, num):
'''num, loss_value, are Tensor'''
vis.line(X=num,
Y=value,
win = value_plot,
update='append'
)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
torch.manual_seed(777)
if device =='cuda':
torch.cuda.manual_seed_all(777)
```
## transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
### How to Calculate mean and std in Normalize
```
transform = transforms.Compose([
transforms.ToTensor()
])
trainset = torchvision.datasets.CIFAR10(root='./cifar10', train=True, download=True, transform=transform)
print(trainset.train_data.shape)
train_data_mean = trainset.train_data.mean( axis=(0,1,2) )
train_data_std = trainset.train_data.std( axis=(0,1,2) )
print(train_data_mean)
print(train_data_std)
train_data_mean = train_data_mean / 255
train_data_std = train_data_std / 255
print(train_data_mean)
print(train_data_std)
transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.ToTensor(),
transforms.Normalize(train_data_mean, train_data_std)
])
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(train_data_mean, train_data_std)
])
trainset = torchvision.datasets.CIFAR10(root='./cifar10', train=True,
download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=256,
shuffle=True, num_workers=0)
testset = torchvision.datasets.CIFAR10(root='./cifar10', train=False,
download=True, transform=transform_test)
testloader = torch.utils.data.DataLoader(testset, batch_size=256,
shuffle=False, num_workers=0)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
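Note: `trainset.train_data` matches the torchvision version this notebook targets; newer torchvision releases expose the CIFAR-10 array as `trainset.data` instead. A hedged sketch of the same mean/std computation against the newer attribute:
```python
# For newer torchvision releases where the dataset array is exposed as .data (uint8, NHWC)
train_data_mean = trainset.data.mean(axis=(0, 1, 2)) / 255
train_data_std = trainset.data.std(axis=(0, 1, 2)) / 255
print(train_data_mean, train_data_std)
```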
## make ResNet50 using resnet.py
```
import resnet
conv1x1=resnet.conv1x1
Bottleneck = resnet.Bottleneck
BasicBlock= resnet.BasicBlock
class ResNet(nn.Module):
def __init__(self, block, layers, num_classes=1000, zero_init_residual=False):
super(ResNet, self).__init__()
self.inplanes = 16
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1,
bias=False)
self.bn1 = nn.BatchNorm2d(16)
self.relu = nn.ReLU(inplace=True)
#self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 16, layers[0], stride=1)
self.layer2 = self._make_layer(block, 32, layers[1], stride=1)
self.layer3 = self._make_layer(block, 64, layers[2], stride=2)
self.layer4 = self._make_layer(block, 128, layers[3], stride=2)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(128 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
# Zero-initialize the last BN in each residual branch,
# so that the residual branch starts with zeros, and each residual block behaves like an identity.
# This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
if zero_init_residual:
for m in self.modules():
if isinstance(m, Bottleneck):
nn.init.constant_(m.bn3.weight, 0)
elif isinstance(m, BasicBlock):
nn.init.constant_(m.bn2.weight, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * block.expansion, stride),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for _ in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
#x.shape =[1, 16, 32,32]
x = self.bn1(x)
x = self.relu(x)
#x = self.maxpool(x)
x = self.layer1(x)
#x.shape =[1, 128, 32,32]
x = self.layer2(x)
#x.shape =[1, 256, 32,32]
x = self.layer3(x)
#x.shape =[1, 512, 16,16]
x = self.layer4(x)
#x.shape =[1, 1024, 8,8]
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
resnet50 = ResNet(resnet.Bottleneck, [3, 4, 6, 3], 10, True).to(device)
#1(conv1) + 9(layer1) + 12(layer2) + 18(layer3) + 9(layer4) +1(fc)= ResNet50
resnet50
a=torch.Tensor(1,3,32,32).to(device)
out = resnet50(a)
print(out)
criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.SGD(resnet50.parameters(), lr = 0.1, momentum = 0.9, weight_decay=5e-4)
lr_sche = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
```
## make plot
```
loss_plt = vis.line(Y=torch.Tensor(1).zero_(),opts=dict(title='loss_tracker', legend=['loss'], showlegend=True))
acc_plt = vis.line(Y=torch.Tensor(1).zero_(),opts=dict(title='Accuracy', legend=['Acc'], showlegend=True))
```
## define acc_check function
```
def acc_check(net, test_set, epoch, save=1):
correct = 0
total = 0
with torch.no_grad():
for data in test_set:
images, labels = data
images = images.to(device)
labels = labels.to(device)
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
acc = (100 * correct / total)
print('Accuracy of the network on the 10000 test images: %d %%' % acc)
if save:
torch.save(net.state_dict(), "./model/model_epoch_{}_acc_{}.pth".format(epoch, int(acc)))
return acc
```
## Training with (acc check + model save)
```
print(len(trainloader))
epochs = 150
for epoch in range(epochs): # loop over the dataset multiple times
running_loss = 0.0
lr_sche.step()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = resnet50(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 30 == 29: # print every 30 mini-batches
value_tracker(loss_plt, torch.Tensor([running_loss/30]), torch.Tensor([i + epoch*len(trainloader) ]))
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 30))
running_loss = 0.0
#Check Accuracy
acc = acc_check(resnet50, testloader, epoch, save=1)
value_tracker(acc_plt, torch.Tensor([acc]), torch.Tensor([epoch]))
print('Finished Training')
```
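Since `acc_check` saves checkpoints under `./model`, a short sketch (the file name below is hypothetical and depends on the epoch/accuracy actually reached) of restoring one before evaluation:
```python
# Hypothetical checkpoint restore; adjust the path to a file that exists in ./model
checkpoint_path = "./model/model_epoch_149_acc_94.pth"
resnet50.load_state_dict(torch.load(checkpoint_path, map_location=device))
resnet50.eval()  # switch BatchNorm to inference statistics before testing
```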
## Model Accuracy Testing
```
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
images = images.to(device)
labels = labels.to(device)
outputs = resnet50(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
```
|
github_jupyter
|
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import visdom
vis = visdom.Visdom()
vis.close(env="main")
def value_tracker(value_plot, value, num):
'''num, loss_value, are Tensor'''
vis.line(X=num,
Y=value,
win = value_plot,
update='append'
)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
torch.manual_seed(777)
if device =='cuda':
torch.cuda.manual_seed_all(777)
transform = transforms.Compose([
transforms.ToTensor()
])
trainset = torchvision.datasets.CIFAR10(root='./cifar10', train=True, download=True, transform=transform)
print(trainset.train_data.shape)
train_data_mean = trainset.train_data.mean( axis=(0,1,2) )
train_data_std = trainset.train_data.std( axis=(0,1,2) )
print(train_data_mean)
print(train_data_std)
train_data_mean = train_data_mean / 255
train_data_std = train_data_std / 255
print(train_data_mean)
print(train_data_std)
transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.ToTensor(),
transforms.Normalize(train_data_mean, train_data_std)
])
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(train_data_mean, train_data_std)
])
trainset = torchvision.datasets.CIFAR10(root='./cifar10', train=True,
download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=256,
shuffle=True, num_workers=0)
testset = torchvision.datasets.CIFAR10(root='./cifar10', train=False,
download=True, transform=transform_test)
testloader = torch.utils.data.DataLoader(testset, batch_size=256,
shuffle=False, num_workers=0)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
import resnet
conv1x1=resnet.conv1x1
Bottleneck = resnet.Bottleneck
BasicBlock= resnet.BasicBlock
class ResNet(nn.Module):
def __init__(self, block, layers, num_classes=1000, zero_init_residual=False):
super(ResNet, self).__init__()
self.inplanes = 16
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1,
bias=False)
self.bn1 = nn.BatchNorm2d(16)
self.relu = nn.ReLU(inplace=True)
#self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 16, layers[0], stride=1)
self.layer2 = self._make_layer(block, 32, layers[1], stride=1)
self.layer3 = self._make_layer(block, 64, layers[2], stride=2)
self.layer4 = self._make_layer(block, 128, layers[3], stride=2)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(128 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
# Zero-initialize the last BN in each residual branch,
# so that the residual branch starts with zeros, and each residual block behaves like an identity.
# This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
if zero_init_residual:
for m in self.modules():
if isinstance(m, Bottleneck):
nn.init.constant_(m.bn3.weight, 0)
elif isinstance(m, BasicBlock):
nn.init.constant_(m.bn2.weight, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * block.expansion, stride),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for _ in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
#x.shape =[1, 16, 32,32]
x = self.bn1(x)
x = self.relu(x)
#x = self.maxpool(x)
x = self.layer1(x)
#x.shape =[1, 128, 32,32]
x = self.layer2(x)
#x.shape =[1, 256, 32,32]
x = self.layer3(x)
#x.shape =[1, 512, 16,16]
x = self.layer4(x)
#x.shape =[1, 1024, 8,8]
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
resnet50 = ResNet(resnet.Bottleneck, [3, 4, 6, 3], 10, True).to(device)
#1(conv1) + 9(layer1) + 12(layer2) + 18(layer3) + 9(layer4) +1(fc)= ResNet50
resnet50
a=torch.Tensor(1,3,32,32).to(device)
out = resnet50(a)
print(out)
criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.SGD(resnet50.parameters(), lr = 0.1, momentum = 0.9, weight_decay=5e-4)
lr_sche = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
loss_plt = vis.line(Y=torch.Tensor(1).zero_(),opts=dict(title='loss_tracker', legend=['loss'], showlegend=True))
acc_plt = vis.line(Y=torch.Tensor(1).zero_(),opts=dict(title='Accuracy', legend=['Acc'], showlegend=True))
def acc_check(net, test_set, epoch, save=1):
correct = 0
total = 0
with torch.no_grad():
for data in test_set:
images, labels = data
images = images.to(device)
labels = labels.to(device)
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
acc = (100 * correct / total)
print('Accuracy of the network on the 10000 test images: %d %%' % acc)
if save:
torch.save(net.state_dict(), "./model/model_epoch_{}_acc_{}.pth".format(epoch, int(acc)))
return acc
print(len(trainloader))
epochs = 150
for epoch in range(epochs): # loop over the dataset multiple times
running_loss = 0.0
lr_sche.step()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = resnet50(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 30 == 29: # print every 30 mini-batches
value_tracker(loss_plt, torch.Tensor([running_loss/30]), torch.Tensor([i + epoch*len(trainloader) ]))
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 30))
running_loss = 0.0
#Check Accuracy
acc = acc_check(resnet50, testloader, epoch, save=1)
value_tracker(acc_plt, torch.Tensor([acc]), torch.Tensor([epoch]))
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
images = images.to(device)
labels = labels.to(device)
outputs = resnet50(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
| 0.914642 | 0.888952 |
## Benchmark logistic regression on speech recognition
```
!pip install scikit-learn
# File utilities
import glob
import os.path
import numpy as np
# To read spectrograms
from scipy import signal
from scipy.io import wavfile
# To resize spectrograms
import cv2 # Normal resizing
import skimage.measure # Max pooling
# Classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
import skimage.measure
from scipy.io import wavfile
from scipy import signal
from sklearn.model_selection import train_test_split
import glob
# Shuffling data
import random
def create_spectrogram(file_name, window_size=20, step_size=10, eps=1e-10):
"""Creates a spectrogram from audio file"""
sample_rate, audio = wavfile.read(file_name)
nperseg = int(round(window_size * sample_rate / 1e3))
noverlap = int(round(step_size * sample_rate / 1e3))
_, _, spec = signal.spectrogram(audio, fs=sample_rate,
window='hann',
nperseg=nperseg,
noverlap=noverlap,
detrend=False)
# Create log spectrogram
spectrogram = np.log(spec.astype(np.float32) + eps)
# Max pooling
spectrogram = skimage.measure.block_reduce(spectrogram, (13, 13), np.max)
# Resize to 8x8 and flatten
spectrogram = cv2.resize(spectrogram, (8,8), cv2.INTER_CUBIC).flatten()
return spectrogram
def speech_mnist(phase='train'):
print("Creating speech_mnist dataset")
X = np.empty((2350*10, 64))
y = np.empty((2350*10))
numbers = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine']
for n, number in enumerate(numbers):
paths = glob.glob(f"../datasets/speech_mnist/{number}/*.wav")
paths = sorted(paths)
for i, path in enumerate(paths):
X[n*2350+i,:] = create_spectrogram(path).flatten()
y[n*2350+i] = n
Xtr, Xte, ytr, yte = train_test_split(X,y,test_size=0.2,random_state=123)
return Xtr, ytr.astype(np.uint8), Xte, yte.astype(np.uint8)
X_tr, y_tr, X_te, y_te = speech_mnist()
model = LogisticRegression(max_iter=0, random_state=3)
model.fit(X_tr, y_tr)
preds = model.predict(X_te)
print(f"Test acc: {np.mean(preds == y_te):.4f}")
import matplotlib.pyplot as plt
import seaborn as sn
import pandas as pd
cf = list(confusion_matrix(y_te, preds).astype(int))
df = pd.DataFrame(cf, range(10), range(10))
sn.set(font_scale=1.4)
plt.figure(figsize=(10,7))
sn.heatmap(df, annot=True, annot_kws={'size': 16}, fmt='g', cmap=sn.cm.mako_r)
plt.show()
df
SEED = 123
random.seed(123)
yes_paths = glob.glob("../datasets/yes/*wav")
no_paths = glob.glob("datasets/no/*wav")
random.shuffle(yes_paths)
random.shuffle(no_paths)
print(f"Found {len(yes_paths)} 'yes' files and {len(no_paths)} 'no' files")
n = int(2375*0.8)
yes = [create_spectrogram(file_path).flatten() for file_path in yes_paths]
no = [create_spectrogram(file_path).flatten() for file_path in no_paths]
yes_train, yes_test = yes[:n], yes[n:]
no_train, no_test = no[:n], no[n:]
X_train = []
for y, n in zip(yes_train, no_train):
X_train.append(y)
X_train.append(n)
X_train = np.array(X_train)
X_train = (X_train - X_train.max()/2)
X_train /= X_train.max()
y_train = np.zeros(len(X_train))
y_train[::2] = 1
X_test = np.array(yes_test + no_test)
X_test = (X_test - X_test.max()/2)
X_test /= X_test.max()
y_test = np.zeros(len(X_test))
y_test[:len(X_test)//2] = 1
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
preds = model.predict(X_test)
print(f"Test acc: {np.mean(preds == y_test):.4f}")
```
|
github_jupyter
|
!pip install scikit-learn
# File utilities
import glob
import os.path
import numpy as np
# To read spectrograms
from scipy import signal
from scipy.io import wavfile
# To resize spectrograms
import cv2 # Normal resizing
import skimage.measure # Max pooling
# Classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
import skimage.measure
from scipy.io import wavfile
from scipy import signal
from sklearn.model_selection import train_test_split
import glob
# Shuffling data
import random
def create_spectrogram(file_name, window_size=20, step_size=10, eps=1e-10):
"""Creates a spectrogram from audio file"""
sample_rate, audio = wavfile.read(file_name)
nperseg = int(round(window_size * sample_rate / 1e3))
noverlap = int(round(step_size * sample_rate / 1e3))
_, _, spec = signal.spectrogram(audio, fs=sample_rate,
window='hann',
nperseg=nperseg,
noverlap=noverlap,
detrend=False)
# Create log spectrogram
spectrogram = np.log(spec.astype(np.float32) + eps)
# Max pooling
spectrogram = skimage.measure.block_reduce(spectrogram, (13, 13), np.max)
# Resize to 8x8 and flatten
spectrogram = cv2.resize(spectrogram, (8,8), cv2.INTER_CUBIC).flatten()
return spectrogram
def speech_mnist(phase='train'):
print("Creating speech_mnist dataset")
X = np.empty((2350*10, 64))
y = np.empty((2350*10))
numbers = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine']
for n, number in enumerate(numbers):
paths = glob.glob(f"../datasets/speech_mnist/{number}/*.wav")
paths = sorted(paths)
for i, path in enumerate(paths):
X[n*2350+i,:] = create_spectrogram(path).flatten()
y[n*2350+i] = n
Xtr, Xte, ytr, yte = train_test_split(X,y,test_size=0.2,random_state=123)
return Xtr, ytr.astype(np.uint8), Xte, yte.astype(np.uint8)
X_tr, y_tr, X_te, y_te = speech_mnist()
model = LogisticRegression(max_iter=0, random_state=3)
model.fit(X_tr, y_tr)
preds = model.predict(X_te)
print(f"Test acc: {np.mean(preds == y_te):.4f}")
cf = list(confusion_matrix(y_te, preds).astype(int))
df = pd.DataFrame(cf, range(10), range(10))
sn.set(font_scale=1.4)
plt.figure(figsize=(10,7))
sn.heatmap(df, annot=True, annot_kws={'size': 16}, fmt='g', cmap=sn.cm.mako_r)
plt.show()
sn.cm.
import matplotlib.pyplot as plt
import seaborn as sn
import pandas as pd
df
SEED = 123
random.seed(123)
yes_paths = glob.glob("../datasets/yes/*wav")
no_paths = glob.glob("datasets/no/*wav")
random.shuffle(yes_paths)
random.shuffle(no_paths)
print(f"Found {len(yes_paths)} 'yes' files and {len(no_paths)} 'no' files")
n = int(2375*0.8)
yes = [create_spectrogram(file_path).flatten() for file_path in yes_paths]
no = [create_spectrogram(file_path).flatten() for file_path in no_paths]
yes_train, yes_test = yes[:n], yes[n:]
no_train, no_test = no[:n], no[n:]
X_train = []
for y, n in zip(yes_train, no_train):
X_train.append(y)
X_train.append(n)
X_train = np.array(X_train)
X_train = (X_train - X_train.max()/2)
X_train /= X_train.max()
y_train = np.zeros(len(X_train))
y_train[::2] = 1
X_test = np.array(yes_test + no_test)
X_test = (X_test - X_test.max()/2)
X_test /= X_test.max()
y_test = np.zeros(len(X_test))
y_test[:len(X_test)//2] = 1
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
preds = model.predict(X_test)
print(f"Test acc: {np.mean(preds == y_test):.4f}")
| 0.560132 | 0.668583 |
```
import math
import random
import gym
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.distributions import Normal
from IPython.display import clear_output
import matplotlib.pyplot as plt
%matplotlib inline
```
<h2>Use CUDA</h2>
```
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
```
<h2>Replay Buffer</h2>
```
class ReplayBuffer:
def __init__(self, capacity):
self.capacity = capacity
self.buffer = []
self.position = 0
def push(self, state, action, reward, next_state, done):
if len(self.buffer) < self.capacity:
self.buffer.append(None)
self.buffer[self.position] = (state, action, reward, next_state, done)
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
batch = random.sample(self.buffer, batch_size)
state, action, reward, next_state, done = map(np.stack, zip(*batch))
return state, action, reward, next_state, done
def __len__(self):
return len(self.buffer)
```
<h2>Normalize action space</h2>
```
class NormalizedActions(gym.ActionWrapper):
def _action(self, action):
low_bound = self.action_space.low
upper_bound = self.action_space.high
action = low_bound + (action + 1.0) * 0.5 * (upper_bound - low_bound)
action = np.clip(action, low_bound, upper_bound)
return action
def _reverse_action(self, action):
low_bound = self.action_space.low
upper_bound = self.action_space.high
action = 2 * (action - low_bound) / (upper_bound - low_bound) - 1
action = np.clip(action, low_bound, upper_bound)
        return action
```
<h2>Ornstein-Uhlenbeck process</h2>
Adding time-correlated noise to the actions taken by the deterministic policy<br>
<a href="https://en.wikipedia.org/wiki/Ornstein%E2%80%93Uhlenbeck_process">wiki</a>
```
class OUNoise(object):
def __init__(self, action_space, mu=0.0, theta=0.15, max_sigma=0.3, min_sigma=0.3, decay_period=100000):
self.mu = mu
self.theta = theta
self.sigma = max_sigma
self.max_sigma = max_sigma
self.min_sigma = min_sigma
self.decay_period = decay_period
self.action_dim = action_space.shape[0]
self.low = action_space.low
self.high = action_space.high
self.reset()
def reset(self):
self.state = np.ones(self.action_dim) * self.mu
def evolve_state(self):
x = self.state
dx = self.theta * (self.mu - x) + self.sigma * np.random.randn(self.action_dim)
self.state = x + dx
return self.state
def get_action(self, action, t=0):
ou_state = self.evolve_state()
self.sigma = self.max_sigma - (self.max_sigma - self.min_sigma) * min(1.0, t / self.decay_period)
return np.clip(action + ou_state, self.low, self.high)
#https://github.com/vitchyr/rlkit/blob/master/rlkit/exploration_strategies/ou_strategy.py
def plot(frame_idx, rewards):
clear_output(True)
plt.figure(figsize=(20,5))
plt.subplot(131)
plt.title('frame %s. reward: %s' % (frame_idx, rewards[-1]))
plt.plot(rewards)
plt.show()
```
<h1> Continuous control with deep reinforcement learning</h1>
<h2><a href="https://arxiv.org/abs/1509.02971">Arxiv</a></h2>
```
class ValueNetwork(nn.Module):
def __init__(self, num_inputs, num_actions, hidden_size, init_w=3e-3):
super(ValueNetwork, self).__init__()
self.linear1 = nn.Linear(num_inputs + num_actions, hidden_size)
self.linear2 = nn.Linear(hidden_size, hidden_size)
self.linear3 = nn.Linear(hidden_size, 1)
self.linear3.weight.data.uniform_(-init_w, init_w)
self.linear3.bias.data.uniform_(-init_w, init_w)
def forward(self, state, action):
x = torch.cat([state, action], 1)
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = self.linear3(x)
return x
class PolicyNetwork(nn.Module):
def __init__(self, num_inputs, num_actions, hidden_size, init_w=3e-3):
super(PolicyNetwork, self).__init__()
self.linear1 = nn.Linear(num_inputs, hidden_size)
self.linear2 = nn.Linear(hidden_size, hidden_size)
self.linear3 = nn.Linear(hidden_size, num_actions)
self.linear3.weight.data.uniform_(-init_w, init_w)
self.linear3.bias.data.uniform_(-init_w, init_w)
def forward(self, state):
x = F.relu(self.linear1(state))
x = F.relu(self.linear2(x))
x = F.tanh(self.linear3(x))
return x
def get_action(self, state):
state = torch.FloatTensor(state).unsqueeze(0).to(device)
action = self.forward(state)
return action.detach().cpu().numpy()[0, 0]
```
<h2>DDPG Update</h2>
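For reference, the code below implements the standard DDPG targets (notation as in the paper linked above): the critic regresses toward

$$y = r + \gamma\,(1 - d)\,Q_{\phi'}\!\bigl(s',\, \mu_{\theta'}(s')\bigr),$$

the actor maximizes $Q_\phi(s, \mu_\theta(s))$, and both target networks are soft-updated with $\theta' \leftarrow \tau\,\theta + (1-\tau)\,\theta'$.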
```
def ddpg_update(batch_size,
gamma = 0.99,
min_value=-np.inf,
max_value=np.inf,
soft_tau=1e-2):
state, action, reward, next_state, done = replay_buffer.sample(batch_size)
state = torch.FloatTensor(state).to(device)
next_state = torch.FloatTensor(next_state).to(device)
action = torch.FloatTensor(action).to(device)
reward = torch.FloatTensor(reward).unsqueeze(1).to(device)
done = torch.FloatTensor(np.float32(done)).unsqueeze(1).to(device)
policy_loss = value_net(state, policy_net(state))
policy_loss = -policy_loss.mean()
next_action = target_policy_net(next_state)
target_value = target_value_net(next_state, next_action.detach())
expected_value = reward + (1.0 - done) * gamma * target_value
expected_value = torch.clamp(expected_value, min_value, max_value)
value = value_net(state, action)
value_loss = value_criterion(value, expected_value.detach())
policy_optimizer.zero_grad()
policy_loss.backward()
policy_optimizer.step()
value_optimizer.zero_grad()
value_loss.backward()
value_optimizer.step()
for target_param, param in zip(target_value_net.parameters(), value_net.parameters()):
target_param.data.copy_(
target_param.data * (1.0 - soft_tau) + param.data * soft_tau
)
for target_param, param in zip(target_policy_net.parameters(), policy_net.parameters()):
target_param.data.copy_(
target_param.data * (1.0 - soft_tau) + param.data * soft_tau
)
env = NormalizedActions(gym.make("Pendulum-v0"))
ou_noise = OUNoise(env.action_space)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
hidden_dim = 256
value_net = ValueNetwork(state_dim, action_dim, hidden_dim).to(device)
policy_net = PolicyNetwork(state_dim, action_dim, hidden_dim).to(device)
target_value_net = ValueNetwork(state_dim, action_dim, hidden_dim).to(device)
target_policy_net = PolicyNetwork(state_dim, action_dim, hidden_dim).to(device)
for target_param, param in zip(target_value_net.parameters(), value_net.parameters()):
target_param.data.copy_(param.data)
for target_param, param in zip(target_policy_net.parameters(), policy_net.parameters()):
target_param.data.copy_(param.data)
value_lr = 1e-3
policy_lr = 1e-4
value_optimizer = optim.Adam(value_net.parameters(), lr=value_lr)
policy_optimizer = optim.Adam(policy_net.parameters(), lr=policy_lr)
value_criterion = nn.MSELoss()
replay_buffer_size = 1000000
replay_buffer = ReplayBuffer(replay_buffer_size)
max_frames = 12000
max_steps = 500
frame_idx = 0
rewards = []
batch_size = 128
while frame_idx < max_frames:
state = env.reset()
ou_noise.reset()
episode_reward = 0
for step in range(max_steps):
action = policy_net.get_action(state)
action = ou_noise.get_action(action, step)
next_state, reward, done, _ = env.step(action)
replay_buffer.push(state, action, reward, next_state, done)
if len(replay_buffer) > batch_size:
ddpg_update(batch_size)
state = next_state
episode_reward += reward
frame_idx += 1
if frame_idx % max(1000, max_steps + 1) == 0:
plot(frame_idx, rewards)
if done:
break
rewards.append(episode_reward)
```
|
github_jupyter
|
import math
import random
import gym
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.distributions import Normal
from IPython.display import clear_output
import matplotlib.pyplot as plt
%matplotlib inline
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
class ReplayBuffer:
def __init__(self, capacity):
self.capacity = capacity
self.buffer = []
self.position = 0
def push(self, state, action, reward, next_state, done):
if len(self.buffer) < self.capacity:
self.buffer.append(None)
self.buffer[self.position] = (state, action, reward, next_state, done)
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
batch = random.sample(self.buffer, batch_size)
state, action, reward, next_state, done = map(np.stack, zip(*batch))
return state, action, reward, next_state, done
def __len__(self):
return len(self.buffer)
class NormalizedActions(gym.ActionWrapper):
def _action(self, action):
low_bound = self.action_space.low
upper_bound = self.action_space.high
action = low_bound + (action + 1.0) * 0.5 * (upper_bound - low_bound)
action = np.clip(action, low_bound, upper_bound)
return action
def _reverse_action(self, action):
low_bound = self.action_space.low
upper_bound = self.action_space.high
action = 2 * (action - low_bound) / (upper_bound - low_bound) - 1
action = np.clip(action, low_bound, upper_bound)
return actions
class OUNoise(object):
def __init__(self, action_space, mu=0.0, theta=0.15, max_sigma=0.3, min_sigma=0.3, decay_period=100000):
self.mu = mu
self.theta = theta
self.sigma = max_sigma
self.max_sigma = max_sigma
self.min_sigma = min_sigma
self.decay_period = decay_period
self.action_dim = action_space.shape[0]
self.low = action_space.low
self.high = action_space.high
self.reset()
def reset(self):
self.state = np.ones(self.action_dim) * self.mu
def evolve_state(self):
x = self.state
dx = self.theta * (self.mu - x) + self.sigma * np.random.randn(self.action_dim)
self.state = x + dx
return self.state
def get_action(self, action, t=0):
ou_state = self.evolve_state()
self.sigma = self.max_sigma - (self.max_sigma - self.min_sigma) * min(1.0, t / self.decay_period)
return np.clip(action + ou_state, self.low, self.high)
#https://github.com/vitchyr/rlkit/blob/master/rlkit/exploration_strategies/ou_strategy.py
def plot(frame_idx, rewards):
clear_output(True)
plt.figure(figsize=(20,5))
plt.subplot(131)
plt.title('frame %s. reward: %s' % (frame_idx, rewards[-1]))
plt.plot(rewards)
plt.show()
class ValueNetwork(nn.Module):
def __init__(self, num_inputs, num_actions, hidden_size, init_w=3e-3):
super(ValueNetwork, self).__init__()
self.linear1 = nn.Linear(num_inputs + num_actions, hidden_size)
self.linear2 = nn.Linear(hidden_size, hidden_size)
self.linear3 = nn.Linear(hidden_size, 1)
self.linear3.weight.data.uniform_(-init_w, init_w)
self.linear3.bias.data.uniform_(-init_w, init_w)
def forward(self, state, action):
x = torch.cat([state, action], 1)
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = self.linear3(x)
return x
class PolicyNetwork(nn.Module):
def __init__(self, num_inputs, num_actions, hidden_size, init_w=3e-3):
super(PolicyNetwork, self).__init__()
self.linear1 = nn.Linear(num_inputs, hidden_size)
self.linear2 = nn.Linear(hidden_size, hidden_size)
self.linear3 = nn.Linear(hidden_size, num_actions)
self.linear3.weight.data.uniform_(-init_w, init_w)
self.linear3.bias.data.uniform_(-init_w, init_w)
def forward(self, state):
x = F.relu(self.linear1(state))
x = F.relu(self.linear2(x))
x = F.tanh(self.linear3(x))
return x
def get_action(self, state):
state = torch.FloatTensor(state).unsqueeze(0).to(device)
action = self.forward(state)
return action.detach().cpu().numpy()[0, 0]
def ddpg_update(batch_size,
gamma = 0.99,
min_value=-np.inf,
max_value=np.inf,
soft_tau=1e-2):
state, action, reward, next_state, done = replay_buffer.sample(batch_size)
state = torch.FloatTensor(state).to(device)
next_state = torch.FloatTensor(next_state).to(device)
action = torch.FloatTensor(action).to(device)
reward = torch.FloatTensor(reward).unsqueeze(1).to(device)
done = torch.FloatTensor(np.float32(done)).unsqueeze(1).to(device)
policy_loss = value_net(state, policy_net(state))
policy_loss = -policy_loss.mean()
next_action = target_policy_net(next_state)
target_value = target_value_net(next_state, next_action.detach())
expected_value = reward + (1.0 - done) * gamma * target_value
expected_value = torch.clamp(expected_value, min_value, max_value)
value = value_net(state, action)
value_loss = value_criterion(value, expected_value.detach())
policy_optimizer.zero_grad()
policy_loss.backward()
policy_optimizer.step()
value_optimizer.zero_grad()
value_loss.backward()
value_optimizer.step()
for target_param, param in zip(target_value_net.parameters(), value_net.parameters()):
target_param.data.copy_(
target_param.data * (1.0 - soft_tau) + param.data * soft_tau
)
for target_param, param in zip(target_policy_net.parameters(), policy_net.parameters()):
target_param.data.copy_(
target_param.data * (1.0 - soft_tau) + param.data * soft_tau
)
env = NormalizedActions(gym.make("Pendulum-v0"))
ou_noise = OUNoise(env.action_space)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
hidden_dim = 256
value_net = ValueNetwork(state_dim, action_dim, hidden_dim).to(device)
policy_net = PolicyNetwork(state_dim, action_dim, hidden_dim).to(device)
target_value_net = ValueNetwork(state_dim, action_dim, hidden_dim).to(device)
target_policy_net = PolicyNetwork(state_dim, action_dim, hidden_dim).to(device)
for target_param, param in zip(target_value_net.parameters(), value_net.parameters()):
target_param.data.copy_(param.data)
for target_param, param in zip(target_policy_net.parameters(), policy_net.parameters()):
target_param.data.copy_(param.data)
value_lr = 1e-3
policy_lr = 1e-4
value_optimizer = optim.Adam(value_net.parameters(), lr=value_lr)
policy_optimizer = optim.Adam(policy_net.parameters(), lr=policy_lr)
value_criterion = nn.MSELoss()
replay_buffer_size = 1000000
replay_buffer = ReplayBuffer(replay_buffer_size)
max_frames = 12000
max_steps = 500
frame_idx = 0
rewards = []
batch_size = 128
while frame_idx < max_frames:
state = env.reset()
ou_noise.reset()
episode_reward = 0
for step in range(max_steps):
action = policy_net.get_action(state)
action = ou_noise.get_action(action, step)
next_state, reward, done, _ = env.step(action)
replay_buffer.push(state, action, reward, next_state, done)
if len(replay_buffer) > batch_size:
ddpg_update(batch_size)
state = next_state
episode_reward += reward
frame_idx += 1
if frame_idx % max(1000, max_steps + 1) == 0:
plot(frame_idx, rewards)
if done:
break
rewards.append(episode_reward)
| 0.887674 | 0.784979 |
```
##!/usr/bin/env python
"""plot_landuse_WRF.py
Purpose: Plot the dominant land use
Author: Annette L Hirsch @ CLEX, UNSW. Sydney (Australia)
email: a.hirsch@unsw.edu.au
Created: Tue Jul 21 09:46:20 AEST 2020
"""
# Load packages
import numpy as np
import netCDF4 as nc
import sys
import os
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.colors as mcolors
from matplotlib.colors import BoundaryNorm
from matplotlib.ticker import MaxNLocator
from matplotlib import cm
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import cartopy.crs as ccrs
```
Define AWS locations
```
awsdir = '/g/data/w97/azh561/WRF/obs/AWS_1mindata_20stations'
awsnum = ['066037','066137','066194','067105','067108','067113','061078','061366','066062','067119','068228']
awsnm = ['Sydney Airport','Bankstown','Canterbury','Richmond','Badgerys Creek','Penrith','Williamtown','Norah Head','Observatory Hill','Horsley Park','Bellambi']
awslat = [-33.9465,-33.9176,-33.9057,-33.6004,-33.8969,-33.7195,-32.7939,-33.2814,-33.8607,-33.851,-34.3691]
awslon = [151.1731,150.9837,151.1134,150.7761,150.7281,150.6783,151.8364,151.5766,151.2050,150.8567,150.9291]
naws = len(awsnum)
```
Define inputs
```
datadir = '/g/data/w97/azh561/WRF/sydney800m/'
ndom = 2
EXPNM = "Sydney_800m"
cbarsize = [0.4,0.6]
for dd in range(ndom):
# Read data
f = nc.Dataset('%sgeo_em.d0%s.nc'%(datadir,dd+1))
lu = f.variables['LU_INDEX'][0,10:-11,10:-11]
lat2d = f.variables['XLAT_M'][0,10:-11,10:-11]
lontmp = f.variables['XLONG_M'][0,10:-11,10:-11]
lon2d = np.where(lontmp<0.0,lontmp+360,lontmp)
clon = f.getncattr('CEN_LON')
nlu = f.getncattr('NUM_LAND_CAT')
iswater = f.getncattr('ISWATER')
translat = f.variables['XLAT_M'][0,174,0]
f.close()
# Mask the water bodies
luma = np.ma.masked_where(lu == iswater, lu)
# Define region extents
lonN = np.min(lon2d)
lonX = np.max(lon2d)
latN = np.min(lat2d)
latX = np.max(lat2d)
# Figure formatting
plt.rcParams['savefig.dpi']=250
plt.rcParams["font.weight"] = "bold"
plt.rcParams["axes.labelweight"] = "bold"
# Set up projection
plotcrs = ccrs.PlateCarree(central_longitude=clon)
gs = mpl.gridspec.GridSpec(nrows=1,ncols=1)
fig = plt.figure(figsize=(15.0,15.0))
ax = fig.add_subplot(gs[0,0],projection=plotcrs)
# Specify the colormap with 1-30 green hues and 30-40 as grey hues
colors1 = plt.cm.Greens(np.linspace(0.2,1,30))
colors2 = plt.cm.Greys_r(np.linspace(0.05,0.7,10))
colors = np.vstack((colors1, colors2))
new_colormap = mcolors.LinearSegmentedColormap.from_list('new_colormap', colors)
new_colormap.set_bad(color='lightblue') # This forces any missing values to have a light blue hue rather than white
# Force the plot and colorbar to have discrete values rather than continuous
levels = MaxNLocator(nbins=nlu).tick_values(1,nlu)
norm = BoundaryNorm(levels,ncolors=new_colormap.N, clip=True)
# Plot the land use
cm = ax.pcolormesh(lon2d,lat2d,luma,
vmin=1,vmax=nlu, # Specify the color limits to appropriate data range
cmap=new_colormap, # To use the new colormap that we've created
norm=norm, # To specify the intervals for the discrete colors
transform=ccrs.PlateCarree())
ax.set_extent([lonN,lonX,latN,latX], ccrs.PlateCarree())
ax.coastlines(resolution='10m', color='black', linewidth=1)
plt.colorbar(cm, ticks=np.arange(1,nlu,2), orientation='vertical',shrink=cbarsize[dd], pad=0.04)
# For the inner domain add the station locations
if dd == 1:
hloc = ['left','right','left','right','right','right','left','left','left','right','left']
vloc = ['top','top','bottom','bottom','top','bottom','bottom','bottom','bottom','bottom','bottom']
for ss in range(naws):
ax.plot(awslon[ss],awslat[ss],color="black",marker = 'o',transform=ccrs.PlateCarree())
transform = ccrs.PlateCarree()._as_mpl_transform(ax)
ax.annotate(awsnm[ss],xy=(awslon[ss],awslat[ss]), xycoords=transform,ha=hloc[ss], va=vloc[ss])
ax.axhline(translat, color='black', linestyle='--',linewidth=2.0) # For the transect location
# Save the figure
plt.savefig("Dominant_Land_Use_d0%s_%s.png" %(dd+1,EXPNM))
plt.close(fig)
def truncate_colormap(cmap, minval=0.0, maxval=1.0, n=100):
new_cmap = mcolors.LinearSegmentedColormap.from_list(
'trunc({n},{a:.2f},{b:.2f})'.format(n=cmap.name, a=minval, b=maxval),
cmap(np.linspace(minval, maxval, n)))
return new_cmap
for dd in range(ndom):
# Read data
f = nc.Dataset('%sgeo_em.d0%s.nc'%(datadir,dd+1))
topo = f.variables['HGT_M'][0,10:-11,10:-11]
mask = f.variables['LANDMASK'][0,10:-11,10:-11]
lat2d = f.variables['XLAT_M'][0,10:-11,10:-11]
lontmp = f.variables['XLONG_M'][0,10:-11,10:-11]
lon2d = np.where(lontmp<0.0,lontmp+360,lontmp)
clon = f.getncattr('CEN_LON')
nlu = f.getncattr('NUM_LAND_CAT')
iswater = f.getncattr('ISWATER')
translat = f.variables['XLAT_M'][0,174,0]
f.close()
# Mask the water bodies
topoma = np.ma.masked_where(mask == 0, topo)
# Define region extents
lonN = np.min(lon2d)
lonX = np.max(lon2d)
latN = np.min(lat2d)
latX = np.max(lat2d)
# Figure formatting
plt.rcParams['savefig.dpi']=250
plt.rcParams["font.weight"] = "bold"
plt.rcParams["axes.labelweight"] = "bold"
# Set up projection
plotcrs = ccrs.PlateCarree(central_longitude=clon)
gs = mpl.gridspec.GridSpec(nrows=1,ncols=1)
fig = plt.figure(figsize=(15.0,15.0))
ax = fig.add_subplot(gs[0,0],projection=plotcrs)
# Specify the colormap
cmap = plt.cm.get_cmap('terrain')
new_cmap = truncate_colormap(cmap, 0.2, 0.9)
new_cmap.set_bad('blue')
mn = 0
mx = 1500
nbins=15
levels = MaxNLocator(nbins=nbins).tick_values(mn,mx)
norm = BoundaryNorm(levels, ncolors=new_cmap.N, clip=True)
# Plot the land use
cm = ax.pcolormesh(lon2d,lat2d,topoma,vmin=mn,vmax=mx,cmap=new_cmap,norm=norm,transform=ccrs.PlateCarree())
ax.set_extent([lonN,lonX,latN,latX], ccrs.PlateCarree())
ax.coastlines(resolution='10m', color='black', linewidth=1)
plt.colorbar(cm, ticks=np.arange(mn,mx,100), orientation='vertical',shrink=cbarsize[dd], pad=0.04)
# For the inner domain add the station locations
if dd == 1:
hloc = ['left','right','left','right','right','right','left','left','left','right','left']
vloc = ['top','top','bottom','bottom','top','bottom','bottom','bottom','bottom','bottom','bottom']
for ss in range(naws):
ax.plot(awslon[ss],awslat[ss],color="black",marker = 'o',transform=ccrs.PlateCarree())
transform = ccrs.PlateCarree()._as_mpl_transform(ax)
ax.annotate(awsnm[ss],xy=(awslon[ss],awslat[ss]), xycoords=transform,ha=hloc[ss], va=vloc[ss])
ax.axhline(translat, color='black', linestyle='--',linewidth=2.0) # For the transect location
# Save the figure
plt.savefig("Topography_d0%s_%s.png" %(dd+1,EXPNM))
plt.close(fig)
```
|
github_jupyter
|
##!/usr/bin/env python
"""plot_landuse_WRF.py
Purpose: Plot the dominant land use
Author: Annette L Hirsch @ CLEX, UNSW. Sydney (Australia)
email: a.hirsch@unsw.edu.au
Created: Tue Jul 21 09:46:20 AEST 2020
"""
# Load packages
import numpy as np
import netCDF4 as nc
import sys
import os
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.colors as mcolors
from matplotlib.colors import BoundaryNorm
from matplotlib.ticker import MaxNLocator
from matplotlib import cm
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import cartopy.crs as ccrs
awsdir = '/g/data/w97/azh561/WRF/obs/AWS_1mindata_20stations'
awsnum = ['066037','066137','066194','067105','067108','067113','061078','061366','066062','067119','068228']
awsnm = ['Sydney Airport','Bankstown','Canterbury','Richmond','Badgerys Creek','Penrith','Williamtown','Norah Head','Observatory Hill','Horsley Park','Bellambi']
awslat = [-33.9465,-33.9176,-33.9057,-33.6004,-33.8969,-33.7195,-32.7939,-33.2814,-33.8607,-33.851,-34.3691]
awslon = [151.1731,150.9837,151.1134,150.7761,150.7281,150.6783,151.8364,151.5766,151.2050,150.8567,150.9291]
naws = len(awsnum)
datadir = '/g/data/w97/azh561/WRF/sydney800m/'
ndom = 2
EXPNM = "Sydney_800m"
cbarsize = [0.4,0.6]
for dd in range(ndom):
# Read data
f = nc.Dataset('%sgeo_em.d0%s.nc'%(datadir,dd+1))
lu = f.variables['LU_INDEX'][0,10:-11,10:-11]
lat2d = f.variables['XLAT_M'][0,10:-11,10:-11]
lontmp = f.variables['XLONG_M'][0,10:-11,10:-11]
lon2d = np.where(lontmp<0.0,lontmp+360,lontmp)
clon = f.getncattr('CEN_LON')
nlu = f.getncattr('NUM_LAND_CAT')
iswater = f.getncattr('ISWATER')
translat = f.variables['XLAT_M'][0,174,0]
f.close()
# Mask the water bodies
luma = np.ma.masked_where(lu == iswater, lu)
# Define region extents
lonN = np.min(lon2d)
lonX = np.max(lon2d)
latN = np.min(lat2d)
latX = np.max(lat2d)
# Figure formatting
plt.rcParams['savefig.dpi']=250
plt.rcParams["font.weight"] = "bold"
plt.rcParams["axes.labelweight"] = "bold"
# Set up projection
plotcrs = ccrs.PlateCarree(central_longitude=clon)
gs = mpl.gridspec.GridSpec(nrows=1,ncols=1)
fig = plt.figure(figsize=(15.0,15.0))
ax = fig.add_subplot(gs[0,0],projection=plotcrs)
# Specify the colormap with 1-30 green hues and 30-40 as grey hues
colors1 = plt.cm.Greens(np.linspace(0.2,1,30))
colors2 = plt.cm.Greys_r(np.linspace(0.05,0.7,10))
colors = np.vstack((colors1, colors2))
new_colormap = mcolors.LinearSegmentedColormap.from_list('new_colormap', colors)
new_colormap.set_bad(color='lightblue') # This forces any missing values to have a light blue hue rather than white
# Force the plot and colorbar to have discrete values rather than continuous
levels = MaxNLocator(nbins=nlu).tick_values(1,nlu)
norm = BoundaryNorm(levels,ncolors=new_colormap.N, clip=True)
# Plot the land use
cm = ax.pcolormesh(lon2d,lat2d,luma,
vmin=1,vmax=nlu, # Specify the color limits to appropriate data range
cmap=new_colormap, # To use the new colormap that we've created
norm=norm, # To specify the intervals for the discrete colors
transform=ccrs.PlateCarree())
ax.set_extent([lonN,lonX,latN,latX], ccrs.PlateCarree())
ax.coastlines(resolution='10m', color='black', linewidth=1)
plt.colorbar(cm, ticks=np.arange(1,nlu,2), orientation='vertical',shrink=cbarsize[dd], pad=0.04)
# For the inner domain add the station locations
if dd == 1:
hloc = ['left','right','left','right','right','right','left','left','left','right','left']
vloc = ['top','top','bottom','bottom','top','bottom','bottom','bottom','bottom','bottom','bottom']
for ss in range(naws):
ax.plot(awslon[ss],awslat[ss],color="black",marker = 'o',transform=ccrs.PlateCarree())
transform = ccrs.PlateCarree()._as_mpl_transform(ax)
ax.annotate(awsnm[ss],xy=(awslon[ss],awslat[ss]), xycoords=transform,ha=hloc[ss], va=vloc[ss])
ax.axhline(translat, color='black', linestyle='--',linewidth=2.0) # For the transect location
# Save the figure
plt.savefig("Dominant_Land_Use_d0%s_%s.png" %(dd+1,EXPNM))
plt.close(fig)
def truncate_colormap(cmap, minval=0.0, maxval=1.0, n=100):
new_cmap = mcolors.LinearSegmentedColormap.from_list(
'trunc({n},{a:.2f},{b:.2f})'.format(n=cmap.name, a=minval, b=maxval),
cmap(np.linspace(minval, maxval, n)))
return new_cmap
for dd in range(ndom):
# Read data
f = nc.Dataset('%sgeo_em.d0%s.nc'%(datadir,dd+1))
topo = f.variables['HGT_M'][0,10:-11,10:-11]
mask = f.variables['LANDMASK'][0,10:-11,10:-11]
lat2d = f.variables['XLAT_M'][0,10:-11,10:-11]
lontmp = f.variables['XLONG_M'][0,10:-11,10:-11]
lon2d = np.where(lontmp<0.0,lontmp+360,lontmp)
clon = f.getncattr('CEN_LON')
nlu = f.getncattr('NUM_LAND_CAT')
iswater = f.getncattr('ISWATER')
translat = f.variables['XLAT_M'][0,174,0]
f.close()
# Mask the water bodies
topoma = np.ma.masked_where(mask == 0, topo)
# Define region extents
lonN = np.min(lon2d)
lonX = np.max(lon2d)
latN = np.min(lat2d)
latX = np.max(lat2d)
# Figure formatting
plt.rcParams['savefig.dpi']=250
plt.rcParams["font.weight"] = "bold"
plt.rcParams["axes.labelweight"] = "bold"
# Set up projection
plotcrs = ccrs.PlateCarree(central_longitude=clon)
gs = mpl.gridspec.GridSpec(nrows=1,ncols=1)
fig = plt.figure(figsize=(15.0,15.0))
ax = fig.add_subplot(gs[0,0],projection=plotcrs)
# Specify the colormap
cmap = plt.cm.get_cmap('terrain')
new_cmap = truncate_colormap(cmap, 0.2, 0.9)
new_cmap.set_bad('blue')
mn = 0
mx = 1500
nbins=15
levels = MaxNLocator(nbins=nbins).tick_values(mn,mx)
norm = BoundaryNorm(levels, ncolors=new_cmap.N, clip=True)
# Plot the land use
cm = ax.pcolormesh(lon2d,lat2d,topoma,vmin=mn,vmax=mx,cmap=new_cmap,norm=norm,transform=ccrs.PlateCarree())
ax.set_extent([lonN,lonX,latN,latX], ccrs.PlateCarree())
ax.coastlines(resolution='10m', color='black', linewidth=1)
plt.colorbar(cm, ticks=np.arange(mn,mx,100), orientation='vertical',shrink=cbarsize[dd], pad=0.04)
# For the inner domain add the station locations
if dd == 1:
hloc = ['left','right','left','right','right','right','left','left','left','right','left']
vloc = ['top','top','bottom','bottom','top','bottom','bottom','bottom','bottom','bottom','bottom']
for ss in range(naws):
ax.plot(awslon[ss],awslat[ss],color="black",marker = 'o',transform=ccrs.PlateCarree())
transform = ccrs.PlateCarree()._as_mpl_transform(ax)
ax.annotate(awsnm[ss],xy=(awslon[ss],awslat[ss]), xycoords=transform,ha=hloc[ss], va=vloc[ss])
ax.axhline(translat, color='black', linestyle='--',linewidth=2.0) # For the transect location
# Save the figure
plt.savefig("Topography_d0%s_%s.png" %(dd+1,EXPNM))
plt.close(fig)
| 0.629547 | 0.72648 |
# _K MEANS CLUSTERING ALGORITHM_
### MODEL REPRESENTATION:
`K-Means` is the simplest and most fundamental clustering algorithm. The clustering problem is, in essence, simple: given data $\{x_1,\ldots,x_n\}$, we want to partition it into clusters, where the goal is to find these clusters given only the data and some modelling assumptions.
The model representation for `K-Means` is to learn the clusters such that observations or data points in the same cluster are considered similar. We assume that there are `K` clusters underlying our dataset and introduce a latent variable _`c`_. If the $i^{th}$ data point $x_i$ is assigned to cluster `k`, then $c_i$ equals `k`; in other words, $c_i$ is an index of the cluster that observation $x_i$ belongs to.
The output of the algorithm is two sets. The first is a vector **c** of length `n`: if $c_i$ and $c_j$ are both equal to `k`, then points $x_i$ and $x_j$ are clustered together in cluster `k`. The second is a set of _K_ mean vectors $\mu$, where each $\mu_k$ is in $R^d$. Each $\mu_k$ is called a `centroid` and defines the center of its cluster.
The goal now is to learn these two sets; in order to do this, we need to define an objective function.
### K-MEANS OBJECTIVE FUNCTION:
K-means objective function can be written as:
$$L = \sum_{i=1}^{n} \sum_{k=1}^{K} \mathbb{1} \big\{ c_i = k \big\} ||x_i - \mu_k||^2$$
- K-means uses the squared Euclidean distance of $x_i$ to centroid `k`, penalizing the distance of $x_i$ to the centroid it is assigned to by $c_i$
The objective function is **non-convex**. This means we can't actually find the globally optimal $\mu$ and $c$; we can only derive an algorithm for finding a local optimum.
The variables are split into two unknown sets, $\mu$ and $c$, whose best values we can't find simultaneously to minimize `L`. However, fixing $\mu$ we can find the best $c$, and fixing $c$ we can find the best $\mu$. This optimization approach is called coordinate descent: hold one set of parameters fixed and optimize the other set, then switch which set is fixed.
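As a small numeric illustration, the objective $L$ can be evaluated directly with NumPy for a given assignment and set of centroids; the points and centroids below reuse the example that appears later in this notebook:
```
# Minimal sketch (illustrative only): evaluate the K-means objective L for
# data points x, integer cluster assignments c, and centroids mu.
import numpy as np

x = np.array([[0, 1], [2, 2], [5, 4], [3, 6], [4, 2]], dtype=float)
c = np.array([0, 0, 1, 1, 1])                  # cluster index for each point
mu = np.array([[1.0, 1.5], [4.0, 4.0]])        # one centroid per cluster

L = sum(np.sum((x[i] - mu[c[i]]) ** 2) for i in range(len(x)))
print(L)  # sum of squared distances of each point to its assigned centroid
```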
### Euclidean distance:
**Let $x = (x_1,x_2,x_3,...,x_n)$, $j = (j_1,j_2,j_3,...,j_n)$ be points in $R^n$ space, and $D$ the Euclidean distance, then:**
$$D = \sqrt{(x_1 - j_1)^2 + (x_2 - j_2)^2 + (x_3 - j_3)^2 + .... + (x_n - j_n)^2}$$
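As a quick numeric check of this formula, the distance between $(0,1)$ and $(5,4)$ (two of the example points used later in this notebook) is $\sqrt{34} \approx 5.83$:
```
# Small illustrative check of the Euclidean distance formula.
import numpy as np

x = np.array([0, 1])
j = np.array([5, 4])
D = np.sqrt(np.sum((x - j) ** 2))
print(D)  # sqrt((0-5)^2 + (1-4)^2) = sqrt(34) ≈ 5.831
```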
### K-MEANS ALGORITHM:
- Given $x_1,\ldots,x_n$ where each $x_i \in R^d$
- Minimize $L = \sum_{i=1}^{n} \sum_{k=1}^{K} \mathbb{1} \big\{ c_i = k \big\} ||x_i - \mu_k||^2$
1- Initialize $\mu$ $=$ $(\mu_1,...,\mu_k)$
2- Update each $c_i$:
$$c_i = \arg\min_k ||x_i - \mu_k||^2$$
3- Update each $\mu_k$:
$$n_k = \sum_{i=1}^{n} \mathbb{1} \big\{ c_i = k \big\} \text{ and } \mu_k = 1/n_k \sum_{i=1}^{n} x_i \mathbb{1} \big\{ c_i = k \big\}$$
4- Iterate until $c$ and $\mu$ stop changing
OK, let's start with the implementation of `K Means Clustering` from scratch. According to the algorithm described above, we need to initialize the `centroids` first. The initial `centroids` will be designated manually in our example, but there are other ways to do it, for example using the _`K-Means++`_ algorithm.
Two main steps have to be implemented for the `K Means Clustering`:
**1.- Assign each point to a cluster based on the Euclidean distance (_$c_i$ vectors_)**
**2.- Update the cluster centroids using the latest $c_i$ vectors**
```
##IMPORTING ALL NECESSARY SUPPORT LIBRARIES
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from resources import kmeans_helper as hp
def assign_clusters(x_points, clusters):
'''
arguments:
> x_points: 2-D array type, with the data points
> clusters: 2-D array type, with the cluster centroids
returns:
> 2-D array with a c_i vector that assign each point into a cluster
'''
##DECLARATION: AUXILIARY FUNCTIONS ...
def point_cluster_dis(centroid):
def euclidean(point):
sse = 0.0
for i in range(len(centroid)):
sse += (point[i] - centroid[i])**2
return (sse)**0.5
return euclidean
##THE DISTANCES FOR EACH POINT TO EACH CLUSTER IS CALCULATED ...
clus_dist = []
for c in clusters:
calc_distance = point_cluster_dis(c)
dist = list(map(calc_distance,x_points))
clus_dist.append(dist)
##THE DISTANCES ARE RE-ARRANGED ACCORDING TO EACH POINT ...
c_vectors = [[x] for x in clus_dist[0]]
for i in range(1, len(clus_dist)):
for j in range(len(clus_dist[0])):
c_vectors[j] += [clus_dist[i][j]]
##EACH POINT IS ASSIGNED INTO A CLUSTER ...
pts_assg = c_vectors.copy()
for i, p in enumerate(c_vectors):
idx = p.index(min(p))
for j in range(len(p)):
if j == idx:
pts_assg[i][j] = 1
else:
pts_assg[i][j] = 0
return pts_assg
def update_clusters(x_points, c_vectors, no_clusters):
'''
arguments:
> x_points: 2-D array type, with the data points
> c_vectors: 2-D array type, with c_i vectors that assign each point into a cluster
> no_clusters: integer type, total number of clusters
returns:
> 2-D array with updated cluster centroids
'''
##ALL POINTS ARE SEPARATED PER CLUSTER ...
split_pts_cluster = []
for c in range(no_clusters):
pts_in_cluster = [x for x,cv in zip(x_points,c_vectors) if cv[c] == 1]
split_pts_cluster.append(pts_in_cluster)
##RE-CALCULATE EACH CLUSTER CENTROID BASED ON THE NUMBER OF POINTS ASSIGNED ...
new_centroids = []
for c in range(no_clusters):
no_pts = len(split_pts_cluster[c])
sum_x, sum_y = (0,0)
for j in range(no_pts):
sum_x += split_pts_cluster[c][j][0]
sum_y += split_pts_cluster[c][j][1]
new_centroids.append([sum_x/no_pts,sum_y/no_pts])
return new_centroids
```
To verify proper implementation, a simple **example** is used to evaluate the functions.
Five points and two cluster centroids are selected, then both functions are tested. First, the `assign_clusters` function is used to assign each point to the closest centroid. Second, using the $c_i$ vectors, the cluster centroids are updated with the `update_clusters` function.
```
# Example:
points = [[0,1], [2,2], [5,4], [3,6], [4,2]]
clusters = [[0,1],[5,4]]
ci_vectors = assign_clusters(points,clusters)
ci_vectors
# [[1, 0],
# [1, 0],
# [0, 1],
# [0, 1],
# [0, 1]]
new_clusters = update_clusters(points, ci_vectors, 2)
new_clusters
#[[1. , 1.5],
#[4. , 4. ]]
```
## CLUSTERING WITH K-MEANS
Having defined the functions to calculate the cluster assignments and update the centroids, we can now test the clustering functionality.
In order to execute our functions in an appropriate manner, we need to create a meta-function which combines the "assign" and "update" functions into a coherent clustering algorithm with a stopping threshold.
The initial clusters will be determined by a `K-Means++`-style algorithm; a `threshold monitor` is also used to check that the updated centroids are no longer changing and to stop the main loop. Both functions are imported from a separate helper module.
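Since the helper module itself isn't shown here, the sketch below gives a rough idea of what such helpers could look like; the real `kmeans_helper` implementation may differ. In particular, `init_clusters` uses a simplified farthest-point variant of the K-Means++ idea (spreading the initial centroids apart) rather than the full probabilistic K-Means++ sampling:
```
# Hypothetical sketch of the imported helpers (resources.kmeans_helper is not
# shown in this notebook and may be implemented differently).
import random

def init_clusters(x_points, no_clusters):
    '''Spread-out initialization: pick a random first centroid, then repeatedly
    add the point farthest from its nearest chosen centroid.'''
    centroids = [list(random.choice(x_points))]
    while len(centroids) < no_clusters:
        farthest = max(x_points,
                       key=lambda p: min(sum((p[d] - c[d]) ** 2 for d in range(len(p)))
                                         for c in centroids))
        centroids.append(list(farthest))
    return centroids

def threshold_monitor(new_clusters, old_clusters, stop_threshold):
    '''Return True when no centroid coordinate moved more than stop_threshold.'''
    return all(abs(n - o) <= stop_threshold
               for nc, oc in zip(new_clusters, old_clusters)
               for n, o in zip(nc, oc))
```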
```
def k_means_clustering(x_points, no_clusters=3, max_iter = 100, stop_threshold = .001):
##K-MEANS ALGORITHM
##STEP #01: CLUSTER CENTROIDS INITIALIZATION ...
clusters = hp.init_clusters(x_points, no_clusters)
for k in range(max_iter):
##STEP 02: ASSIGN POINTS TO CENTROIDS ...
ci_vectors = assign_clusters(x_points, clusters)
##STEP 03: UPDATE CENTROIDS ...
n_clusters = update_clusters(x_points, ci_vectors, no_clusters)
##STEP 04: CHECK FOR CONVERGENCE ...
if hp.threshold_monitor(n_clusters, clusters, stop_threshold):
break
else:
clusters = n_clusters
return n_clusters
```
Through our meta-function `k_means_clustering`, we are ready to _"learn"_ the centroids from our dataset.
```
##LOADING THE DATASET
df = pd.read_csv('./data/clouds.csv', index_col=0)
df.head(3)
##PLOTTING THE DATA POINTS
plt.scatter(df.x, df.y, alpha = 0.65)
plt.show()
##ESTIMATING CLUSTER CENTROIDS FROM THE DATA
x_points = df.drop(['cat'],axis=1).values
no_clusters = 3
centroids = k_means_clustering(x_points, no_clusters)
```
To compare the performance of our `K-Means` algorithm, the `sklearn` library is used. The `KMeans` model is imported and the dataset is passed in to learn the centroids.
```
##GET AND INSTANTIATE K-MEANS MODEL
from sklearn.cluster import KMeans
##ESTIMATE CENTROIDS FROM SKLEARN ...
km = KMeans(n_clusters= no_clusters, n_init=1).fit(x_points)
centroids_sk = km.cluster_centers_
print(centroids)
print(centroids_sk)
```
The next graph shows the data points and the centroids calculated by both algorithms.
- In red, the centroids calculated with our custom `k-means` algorithm
- In black, the centroids calculated with Sklearn's `KMeans` algorithm
```
##PLOTTING CLUSTER CENTROIDS ...
centroids_cst = np.array(centroids)
plt.scatter(df.x, df.y, alpha = 0.65)
plt.scatter(centroids_cst[:,0], centroids_cst[:,1], c = 'r', marker = 'x', s = 50)
plt.scatter(centroids_sk[:,0], centroids_sk[:,1], c = 'k', marker = '+', s = 50)
plt.show()
```
|
github_jupyter
|
##IMPORTING ALL NECESSARY SUPPORT LIBRARIES
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from resources import kmeans_helper as hp
def assign_clusters(x_points, clusters):
'''
arguments:
> x_points: 2-D array type, with the data points
> clusters: 2-D array type, with the cluster centroids
returns:
> 2-D array with a c_i vector that assign each point into a cluster
'''
##DECLARATION: AUXILIARY FUNCTIONS ...
def point_cluster_dis(centroid):
def euclidean(point):
sse = 0.0
for i in range(len(centroid)):
sse += (point[i] - centroid[i])**2
return (sse)**0.5
return euclidean
##THE DISTANCES FOR EACH POINT TO EACH CLUSTER IS CALCULATED ...
clus_dist = []
for c in clusters:
calc_distance = point_cluster_dis(c)
dist = list(map(calc_distance,x_points))
clus_dist.append(dist)
##THE DISTANCES ARE RE-ARRANGED ACCORDING TO EACH POINT ...
c_vectors = [[x] for x in clus_dist[0]]
for i in range(1, len(clus_dist)):
for j in range(len(clus_dist[0])):
c_vectors[j] += [clus_dist[i][j]]
##EACH POINT IS ASSIGNED INTO A CLUSTER ...
pts_assg = c_vectors.copy()
for i, p in enumerate(c_vectors):
idx = p.index(min(p))
for j in range(len(p)):
if j == idx:
pts_assg[i][j] = 1
else:
pts_assg[i][j] = 0
return pts_assg
def update_clusters(x_points, c_vectors, no_clusters):
'''
arguments:
> x_points: 2-D array type, with the data points
> c_vectors: 2-D array type, with c_i vectors that assign each point into a cluster
> no_clusters: integer type, total number of clusters
returns:
> 2-D array with updated cluster centroids
'''
##ALL POINTS ARE SEPARATED PER CLUSTER ...
split_pts_cluster = []
for c in range(no_clusters):
pts_in_cluster = [x for x,cv in zip(x_points,c_vectors) if cv[c] == 1]
split_pts_cluster.append(pts_in_cluster)
##RE-CALCULATE EACH CLUSTER CENTROID BASED ON THE NUMBER OF POINTS ASSIGNED ...
new_centroids = []
for c in range(no_clusters):
no_pts = len(split_pts_cluster[c])
sum_x, sum_y = (0,0)
for j in range(no_pts):
sum_x += split_pts_cluster[c][j][0]
sum_y += split_pts_cluster[c][j][1]
new_centroids.append([sum_x/no_pts,sum_y/no_pts])
return new_centroids
# Example:
points = [[0,1], [2,2], [5,4], [3,6], [4,2]]
clusters = [[0,1],[5,4]]
ci_vectors = assign_clusters(points,clusters)
ci_vectors
# [[1, 0],
# [1, 0],
# [0, 1],
# [0, 1],
# [0, 1]]
new_clusters = update_clusters(points, ci_vectors, 2)
new_clusters
#[[1. , 1.5],
#[4. , 4. ]]
def k_means_clustering(x_points, no_clusters=3, max_iter = 100, stop_threshold = .001):
##K-MEANS ALGORITHM
##STEP #01: CLUSTER CENTROIDS INITIALIZATION ...
clusters = hp.init_clusters(x_points, no_clusters)
for k in range(max_iter):
##STEP 02: ASSIGN POINTS TO CENTROIDS ...
ci_vectors = assign_clusters(x_points, clusters)
##STEP 03: UPDATE CENTROIDS ...
n_clusters = update_clusters(x_points, ci_vectors, no_clusters)
##STEP 04: CHECK FOR CONVERGENCE ...
if hp.threshold_monitor(n_clusters, clusters, stop_threshold):
break
else:
clusters = n_clusters
return n_clusters
##LOADING THE DATASET
df = pd.read_csv('.\data\clouds.csv', index_col=0)
df.head(3)
##PLOTTING THE DATA POINTS
plt.scatter(df.x, df.y, alpha = 0.65)
plt.show()
##ESTIMATING CLUSTER CENTROIDS FROM THE DATA
x_points = df.drop(['cat'],axis=1).values
no_clusters = 3
centroids = k_means_clustering(x_points, no_clusters)
##GET AND INSTANCIATE K-MEANS MODEL
from sklearn.cluster import KMeans
##ESTIMATE CENTROIDS FROM SKLEARN ...
km = KMeans(n_clusters= no_clusters, n_init=1).fit(x_points)
centroids_sk = km.cluster_centers_
print(centroids)
print(centroids_sk)
##PLOTTING CLUSTER CENTROIDS ...
centroids_cst = np.array(centroids)
plt.scatter(df.x, df.y, alpha = 0.65)
plt.scatter(centroids_cst[:,0], centroids_cst[:,1], c = 'r', marker = 'x', s = 50)
plt.scatter(centroids_sk[:,0], centroids_sk[:,1], c = 'k', marker = '+', s = 50)
plt.show()
| 0.494385 | 0.93511 |
# T1124 - System Time Discovery
An adversary may gather the system time and/or time zone from a local or remote system. The system time is set and stored by the Windows Time Service within a domain to maintain time synchronization between systems and services in an enterprise network. (Citation: MSDN System Time) (Citation: Technet Windows Time Service)
System time information may be gathered in a number of ways, such as with [Net](https://attack.mitre.org/software/S0039) on Windows by performing <code>net time \\hostname</code> to gather the system time on a remote system. The victim's time zone may also be inferred from the current system time or gathered by using <code>w32tm /tz</code>. (Citation: Technet Windows Time Service) The information could be useful for performing other techniques, such as executing a file with a [Scheduled Task/Job](https://attack.mitre.org/techniques/T1053) (Citation: RSA EU12 They're Inside), or to discover locality information based on time zone to assist in victim targeting.
## Atomic Tests
```
#Import the Module before running the tests.
# Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
```
### Atomic Test #1 - System Time Discovery
Identify the system time. Upon execution, the local computer system time and timezone will be displayed.
**Supported Platforms:** windows
#### Attack Commands: Run with `command_prompt`
```command_prompt
net time \\localhost
w32tm /tz
```
```
Invoke-AtomicTest T1124 -TestNumbers 1
```
### Atomic Test #2 - System Time Discovery - PowerShell
Identify the system time via PowerShell. Upon execution, the system time will be displayed.
**Supported Platforms:** windows
#### Attack Commands: Run with `powershell`
```powershell
Get-Date
```
```
Invoke-AtomicTest T1124 -TestNumbers 2
```
## Detection
Command-line interface monitoring may be useful to detect instances of net.exe or other command-line utilities being used to gather system time or time zone. Methods of detecting API use for gathering this information are likely less useful due to how often they may be used by legitimate software.
## Shield Active Defense
### Software Manipulation
Make changes to a system's software properties and functions to achieve a desired effect.
Software Manipulation allows a defender to alter or replace elements of the operating system, file system, or any other software installed and executed on a system.
#### Opportunity
There is an opportunity for the defender to observe the adversary and control what they can see, what effects they can have, and/or what data they can access.
#### Use Case
If the defender knows the specific regions an adversary is targeting, they can alter the output of commands which return system times to return data consistent with what an adversary would want to see.
#### Procedures
Hook the Win32 Sleep() function so that it always performs a Sleep(1) instead of the intended duration. This can increase the speed at which dynamic analysis can be performed when a normal malicious file sleeps for long periods before attempting additional capabilities.
Hook the Win32 NetUserChangePassword() and modify it such that the new password is different from the one provided. The data passed into the function is encrypted along with the modified new password, then logged so a defender can get alerted about the change as well as decrypt the new password for use.
Alter the output of an adversary's profiling commands to make newly-built systems look like the operating system was installed months earlier.
Alter the output of adversary recon commands to not show important assets, such as a file server containing sensitive data.
|
github_jupyter
|
#Import the Module before running the tests.
# Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 - Force
### Atomic Test #2 - System Time Discovery - PowerShell
Identify the system time via PowerShell. Upon execution, the system time will be displayed.
**Supported Platforms:** windows
#### Attack Commands: Run with `powershell`
| 0.429908 | 0.917043 |
<a href="https://colab.research.google.com/github/hrishipoola/Gun_Sales_Structural_Break/blob/main/Gun_Sales_Structural_Break.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Gun Sales: Quantifying a Time Series Structural Break
## Table of Contents
1. Introduction
2. Install & Import Packages
3. Access NICS Background Checks Data
4. Missing Values
5. National Gun Sales
6. Population-Adjusted Sales
7. Rolling Mean and Standard Deviation
8. Rolling Mean and Yearly Average
9. Chow Test
10. Volatility
11. References
## 1. Introduction
In a [previous post](https://crawstat.com/2020/12/15/guns-time-series-analysis-and-forecast/), we explored and visualized population-adjusted gun sales, looking at many dimensions of time series data. We also fit a model and produced a forecast, applying the Box-Jenkins framework of identification (including stationarizing), estimation (SARIMAX), and model diagnostics.
Today, we'll focus specifically on the structural break in the gun sales time series data. The purpose is to:
- Visualize time series dimensions (rolling mean, yearly average, volatility), including a structural uptick in gun sales and volatility beginning in 2012
- Quantify this structural break using the [chow test](https://en.wikipedia.org/wiki/Chow_test) and by looking at volatility.
For the Chow test, our null hypothesis is that there is no structural difference between the two sub-periods. We'll run three regressions of sales on year, one over the whole time period (pooled), one before the breakpoint, and one after the breakpoint, and take the sum of squared residuals for each. The Chow statistic follows an F distribution with k degrees of freedom in the numerator and N1+N2-2k degrees of freedom in the denominator. After calculating the Chow statistic, we see that it is above the critical value, meaning we can reject our null hypothesis and accept the alternative hypothesis of a structural break. Structural changes are often also accompanied by changes in volatility, so we'll visualize volatility and the % change in volatility to further illustrate the structural break.
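Written out, with pooled sum of squared residuals $SSR_p$, sub-period sums $SSR_1$ and $SSR_2$, $N_1$ and $N_2$ observations in each sub-period, and $k$ parameters per regression, the statistic computed in the `chow_test` function below is:

$$F = \frac{\left(SSR_p - (SSR_1 + SSR_2)\right)/k}{(SSR_1 + SSR_2)/(N_1 + N_2 - 2k)}$$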
Background check data originates from the [FBI's National Instant Criminal Background Check System (NICS)](https://www.fbi.gov/services/cjis/nics). Original data is available as a [pdf](https://www.fbi.gov/file-repository/nics_firearm_checks_-_month_year_by_state_type.pdf/view). If you'd like to extract the csv from the pdf directly, you can do so using BuzzFeed's [parsing scripts](https://github.com/BuzzFeedNews/nics-firearm-background-checks/tree/master/scripts) or [Tabula](https://tabula.technology/). According to the data pdf, "These statistics represent the number of firearm background checks initiated through the NICS. They do not represent the number of firearms sold. Based on varying state laws and purchase scenarios, a one-to-one correlation cannot be made between a firearm background check and a firearm sale." Important things to keep in mind for our analysis:
- We focus on background checks by month, state, and gun type, namely long guns, which include rifles and shotguns, and handguns.
- We exclude permit check/recheck as regulations vary widely by state
- Also excluded are 'other' gun background checks
- FBI's NICS data only include licensed commercial gun sales and exclude private gun sales, which often don't undergo a background check and represent a sizeable portion of total gun sales. Additionally, many background checks are carried out for concealed carry permits, not gun sales (e.g., Kentucky runs a new check on each concealed carry license holder each month).
To convert background checks to sales (number of units), we apply the multiple gun sales factor (MGSF) multiplier found in Jurgen Brauer's [Small Arms Survey](http://www.smallarmssurvey.org/fileadmin/docs/F-Working-papers/SAS-WP14-US-Firearms-Industry.pdf), which is based on interviews with gun shop owners: multiply background checks for handguns by 1.1, long guns by 1.1, and multiple guns by 2 (page 44). Because state laws and individual transactions differ, sales between states cannot be directly compared. Despite those caveats, the FBI’s NICS numbers are widely accepted as the best proxy for total gun sales in a given time period. Additionally, to adjust sales for population growth, we'll pull monthly U.S. population data from [Federal Reserve Economic Data (FRED)](https://fred.stlouisfed.org/).
Future areas to explore include the factors behind the structural change, such as shifts in background check reporting and policies among states, economic shocks, legislation, and political change and uncertainty.
Let's dig in!
## 2. Install & Import Packages
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
import matplotlib.ticker as mtick
%matplotlib inline
!pip install seaborn --upgrade
import seaborn as sns
from datetime import datetime
from datetime import date
from random import randint
import plotly.express as px
# Regression
import statsmodels.api as sm
# F distribution critical value
import scipy.stats
# Access FRED data
!pip install pandas-datareader
from pandas_datareader.data import DataReader
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
# Infer NaN values
pd.options.mode.use_inf_as_na = True
# Set seaborn plot style
sns.set_style("darkgrid")
```
## 3. Access NICS Background Checks Data
```
# Read in data
guns = pd.read_csv('https://raw.githubusercontent.com/BuzzFeedNews/nics-firearm-background-checks/master/data/nics-firearm-background-checks.csv')
# Check first and last 5 rows
pd.concat([guns.head(), guns.tail()])
# Convert month to period type of monthly frequency
guns['month'] = pd.to_datetime(guns['month'])
# Start from Jan 1999 as Nov 1998 data looks inconsistent and having only December data for 1998 wouldn't be representative of that year
guns = guns[guns['month'] >= '1999-01']
# Keep only relevant columns, see intro on rationale
guns = guns[['month', 'state', 'handgun', 'long_gun', 'multiple']]
# Reverse order so earliest month is at the top and most recent month is at the bottom
guns = guns.iloc[::-1].reset_index(drop=True)
# Check first 3 and last 3 rows
pd.concat([guns.head(3), guns.tail(3)])
# States also include island territories like Guam, Virgin Islands, Mariana Islands, and Puerto Rico
guns.state.unique()
```
## 4. Missing Values
Before handling or filling null values, let's understand where they're coming from, if they're random, and if we expect them to be missing in the future.
We see that missing values come only from the Virgin Islands and Mariana Islands and are generally spread throughout the time period. Since the data is likely missing because background check reporting from these territories is unreliable, instead of filling the gaps, let's drop these two territories from our dataframe. Additionally, for consistency and ease of comparison in this analysis, let's stick to the 50 states and Washington D.C. and also remove Guam and Puerto Rico.
```
# Check which columns have missing values
guns.isnull().sum()
# Create dataframe with rows that include null values
null_mask = guns.isnull()
row_has_null = null_mask.any(axis=1)
null_df = guns[row_has_null]
null_df
# Remove island territories
guns = guns[(guns.state != 'Virgin Islands') & (guns.state != 'Mariana Islands') & (guns.state != 'Guam') & (guns.state != 'Puerto Rico')]
# Change data type of 'handgun' to int
guns['handgun'] = guns['handgun'].astype(int)
# Change data type of 'long_gun' to int
guns['long_gun'] = guns['long_gun'].astype(int)
# Rename columns
guns = guns.rename(columns={'month':'month_stamp','handgun':'handgun_checks', 'long_gun':'long_gun_checks', 'multiple':'multiple_gun_checks'})
# Calculate total checks
guns['total_checks'] = guns.handgun_checks + guns.long_gun_checks + guns.multiple_gun_checks
# Check first few rows
guns.head()
# Double-check data types and info
guns.info()
```
## 5. National Gun Sales
```
# Compute sales using multiplier of 1.1 for handguns and long guns and 2 for multiple guns (discussed in the intro)
guns = guns.assign(
handgun_sales = (guns['handgun_checks'] * 1.1).astype(int),
long_gun_sales = (guns['long_gun_checks'] * 1.1).astype(int),
multiple_gun_sales = (guns['multiple_gun_checks'] * 2).astype(int))
guns['total_sales'] = (guns.handgun_sales + guns.long_gun_sales + guns.multiple_gun_sales).astype(int)
# Check first few rows
guns.head()
# National handgun sales
national_handgun_sales = pd.DataFrame(guns.groupby('month_stamp')['handgun_sales'].sum())
# National long gun sales
national_long_gun_sales = pd.DataFrame(guns.groupby('month_stamp')['long_gun_sales'].sum())
# National multiple gun sales
national_mult_gun_sales = pd.DataFrame(guns.groupby('month_stamp')['multiple_gun_sales'].sum())
# National total sales
national_total_sales = pd.DataFrame(guns.groupby('month_stamp')['total_sales'].sum())
# National sales dataframe
national_sales = pd.concat([national_handgun_sales, national_long_gun_sales, national_mult_gun_sales, national_total_sales], axis=1)
national_sales.reset_index(inplace=True)
#national_sales['month_stamp'] = national_sales['month_stamp'].dt.to_timestamp()
national_sales.set_index('month_stamp', inplace=True)
# Check last few rows
national_sales.tail()
```
## 6. Population-Adjusted National Sales
Since population, specifically the population over age 18 that is legally allowed to buy guns, has increased over the time period, we can get a more accurate picture by adjusting national sales for population. Let's pull monthly U.S. population estimates from Federal Reserve Economic Data (FRED) and calculate the population over age 18, which has remained roughly 75% of the total population throughout the time period. We can then calculate sales per 100,000 by dividing sales by the population over age 18 and multiplying by 100,000.
```
# Monthly U.S. population in '000s
# Set start date as January 1, 1999
start = date(1999, 1, 1)
# Set series code, can find on FRED website
series = 'POPTHM'
# Import the data, multiply by 1000 as the data is in '000s
population = DataReader(series, 'fred', start=start) * 1000
# Check first 2 and last 2 rows.
pd.concat([population.head(2), population.tail(2)])
# It's not exactly in the form we need, so let's adjust it. Population for December 2020 is missing so let's add that.
# Reset index
population.reset_index(inplace=True)
# Rename columns
population.columns = ['month_stamp', 'total_pop']
# Set data types
population['month_stamp'] = population['month_stamp'].astype(str)
population['total_pop'] = population['total_pop'].astype(int)
# Add in population for Dec 2020 as new row
dec_2020_pop = ((population.iloc[-1,1] / population.iloc[-2,1]) * population.iloc[-1,1]).astype(int) # Multiply previous month by growth rate of previous month
df2 = pd.DataFrame([['2020-12-01',dec_2020_pop]], columns=['month_stamp','total_pop'])
population = pd.concat([population, df2], ignore_index=True)
# Convert month_stamp to datetime type
population['month_stamp'] = pd.to_datetime(population['month_stamp'])
# Set index to month_stamp
population.set_index('month_stamp', inplace=True)
# Calculate population over 18 as 0.75 * population (population over 18 is roughly 75% of population over the years)
population['pop_over_18'] = (population['total_pop']*0.75).astype(int)
# Check first 3 and last 3 rows. We see that it's monthly data starting at the 1st of each month
pd.concat([population.head(3), population.tail(3)])
```
As we can see, the population increased from about 278 million in January 1999 to about 331 million in December 2020. The share of the population over age 18 has remained roughly 75% over this time and has grown from about 208 million in January 1999 to 248 million in December 2020. The population over 18, which is legally allowed to buy guns, is the relevant figure for our case.
```
# Check info, data type of dataframe
population.info()
# Combine national sales and population dataframes
national_sales = pd.concat([national_sales.reset_index(), population.reset_index(drop=True)], axis=1)
# Compute sales per 100000 by dividing sales by population over 18 and multiplying by 100000
national_sales = national_sales.assign(
handgun_sales_per_100000 = ((national_sales['handgun_sales'] / national_sales['pop_over_18'])*100000).astype(int),
long_gun_sales_per_100000 = ((national_sales['long_gun_sales'] / national_sales['pop_over_18'])*100000).astype(int),
multiple_gun_sales_per_100000 = ((national_sales['multiple_gun_sales'] / national_sales['pop_over_18'])*100000).astype(int),
total_sales_per_100000 = ((national_sales['total_sales'] / national_sales['pop_over_18'])*100000).astype(int)
)
national_sales.set_index('month_stamp', inplace=True)
national_sales.head()
```
## 7. Rolling Mean and Standard Deviation
```
# Plot national total sales
style.use('fivethirtyeight')
fig, ax = plt.subplots(figsize=(18,8))
sns.lineplot(x=national_sales.index,
y='total_sales_per_100000',
data=national_sales,
color='slategray',
ax=ax,
label='monthly sales',
alpha=0.8)
rolling_national_sales = national_sales.rolling(12).mean()
sns.lineplot(x=national_sales.index,
y='total_sales_per_100000',
data=rolling_national_sales,
color='lightcoral',
ax=ax,
label='12-month average',
alpha=0.8)
rolling_std = national_sales['total_sales_per_100000'].rolling(12).std().to_frame()
ax.fill_between(national_sales.index,
rolling_national_sales['total_sales_per_100000'] + (2 * rolling_std['total_sales_per_100000']),
rolling_national_sales['total_sales_per_100000'] - (2 * rolling_std['total_sales_per_100000']),
color='pink', alpha=0.4,
label="standard error")
ax.set(title='Monthly National Gun Sales', xlabel='Time', ylabel='Number (per 100,000)')
ax.legend()
```
## 8. Rolling Mean and Yearly Average
```
# Plot national total sales
style.use('fivethirtyeight')
fig, ax = plt.subplots(figsize=(18,8))
# Monthly rolling 12-month average
rolling_national_sales = national_sales.rolling(12).mean().dropna() # first 12 months will be NaN, let's drop them
sns.lineplot(x=rolling_national_sales.index,
y='total_sales_per_100000',
data=rolling_national_sales,
color='lightcoral',
ax=ax,
label='Rolling 12-month average',
alpha=0.6)
# Yearly average
sales_yearly_average = national_sales.resample('Y').mean().dropna() # first 12 months will be NaN, let's drop them
sns.lineplot(x=sales_yearly_average.index,
y='total_sales_per_100000',
data=sales_yearly_average,
color='turquoise',
ax=ax,
label='Yearly average',
alpha=0.8)
ax.set(title='National Gun Sales', xlabel='Time', ylabel='Number (per 100,000)')
ax.axvline(pd.to_datetime('2012-01-01'), color='slategray', lw=2, linestyle='--')
ax.text(pd.to_datetime('2012-01-30'), max(rolling_national_sales['total_sales_per_100000']), 'Structural break', color='slategray')
ax.legend()
```
## 9. Chow Test
```
# Create separate month and year columns so we can plot seasonality by year and month
sales_yearly_average.reset_index(inplace=True)
sales_yearly_average = sales_yearly_average.assign(year = lambda x: x['month_stamp'].dt.year,
month = lambda x: x['month_stamp'].dt.month)
sales_yearly_average.head()
# Chow test equation: https://en.wikipedia.org/wiki/Chow_test
# Test statistic follows f distribution with k and N1+N2-2k degrees of freedom
def chow_test(df, breakpoint):
# Pooled regression of sales with year
result_pooled = sm.OLS(df['total_sales_per_100000'], df['year']).fit()
ssr_pooled = result_pooled.ssr
# Regression for each period
before = df[df['year'] < breakpoint]
after = df[df['year'] >= breakpoint]
result_before = sm.OLS(before['total_sales_per_100000'], before['year']).fit()
result_after = sm.OLS(after['total_sales_per_100000'], after['year']).fit()
ssr_1 = result_before.ssr
ssr_2 = result_after.ssr
k = 2 # degrees of freedom: slope and intercept
N1 = len(before) # number of observations before break
N2 = len(after) # number of observations after break
chow = ((ssr_pooled - (ssr_1 + ssr_2)) / k) / ((ssr_1 + ssr_2) / (N1+N2-2*k))
    print('Chow test statistic: ', chow)
    return chow
chow_test(sales_yearly_average, 2012)
# F critical value, test statistic follows f distribution with k and N1+N2-2k degrees of freedom
k = 2                                              # same k as in chow_test above
N1 = (sales_yearly_average['year'] < 2012).sum()   # observations before the break
N2 = (sales_yearly_average['year'] >= 2012).sum()  # observations after the break
critical_value = scipy.stats.f.ppf(q=0.99, dfn=k, dfd=N1 + N2 - 2*k)
critical_value
```
The Chow test statistic of 23.80 is greater than the critical value of 6.01, meaning we can reject the null hypothesis and conclude that the two sub-periods are structurally different.
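As an extra check, we can convert the statistic into a p-value with the same F distribution, using the $k = 2$ and $N_1 + N_2 - 2k = 18$ degrees of freedom computed above:
```
# Quick sanity check: p-value of the Chow statistic under F(2, 18).
p_value = scipy.stats.f.sf(23.80, dfn=2, dfd=18)  # survival function = 1 - CDF
p_value  # far below 0.01, consistent with rejecting the null hypothesis
```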
## 10. Volatility
```
rolling = national_sales.rolling(12)
volatility = rolling.std().dropna()
volatility_mean = volatility.resample('Y').mean()
# Plot
fig = plt.figure(figsize=(17,5))
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2)
ax1.plot(volatility_mean['total_sales_per_100000'], color='lightcoral')
ax1.set(title='Yearly Volatility', xlabel='Year', ylabel='St. Dev.')
ax1.axvline(pd.to_datetime('2012-01-01'), color='slategray', lw=2, linestyle='--')
ax2.plot(volatility_mean['total_sales_per_100000'].pct_change()*100, color='turquoise')
ax2.set(title='Change in Yearly Volatility', xlabel='Year', ylabel='Change in St. Dev')
ax2.axvline(pd.to_datetime('2012-01-01'), color='slategray', lw=2, linestyle='--')
ax2.yaxis.set_major_formatter(mtick.PercentFormatter())
plt.show()
```
## 11. References
https://learn.datacamp.com/skill-tracks/applied-finance-in-python
https://medium.com/@remycanario17/the-chow-test-dealing-with-heterogeneity-in-python-1b9057f0f07a
https://en.wikipedia.org/wiki/Chow_test
https://github.com/BuzzFeedNews/nics-firearm-background-checks
https://github.com/nytimes/gunsales
|
github_jupyter
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
import matplotlib.ticker as mtick
%matplotlib inline
!pip install seaborn --upgrade
import seaborn as sns
from datetime import datetime
from datetime import date
from random import randint
import plotly.express as px
# Regression
import statsmodels.api as sm
# F distribution critical value
import scipy.stats
# Access FRED data
!pip install pandas-datareader
from pandas_datareader.data import DataReader
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
# Infer NaN values
pd.options.mode.use_inf_as_na = True
# Set seaborn plot style
sns.set_style("darkgrid")
# Read in data
guns = pd.read_csv('https://raw.githubusercontent.com/BuzzFeedNews/nics-firearm-background-checks/master/data/nics-firearm-background-checks.csv')
# Check first and last 5 rows
pd.concat([guns.head(), guns.tail()])
# Convert month to period type of monthly frequency
guns['month'] = pd.to_datetime(guns['month'])
# Start from Jan 1999 as Nov 1998 data looks inconsistent and having only December data for 1998 wouldn't be representative of that year
guns = guns[guns['month'] >= '1999-01']
# Keep only relevant columns, see intro on rationale
guns = guns[['month', 'state', 'handgun', 'long_gun', 'multiple']]
# Reverse order so earliest month is at the top and most recent month is at the bottom
guns = guns.iloc[::-1].reset_index(drop=True)
# Check first 3 and last 3 rows
pd.concat([guns.head(3), guns.tail(3)])
# States also include island territories like Guam, Virgin Islands, Mariana Islands, and Puerto Rico
guns.state.unique()
# Check which columns have missing values
guns.isnull().sum()
# Create dataframe with rows that include null values
null_mask = guns.isnull()
row_has_null = null_mask.any(axis=1)
null_df = guns[row_has_null]
null_df
# Remove island territories
guns = guns[(guns.state != 'Virgin Islands') & (guns.state != 'Mariana Islands') & (guns.state != 'Guam') & (guns.state != 'Puerto Rico')]
# Change data type of 'handgun' to int
guns['handgun'] = guns['handgun'].astype(int)
# Change data type of 'long_gun' to int
guns['long_gun'] = guns['long_gun'].astype(int)
# Rename columns
guns = guns.rename(columns={'month':'month_stamp','handgun':'handgun_checks', 'long_gun':'long_gun_checks', 'multiple':'multiple_gun_checks'})
# Calculate total checks
guns['total_checks'] = guns.handgun_checks + guns.long_gun_checks + guns.multiple_gun_checks
# Check first few rows
guns.head()
# Double-check data types and info
guns.info()
# Compute sales using multiplier of 1.1 for handguns and long guns and 2 for multiple guns (discussed in the intro)
guns = guns.assign(
handgun_sales = (guns['handgun_checks'] * 1.1).astype(int),
long_gun_sales = (guns['long_gun_checks'] * 1.1).astype(int),
multiple_gun_sales = (guns['multiple_gun_checks'] * 2).astype(int))
guns['total_sales'] = (guns.handgun_sales + guns.long_gun_sales + guns.multiple_gun_sales).astype(int)
# Check first few rows
guns.head()
# National handgun sales
national_handgun_sales = pd.DataFrame(guns.groupby('month_stamp')['handgun_sales'].sum())
# National long gun sales
national_long_gun_sales = pd.DataFrame(guns.groupby('month_stamp')['long_gun_sales'].sum())
# National multiple gun sales
national_mult_gun_sales = pd.DataFrame(guns.groupby('month_stamp')['multiple_gun_sales'].sum())
# National total sales
national_total_sales = pd.DataFrame(guns.groupby('month_stamp')['total_sales'].sum())
# National sales dataframe
national_sales = pd.concat([national_handgun_sales, national_long_gun_sales, national_mult_gun_sales, national_total_sales], axis=1)
national_sales.reset_index(inplace=True)
#national_sales['month_stamp'] = national_sales['month_stamp'].dt.to_timestamp()
national_sales.set_index('month_stamp', inplace=True)
# Check last few rows
national_sales.tail()
# Monthly U.S. population in '000s
# Set start date as January 1, 1999
start = date(1999, 1, 1)
# Set series code, can find on FRED website
series = 'POPTHM'
# Import the data, multiply by 1000 as the data is in '000s
population = DataReader(series, 'fred', start=start) * 1000
# Check first 2 and last 2 rows.
pd.concat([population.head(2), population.tail(2)])
# It's not exactly in the form we need, so let's adjust it. Population for December 2020 is missing so let's add that.
# Reset index
population.reset_index(inplace=True)
# Rename columns
population.columns = ['month_stamp', 'total_pop']
# Set data types
population['month_stamp'] = population['month_stamp'].astype(str)
population['total_pop'] = population['total_pop'].astype(int)
# Add in population for Dec 2020 as new row
dec_2020_pop = ((population.iloc[-1,1] / population.iloc[-2,1]) * population.iloc[-1,1]).astype(int) # Multiply previous month by growth rate of previous month
df2 = pd.DataFrame([['2020-12-01',dec_2020_pop]], columns=['month_stamp','total_pop'])
population = pd.concat([population, df2], ignore_index=True)
# Convert month_stamp to datetime type
population['month_stamp'] = pd.to_datetime(population['month_stamp'])
# Set index to month_stamp
population.set_index('month_stamp', inplace=True)
# Calculate population over 18 as 0.75 * population (population over 18 is roughly 75% of population over the years)
population['pop_over_18'] = (population['total_pop']*0.75).astype(int)
# Check first 3 and last 3 rows. We see that it's monthly data starting at the 1st of each month
pd.concat([population.head(3), population.tail(3)])
# Check info, data type of dataframe
population.info()
# Combine national sales and population dataframes
national_sales = pd.concat([national_sales.reset_index(), population.reset_index(drop=True)], axis=1)
# Compute sales per 100000 by dividing sales by population over 18 and multiplying by 100000
national_sales = national_sales.assign(
handgun_sales_per_100000 = ((national_sales['handgun_sales'] / national_sales['pop_over_18'])*100000).astype(int),
long_gun_sales_per_100000 = ((national_sales['long_gun_sales'] / national_sales['pop_over_18'])*100000).astype(int),
multiple_gun_sales_per_100000 = ((national_sales['multiple_gun_sales'] / national_sales['pop_over_18'])*100000).astype(int),
total_sales_per_100000 = ((national_sales['total_sales'] / national_sales['pop_over_18'])*100000).astype(int)
)
national_sales.set_index('month_stamp', inplace=True)
national_sales.head()
# Plot national total sales
style.use('fivethirtyeight')
fig, ax = plt.subplots(figsize=(18,8))
sns.lineplot(x=national_sales.index,
y='total_sales_per_100000',
data=national_sales,
color='slategray',
ax=ax,
label='monthly sales',
alpha=0.8)
rolling_national_sales = national_sales.rolling(12).mean()
sns.lineplot(x=national_sales.index,
y='total_sales_per_100000',
data=rolling_national_sales,
color='lightcoral',
ax=ax,
label='12-month average',
alpha=0.8)
rolling_std = national_sales['total_sales_per_100000'].rolling(12).std().to_frame()
ax.fill_between(national_sales.index,
rolling_national_sales['total_sales_per_100000'] + (2 * rolling_std['total_sales_per_100000']),
rolling_national_sales['total_sales_per_100000'] - (2 * rolling_std['total_sales_per_100000']),
color='pink', alpha=0.4,
label="standard error")
ax.set(title='Monthly National Gun Sales', xlabel='Time', ylabel='Number (per 100,000)')
ax.legend()
# Plot national total sales
style.use('fivethirtyeight')
fig, ax = plt.subplots(figsize=(18,8))
# Monthly rolling 12-month average
rolling_national_sales = national_sales.rolling(12).mean().dropna() # first 12 months will be NaN, let's drop them
sns.lineplot(x=rolling_national_sales.index,
y='total_sales_per_100000',
data=rolling_national_sales,
color='lightcoral',
ax=ax,
label='Rolling 12-month average',
alpha=0.6)
# Yearly average
sales_yearly_average = national_sales.resample('Y').mean().dropna() # first 12 months will be NaN, let's drop them
sns.lineplot(x=sales_yearly_average.index,
y='total_sales_per_100000',
data=sales_yearly_average,
color='turquoise',
ax=ax,
label='Yearly average',
alpha=0.8)
ax.set(title='National Gun Sales', xlabel='Time', ylabel='Number (per 100,000)')
ax.axvline(pd.to_datetime('2012-01-01'), color='slategray', lw=2, linestyle='--')
ax.text(pd.to_datetime('2012-01-30'), max(rolling_national_sales['total_sales_per_100000']), 'Structural break', color='slategray')
ax.legend()
# Create separate month and year columns so we can plot seasonality by year and month
sales_yearly_average.reset_index(inplace=True)
sales_yearly_average = sales_yearly_average.assign(year = lambda x: x['month_stamp'].dt.year,
month = lambda x: x['month_stamp'].dt.month)
sales_yearly_average.head()
# Chow test equation: https://en.wikipedia.org/wiki/Chow_test
# Test statistic follows f distribution with k and N1+N2-2k degrees of freedom
def chow_test(df, breakpoint):
    # Pooled regression of sales on year (intercept added so that k = 2 parameters are estimated)
    result_pooled = sm.OLS(df['total_sales_per_100000'], sm.add_constant(df['year'])).fit()
    ssr_pooled = result_pooled.ssr
    # Separate regressions for each period
    before = df[df['year'] < breakpoint]
    after = df[df['year'] >= breakpoint]
    result_before = sm.OLS(before['total_sales_per_100000'], sm.add_constant(before['year'])).fit()
    result_after = sm.OLS(after['total_sales_per_100000'], sm.add_constant(after['year'])).fit()
    ssr_1 = result_before.ssr
    ssr_2 = result_after.ssr
    k = 2  # number of estimated parameters: intercept and slope
    N1 = len(before)  # number of observations before the break
    N2 = len(after)  # number of observations after the break
    chow = ((ssr_pooled - (ssr_1 + ssr_2)) / k) / ((ssr_1 + ssr_2) / (N1 + N2 - 2*k))
    print('Chow test statistic: ', chow)
    return chow, k, N1, N2  # also return the pieces needed for the critical value below
chow_stat, k, N1, N2 = chow_test(sales_yearly_average, 2012)
# F critical value; the test statistic follows an F distribution with k and N1+N2-2k degrees of freedom
critical_value = scipy.stats.f.ppf(q=0.99, dfn=k, dfd=N1 + N2 - (2*k))
critical_value
rolling = national_sales.rolling(12)
volatility = rolling.std().dropna()
volatility_mean = volatility.resample('Y').mean()
# Plot
fig = plt.figure(figsize=(17,5))
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2)
ax1.plot(volatility_mean['total_sales_per_100000'], color='lightcoral')
ax1.set(title='Yearly Volatility', xlabel='Year', ylabel='St. Dev.')
ax1.axvline(pd.to_datetime('2012-01-01'), color='slategray', lw=2, linestyle='--')
ax2.plot(volatility_mean['total_sales_per_100000'].pct_change()*100, color='turquoise')
ax2.set(title='Change in Yearly Volatility', xlabel='Year', ylabel='Change in St. Dev')
ax2.axvline(pd.to_datetime('2012-01-01'), color='slategray', lw=2, linestyle='--')
ax2.yaxis.set_major_formatter(mtick.PercentFormatter())
plt.show()
| 0.574395 | 0.988279 |
Table of Contents:
Step1: Image collection and Labelling
Step2: Installation of the required package
Step3: Custom image augmentation
Step4: Model Training
Step5: Model saving, loading, and predicting
Step1: Image collection and labeling:
The first step of any object detection project is collecting images and annotating them. For this project, I have downloaded 50 'Maruti Car' images from Google Images. There is a package called simple_image_download which automates image downloads. Feel free to use the following code:
With this code, we will get 50 downloaded images in the 'Maruti car' folder of the working directory. Feel free to change the number of images to as many as you want. After that, we randomly split the images into two parts, i.e. Train (35 images) and Test (15 images).
The next job is labeling the images. There are various image annotation tools available. For this project, I have used MAKESENSE.AI, a free online tool for labeling. No installation is required; we can open it in the browser. Using the link, I dropped my car images and did the annotation for the Train and Validation datasets separately.
Now, we can export the annotations in XML format, as 'Detecto' supports it. Then we place the XML files of the train and validation images in the Train and Test folders respectively. So the folder tree looks like this:
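A typical layout for this setup looks like the following (illustrative; the file names are examples, but the Train/ and Test/ folder names match the code used later):
```
working_directory/
├── Train/
│   ├── Maruti car_1.jpeg
│   ├── Maruti car_1.xml
│   └── ...
└── Test/
    ├── Maruti car_27.jpeg
    ├── Maruti car_27.xml
    └── ...
```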
```
from simple_image_download import simple_image_download as simp
response = simp.simple_image_download
lst=['Maruti car']
for rep in lst:
response().download(rep, 50)
##MAKESENSE.AI
```
Step2: Installation of the required packages:
As already mentioned, 'Detecto' is built on top of PyTorch, so we need PyTorch installed first. I have used Google Colab for this project. Next, we check whether GPU support is available using the following code:
```
import torch
print(torch.cuda.is_available())
```
If it prints 'True', you can use the GPU. If it is 'False', please change the 'Hardware Accelerator' in the Notebook Settings to 'GPU'. Now your system meets the requirements to install 'Detecto'. Use the following command to install it.
```
!pip install detecto
```
Once it’s done, let’s import the libraries using the following code:
```
from detecto import core, utils, visualize
from detecto.visualize import show_labeled_image, plot_prediction_grid
from torchvision import transforms
import matplotlib.pyplot as plt
import numpy as np
```
Step3: Custom image augmentation:
Image augmentation is the process of artificially expanding the dataset by creating modified versions of the images. Detecto supports custom transforms; here we apply resize, horizontal flip, and saturation augmentations. Please use the following code to augment the image dataset.
```
custom_transforms = transforms.Compose([
transforms.ToPILImage(),
transforms.Resize(900),
transforms.RandomHorizontalFlip(0.5),
transforms.ColorJitter(saturation=0.2),
transforms.ToTensor(),
utils.normalize_transform(),
])
```
Step4: Model Training:
Now, we have come to the most awaited step, i.e. model training. Here, the magic happens in five lines of code.
```
Train_dataset=core.Dataset('Train/',transform=custom_transforms)#L1
Test_dataset = core.Dataset('Test/')#L2
loader=core.DataLoader(Train_dataset, batch_size=2, shuffle=True)#L3
model = core.Model(['Wheel', 'Head Light'])#L4
losses = model.fit(loader, Test_dataset, epochs=25, lr_step_size=5, learning_rate=0.001, verbose=True)#L5
```
In the first two lines of code (L1 & L2), we have assigned the Train and Test datasets. In L3, we have created a DataLoader over our dataset; it defines how we batch and feed the images into the model for training. Feel free to experiment by changing 'batch_size'.
Now it's time to specify the 'labels' or 'classes', which is done in L4. Finally, model training starts via 'model.fit' in L5. Here, we can play with different options such as epochs, lr_step_size, and learning_rate. The default model is Faster R-CNN ResNet-50 FPN, and we fine-tune it on our custom dataset.
Now, we can look at the training loss using the following code:
```
plt.plot(losses)
plt.show()
```
Step5: Model saving, loading, and predicting:
Once we are satisfied with the model's loss, we save the model for future use, so that we can load it as and when required. Use the following code for saving and loading.
```
model.save('model_weights.pth')
model = core.Model.load('model_weights.pth', ['Wheel', 'Head Light'])
```
After loading the model, we want to use it for prediction. Let's run it on one image from the Test folder and plot the image with its bounding boxes. The prediction output is a tuple of labels, boxes, and scores.
```
image = utils.read_image('Test/Maruti car_27.jpeg')
predictions = model.predict(image)
labels, boxes, scores = predictions
show_labeled_image(image, boxes, labels)
```
There are many unwanted bounding boxes in the above picture, so we have to remove them. The simplest way to solve this is to apply a threshold on the score. For this project, I set the threshold to 0.6 for both classes; I arrived at this value through trial and error. Use the following code to keep only the bounding boxes above the threshold and plot them.
```
thresh=0.6
filtered_indices=np.where(scores>thresh)
filtered_scores=scores[filtered_indices]
filtered_boxes=boxes[filtered_indices]
num_list = filtered_indices[0].tolist()
filtered_labels = [labels[i] for i in num_list]
show_labeled_image(image, filtered_boxes, filtered_labels)
```
Now, we can see the final output, and yes, it is quite impressive. So, this is the end of the project. Let us know your opinion after trying this on your own custom dataset.
Happy learning!
|
github_jupyter
|
from simple_image_download import simple_image_download as simp
response = simp.simple_image_download
lst=['Maruti car']
for rep in lst:
response().download(rep, 50)
##MAKESENSE.AI
import torch
print(torch.cuda.is_available())
!pip install detecto
from detecto import core, utils, visualize
from detecto.visualize import show_labeled_image, plot_prediction_grid
from torchvision import transforms
import matplotlib.pyplot as plt
import numpy as np
custom_transforms = transforms.Compose([
transforms.ToPILImage(),
transforms.Resize(900),
transforms.RandomHorizontalFlip(0.5),
transforms.ColorJitter(saturation=0.2),
transforms.ToTensor(),
utils.normalize_transform(),
])
Train_dataset=core.Dataset('Train/',transform=custom_transforms)#L1
Test_dataset = core.Dataset('Test/')#L2
loader=core.DataLoader(Train_dataset, batch_size=2, shuffle=True)#L3
model = core.Model(['Wheel', 'Head Light'])#L4
losses = model.fit(loader, Test_dataset, epochs=25, lr_step_size=5, learning_rate=0.001, verbose=True)#L5
plt.plot(losses)
plt.show()
model.save('model_weights.pth')
model = core.Model.load('model_weights.pth', ['Wheel', 'Head Light'])
image = utils.read_image('Test/Maruti car_27.jpeg')
predictions = model.predict(image)
labels, boxes, scores = predictions
show_labeled_image(image, boxes, labels)
thresh=0.6
filtered_indices=np.where(scores>thresh)
filtered_scores=scores[filtered_indices]
filtered_boxes=boxes[filtered_indices]
num_list = filtered_indices[0].tolist()
filtered_labels = [labels[i] for i in num_list]
show_labeled_image(image, filtered_boxes, filtered_labels)
| 0.505371 | 0.987771 |
# Tutorial: optimal binning with binary target under uncertainty
The drawback of performing optimal binning given only expected event rates is that variability of event rates in different periods is not taken into account. In this tutorial, we show how scenario-based stochastic programming allows incorporating uncertainty without much difficulty.
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
from optbinning import OptimalBinning
from optbinning.binning.uncertainty import SBOptimalBinning
```
### Scenario generation
We generate three scenarios, all equally likely, aiming to represent economic scenarios of differing severity through a customer's score variable, for instance.
**Scenario 0 - Normal (Realistic)**: A low customer score has a higher event rate (default rate, churn, etc.) than a high customer score. The populations corresponding to non-event and event are reasonably well separated.
```
N0 = int(1e5)
xe = stats.beta(a=4, b=15).rvs(size=N0, random_state=42)
ye = stats.bernoulli(p=0.7).rvs(size=N0, random_state=42)
xn = stats.beta(a=6, b=8).rvs(size=N0, random_state=42)
yn = stats.bernoulli(p=0.2).rvs(size=N0, random_state=42)
x0 = np.concatenate((xn, xe), axis=0)
y0 = np.concatenate((yn, ye), axis=0)
def plot_distribution(x, y):
plt.hist(x[y == 0], label="n_nonevent", color="b", alpha=0.5)
plt.hist(x[y == 1], label="n_event", color="r", alpha=0.5)
plt.legend()
plt.show()
plot_distribution(x0, y0)
```
**Scenario 1: Good (Optimistic)**: A low customer score has a much higher event rate (default rate, churn, etc.) than a high customer score. The populations corresponding to non-event and event are very well separated, showing minimal overlap.
```
N1 = int(5e4)
xe = stats.beta(a=25, b=50).rvs(size=N1, random_state=42)
ye = stats.bernoulli(p=0.9).rvs(size=N1, random_state=42)
xn = stats.beta(a=22, b=25).rvs(size=N1, random_state=42)
yn = stats.bernoulli(p=0.05).rvs(size=N1, random_state=42)
x1 = np.concatenate((xn, xe), axis=0)
y1 = np.concatenate((yn, ye), axis=0)
plot_distribution(x1, y1)
```
**Scenario 2: Bad (Pessimistic)**: The customer's behavior cannot be accurately segmented, and a general increase in event rates is exhibited. The populations corresponding to non-event and event practically overlap.
```
N2 = int(5e4)
xe = stats.beta(a=4, b=6).rvs(size=N2, random_state=42)
ye = stats.bernoulli(p=0.7).rvs(size=N2, random_state=42)
xn = stats.beta(a=8, b=10).rvs(size=N2, random_state=42)
yn = stats.bernoulli(p=0.4).rvs(size=N2, random_state=42)
x2 = np.concatenate((xn, xe), axis=0)
y2 = np.concatenate((yn, ye), axis=0)
plot_distribution(x2, y2)
```
### Scenario-based stochastic optimal binning
Prepare the scenario data and instantiate an ``SBOptimalBinning`` object. We set a descending monotonicity constraint with respect to the event rate and a minimum bin size.
```
X = [x0, x1, x2]
Y = [y0, y1, y2]
sboptb = SBOptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
sboptb.fit(X, Y)
sboptb.status
```
We obtain "only" three splits guaranteeing feasibility for each scenario.
```
sboptb.splits
sboptb.information(print_level=2)
```
#### The binning table
As with other optimal binning algorithms in OptBinning, ``SBOptimalBinning`` also returns a binning table displaying the binned data considering all scenarios.
```
sboptb.binning_table.build()
sboptb.binning_table.plot(metric="event_rate")
sboptb.binning_table.analysis()
```
### Expected value solution (EVS)
The expected value solution is calculated with the normal (expected) scenario.
```
optb = OptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
optb.fit(x0, y0)
optb.binning_table.build()
optb.binning_table.plot(metric="event_rate")
optb.binning_table.analysis()
```
### Scenario analysis
#### Scenario 0 - Normal (Realistic)
```
bt0 = sboptb.binning_table_scenario(scenario_id=0)
bt0.build()
bt0.plot(metric="event_rate")
optb0 = OptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
optb0.fit(x0, y0)
optb0.binning_table.build()
optb0.binning_table.plot(metric="event_rate")
```
Apply expected value solution to scenario 0.
```
evs_optb0 = OptimalBinning(user_splits=optb.splits)
evs_optb0.fit(x0, y0)
evs_optb0.binning_table.build()
evs_optb0.binning_table.plot(metric="event_rate")
```
The expected value solution applied to scenario 0 does not satisfy the ``min_bin_size`` constraint, hence the solution is not feasible.
```
EVS_0 = 0.594974
```
**Scenario 1: Good (Optimistic)**
```
bt1 = sboptb.binning_table_scenario(scenario_id=1)
bt1.build()
bt1.plot(metric="event_rate")
optb1 = OptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
optb1.fit(x1, y1)
optb1.binning_table.build()
optb1.binning_table.plot(metric="event_rate")
```
Apply expected value solution to scenario 1.
```
evs_optb1 = OptimalBinning(user_splits=optb.splits)
evs_optb1.fit(x1, y1)
evs_optb1.binning_table.build()
evs_optb1.binning_table.plot(metric="event_rate")
```
The expected value solution applied to scenario 1 satisfies neither the ``min_bin_size`` constraint nor the monotonicity constraint, hence the solution is not feasible.
```
EVS_1 = -np.inf
```
**Scenario 2: Bad (Pessimistic)**
```
bt2 = sboptb.binning_table_scenario(scenario_id=2)
bt2.build()
bt2.plot(metric="event_rate")
optb2 = OptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
optb2.fit(x2, y2)
optb2.binning_table.build()
optb2.binning_table.plot(metric="event_rate")
```
Apply expected value solution to scenario 2.
```
evs_optb2 = OptimalBinning(user_splits=optb.splits)
evs_optb2.fit(x2, y2)
evs_optb2.binning_table.build()
evs_optb2.binning_table.plot(metric="event_rate")
```
The expected value solution applied to scenario 2 satisfies neither the ``min_bin_size`` constraint nor the monotonicity constraint, hence the solution is not feasible.
```
EVS_2 = -np.inf
```
### Expected value of perfect information (EVPI)
If we have prior information about the incoming economic scenarios, we could take optimal solutions for each scenario, with total IV:
```
DIV0 = optb0.binning_table.iv
DIV1 = optb1.binning_table.iv
DIV2 = optb2.binning_table.iv
DIV = (DIV0 + DIV1 + DIV2) / 3
DIV
```
However, this information is unlikely to be available in advance, so the best we can do in the long run is to use stochastic programming, with expected total IV:
```
SIV = sboptb.binning_table.iv
SIV
```
The difference, in the case of perfect information, is the expected value of perfect information (EVPI) given by:
```
EVPI = DIV - SIV
EVPI
```
### Value of stochastic solution (VSS)
The loss in IV by not considering stochasticity is the difference between the application of the expected value solution for each scenario and the stochastic model IV. The application of the EVS to each scenario results in infeasible solutions, thus
```
VSS = SIV - (EVS_0 + EVS_1 + EVS_2)
VSS
```
|
github_jupyter
|
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
from optbinning import OptimalBinning
from optbinning.binning.uncertainty import SBOptimalBinning
N0 = int(1e5)
xe = stats.beta(a=4, b=15).rvs(size=N0, random_state=42)
ye = stats.bernoulli(p=0.7).rvs(size=N0, random_state=42)
xn = stats.beta(a=6, b=8).rvs(size=N0, random_state=42)
yn = stats.bernoulli(p=0.2).rvs(size=N0, random_state=42)
x0 = np.concatenate((xn, xe), axis=0)
y0 = np.concatenate((yn, ye), axis=0)
def plot_distribution(x, y):
plt.hist(x[y == 0], label="n_nonevent", color="b", alpha=0.5)
plt.hist(x[y == 1], label="n_event", color="r", alpha=0.5)
plt.legend()
plt.show()
plot_distribution(x0, y0)
N1 = int(5e4)
xe = stats.beta(a=25, b=50).rvs(size=N1, random_state=42)
ye = stats.bernoulli(p=0.9).rvs(size=N1, random_state=42)
xn = stats.beta(a=22, b=25).rvs(size=N1, random_state=42)
yn = stats.bernoulli(p=0.05).rvs(size=N1, random_state=42)
x1 = np.concatenate((xn, xe), axis=0)
y1 = np.concatenate((yn, ye), axis=0)
plot_distribution(x1, y1)
N2 = int(5e4)
xe = stats.beta(a=4, b=6).rvs(size=N2, random_state=42)
ye = stats.bernoulli(p=0.7).rvs(size=N2, random_state=42)
xn = stats.beta(a=8, b=10).rvs(size=N2, random_state=42)
yn = stats.bernoulli(p=0.4).rvs(size=N2, random_state=42)
x2 = np.concatenate((xn, xe), axis=0)
y2 = np.concatenate((yn, ye), axis=0)
plot_distribution(x2, y2)
X = [x0, x1, x2]
Y = [y0, y1, y2]
sboptb = SBOptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
sboptb.fit(X, Y)
sboptb.status
sboptb.splits
sboptb.information(print_level=2)
sboptb.binning_table.build()
sboptb.binning_table.plot(metric="event_rate")
sboptb.binning_table.analysis()
optb = OptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
optb.fit(x0, y0)
optb.binning_table.build()
optb.binning_table.plot(metric="event_rate")
optb.binning_table.analysis()
bt0 = sboptb.binning_table_scenario(scenario_id=0)
bt0.build()
bt0.plot(metric="event_rate")
optb0 = OptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
optb0.fit(x0, y0)
optb0.binning_table.build()
optb0.binning_table.plot(metric="event_rate")
evs_optb0 = OptimalBinning(user_splits=optb.splits)
evs_optb0.fit(x0, y0)
evs_optb0.binning_table.build()
evs_optb0.binning_table.plot(metric="event_rate")
EVS_0 = 0.594974
bt1 = sboptb.binning_table_scenario(scenario_id=1)
bt1.build()
bt1.plot(metric="event_rate")
optb1 = OptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
optb1.fit(x1, y1)
optb1.binning_table.build()
optb1.binning_table.plot(metric="event_rate")
evs_optb1 = OptimalBinning(user_splits=optb.splits)
evs_optb1.fit(x1, y1)
evs_optb1.binning_table.build()
evs_optb1.binning_table.plot(metric="event_rate")
EVS_1 = -np.inf
bt2 = sboptb.binning_table_scenario(scenario_id=2)
bt2.build()
bt2.plot(metric="event_rate")
optb2 = OptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
optb2.fit(x2, y2)
optb2.binning_table.build()
optb2.binning_table.plot(metric="event_rate")
evs_optb2 = OptimalBinning(user_splits=optb.splits)
evs_optb2.fit(x2, y2)
evs_optb2.binning_table.build()
evs_optb2.binning_table.plot(metric="event_rate")
EVS_2 = -np.inf
DIV0 = optb0.binning_table.iv
DIV1 = optb1.binning_table.iv
DIV2 = optb2.binning_table.iv
DIV = (DIV0 + DIV1 + DIV2) / 3
DIV
SIV = sboptb.binning_table.iv
SIV
EVPI = DIV - SIV
EVPI
VSS = SIV - (EVS_0 + EVS_1 + EVS_2)
VSS
| 0.320183 | 0.968051 |
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('./inputs/mnist')
```
# Inspecting
```
plt.imshow(mnist.train.images[12].reshape(28, 28), cmap='gray')
```
# Building a generator
- takes the noise input z
- applies a leaky ReLU activation in the hidden layers
- outputs a 784-dimensional image vector with a tanh activation
```
# Resetting TF Graph
tf.reset_default_graph()
def generator(z, reuse=None):
with tf.variable_scope('gen', reuse=reuse):
alpha = 0.1
hidden1 = tf.layers.dense(inputs=z, units=128)
# TODO: Please use https://www.tensorflow.org/api_docs/python/tf/nn/leaky_relu
hidden1 = tf.maximum(alpha*hidden1, hidden1)
hidden2 = tf.layers.dense(inputs=hidden1, units=128)
hidden2 = tf.maximum(alpha*hidden2, hidden2)
output = tf.layers.dense(hidden2, units=784, activation=tf.nn.tanh)
return output
```
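The TODO comments above point to `tf.nn.leaky_relu`. Assuming TensorFlow 1.4 or later, where `tf.nn.leaky_relu` accepts an `alpha` argument, the `tf.maximum` trick used above is equivalent to the built-in op; a small illustration, using the `tf` already imported (not wired into the model):
```
# Both expressions build the same op: alpha * x for x < 0, x otherwise
x = tf.constant([-1.0, 2.0])
alpha = 0.1
manual = tf.maximum(alpha * x, x)
builtin = tf.nn.leaky_relu(x, alpha=alpha)
```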
# Building a discriminator
```
def descriminator(X, reuse=None):
with tf.variable_scope('dis', reuse=reuse):
alpha = 0.1
hidden1 = tf.layers.dense(inputs=X, units=256)
# TODO: Please use https://www.tensorflow.org/api_docs/python/tf/nn/leaky_relu
hidden1 = tf.maximum(alpha*hidden1, hidden1)
hidden2 = tf.layers.dense(inputs=hidden1, units=256)
hidden2 = tf.maximum(alpha*hidden2, hidden2)
logits = tf.layers.dense(hidden2, units=1)
output = tf.sigmoid(logits)
return output, logits
# Generator placeholders
real_images = tf.placeholder(tf.float32, shape=[None, 784])
z = tf.placeholder(tf.float32, shape=[None, 100])
G = generator(z)
D_output_real, D_logits_real = descriminator(real_images)
D_output_fake, D_logits_fake = descriminator(G, reuse=True)
# Losses helper function
def loss_func(logits_in, labels_in):
return tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=logits_in, labels=labels_in))
D_real_loss = loss_func(D_logits_real, tf.ones_like(D_logits_real) * 0.9)
D_fake_loss = loss_func(D_logits_fake, tf.zeros_like(D_logits_real))
D_loss = D_real_loss + D_fake_loss
G_loss = loss_func(D_logits_fake, tf.ones_like(D_logits_fake))
learning_rate = 0.001
tvars = tf.trainable_variables()
d_vars = [var for var in tvars if 'dis' in var.name]  # discriminator variables (variable scope 'dis')
g_vars = [var for var in tvars if 'gen' in var.name]  # generator variables (variable scope 'gen')
D_trainer = tf.train.AdamOptimizer(learning_rate).minimize(D_loss, var_list=d_vars)
G_trainer = tf.train.AdamOptimizer(learning_rate).minimize(G_loss, var_list=g_vars)
d_vars
g_vars
# Hyperparams
batch_size = 100
epochs = 30
init = tf.global_variables_initializer()
saver = tf.train.Saver(var_list=g_vars)  # create the saver before the session so checkpoints can be written inside it
samples = []
with tf.Session() as sess:
sess.run(init)
for epoch in range(epochs):
# Calculating how many batches does it take to go through all the examples
num_batches = mnist.train.num_examples // batch_size
for i in range(num_batches):
batch = mnist.train.next_batch(batch_size)
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images * 2 - 1
            batch_z = np.random.uniform(-1, 1, size=(batch_size, 100))  # must match the z placeholder shape [None, 100]
_ = sess.run(D_trainer, feed_dict={real_images: batch_images, z: batch_z})
_ = sess.run(G_trainer, feed_dict={z: batch_z})
print('ON EPOCH {}'.format(epoch))
sample_z = np.random.uniform(-1, 1, size=(1, 100))
gen_sample = sess.run(generator(z, reuse=True), feed_dict={z: sample_z})
samples.append(gen_sample)
save_path = saver.save(sess, "./model.ckpt")
print("Model saved in path: %s" % save_path)
# Still pretty noisy
plt.imshow(samples[29].reshape(28, 28), cmap='gray')
```
|
github_jupyter
|
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('./inputs/mnist')
plt.imshow(mnist.train.images[12].reshape(28, 28), cmap='gray')
# Resetting TF Graph
tf.reset_default_graph()
def generator(z, reuse=None):
with tf.variable_scope('gen', reuse=reuse):
alpha = 0.1
hidden1 = tf.layers.dense(inputs=z, units=128)
# TODO: Please use https://www.tensorflow.org/api_docs/python/tf/nn/leaky_relu
hidden1 = tf.maximum(alpha*hidden1, hidden1)
hidden2 = tf.layers.dense(inputs=hidden1, units=128)
hidden2 = tf.maximum(alpha*hidden2, hidden2)
output = tf.layers.dense(hidden2, units=784, activation=tf.nn.tanh)
return output
def descriminator(X, reuse=None):
with tf.variable_scope('dis', reuse=reuse):
alpha = 0.1
hidden1 = tf.layers.dense(inputs=X, units=256)
# TODO: Please use https://www.tensorflow.org/api_docs/python/tf/nn/leaky_relu
hidden1 = tf.maximum(alpha*hidden1, hidden1)
hidden2 = tf.layers.dense(inputs=hidden1, units=256)
hidden2 = tf.maximum(alpha*hidden2, hidden2)
logits = tf.layers.dense(hidden2, units=1)
output = tf.sigmoid(logits)
return output, logits
# Generator placeholders
real_images = tf.placeholder(tf.float32, shape=[None, 784])
z = tf.placeholder(tf.float32, shape=[None, 100])
G = generator(z)
D_output_real, D_logits_real = descriminator(real_images)
D_output_fake, D_logits_fake = descriminator(G, reuse=True)
# Losses helper function
def loss_func(logits_in, labels_in):
return tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=logits_in, labels=labels_in))
D_real_loss = loss_func(D_logits_real, tf.ones_like(D_logits_real) * 0.9)
D_fake_loss = loss_func(D_logits_fake, tf.zeros_like(D_logits_real))
D_loss = D_real_loss + D_fake_loss
G_loss = loss_func(D_logits_fake, tf.ones_like(D_logits_fake))
learning_rate = 0.001
tvars = tf.trainable_variables()
d_vars = [var for var in tvars if 'dis' in var.name]  # discriminator variables (variable scope 'dis')
g_vars = [var for var in tvars if 'gen' in var.name]  # generator variables (variable scope 'gen')
D_trainer = tf.train.AdamOptimizer(learning_rate).minimize(D_loss, var_list=d_vars)
G_trainer = tf.train.AdamOptimizer(learning_rate).minimize(G_loss, var_list=g_vars)
d_vars
g_vars
# Hyperparams
batch_size = 100
epochs = 30
init = tf.global_variables_initializer()
samples = []
with tf.Session() as sess:
sess.run(init)
for epoch in range(epochs):
# Calculating how many batches does it take to go through all the examples
num_batches = mnist.train.num_examples // batch_size
for i in range(num_batches):
batch = mnist.train.next_batch(batch_size)
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images * 2 - 1
            batch_z = np.random.uniform(-1, 1, size=(batch_size, 100))  # must match the z placeholder shape [None, 100]
_ = sess.run(D_trainer, feed_dict={real_images: batch_images, z: batch_z})
_ = sess.run(G_trainer, feed_dict={z: batch_z})
print('ON EPOCH {}'.format(epoch))
sample_z = np.random.uniform(-1, 1, size=(1, 100))
gen_sample = sess.run(generator(z, reuse=True), feed_dict={z: sample_z})
samples.append(gen_sample)
save_path = saver.save(sess, "./model.ckpt")
print("Model saved in path: %s" % save_path)
# Still pretty noisy
plt.imshow(samples[29].reshape(28, 28), cmap='gray')
saver = tf.train.Saver(var_list=g_vars)
| 0.583915 | 0.873323 |
# Boxcar & Hanning Windows
[](https://github.com/eabarnes1010/course_objective_analysis/tree/main/code)
[](https://colab.research.google.com/github/eabarnes1010/course_objective_analysis/blob/main/code/boxcar_hanning_response_functions.ipynb)
Demonstration of Hanning and Boxcar window response functions.
The code directly below imports the packages we need and sets the figure resolution used throughout the notebook.
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.dpi'] = 100
LW = 2
```
### Get your data together and plot the two windows
```
T = 3
omega = np.arange(-2.*np.pi*3., 2.*np.pi*3.+.1, .1)
t = np.arange(0.01,T+0.01,0.01)
plt.figure()
plt.xlim(-0.05,T+0.05)
plt.ylim(-0.02,1.5)
plt.axhline(y=0,color='gray')
box_window = np.ones(np.shape(t))
plt.plot(t,box_window,'-b', linewidth = LW, label = 'Boxcar window')
plt.plot((0,0),(0,1),'-b',linewidth = LW)
plt.plot((T,T),(0,1),'-b',linewidth = LW)
hann_window = 0.5*(1-np.cos(2.*np.pi*t/T))
plt.plot(t,hann_window,'-r',linewidth = LW, label = 'Hanning window')
plt.xlabel('time')
plt.ylabel('value of data (as a fraction)')
plt.legend(frameon = False)
plt.show()
```
### Compute the response functions of these windows
Note that rather than actually computing them from scratch, we can rely on the theory we discussed in class and plot these response functions directly in frequency space.
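For reference, the window definitions and the response functions coded in the next cell are, up to an overall normalization and using NumPy's convention $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$:

$$w_{\mathrm{box}}(t) = 1, \qquad w_{\mathrm{hann}}(t) = \tfrac{1}{2}\left[1 - \cos\left(\tfrac{2\pi t}{T}\right)\right], \qquad 0 \le t \le T$$

$$R_{\mathrm{box}}(\omega) \propto \mathrm{sinc}\left(\tfrac{\omega T}{2\pi}\right), \qquad R_{\mathrm{hann}}(\omega) \propto \mathrm{sinc}\left(\tfrac{\omega T}{2\pi}\right) + \tfrac{1}{2}\left[\mathrm{sinc}\left(\tfrac{\omega T}{2\pi} + 1\right) + \mathrm{sinc}\left(\tfrac{\omega T}{2\pi} - 1\right)\right]$$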
```
#%% Response functions in frequency space
BOX = np.sinc(omega*T/(2.*np.pi))
HANN = np.sinc(omega*T/(2.*np.pi)) + (1./2.)*(np.sinc(omega*T/(2.*np.pi) + 1.) + np.sinc(omega*T/(2.*np.pi) - 1.))
HANN_term1 = np.sinc(omega*T/(2.*np.pi))
HANN_terms23 = (1./2.)*(np.sinc(omega*T/(2.*np.pi) + 1.) + np.sinc(omega*T/(2.*np.pi) - 1.))
```
### Plot the response functions
```
#%% plot Boxcar response function
plt.figure()
plt.plot(omega,BOX/np.max(BOX),'-b',linewidth = LW, label = 'Boxcar response')
plt.xlabel('radial frequency')
plt.ylabel('spectral power')
plt.axhline(y=0,color='gray')
plt.xlim(-16,16)
plt.legend(frameon = False, loc = 'upper left')
#%% plot Boxcar response function
plt.figure()
plt.plot(omega,BOX/np.max(BOX),'-b',linewidth = LW, label = 'Boxcar response')
plt.xlabel('radial frequency')
plt.ylabel('spectral power')
plt.axhline(y=0,color='gray')
plt.xlim(-16,16)
#%% plot Hanning response function
plt.plot(omega, HANN,'-r', linewidth = LW, label = 'full Hanning response')
plt.legend(frameon = False, loc = 'upper left')
plt.show()
```
The Hanning window has a similar response function to the boxcar, except those nasty side-lobes are mostly removed. However, you never get something for nothing, and so the trade-off is that the response function around the central frequency of interest (freq = 0) is wider, i.e. there is more smoothing. Still, this may be a small price to pay.
```
#%% plot Boxcar response function
plt.figure()
plt.plot(omega,BOX/np.max(BOX),'-b',linewidth = LW, label = 'Boxcar response')
plt.xlabel('radial frequency')
plt.ylabel('spectral power')
plt.axhline(y=0,color='gray')
plt.xlim(-16,16)
#%% plot Hanning response function
plt.plot(omega, HANN,'-r', linewidth = LW, label = 'full Hanning response')
plt.legend(frameon = False, loc = 'upper left')
#%% plot terms 2 and 3 of Hanning response
plt.plot(omega, HANN_terms23,'--r', linewidth = LW, label = 'terms 2 & 3 of Hanning response')
plt.legend(frameon = False, loc = 'upper left', fontsize = 8)
plt.show()
```
As you can see from terms 2 and 3 of the Hanning response, their main job is to cancel out the side lobes that are present in the boxcar (and term 1 of the Hanning response).
|
github_jupyter
|
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.dpi'] = 100
LW = 2
T = 3
omega = np.arange(-2.*np.pi*3., 2.*np.pi*3.+.1, .1)
t = np.arange(0.01,T+0.01,0.01)
plt.figure()
plt.xlim(-0.05,T+0.05)
plt.ylim(-0.02,1.5)
plt.axhline(y=0,color='gray')
box_window = np.ones(np.shape(t))
plt.plot(t,box_window,'-b', linewidth = LW, label = 'Boxcar window')
plt.plot((0,0),(0,1),'-b',linewidth = LW)
plt.plot((T,T),(0,1),'-b',linewidth = LW)
hann_window = 0.5*(1-np.cos(2.*np.pi*t/T))
plt.plot(t,hann_window,'-r',linewidth = LW, label = 'Hanning window')
plt.xlabel('time')
plt.ylabel('value of data (as a fraction)')
plt.legend(frameon = False)
plt.show()
#%% Response functions in frequency space
BOX = np.sinc(omega*T/(2.*np.pi))
HANN = np.sinc(omega*T/(2.*np.pi)) + (1./2.)*(np.sinc(omega*T/(2.*np.pi) + 1.) + np.sinc(omega*T/(2.*np.pi) - 1.))
HANN_term1 = np.sinc(omega*T/(2.*np.pi))
HANN_terms23 = (1./2.)*(np.sinc(omega*T/(2.*np.pi) + 1.) + np.sinc(omega*T/(2.*np.pi) - 1.))
#%% plot Boxcar response function
plt.figure()
plt.plot(omega,BOX/np.max(BOX),'-b',linewidth = LW, label = 'Boxcar response')
plt.xlabel('radial frequency')
plt.ylabel('spectral power')
plt.axhline(y=0,color='gray')
plt.xlim(-16,16)
plt.legend(frameon = False, loc = 'upper left')
#%% plot Boxcar response function
plt.figure()
plt.plot(omega,BOX/np.max(BOX),'-b',linewidth = LW, label = 'Boxcar response')
plt.xlabel('radial frequency')
plt.ylabel('spectral power')
plt.axhline(y=0,color='gray')
plt.xlim(-16,16)
#%% plot Hanning response function
plt.plot(omega, HANN,'-r', linewidth = LW, label = 'full Hanning response')
plt.legend(frameon = False, loc = 'upper left')
plt.show()
#%% plot Boxcar response function
plt.figure()
plt.plot(omega,BOX/np.max(BOX),'-b',linewidth = LW, label = 'Boxcar response')
plt.xlabel('radial frequency')
plt.ylabel('spectral power')
plt.axhline(y=0,color='gray')
plt.xlim(-16,16)
#%% plot Hanning response function
plt.plot(omega, HANN,'-r', linewidth = LW, label = 'full Hanning response')
plt.legend(frameon = False, loc = 'upper left')
#%% plot terms 2 and 3 of Hanning response
plt.plot(omega, HANN_terms23,'--r', linewidth = LW, label = 'terms 2 & 3 of Hanning response')
plt.legend(frameon = False, loc = 'upper left', fontsize = 8)
plt.show()
| 0.658747 | 0.974067 |
```
import requests
import json
from tabulate import tabulate
```
Our list of targets
```
targets = ['ENSG00000069696', 'ENSG00000144285']
targets_string = ', '.join('"{0}"'.format(t) for t in targets)
```
Make the API call with our list of targets to find the associations. Set facets to true.
```
url = 'https://www.targetvalidation.org/api/latest/public/association/filter'
headers = {"Accept": "application/json"}
# There may be an easier way of building these parameters...
data = "{\"target\":[" + targets_string + "], \"facets\":true}"
response = requests.post(url, headers=headers, data=data)
output = response.json()
```
Print out all the json returned just for reference
```
#print json.dumps(output, indent=2)
```
The therapeutic area facets look interesting - let's iterate through these and display them
```
therapeuticareas = []
for bucket in output['facets']['therapeutic_area']['buckets']:
therapeuticareas.append({
'target_count' : bucket['unique_target_count']['value'],
'disease_count' : bucket['unique_disease_count']['value'],
'therapeutic_area' : bucket['label'],
'key' : bucket['key']
})
```
Sort by target count and then disease count
```
therapeuticareas = sorted(therapeuticareas, key=lambda k: (k['target_count'],k['disease_count']), reverse=True)
```
Using the python [tabulate](https://pypi.python.org/pypi/tabulate) library to render a pretty table of our extracted therapeutic areas.
Note: You may need to run `pip install tabulate` in your python environment
```
print tabulate(therapeuticareas, headers="keys", tablefmt="grid")
```
Let's just consider the top 5 therapeutic areas
```
therapeuticareas = therapeuticareas[:5]
print tabulate(therapeuticareas, headers="keys", tablefmt="grid")
```
Now, for each of those, identify the top 5 diseases. Unfortunately we don't get the disease names in the facets, just the codes. If this is the right approach, then perhaps an API change is needed?
```
for therapeuticarea in therapeuticareas:
print "Therapeutic area: " + therapeuticarea['therapeutic_area']
data = "{\"target\":[" + targets_string + "], \"facets\":true, \"therapeutic_area\":[\"" + therapeuticarea['key'] + "\"]}"
response = requests.post(url, headers=headers, data=data)
output = response.json()
diseases = []
for bucket in output['facets']['disease']['buckets']:
diseases.append({
'target_count' : bucket['unique_target_count']['value'],
'doc_count' : bucket['doc_count'],
'key' : bucket['key']
})
# Sort and take top 5
diseases = sorted(diseases, key=lambda k: (k['target_count'],k['doc_count']), reverse=True)
diseases = diseases[:5]
print tabulate(diseases, headers="keys", tablefmt="grid")
print ""
```
|
github_jupyter
|
import requests
import json
from tabulate import tabulate
targets = ['ENSG00000069696', 'ENSG00000144285']
targets_string = ', '.join('"{0}"'.format(t) for t in targets)
url = 'https://www.targetvalidation.org/api/latest/public/association/filter'
headers = {"Accept": "application/json"}
# There may be an easier way of building these parameters...
data = "{\"target\":[" + targets_string + "], \"facets\":true}"
response = requests.post(url, headers=headers, data=data)
output = response.json()
#print json.dumps(output, indent=2)
therapeuticareas = []
for bucket in output['facets']['therapeutic_area']['buckets']:
therapeuticareas.append({
'target_count' : bucket['unique_target_count']['value'],
'disease_count' : bucket['unique_disease_count']['value'],
'therapeutic_area' : bucket['label'],
'key' : bucket['key']
})
therapeuticareas = sorted(therapeuticareas, key=lambda k: (k['target_count'],k['disease_count']), reverse=True)
print tabulate(therapeuticareas, headers="keys", tablefmt="grid")
therapeuticareas = therapeuticareas[:5]
print tabulate(therapeuticareas, headers="keys", tablefmt="grid")
for therapeuticarea in therapeuticareas:
print "Therapeutic area: " + therapeuticarea['therapeutic_area']
data = "{\"target\":[" + targets_string + "], \"facets\":true, \"therapeutic_area\":[\"" + therapeuticarea['key'] + "\"]}"
response = requests.post(url, headers=headers, data=data)
output = response.json()
diseases = []
for bucket in output['facets']['disease']['buckets']:
diseases.append({
'target_count' : bucket['unique_target_count']['value'],
'doc_count' : bucket['doc_count'],
'key' : bucket['key']
})
# Sort and take top 5
diseases = sorted(diseases, key=lambda k: (k['target_count'],k['doc_count']), reverse=True)
diseases = diseases[:5]
print tabulate(diseases, headers="keys", tablefmt="grid")
print ""
| 0.154472 | 0.730266 |
# Jupyter Notebook
Jupyter Notebook is an incredibly powerful tool for interactively developing and presenting data science projects. A notebook integrates code and its output into a single document that combines visualizations, narrative text, mathematical equations, and other rich media.
Each .ipynb file is a text file that describes the contents of your notebook in a format called JSON. Each cell and its contents, including image attachments that have been converted into strings of text, is listed therein along with some metadata.
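For example, loading a notebook file with Python's `json` module makes this structure visible (the file name here is just a placeholder):
```
import json

with open("example.ipynb") as f:
    nb = json.load(f)

print(nb.keys())                    # typically: cells, metadata, nbformat, nbformat_minor
print(nb["cells"][0]["cell_type"])  # e.g. 'markdown' or 'code'
```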
Each notebook consists of many cells. Cells are primarily two types:
1. **Code:** Codes can be run within the notebook and the output will be printed within it and can be saved in the notebook as a whole.
2. **Markdown:** Markdown is a lightweight, easy to learn markup language for formatting plain text. It can include formatted text, formula, images, and other rich media. A useful feature is to include code or variable names in Markdown in a way that they appear as code. For instance `print("Hello, World!")`, or one can specify the language of code to get the custom format:
```python
print("Hello, World!")
```
If you are reading this notebook in a Jupyter environment you can double click on this cell to see the raw text.
### Kernel
Behind every notebook runs a kernel. When you run a code cell, that code is executed within the kernel and any output is returned back to the cell to be displayed. The kernel's state persists over time and between cells — it pertains to the document as a whole and not individual cells.
For example, if you import libraries or declare variables in one cell, they will be available in another. In this way, you can think of a notebook document as being somewhat comparable to a script file, except that it is multimedia. Let's try this out to get a feel for it. First, we'll import a Python package and define a function.
```
import numpy as np
def square(x):
return x * x
x = np.random.randint(1, 10)
```
We can execute the above cell with Ctrl + Enter (or Command + Enter on a Mac).
We can then reference np and square in any other cell:
```
y = square(x)
print('%d squared is %d' % (x, y))
```
### Creating plots
```
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi, 50)
y = np.sin(x)
y2 = y + 0.1 * np.random.normal(size=x.shape)
fig, ax = plt.subplots()
ax.plot(x, y, 'k--')
ax.plot(x, y2, 'ro')
# set ticks and tick labels
ax.set_xlim((0, 2*np.pi))
ax.set_xticks([0, np.pi, 2*np.pi])
ax.set_xticklabels(['0', '$\pi$','2$\pi$'])
ax.set_ylim((-1.5, 1.5))
ax.set_yticks([-1, 0, 1])
# Only draw spine between the y-ticks
ax.spines['left'].set_bounds(-1, 1)
# Hide the right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# Only show ticks on the left and bottom spines
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
```
## Further reading on Python
In order to learn more about the basics of Python syntax please visit this [Python Bootcamp](https://github.com/soltaniehha/Python-Bootcamp) repository.
|
github_jupyter
|
print("Hello, World!")
import numpy as np
def square(x):
return x * x
x = np.random.randint(1, 10)
y = square(x)
print('%d squared is %d' % (x, y))
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi, 50)
y = np.sin(x)
y2 = y + 0.1 * np.random.normal(size=x.shape)
fig, ax = plt.subplots()
ax.plot(x, y, 'k--')
ax.plot(x, y2, 'ro')
# set ticks and tick labels
ax.set_xlim((0, 2*np.pi))
ax.set_xticks([0, np.pi, 2*np.pi])
ax.set_xticklabels(['0', '$\pi$','2$\pi$'])
ax.set_ylim((-1.5, 1.5))
ax.set_yticks([-1, 0, 1])
# Only draw spine between the y-ticks
ax.spines['left'].set_bounds(-1, 1)
# Hide the right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# Only show ticks on the left and bottom spines
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
| 0.486088 | 0.97296 |
# SETUP
- - -
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["font.family"] = "Dejavu Sans"
import seaborn as sns
sns.set()
%matplotlib inline
file_path = '/Users/quartz/data/iot-data/cansim-0800020-eng-6674700030567901031.csv'
data_raw = pd.read_csv(file_path, skiprows=6, skipfooter=9)
data_raw.head()
data_raw.dtypes
# Insert the month-end date for each month
from pandas.tseries.offsets import MonthEnd
# data_raw['Adjustments'] =
data_raw.Adjustments = pd.to_datetime(data_raw['Adjustments']) + MonthEnd(1)
data_raw = data_raw.set_index('Adjustments')
data_raw.head()
```
### Plotting
```
# Create the split point (Timestamp)
split_date = pd.Timestamp('01-01-2011')
# Build train/test dataframes using only the Unadjusted feature
train = data_raw.loc[:split_date, ['Unadjusted']]
test = data_raw.loc[split_date:, ['Unadjusted']]
print(split_date, train.shape, test.shape)
# plot
ax = train.plot()
test.plot(ax=ax)
plt.legend(['train', 'test'])
```
### preprocessing
```
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler()
train_sc = sc.fit_transform(train)
test_sc = sc.transform(test)  # scale the test set with the scaler fitted on the training set (avoids leakage)
train_sc_df = pd.DataFrame(data=train_sc, columns=['Scaled'], index=train.index)
test_sc_df = pd.DataFrame(data=test_sc, columns=['Scaled'], index=test.index)
X_test = test_sc_df.dropna().drop('Scaled', axis=1)
y_test = test_sc_df.dropna()[['Scaled']]
train_sc_df.head()
for shift in range(1, 13):
train_sc_df['shift_{}'.format(shift)] = train_sc_df['Scaled'].shift(shift)
test_sc_df['shift_{}'.format(shift)] = test_sc_df['Scaled'].shift(shift)
train_sc_df.head(20)
```
### make dataset(train, test)
```
# train, test
X_train_df = train_sc_df.dropna().drop('Scaled', axis=1)
y_train_df = train_sc_df.dropna()[['Scaled']]
X_test_df = test_sc_df.dropna().drop('Scaled', axis=1)
y_test_df = test_sc_df.dropna()[['Scaled']]
# DataFrame -> ndarray
X_train = X_train_df.values
y_train = y_train_df.values
X_test = X_test_df.values
y_test = y_test_df.values
# Reshape the 2-D data (samples, features) into 3-D (samples, timesteps, features)
X_train_t = X_train.reshape(X_train.shape[0], 12, 1)
X_test_t = X_test.reshape(X_test.shape[0], 12, 1)
# check shape
X_train.shape, X_train_t.shape, X_test.shape, X_test_t.shape
```
### LSTM Modeling
```
from keras.layers import LSTM
from keras.models import Sequential
from keras.layers import Dense
import keras
import keras.backend as K
from keras.callbacks import EarlyStopping
K.clear_session()
# Define a loss-history callback class
class LossHistory(keras.callbacks.Callback):
def init(self):
self.losses = []
def on_epoch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
# Create the loss-history object
history = LossHistory()
history.init()
model = Sequential() # Sequential Model
model.add(LSTM(100, input_shape=(12,1))) # (timestamp, feature)
model.add(Dense(1)) # output = 1
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train_t, y_train, epochs=100, batch_size=30, verbose=2, callbacks=[history])
y_pred = model.predict(X_test_t)
```
### Visualization
```
fig, loss_ax = plt.subplots()
acc_ax = loss_ax.twinx()
pred = y_pred
loss_ax.plot(pred, 'b', label='pred')
loss_ax.plot(y_test, 'r', label='act')
loss_ax.legend(loc='upper left')
plt.show()
# loss
plt.plot(history.losses)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
plt.show()
```
# Solar Power LSTM
- - -
### EDA
```
!ls /Users/quartz/data/iot-data/solar_data.csv
file_path = '/Users/quartz/data/iot-data/solar_data.csv'
solar_raw = pd.read_csv(file_path, engine='python')
solar_raw.iloc[:, :15].head()
# Generation, precipitation, humidity, wind speed, temperature | solar irradiance, fine dust (PM)
solar_raw.columns
total = solar_raw['충전시간발전량']
# '5Hr', '6Hr', '7Hr', '8Hr', '9Hr', '10Hr'
sub = solar_raw[['10Hr', '11Hr', '12Hr', '13Hr', '14Hr', '15Hr', '16Hr']]
solar_raw['충전시간발전량'].tail()
sub.tail()
# 충전시간발전량 (generation during charging hours) is the sum of the hourly generation from 10 AM to 4 PM
for i in range(10):
print(np.sum(sub.values[i]), total.values[i])
# Hourly generation by time of day (Hr)
solar_raw.iloc[:, :18].tail()
# Precipitation
solar_raw.iloc[:, 20:36].tail()
# Humidity
solar_raw.iloc[:, 36:52].tail()
# Wind speed
solar_raw.iloc[:, 52:68].tail()
# Temperature
solar_raw.iloc[:, 68:84].tail()
# Check the data shape
solar_raw.shape # one year of time-series data
# Check data types - int64: 일출시간 (sunrise), 일몰시간 (sunset), 20Hr; float64: the rest
solar_raw.dtypes
# Examine the target variable (충전시간발전량)
solar_raw['충전시간발전량'].describe()
# Examine correlations between the independent variables
solar_raw.corr()
plt.figure(figsize=(20, 15))
sns.heatmap(solar_revise.corr(), cmap="YlGnBu")
plt.show()
!ls ./tmp
# Look at the distribution of each variable
columns = list(solar_raw.columns)
for column in columns:
y = solar_raw[column].values
sns.distplot(y)
plt.xticks([-0.5, 1.5])
plt.yticks([0, 1])
plt.title("{} distplot".format(column))
plt.savefig('/Users/quartz/Dropbox/iot-data-practice/tmp/{}.png'.format(column))
# All features (independent variables): Hr, 충전시간발전량, 일출시간 (sunrise), 일몰시간 (sunset), precipitation, humidity, wind speed, temperature
solar_raw.columns
# Inspect the features one group at a time
solar_raw.iloc[:1, 17:34]
# Check for missing values: there are none
solar_raw.isna().sum()[50:100]
solar_raw.describe()
```
### preprocessing
```
solar_raw['날짜'] = solar_raw['날짜'].apply(lambda x: "20"+str(x))
solar_raw.tail()
# Convert the 날짜 (date) column to datetime and set it as the index
solar_raw['날짜'] = pd.to_datetime(solar_raw['날짜'])
solar_raw = solar_raw.set_index('날짜')
solar_raw.head()
```
### Creating a new dataset: solar_revise
- Predict tomorrow's 충전시간발전량 (generation during charging hours) from today's data (hourly generation, temperature, precipitation, humidity, wind speed)
```
solar_1 = solar_raw.drop(['충전시간발전량'], axis=1)
solar_2 = solar_raw['충전시간발전량']
solar_2 = solar_2.values
solar_2 = solar_2[1:]
solar_2[:4]
solar_2 = np.append(solar_2, np.nan)
solar_1.shape, solar_2.shape
solar_1['충전시간발전량'] = solar_2
solar_1.dropna(inplace=True)
solar_revise = solar_1.copy()
solar_revise['20Hr'] = solar_revise['20Hr'].astype('float64')
solar_revise.to_pickle('./solar_revise.pkl')
solar_revise = pd.read_pickle('./solar_revise.pkl')
solar_revise.tail()
```
### Feature Engineering
```
1. Use only the 10Hr ~ 16Hr columns of each variable
2. Split the day into 4 time bands (see the sketch below)
    - 5~8 : 5_8Hr
    - 9~12 : 9_12Hr
    - 13~16 : 13_16Hr
    - 17~20 : 17_20Hr
```
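The code in the next cell implements item 1, slicing out the 10Hr ~ 16Hr columns of each variable. Item 2 is only described, so here is a minimal sketch of the banding idea for the hourly generation columns; the band column names and the simple per-band sum are illustrative assumptions, not something done in the original notebook:
```
# Sketch: aggregate the hourly generation columns into the four bands listed above
# (assumes the hourly columns are named '5Hr' ... '20Hr', as in solar_raw)
bands = {
    '5_8Hr':   ['5Hr', '6Hr', '7Hr', '8Hr'],
    '9_12Hr':  ['9Hr', '10Hr', '11Hr', '12Hr'],
    '13_16Hr': ['13Hr', '14Hr', '15Hr', '16Hr'],
    '17_20Hr': ['17Hr', '18Hr', '19Hr', '20Hr'],
}
solar_bands = pd.DataFrame(
    {name: solar_revise[cols].sum(axis=1) for name, cols in bands.items()},
    index=solar_revise.index,
)
solar_bands.tail()
```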
```
solar_1 = solar_revise.iloc[:, 5:12]
solar_2 = solar_revise.iloc[:, 21:28]
solar_3 = solar_revise.iloc[:, 37:44]
solar_4 = solar_revise.iloc[:, 53:60]
solar_5 = solar_revise.iloc[:, 69:76]
solar_6 = solar_revise.iloc[:, -1:]
solar_new = pd.concat([solar_1, solar_2, solar_3, solar_4, solar_5, solar_6], axis=1)
solar_new.tail()
solar_new.shape
y = solar_revise.iloc[0:1, :16]
y
solar_revise = solar_revise.drop(['일출시간', '일몰시간'], axis=1)
solar_revise.columns
from IPython.display import clear_output # clear_output() can be used to clear cell output
# Look at the variable distributions
n = len(solar_revise)
for i in range(n)[:10]:
data = solar_revise.iloc[i:i+1,:16]
x = list(solar_revise.iloc[i:i+1,:16].columns)
y = list(solar_revise.iloc[i:i+1,:16].values[0])
plt.title('Hr_{}'.format(i))
plt.plot(x, y)
plt.savefig('./tmp_2/Hr_{}'.format(i))
clear_output()
```
### Creating new features (Hr, temperature, humidity, wind speed, precipitation)
### Building the modeling function
##### package, function
```
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV
import keras
import keras.backend as K
from keras.layers import LSTM, Dense, Input, Embedding, Dropout
from keras.models import Sequential
from keras.models import Model
from keras.wrappers.scikit_learn import KerasRegressor
from keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='val_acc')
from keras.callbacks import CSVLogger
csv_logger = CSVLogger('training.log')
def dataset_reshape(dataset, window_size=1):
data = []
for i in range(len(dataset) - window_size - 1):
change_data = dataset[i:(i+window_size)]
data.append(np.array(change_data))
return np.array(data)
# Define a loss-history callback class
class LossHistory(keras.callbacks.Callback):
def init(self):
self.losses = []
def on_epoch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
# Create the loss-history object
history = LossHistory()
history.init()
```
##### modeling: LSTM
```
def make_model(dataset, input_shape=(0, 0), epochs=[10], batch_size=[30], dropout_rate=[0.2], layers=50, output_dim=1, cv=3):
columns = list(dataset.columns)
    # Create the split point (Timestamp)
    split_date = pd.Timestamp('2017-04-15')
    # Build train/test dataframes from the selected columns
    train = dataset.loc[:split_date, columns]
    test = dataset.loc[split_date:, columns]
# scaling
sc = MinMaxScaler()
train_sc = sc.fit_transform(train)
test_sc = sc.transform(test)
# train, test
train_sc_df = pd.DataFrame(data=train_sc, columns=columns, index=train.index)
test_sc_df = pd.DataFrame(data=test_sc, columns=columns, index=test.index)
X_train_df = train_sc_df.iloc[:, :-1]
y_train_df = train_sc_df.iloc[:, -1:]
X_test_df = test_sc_df.iloc[:, :-1]
y_test_df = test_sc_df.iloc[:, -1:]
    # Reshape 2-D data (samples, features) into 3-D (samples, timesteps, features)
X_train = dataset_reshape(X_train_df, 7)
y_train = dataset_reshape(y_train_df['충전시간발전량'], 7)
X_test = dataset_reshape(X_test_df, 7)
y_test = dataset_reshape(y_test_df['충전시간발전량'], 7)
    # Build the Keras model
    def create_model(dropout_rate=0.0):
        # note: dropout_rate must not be reassigned here, otherwise the grid-searched value would be ignored
        activation='relu'
        init_mode='uniform'
        optimizer='adam'
        lr=0.01
        momentum=0
#create model
model = Sequential()
model.add(LSTM(layers, input_shape=input_shape))
model.add(Dropout(dropout_rate))
model.add(Dense(output_dim))
model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['accuracy'])
return model
# create model
model = KerasRegressor(build_fn=create_model, epochs=30, batch_size=30)
# Use scikit-learn to grid search
# activation = ['relu', 'tahn', 'sigmoid']
# optimizer = ['adam', 'SGD', 'RMSprop']
# dropout_rate = dropout_rate
# grid search epochs, batch size
epochs = epochs
batch_size = batch_size
param_grid = dict(epochs=epochs, batch_size=batch_size, dropout_rate=dropout_rate)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, verbose=1, cv=cv)
    grid = grid.fit(X_train, y_train, callbacks=[history, csv_logger, early_stopping])  # pass the EarlyStopping instance defined above, not the class
clear_output()
# make graph
y_pred = grid.predict(X_test)
y_test_tuple = (y_test[0], y_test[1], y_test[2], y_test[3], y_test[4], y_test[5], y_test[6])
y_pred_tuple = (y_pred[0], y_pred[1], y_pred[2], y_pred[3], y_pred[4], y_pred[5], y_pred[6])
plt.figure(figsize=(20, 10))
fig, loss_ax = plt.subplots()
acc_ax = loss_ax.twinx()
loss_ax.plot(np.concatenate(y_test_tuple), 'b', label='act')
loss_ax.plot(np.concatenate(y_pred_tuple), 'r', label='pred')
loss_ax.legend(loc='lower right')
plt.show()
# loss graph
plt.plot(history.losses)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
plt.show()
return grid
grid_1 = make_model(dataset=solar_revise, input_shape=(7, 80), epochs=[100], dropout_rate=[0.2, 0.4], layers=100, output_dim=7)
grid_1.best_params_
grid_2 = make_model(dataset=solar_new, input_shape=(7, 35), epochs=[100], dropout_rate=[0.2, 0.4], layers=100, output_dim=7)
grid_1.cv_results_
```
### save models
```
grid_1.best_estimator_.model.save("grid_1.h5")
grid_2.best_estimator_.model.save("grid_2.h5")
```
### log history
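The `CSVLogger` callback used in `make_model` writes per-epoch metrics to `training.log`; a quick way to inspect that history (a sketch, assuming the training cells above have been run so the file exists):
```
# Inspect the per-epoch metrics written by the CSVLogger callback
pd.read_csv('training.log').tail()
```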
# Scikit-learn, Statsmodels
- - -
### preprocessing
```
#1
1. 20Hr, 일몰시간 (sunset), 일출시간 (sunrise), 날짜 (date) : int64 -> float64
#2
```
```
from sklearn.model_selection import train_test_split
solar_data = solar_raw.copy()
solar_data = solar_data.astype('float32', copy=True)
solar_data.tail()
X_data = solar_data.drop(['충전시간발전량'], axis=1)
y_data = solar_data['충전시간발전량']
y_data.tail()
X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, test_size=0.33, random_state=17)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
```
### modeling: scikit-learn
- - -
```
1. Build the X, y train and test datasets
2. Model with linear regression (statsmodels)
3. Multiple regression
4. Check for and remove multicollinearity (PCA)
5. Parameter tuning
```
### scikit-learn
```
from sklearn.linear_model import LinearRegression
X_train.tail()
LR = LinearRegression(fit_intercept=True)
model_lr_1 = LR.fit(X_train.values, y_train.values)
# Evaluate performance
y_pred = model_lr_1.predict(X_test.values)
mse = (np.square(y_pred - y_test.values)).mean(axis=0)
mse
from sklearn.metrics import explained_variance_score
explained_variance_score(y_test.values, y_pred)
# Cross-validation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(model_lr_1, X_data, y_data, cv=50, scoring='r2')
scores = np.mean(scores)
scores
```
### modeling: statsmodels
- Removing multicollinearity
```
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
model_lr_2 = sm.OLS(y_train, X_train)
result_2 = model_lr_2.fit()
result_2.summary()
pd.set_option('display.max_columns', 200)
pd.set_option('display.width', 1000)
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(
    X_train.values, i) for i in range(X_train.shape[1])]
vif["features"] = X_train.columns
vif.sort_values('VIF Factor', ascending=False)
formula = "충전시간발전량 ~ "
for column in list(X_data.columns):
to_add = "scale({}) + ".format(column)
formula += to_add
formula
```
|
github_jupyter
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["font.family"] = "Dejavu Sans"
import seaborn as sns
sns.set()
%matplotlib inline
file_path = '/Users/quartz/data/iot-data/cansim-0800020-eng-6674700030567901031.csv'
data_raw = pd.read_csv(file_path, skiprows=6, skipfooter=9)
data_raw.head()
data_raw.dtypes
# 월별 끝일 삽일
from pandas.tseries.offsets import MonthEnd
# data_raw['Adjustments'] =
data_raw.Adjustments = pd.to_datetime(data_raw['Adjustments']) + MonthEnd(1)
data_raw = data_raw.set_index('Adjustments')
data_raw.head()
# 기준점 형성(Timestamp)
split_date = pd.Timestamp('01-01-2011')
# Unadjusted feature만 활용해서 dataframe을 만든다
train = data_raw.loc[:split_date, ['Unadjusted']]
test = data_raw.loc[split_date:, ['Unadjusted']]
print(split_date, train.shape, test.shape)
# plot
ax = train.plot()
test.plot(ax=ax)
plt.legend(['train', 'test'])
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler()
train_sc = sc.fit_transform(train)
test_sc = sc.transform(test)  # scale the test set with the scaler fitted on the training set (avoids leakage)
train_sc_df = pd.DataFrame(data=train_sc, columns=['Scaled'], index=train.index)
test_sc_df = pd.DataFrame(data=test_sc, columns=['Scaled'], index=test.index)
X_test = test_sc_df.dropna().drop('Scaled', axis=1)
y_test = test_sc_df.dropna()[['Scaled']]
train_sc_df.head()
for shift in range(1, 13):
train_sc_df['shift_{}'.format(shift)] = train_sc_df['Scaled'].shift(shift)
test_sc_df['shift_{}'.format(shift)] = test_sc_df['Scaled'].shift(shift)
train_sc_df.head(20)
# train, test
X_train_df = train_sc_df.dropna().drop('Scaled', axis=1)
y_train_df = train_sc_df.dropna()[['Scaled']]
X_test_df = test_sc_df.dropna().drop('Scaled', axis=1)
y_test_df = test_sc_df.dropna()[['Scaled']]
# DataFrame -> ndarray
X_train = X_train_df.values
y_train = y_train_df.values
X_test = X_test_df.values
y_test = y_test_df.values
# 2차원 데이터(size, feature)를 3차원 데이터(size, feature, time)으로.
X_train_t = X_train.reshape(X_train.shape[0], 12, 1)
X_test_t = X_test.reshape(X_test.shape[0], 12, 1)
# check shape
X_train.shape, X_train_t.shape, X_test.shape, X_test_t.shape
from keras.layers import LSTM
from keras.models import Sequential
from keras.layers import Dense
import keras
import keras.backend as K
from keras.callbacks import EarlyStopping
K.clear_session()
# 손실 이력 클래스 정의
class LossHistory(keras.callbacks.Callback):
def init(self):
self.losses = []
def on_epoch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
# 손실 이력 객체 생성
history = LossHistory()
history.init()
model = Sequential() # Sequential Model
model.add(LSTM(100, input_shape=(12,1))) # (timestamp, feature)
model.add(Dense(1)) # output = 1
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train_t, y_train, epochs=100, batch_size=30, verbose=2, callbacks=[history])
y_pred = model.predict(X_test_t)
fig, loss_ax = plt.subplots()
acc_ax = loss_ax.twinx()
pred = y_pred
loss_ax.plot(pred, 'b', label='pred')
loss_ax.plot(y_test, 'r', label='act')
loss_ax.legend(loc='upper left')
plt.show()
# loss
plt.plot(history.losses)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
plt.show()
!ls /Users/quartz/data/iot-data/solar_data.csv
file_path = '/Users/quartz/data/iot-data/solar_data.csv'
solar_raw = pd.read_csv(file_path, engine='python')
solar_raw.iloc[:, :15].head()
# Generation, precipitation, humidity, wind speed, temperature | sunshine, fine dust
solar_raw.columns
total = solar_raw['충전시간발전량']
# '5Hr', '6Hr', '7Hr', '8Hr', '9Hr', '10Hr'
sub = solar_raw[['10Hr', '11Hr', '12Hr', '13Hr', '14Hr', '15Hr', '16Hr']]
solar_raw['충전시간발전량'].tail()
sub.tail()
# The charging-hours generation is the sum of the generation from 10 AM to 4 PM
for i in range(10):
print(np.sum(sub.values[i]), total.values[i])
# Hour-of-day columns (Hr)
solar_raw.iloc[:, :18].tail()
# Precipitation
solar_raw.iloc[:, 20:36].tail()
# Humidity
solar_raw.iloc[:, 36:52].tail()
# Wind speed
solar_raw.iloc[:, 52:68].tail()
# Temperature
solar_raw.iloc[:, 68:84].tail()
# Check the data shape
solar_raw.shape # one year of time-series data
# Check data types - int64: sunrise time, sunset time, 20Hr; the rest are float64
solar_raw.dtypes
# Examine the dependent variable (charging-hours generation)
solar_raw['충전시간발전량'].describe()
# Examine correlations between the independent variables
solar_raw.corr()
plt.figure(figsize=(20, 15))
sns.heatmap(solar_revise.corr(), cmap="YlGnBu")  # note: solar_revise is defined further below; run this cell after it exists
plt.show()
!ls ./tmp
# Look at the distribution of each variable
columns = list(solar_raw.columns)
for column in columns:
y = solar_raw[column].values
sns.distplot(y)
plt.xticks([-0.5, 1.5])
plt.yticks([0, 1])
plt.title("{} distplot".format(column))
plt.savefig('/Users/quartz/Dropbox/iot-data-practice/tmp/{}.png'.format(column))
# Check all features (independent variables): Hr, charging-hours generation, sunrise time, sunset time, precipitation, humidity, wind speed, temperature
solar_raw.columns
# Inspect the features one by one
solar_raw.iloc[:1, 17:34]
# Check for missing values: 0 found
solar_raw.isna().sum()[50:100]
solar_raw.describe()
solar_raw['날짜'] = solar_raw['날짜'].apply(lambda x: "20"+str(x))
solar_raw.tail()
# Convert the date column to datetime and set it as the index
solar_raw['날짜'] = pd.to_datetime(solar_raw['날짜'])
solar_raw = solar_raw.set_index('날짜')
solar_raw.head()
solar_1 = solar_raw.drop(['충전시간발전량'], axis=1)
solar_2 = solar_raw['충전시간발전량']
solar_2 = solar_2.values
solar_2 = solar_2[1:]
solar_2[:4]
solar_2 = np.append(solar_2, np.nan)
solar_1.shape, solar_2.shape
solar_1['충전시간발전량'] = solar_2
solar_1.dropna(inplace=True)
solar_revise = solar_1.copy()
solar_revise['20Hr'] = solar_revise['20Hr'].astype('float64')
solar_revise.to_pickle('./solar_revise.pkl')
solar_revise = pd.read_pickle('./solar_revise.pkl')
solar_revise.tail()
1. Use only the 10Hr ~ 16Hr columns of the dataset
2. Split the dataset into four time blocks
    - 5~8 : 5_8Hr
    - 9~12 : 9_12Hr
    - 13~16 : 13_16Hr
    - 17~20 : 17_20Hr
solar_1 = solar_revise.iloc[:, 5:12]
solar_2 = solar_revise.iloc[:, 21:28]
solar_3 = solar_revise.iloc[:, 37:44]
solar_4 = solar_revise.iloc[:, 53:60]
solar_5 = solar_revise.iloc[:, 69:76]
solar_6 = solar_revise.iloc[:, -1:]
solar_new = pd.concat([solar_1, solar_2, solar_3, solar_4, solar_5, solar_6], axis=1)
solar_new.tail()
solar_new.shape
y = solar_revise.iloc[0:1, :16]
y
solar_revise = solar_revise.drop(['일출시간', '일몰시간'], axis=1)
solar_revise.columns
from IPython.display import clear_output # clear_output() 으로 아웃풋 제거 가능
# Look at the distribution of each variable
n = len(solar_revise)
for i in range(n)[:10]:
data = solar_revise.iloc[i:i+1,:16]
x = list(solar_revise.iloc[i:i+1,:16].columns)
y = list(solar_revise.iloc[i:i+1,:16].values[0])
plt.title('Hr_{}'.format(i))
plt.plot(x, y)
plt.savefig('./tmp_2/Hr_{}'.format(i))
clear_output()
### Creating new features (Hr, temperature, humidity, wind speed, precipitation)
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV
import keras
import keras.backend as K
from keras.layers import LSTM, Dense, Input, Embedding, Dropout
from keras.models import Sequential
from keras.models import Model
from keras.wrappers.scikit_learn import KerasRegressor
from keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='val_acc')
from keras.callbacks import CSVLogger
csv_logger = CSVLogger('training.log')
def dataset_reshape(dataset, window_size=1):
data = []
for i in range(len(dataset) - window_size - 1):
change_data = dataset[i:(i+window_size)]
data.append(np.array(change_data))
return np.array(data)
# Define a loss-history callback class
class LossHistory(keras.callbacks.Callback):
def init(self):
self.losses = []
def on_epoch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
# Create the loss-history object
history = LossHistory()
history.init()
def make_model(dataset, input_shape=(0, 0), epochs=[10], batch_size=[30], dropout_rate=[0.2], layers=50, output_dim=1, cv=3):
columns = list(dataset.columns)
    # Create the split point (Timestamp)
split_date = pd.Timestamp('2017-04-15')
    # Build train/test dataframes from the selected columns
train = dataset.loc[:split_date, columns]
test = dataset.loc[split_date:, columns]
# scaling
sc = MinMaxScaler()
train_sc = sc.fit_transform(train)
test_sc = sc.transform(test)
# train, test
train_sc_df = pd.DataFrame(data=train_sc, columns=columns, index=train.index)
test_sc_df = pd.DataFrame(data=test_sc, columns=columns, index=test.index)
X_train_df = train_sc_df.iloc[:, :-1]
y_train_df = train_sc_df.iloc[:, -1:]
X_test_df = test_sc_df.iloc[:, :-1]
y_test_df = test_sc_df.iloc[:, -1:]
    # Reshape 2D data (samples, features) into 3D data (samples, timesteps, features)
X_train = dataset_reshape(X_train_df, 7)
y_train = dataset_reshape(y_train_df['충전시간발전량'], 7)
X_test = dataset_reshape(X_test_df, 7)
y_test = dataset_reshape(y_test_df['충전시간발전량'], 7)
    # Define the inner model-building function
def create_model(dropout_rate=0.0):
activation='relu'
        # note: do not reset dropout_rate here, so the grid-searched value is actually used
init_mode='uniform'
optimizer='adam'
lr=0.01
momentum=0
#create model
model = Sequential()
model.add(LSTM(layers, input_shape=input_shape))
model.add(Dropout(dropout_rate))
model.add(Dense(output_dim))
model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['accuracy'])
return model
# create model
model = KerasRegressor(build_fn=create_model, epochs=30, batch_size=30)
# Use scikit-learn to grid search
# activation = ['relu', 'tahn', 'sigmoid']
# optimizer = ['adam', 'SGD', 'RMSprop']
# dropout_rate = dropout_rate
# grid search epochs, batch size
epochs = epochs
batch_size = batch_size
param_grid = dict(epochs=epochs, batch_size=batch_size, dropout_rate=dropout_rate)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, verbose=1, cv=cv)
    grid = grid.fit(X_train, y_train, callbacks=[history, csv_logger, early_stopping])  # pass the EarlyStopping instance, not the class
clear_output()
# make graph
y_pred = grid.predict(X_test)
y_test_tuple = (y_test[0], y_test[1], y_test[2], y_test[3], y_test[4], y_test[5], y_test[6])
y_pred_tuple = (y_pred[0], y_pred[1], y_pred[2], y_pred[3], y_pred[4], y_pred[5], y_pred[6])
plt.figure(figsize=(20, 10))
fig, loss_ax = plt.subplots()
acc_ax = loss_ax.twinx()
loss_ax.plot(np.concatenate(y_test_tuple), 'b', label='act')
loss_ax.plot(np.concatenate(y_pred_tuple), 'r', label='pred')
loss_ax.legend(loc='lower right')
plt.show()
# loss graph
plt.plot(history.losses)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
plt.show()
return grid
grid_1 = make_model(dataset=solar_revise, input_shape=(7, 80), epochs=[100], dropout_rate=[0.2, 0.4], layers=100, output_dim=7)
grid_1.best_params_
grid_2 = make_model(dataset=solar_new, input_shape=(7, 35), epochs=[100], dropout_rate=[0.2, 0.4], layers=100, output_dim=7)
grid_1.cv_results_
grid_1.best_estimator_.model.save("grid_1.h5")
grid_2.best_estimator_.model.save("grid_2.h5")
#1
1. 20Hr, sunset time, sunrise time, date : int64 -> float64
#2
from sklearn.model_selection import train_test_split
solar_data = solar_raw.copy()
solar_data = solar_data.astype('float32', copy=True)
solar_data.tail()
X_data = solar_data.drop(['충전시간발전량'], axis=1)
y_data = solar_data['충전시간발전량']
y_data.tail()
X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, test_size=0.33, random_state=17)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
1. Build the x, y train and test datasets
2. Model with linear regression (statsmodels)
3. Multiple regression
4. Check for and remove multicollinearity (PCA)
5. Parameter tuning
from sklearn.linear_model import LinearRegression
X_train_df.tail()
LR = LinearRegression(fit_intercept=True)
model_lr_1 = LR.fit(X_train_df.values, y_train_df.values)
# Evaluate performance
y_pred = model_lr_1.predict(X_test_df.values)
mse = (np.square(y_pred - y_test_df.values)).mean(axis=0)
mse
from sklearn.metrics import explained_variance_score
explained_variance_score(y_test_df.values, y_pred)
# Cross-validation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(model_lr_1, X_data, y_data, cv=50, scoring='r2')
scores = np.mean(scores)
scores
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
model_lr_2 = sm.OLS(y_train_df, X_train_df)
result_2 = model_lr_2.fit()
result_2.summary()
pd.set_option('display.max_columns', 200)
pd.set_option('display.width', 1000)
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(
X_train_df.values, i) for i in range(X_train_df.shape[1])]
vif["features"] = X_train_df.columns
vif.sort_values('VIF Factor', ascending=False)
formula = "충전시간발전량 ~ "
for column in list(X_data.columns):
to_add = "scale({}) + ".format(column)
formula += to_add
formula
| 0.455925 | 0.757817 |
Step 1: associate each painting in the training directory with its artist using the .csv data.
Strategy: two networks -- the first is trained to output the probability that a painting is by each artist; its second-to-last layer can then be used as the feature input to the second network.
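To make the strategy concrete, here is a minimal sketch of the two-network idea (my addition, not code from this notebook); the layer sizes, the 4096-dimensional input and the 9 artist classes are placeholder assumptions, and the actual models used below are built with the course's Vgg16 wrapper instead.
```
# Sketch only -- illustrates using the second-to-last layer of network 1 as input to network 2.
from keras.models import Sequential, Model
from keras.layers import Dense

# Network 1: predicts the probability that a painting belongs to each artist.
artist_classifier = Sequential([
    Dense(256, activation='relu', input_shape=(4096,)),  # e.g. precomputed CNN features
    Dense(128, activation='relu'),                       # second-to-last layer
    Dense(9, activation='softmax'),                      # one output per artist
])
artist_classifier.compile(optimizer='adam', loss='categorical_crossentropy')

# Reuse everything up to the second-to-last layer as a feature extractor.
feature_extractor = Model(artist_classifier.input, artist_classifier.layers[-2].output)

# Network 2: consumes those 128-dim features for the downstream task.
second_network = Sequential([
    Dense(64, activation='relu', input_shape=(128,)),
    Dense(1, activation='sigmoid'),
])
second_network.compile(optimizer='adam', loss='binary_crossentropy')
```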
```
import sys
from utils import *
%matplotlib inline
path = "data/painter-by-numbers/"
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
#verify directory
%pwd
artist_names = np.genfromtxt('train_info.csv', delimiter=',', dtype=None, usecols=(1)).astype(str)
artist_names = artist_names[1:] #remove the first row that contains ['filename', 'artist']
#print(artist_names[0:5])
```
Above, I'm only using column 1 (artist ID) for the numpy array because everything is ordered, so I should just be able to iterate through the directory of painting IDs and prepend the corresponding artist ID to each filename. Once the files are tagged, I can use bash to sort them into 9 subdirectories, 1 for each artist.
```
#iterate through the files
idIndex = 0 #keep track of where we are in the artist name array
directory = 'data/painter-by-numbers/train/' #needs full directory
for filename in os.listdir(path+'train/'):
"""
tag each file w/ artist ID--NOTE: if adding to end of name, use filename[:-4] so we don't add characters after the .jpg
Adding the artist ID first for easier recognition with bash (check beginning + wildcard character)
Since adding artist ID before filename we don't need to worry about the .jpg part
"""
fullPath = os.path.join(directory, filename)
newFileName = artist_names[idIndex] + '_' + filename
finalFile = os.path.join(directory, newFileName)
os.rename(fullPath, finalFile)
idIndex += 1
current_directory = os.getcwd()
LESSON_HOME_DIR = current_directory
DATA_HOME_DIR = current_directory+'/data'+'/painter-by-numbers'
#Allow relative imports to directories above lesson1/
sys.path.insert(1, os.path.join(sys.path[0], '..'))
%cd $DATA_HOME_DIR
%mkdir valid
%mkdir results
%mkdir -p sample/train
%mkdir -p sample/test
%mkdir -p sample/valid
%mkdir -p sample/results
%mkdir -p test/unknown
%cd $DATA_HOME_DIR/train
g = glob('*.jpg')
shuffle = np.random.permutation(g)
#the train dataset has 79,434 images, so we will take ~20% for our validation dataset
for i in range(15800): os.rename(shuffle[i], DATA_HOME_DIR+'/valid/'+shuffle[i])
from shutil import copyfile
g = glob('*.jpg')
shuffle = np.random.permutation(g)
#now we add photos for our sample dataset
for i in range(1200): os.rename(shuffle[i], DATA_HOME_DIR+'/sample/train/'+shuffle[i])
%cd $DATA_HOME_DIR/valid
g = glob('*.jpg')
shuffle = np.random.permutation(g)
#add photos to sample valid batch
for i in range(300): copyfile(shuffle[i], DATA_HOME_DIR+'/sample/valid/'+shuffle[i])
```
The code below creates a vector of all the $unique$ artist names, iterates through them and makes appropriate directories in the train folder.
```
directory = 'data/painter-by-numbers/'
name = 'bob/'
newDir = os.path.join(directory, name)
os.makedirs(newDir)
#TODO: perhaps make this into a function, you will need to make and move tagged files onto directories in sample/train,
# sample/valid, train, and valid
directory = 'data/painter-by-numbers/train/'
def createArtistDirs(directory):
artistIdVector = np.unique(artist_names)
for name in artistIdVector:
newDirectory = os.path.join(directory, name)
finalDir = newDirectory + '/'
if not os.path.exists(finalDir):
os.makedirs(finalDir)
createArtistDirs('data/painter-by-numbers/train/')
createArtistDirs('data/painter-by-numbers/valid/')
createArtistDirs('data/painter-by-numbers/sample/train/')
createArtistDirs('data/painter-by-numbers/sample/valid/')
```
Next, we want to move the tagged files into appropriate directories that share the first $32$ characters with the filename.
```
import shutil
directory = 'data/painter-by-numbers/train/'
def moveFilesToDir(directory):
for filename in os.listdir(directory):
if '.jpg' in filename:
artistId = filename[:32]
fullDir = os.path.join(directory,filename)
#make the final directory for the jpg
newDir = os.path.join(directory, artistId)
newDir = newDir + '/'
shutil.copy(fullDir, newDir)
moveFilesToDir('data/painter-by-numbers/train/')
moveFilesToDir('data/painter-by-numbers/valid/')
moveFilesToDir('data/painter-by-numbers/sample/train/')
moveFilesToDir('data/painter-by-numbers/sample/valid/')
```
Next, we have to go through $each$ directory we just made and put JPGs into, and chop off the first $33$ characters of every filename, which are the artist ID and an underscore.
```
for fileHead in os.listdir(directory):
newDir = os.path.join(directory,fileHead)
for filename in os.listdir(newDir):
fullDir = os.path.join(newDir, filename)
print(fullDir)
newDir = os.path.join(newDir, filename[33:]) #this is the filename w/ the artistID and underscore chopped off.
os.rename(fullDir, newDir)
artistName ='85a0c03bcbe27be6d6166b4f4833b55a'
#TODO: it appears fullDir is not removing the old filenames and is just adding new filenames on top of them--figure out how
#to chop off that filename before moving onto the next file.
newDir = os.path.join(directory, artistName)
for filename in os.listdir(newDir):
fullDir = os.path.join(newDir, filename)
print(fullDir)
newDir = os.path.join(newDir, filename[33:])
os.rename(fullDir,newDir)
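# My addition (not in the original notebook): a possible fix for the TODO above.
# Keeping the source and destination paths in separate variables avoids mutating
# the directory path across loop iterations.
import os

def stripArtistPrefix(artistDir):
    for filename in os.listdir(artistDir):
        srcPath = os.path.join(artistDir, filename)
        dstPath = os.path.join(artistDir, filename[33:])  # drop the 32-char artist ID + '_'
        if srcPath != dstPath and not os.path.isdir(srcPath):
            os.rename(srcPath, dstPath)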
import utils; reload(utils)
from utils import plots
batch_size=64
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
```
After declaring vgg as a Vgg16() object, we're going to do a few things:
1. Get the data in batches (train and validation)
2. Fit vgg to the batches after finetuning
3. Save the weights of vgg (steps 2 and 3 are sketched after the next cell)
```
batches = vgg.get_batches(path+'train', batch_size=batch_size)
```
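Here is a rough sketch of steps 2 and 3 (my addition, not executed here). It assumes the course-style `Vgg16` wrapper exposes `finetune`, `fit` and an underlying Keras `model` attribute, as in the fast.ai lesson utilities; the method names and the weights filename may need adjusting to your copy of `vgg16.py`.
```
# Sketch only -- assumes the fast.ai-style Vgg16 wrapper API (finetune / fit / model).
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)

# Step 2: replace and retrain the final layer for our artist classes.
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)

# Step 3: save the trained weights so training does not have to be repeated.
vgg.model.save_weights(path+'results/ft1.h5')
```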
|
github_jupyter
|
import sys
from utils import *
%matplotlib inline
path = "data/painter-by-numbers/"
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
#verify directory
%pwd
artist_names = np.genfromtxt('train_info.csv', delimiter=',', dtype=None, usecols=(1)).astype(str)
artist_names = artist_names[1:] #remove the first row that contains ['filename', 'artist']
#print(artist_names[0:5])
#iterate through the files
idIndex = 0 #keep track of where we are in the artist name array
directory = 'data/painter-by-numbers/train/' #needs full directory
for filename in os.listdir(path+'train/'):
"""
tag each file w/ artist ID--NOTE: if adding to end of name, use filename[:-4] so we don't add characters after the .jpg
Adding the artist ID first for easier recognition with bash (check beginning + wildcard character)
Since adding artist ID before filename we don't need to worry about the .jpg part
"""
fullPath = os.path.join(directory, filename)
newFileName = artist_names[idIndex] + '_' + filename
finalFile = os.path.join(directory, newFileName)
os.rename(fullPath, finalFile)
idIndex += 1
current_directory = os.getcwd()
LESSON_HOME_DIR = current_directory
DATA_HOME_DIR = current_directory+'/data'+'/painter-by-numbers'
#Allow relative imports to directories above lesson1/
sys.path.insert(1, os.path.join(sys.path[0], '..'))
%cd $DATA_HOME_DIR
%mkdir valid
%mkdir results
%mkdir -p sample/train
%mkdir -p sample/test
%mkdir -p sample/valid
%mkdir -p sample/results
%mkdir -p test/unknown
%cd $DATA_HOME_DIR/train
g = glob('*.jpg')
shuffle = np.random.permutation(g)
#the train dataset has 79,434 images, so we will take ~20% for our validation dataset
for i in range(15800): os.rename(shuffle[i], DATA_HOME_DIR+'/valid/'+shuffle[i])
from shutil import copyfile
g = glob('*.jpg')
shuffle = np.random.permutation(g)
#now we add photos for our sample dataset
for i in range(1200): os.rename(shuffle[i], DATA_HOME_DIR+'/sample/train/'+shuffle[i])
%cd $DATA_HOME_DIR/valid
g = glob('*.jpg')
shuffle = np.random.permutation(g)
#add photos to sample valid batch
for i in range(300): copyfile(shuffle[i], DATA_HOME_DIR+'/sample/valid/'+shuffle[i])
directory = 'data/painter-by-numbers/'
name = 'bob/'
newDir = os.path.join(directory, name)
os.makedirs(newDir)
#TODO: perhaps make this into a function, you will need to make and move tagged files onto directories in sample/train,
# sample/valid, train, and valid
directory = 'data/painter-by-numbers/train/'
def createArtistDirs(directory):
artistIdVector = np.unique(artist_names)
for name in artistIdVector:
newDirectory = os.path.join(directory, name)
finalDir = newDirectory + '/'
if not os.path.exists(finalDir):
os.makedirs(finalDir)
createArtistDirs('data/painter-by-numbers/train/')
createArtistDirs('data/painter-by-numbers/valid/')
createArtistDirs('data/painter-by-numbers/sample/train/')
createArtistDirs('data/painter-by-numbers/sample/valid/')
import shutil
directory = 'data/painter-by-numbers/train/'
def moveFilesToDir(directory):
for filename in os.listdir(directory):
if '.jpg' in filename:
artistId = filename[:32]
fullDir = os.path.join(directory,filename)
#make the final directory for the jpg
newDir = os.path.join(directory, artistId)
newDir = newDir + '/'
shutil.copy(fullDir, newDir)
moveFilesToDir('data/painter-by-numbers/train/')
moveFilesToDir('data/painter-by-numbers/valid/')
moveFilesToDir('data/painter-by-numbers/sample/train/')
moveFilesToDir('data/painter-by-numbers/sample/valid/')
for fileHead in os.listdir(directory):
newDir = os.path.join(directory,fileHead)
for filename in os.listdir(newDir):
fullDir = os.path.join(newDir, filename)
print(fullDir)
newDir = os.path.join(newDir, filename[33:]) #this is the filename w/ the artistID and underscore chopped off.
os.rename(fullDir, newDir)
artistName ='85a0c03bcbe27be6d6166b4f4833b55a'
#TODO: it appears fullDir is not removing the old filenames and is just adding new filenames on top of them--figure out how
#to chop off that filename before moving onto the next file.
newDir = os.path.join(directory, artistName)
for filename in os.listdir(newDir):
fullDir = os.path.join(newDir, filename)
print(fullDir)
newDir = os.path.join(newDir, filename[33:])
os.rename(fullDir,newDir)
import utils; reload(utils)
from utils import plots
batch_size=64
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
batches = vgg.get_batches(path+'train', batch_size=batch_size)
| 0.14777 | 0.616214 |
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import pyLDAvis.gensim
import re
import spacy
import squarify
from collections import Counter
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS
from gensim import corpora
from gensim.models.ldamulticore import LdaMulticore
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.neighbors import NearestNeighbors
```
# Load in the Data
```
df = pd.read_csv('indeed_fswd_4_16_2020.csv', index_col=0)
df.head()
```
# Create the Tokenizer and filter common stopwords
```
STOPWORDS = set(STOPWORDS)
# Tokenizer for text
def tokenizer(text):
new_text = re.sub(r'[^a-zA-Z ^0-9]', '', text)
return [token for token in simple_preprocess(new_text) if token not in STOPWORDS]
# Just a quick view of this in action
tokenizer(df['summary'][0])
# Apply to the summary column
df['tokenized_summary'] = df['summary'].apply(tokenizer)
df['tokenized_summary'][:5]
```
# Vector Representation of the Data
```
# First, join the tokens together in a new column
text = [' '.join(doc) for doc in df['tokenized_summary']]
df['joined_tokens'] = text
# Next, define the vectorizer we will use and fit the model
# We will use TFIDF vectorizer as it is good for a baseline in document models
tfidf = TfidfVectorizer(stop_words='english',
ngram_range=(1,3))
tfidf.fit(text)
# after transforming the vectors, we show the tokens and n_grams in the text
dtm = tfidf.transform(text)
dtm = pd.DataFrame(dtm.todense(), columns=tfidf.get_feature_names())
dtm
# Next, we will fit a NearestNeighbors model on the data
# When we do this, we will be able to query with a dummy job description
# to find out which job listings closely match our string.
# Parameters:
# - n_neighbors (the number of closely-related searches)
# - algorithm (how it compiles the data)
nn = NearestNeighbors(n_neighbors=15, algorithm='kd_tree')
nn.fit(dtm)
# Write resume with which to query
fake_resume = ['''
Full Stack Developer with a track record of creating effective programs and projects quickly, without sacrificing quality or client needs.
Lifelong learner committed to staying current on new technologies.
Team player willing to take the lead on executing tasks and experimenting with new ideas.
''']
new = tfidf.transform(fake_resume)
new
nn.kneighbors(new.todense())[1][0]
for i in nn.kneighbors(new.todense())[1][0]:
print(df['company_name'][i], '\n')
```
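The `[1][0]` indexing above picks the indices array out of the `(distances, indices)` pair returned by `kneighbors`. A slightly more explicit version of the same query (my addition, using the objects defined above) also shows the distances:
```
# Sketch: unpack the (distances, indices) pair returned by kneighbors explicitly.
distances, indices = nn.kneighbors(new.todense())
for dist, idx in zip(distances[0], indices[0]):
    print(round(dist, 3), df['company_name'][idx])
```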
|
github_jupyter
|
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import pyLDAvis.gensim
import re
import spacy
import squarify
from collections import Counter
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS
from gensim import corpora
from gensim.models.ldamulticore import LdaMulticore
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.neighbors import NearestNeighbors
df = pd.read_csv('indeed_fswd_4_16_2020.csv', index_col=0)
df.head()
STOPWORDS = set(STOPWORDS)
# Tokenizer for text
def tokenizer(text):
new_text = re.sub(r'[^a-zA-Z ^0-9]', '', text)
return [token for token in simple_preprocess(new_text) if token not in STOPWORDS]
# Just a quick view of this in action
tokenizer(df['summary'][0])
# Apply to the summary column
df['tokenized_summary'] = df['summary'].apply(tokenizer)
df['tokenized_summary'][:5]
# First, join the tokens together in a new column
text = [' '.join(doc) for doc in df['tokenized_summary']]
df['joined_tokens'] = text
# Next, define the vectorizer we will use and fit the model
# We will use TFIDF vectorizer as it is good for a baseline in document models
tfidf = TfidfVectorizer(stop_words='english',
ngram_range=(1,3))
tfidf.fit(text)
# after transforming the vectors, we show the tokens and n_grams in the text
dtm = tfidf.transform(text)
dtm = pd.DataFrame(dtm.todense(), columns=tfidf.get_feature_names())
dtm
# Next, we will fit a NearestNeighbors model on the data
# When we do this, we will be able to query with a dummy job description
# to find out which job listings closely match our string.
# Parameters:
# - n_neighbors (the number of closely-related searches)
# - algorithm (how it compiles the data)
nn = NearestNeighbors(n_neighbors=15, algorithm='kd_tree')
nn.fit(dtm)
# Write resume with which to query
fake_resume = ['''
Full Stack Developer with a track record of creating effective programs and projects quickly, without sacrificing quality or client needs.
Lifelong learner committed to staying current on new technologies.
Team player willing to take the lead on executing tasks and experimenting with new ideas.
''']
new = tfidf.transform(fake_resume)
new
nn.kneighbors(new.todense())[1][0]
for i in nn.kneighbors(new.todense())[1][0]:
print(df['company_name'][i], '\n')
| 0.521715 | 0.594904 |
In this notebook, we register the previously prepared dataset within an Azure ML Workspace, so that we can use it for remote training on Azure ML Compute.
Before registering the data, we need to make it available in a shared location. For that, we upload it to an Azure Blob Storage using the azure-storage-blob package.
For learning more about registering datasets within Azure ML, please see [here]( https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-register-datasets).
```
!pip install azure-storage-blob
```
Here we upload our dataset from a local folder to the default Azure Blob Storage associated with our Azure ML Workspace. For more details, please see [here]( https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python).
You need to replace the values for *account_name*, *account_key*, and *container_name* with the values for your own corresponding resources.
You can find those values by logging into your [Azure ML studio environment](https://ml.azure.com) and then clicking on *Datastores* on the left menu. You will find your Storage Account Name and Blob Container Name there. To get the corresponding Storage Account Key, you need to access your Azure ML Workspace through the [Azure Portal](https://ms.portal.azure.com), click on the Storage Account associated with your workspace, and then click on *Access keys* on the left menu. You can use either *key1* or *key2*.
```
from azure.storage.blob import BlockBlobService
account_name = '<your azure storage account name>'
account_key = '<your azure storage account access key>'
block_blob_service = BlockBlobService(account_name=account_name, account_key=account_key)
container_name = '<your azure blob storage container name>'
blob_name = 'data/complaints_dataset/consumer_complaint_data_sample_prepared.csv'
file_path = './data/consumer_complaint_data_sample_prepared.csv'
block_blob_service.create_blob_from_path(container_name=container_name, blob_name=blob_name, file_path=file_path)
```
To be able to register the dataset within Azure ML, we first need to get a reference to the [workspace]( https://docs.microsoft.com/en-us/azure/machine-learning/concept-workspace) we are registering it to.
We use the [Azure ML SDK]( https://docs.microsoft.com/en-us/python/api/overview/azureml-sdk/?view=azure-ml-py) for that. If you don’t have it installed into your development environment, please follow the instructions [here]( https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-environment#local). If you want to run the code on a managed VM instance, which already has the SDK, please see [here]( https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup).
You need to replace the values for *subscription_id*, *resource_group*, and *workspace_name* with the values for your own corresponding resources.
```
from azureml.core.authentication import InteractiveLoginAuthentication
from azureml.core import Workspace
interactive_auth = InteractiveLoginAuthentication()
subscription_id = '<your azure subscription id>'
resource_group = '<your azure ml workspace resource group>'
workspace_name = '<your azure ml workspace name>'
workspace = Workspace(subscription_id=subscription_id, resource_group=resource_group, workspace_name=workspace_name,
auth=interactive_auth)
```
Finally, we register our dataset as a [Dataset]( https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py) object within our Azure ML Workspace.
Notice that we need to have our Azure Storage Account already registered as a [Datastore]( https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py). The default Azure Storage Account associated with our Azure ML Workspace is already registered as a Datastore by default, so we only need to specify its name, which is *workspaceblobstore*.
```
from azureml.core import Datastore, Dataset
datastore = Datastore.get(workspace, 'workspaceblobstore')
datastore_path = [(datastore, 'data/complaints_dataset/consumer_complaint_data_sample_prepared.csv')]
dataset = Dataset.File.from_files(path=datastore_path)
dataset_name = 'Consumer Complaints Dataset'
dataset_description = 'Consumer Complaint Database. Source: https://catalog.data.gov/dataset/consumer-complaint-database'
dataset = dataset.register(workspace=workspace, name=dataset_name, description=dataset_description)
```
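Later on, for example from a remote training script, the registered dataset can be retrieved by name. This is a small sketch (my addition) assuming the same `workspace` and `dataset_name` objects from above; `Dataset.get_by_name` and `FileDataset.download` are part of the azureml-core SDK.
```
from azureml.core import Dataset

# Retrieve the registered FileDataset by name and download it locally for inspection.
registered = Dataset.get_by_name(workspace, name=dataset_name)
local_paths = registered.download(target_path='./data/downloaded', overwrite=True)
print(local_paths)
```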
|
github_jupyter
|
!pip install azure-storage-blob
from azure.storage.blob import BlockBlobService
account_name = '<your azure storage account name>'
account_key = '<your azure storage account access key>'
block_blob_service = BlockBlobService(account_name=account_name, account_key=account_key)
container_name = '<your azure blob storage container name>'
blob_name = 'data/complaints_dataset/consumer_complaint_data_sample_prepared.csv'
file_path = './data/consumer_complaint_data_sample_prepared.csv'
block_blob_service.create_blob_from_path(container_name=container_name, blob_name=blob_name, file_path=file_path)
from azureml.core.authentication import InteractiveLoginAuthentication
from azureml.core import Workspace
interactive_auth = InteractiveLoginAuthentication()
subscription_id = '<your azure subscription id>'
resource_group = '<your azure ml workspace resource group>'
workspace_name = '<your azure ml workspace name>'
workspace = Workspace(subscription_id=subscription_id, resource_group=resource_group, workspace_name=workspace_name,
auth=interactive_auth)
from azureml.core import Datastore, Dataset
datastore = Datastore.get(workspace, 'workspaceblobstore')
datastore_path = [(datastore, 'data/complaints_dataset/consumer_complaint_data_sample_prepared.csv')]
dataset = Dataset.File.from_files(path=datastore_path)
dataset_name = 'Consumer Complaints Dataset'
dataset_description = 'Consumer Complaint Database. Source: https://catalog.data.gov/dataset/consumer-complaint-database'
dataset = dataset.register(workspace=workspace, name=dataset_name, description=dataset_description)
| 0.60054 | 0.972934 |
# BIOS 823
- The time allocated is 2 hours
- This is a **closed book** examination
- Close ALL applications on your laptop
- Start an empty browser with a SINGLE Tab in FULL SCREEN MODE
- You should only have this SINGLE notebook page open in your browser, with NO OTHER TABS or WINDOWS
- You are not allowed any reference material except for the following:
- Cheat sheet (1 letter-sized paper, both sides)
- Built-in help accessible either by `?foo`, `foo?` or `help(foo)`
- ALL necessary imports of Python modules have been done for you.
- **You should not import any additional modules - this includes standard library packages**.
Note that answers will be graded on **correctness**, **efficiency** and **readability**.
<font color=blue>By taking this exam, you acknowledge that you have read the instructions and agree to abide by the Duke Honor Code.</font>
```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
**1**. (10 points)
Warm up exercise.
Find the 5 most common words and their counts in `data/moby.txt`, after removing punctuation, setting to lowercase and splitting by blank space.
```
import string
d = {}
with open('data/moby.txt') as f:
text = f.read()
text = text.translate(str.maketrans('', '', string.punctuation))
for word in text.lower().split():
d[word] = d.get(word, 0) + 1
sorted(d.items(), key=lambda x: -x[1])[:5]
```
**2**. (10 points)
- Assemble the data from `features`, `subjects`, `X`, and `y` into a single `pandas.DataFrame (DF)` called `har`. You should end up with a DF that is 7352 by 562 with `activity` as the first column. Rows and columns should be appropriately labeled.
    - `X` is a matrix where each row is a feature vector
- The columns of X are given in `features`
- Each row of X is a subject given in `subjects`
    - `y` is a code for the type of activity performed by the subject (name the column in the DataFrame `activity`)
- Name the index `subject`
- Display a sample of 5 rows chosen at random without replacement and the first 5 columns.
```
activities = np.loadtxt('data/HAR/activity_labels.txt', dtype='str')
features = np.loadtxt('data/HAR/features.txt', dtype='str')[:, 1]
subjects = np.loadtxt('data/HAR/train/subject_train.txt', dtype='int')
X = np.loadtxt('data/HAR/train/X_train.txt')
y = np.loadtxt('data/HAR/train/y_train.txt', dtype='int')
har = pd.DataFrame(np.c_[X, y], columns=np.r_[features, ['activity']], index=subjects)
har.index.name = 'subject'
har.sample(5).iloc[:, :5]
```
**3**. (10 points)
Using the DF from Question 1, find the average feature value for each subject for all features that have the string `entropy` in it but does NOT end in X, Y or Z. Use method chaining to perform this operation and show a random sample of 5 rows without replacement as a single expression.
```
(
har.
filter(regex='.*entropy.*[^X-Z]$').
groupby('subject').
mean().
sample(5)
)
```
**4**. (10 points)
Write an SQL query against the `har` table to count the number of distinct subjects and the total number of rows for each activity, ordering the results by number of rows for each activity in decreasing order. A simple example of how to run an SQL query using `pandas` is provided.
```
from sqlalchemy import create_engine
engine = create_engine('sqlite:///data/har.db', echo=False)
query = '''
SELECT subject, activity
FROM har
LIMIT 5
'''
pd.read_sql(query, con=engine)
query = '''
SELECT activity, count(DISTINCT subject), count(*)
FROM har
GROUP BY activity
ORDER BY count(*) DESC
LIMIT 5
'''
pd.read_sql(query, con=engine)
```
**5**. (25 points)
- Create a new DF `df` from the `har` DF with all features that include the string `Acc-mean`
- Scale the feature columns so that all features have mean 0 and standard deviation 1
- Use SVD to find the first two principal components
- Plot the first two principal components as a scatter plot colored by the `activity` type of each feature vector
- Plot the 2D t-SNE plot colored in the same way (t-SNE dimension reduction may take 1-2 minutes)
Do not import any other packages apart from the cell below.
```
from scipy.linalg import svd
from sklearn.manifold import TSNE
df = har.filter(regex='Acc-mean')
df = df.transform(lambda x: (x - x.mean())/x.std())
np.allclose(0, df.mean(axis=0))
np.allclose(1, df.std(axis=0))
U, s, Vt = svd(df)
pc = U[:,:2] @ np.diag(s[:2])
plt.scatter(pc[:, 0], pc[:, 1], s=3, c=har['activity'])
plt.axis('square')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('PCA from SVD')
pass
tsne = TSNE(n_components=2)
df_tsne = tsne.fit_transform(df)
plt.scatter(df_tsne[:, 0], df_tsne[:, 1], s=3, c=har['activity'])
plt.axis('square')
plt.xlabel('TSNE1')
plt.ylabel('TSNE2')
plt.title('TSNE Plot')
pass
```
**6**. (25 points)
You are given training and test data and labels using a subset of the HAR data set. Your job is to use these features to classify rows into WALKING UPSTAIRS (code = 2) or WALKING DOWNSTAIRS (code = 3).
- Scale the data to have mean zero and unit standard deviation using `StandardScaler`, taking care to apply the same scaling parameters for the training and test data sets
- Use the LabelEncoder to transform the codes 2 and 3 to 0 and 1 in `y_train` and `y_test`
- Perform L2-regularized (ridge-penalized) logistic regression to classify data as WALKING UPSTAIRS or WALKING DOWNSTAIRS
- Train the model with a Cs value chosen from one of (0.01, 0.1, 1, 10, 100) by 5-fold cross-validation using the training data
- Plot the ROC curve (TPR versus FPR) evaluated on the test data
The necessary classes from `sklearn` are imported for you. Do not use any other `sklearn` classes
```
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_curve
X_train = np.load('data/X_train.npy')
X_test = np.load('data/X_test.npy')
y_train = np.load('data/y_train.npy')
y_test = np.load('data/y_test.npy')
ss = StandardScaler()
le = LabelEncoder()
ss.fit(X_train)
le.fit(y_train)
X_train = ss.transform(X_train)
y_train = le.transform(y_train)
X_test = ss.transform(X_test)
y_test = le.transform(y_test)
clf = LogisticRegressionCV(Cs=(0.01, 0.1, 1, 10, 100), cv=5)
clf.fit(X_train, y_train)
y_pred = clf.decision_function(X_test)
fpr, tpr, thresholds = roc_curve(y_test, y_pred)
plt.step(fpr, tpr, '-')
plt.plot([0,1], [0,1], '--')
plt.axis('square')
plt.title('ROC curve')
plt.xlabel('FPR')
plt.ylabel('TPR')
pass
```
|
github_jupyter
|
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import string
d = {}
with open('data/moby.txt') as f:
text = f.read()
text = text.translate(str.maketrans('', '', string.punctuation))
for word in text.lower().split():
d[word] = d.get(word, 0) + 1
sorted(d.items(), key=lambda x: -x[1])[:5]
activities = np.loadtxt('data/HAR/activity_labels.txt', dtype='str')
features = np.loadtxt('data/HAR/features.txt', dtype='str')[:, 1]
subjects = np.loadtxt('data/HAR/train/subject_train.txt', dtype='int')
X = np.loadtxt('data/HAR/train/X_train.txt')
y = np.loadtxt('data/HAR/train/y_train.txt', dtype='int')
har = pd.DataFrame(np.c_[X, y], columns=np.r_[features, ['activity']], index=subjects)
har.index.name = 'subject'
har.sample(5).iloc[:, :5]
(
har.
filter(regex='.*entropy.*[^X-Z]$').
groupby('subject').
mean().
sample(5)
)
from sqlalchemy import create_engine
engine = create_engine('sqlite:///data/har.db', echo=False)
query = '''
SELECT subject, activity
FROM har
LIMIT 5
'''
pd.read_sql(query, con=engine)
query = '''
SELECT activity, count(DISTINCT subject), count(*)
FROM har
GROUP BY activity
ORDER BY count(*) DESC
LIMIT 5
'''
pd.read_sql(query, con=engine)
from scipy.linalg import svd
from sklearn.manifold import TSNE
df = har.filter(regex='Acc-mean')
df = df.transform(lambda x: (x - x.mean())/x.std())
np.allclose(0, df.mean(axis=0))
np.allclose(1, df.std(axis=0))
U, s, Vt = svd(df)
pc = U[:,:2] @ np.diag(s[:2])
plt.scatter(pc[:, 0], pc[:, 1], s=3, c=har['activity'])
plt.axis('square')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('PCA from SVD')
pass
tsne = TSNE(n_components=2)
df_tsne = tsne.fit_transform(df)
plt.scatter(df_tsne[:, 0], df_tsne[:, 1], s=3, c=har['activity'])
plt.axis('square')
plt.xlabel('TSNE1')
plt.ylabel('TSNE2')
plt.title('TSNE Plot')
pass
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_curve
X_train = np.load('data/X_train.npy')
X_test = np.load('data/X_test.npy')
y_train = np.load('data/y_train.npy')
y_test = np.load('data/y_test.npy')
ss = StandardScaler()
le = LabelEncoder()
ss.fit(X_train)
le.fit(y_train)
X_train = ss.transform(X_train)
y_train = le.transform(y_train)
X_test = ss.transform(X_test)
y_test = le.transform(y_test)
clf = LogisticRegressionCV(Cs=(0.01, 0.1, 1, 10, 100), cv=5)
clf.fit(X_train, y_train)
y_pred = clf.decision_function(X_test)
fpr, tpr, thresholds = roc_curve(y_test, y_pred)
plt.step(fpr, tpr, '-')
plt.plot([0,1], [0,1], '--')
plt.axis('square')
plt.title('ROC curve')
plt.xlabel('FPR')
plt.ylabel('TPR')
pass
| 0.647352 | 0.896659 |
# Creating your first pipeline
This is the notebook version of our [Quickstart](../getting-started/quickstart.md)! Our goal here is to help you get your first practical experience with our tool and give you a brief overview of some basic functionalities of **ZenML**.
In this example, we will create and run a simple pipeline featuring a local CSV dataset and a basic feedforward neural network and run it in our local environment. If you want to run this notebook in an interactive environment, feel free to [run it in a Google Colab](https://colab.research.google.com/github/maiot-io/zenml/blob/main/docs/book/tutorials/simple-classification.ipynb)
## First things first...
You can install **ZenML** through:
```
pip install zenml
```
Once the installation is completed, you can go ahead and create your first **ZenML** repository for your project. As **ZenML** repositories are built on top of Git repositories, you can create yours in a desired empty directory through:
```
git init
zenml init
```
Now, the setup is completed. For the next steps, just make sure that you are executing the code within your **ZenML** repository.
## Creating the pipeline
Once you set everything up, we can start our tutorial. The first step is to create an instance of a pipeline. **ZenML** comes equipped with different types of pipelines, but for this example we will be using the most classic one, namely a `TrainingPipeline`.
While creating your pipeline, you can give it a name and use that name to reference the pipeline later.
```
from zenml.core.pipelines.training_pipeline import TrainingPipeline
training_pipeline = TrainingPipeline(name='QuickstartPipeline')
```
In a **ZenML** `TrainingPipeline`, there is a fixed set of steps representing the processes, which can be found in any machine learning workflow. These steps include:
1. **Split**: responsible for splitting your dataset into smaller datasets such as train, eval, etc.
2. **Transform**: responsible for the preprocessing of your data
3. **Train**: responsible for the model creation and training process
4. **Evaluate**: responsible for the evaluation of your results
## Creating a datasource
However, before we dive into the aforementioned steps, let's briefly talk about our dataset.
For this quickstart, we will be using the *Pima Indians Diabetes Dataset* and on it, we will train a model which will aim to predict whether a person has diabetes based on diagnostic measures.
In order to be able to use this dataset (which is currently in CSV format) in your **ZenML** pipeline, we first need to create a `datasource`. **ZenML** has built-in support for various types of datasources and for this example you can use the `CSVDatasource`. All you need to provide is a `name` for the datasource and the `path` to the CSV file.
```
from zenml.core.datasources.csv_datasource import CSVDatasource
ds = CSVDatasource(name='Pima Indians Diabetes Dataset',
path='gs://zenml_quickstart/diabetes.csv')
```
Once you are through, you will have created a tracked and versioned datasource and you can use this datasource in any pipeline. Go ahead and add it to your pipeline.
```
training_pipeline.add_datasource(ds)
```
## Configuring the split
Now, let us get back to the **four** essential steps where the first step is the **Split**.
For the sake of simplicity in this tutorial, we will be using a completely random `70-30` split into a train and evaluation dataset.
```
from zenml.core.steps.split.random_split import RandomSplit
training_pipeline.add_split(RandomSplit(split_map={'train': 0.7,
'eval': 0.3}))
```
Keep in mind, in a more complicated example, it might be necessary to apply a different splitting strategy. For these cases, you can use the other built-in split configuration **ZenML** offers or even implement your own custom logic into the split step.
## Handling data preprocessing
The next step is to configure the step **Transform**, the data preprocessing.
For this example, we will use the built-in `StandardPreprocesser`. It handles the feature selection and has sane preprocessing defaults for each data type, such as standardization for numerical features or vocabularization for non-numerical features.
In order to use it, you need to provide a list of feature names and a list of label names. Moreover, if you do not want it to use the default transformation for a feature, or you want to overwrite it with a different preprocessing method, this is also possible, as we do in this example.
```
from zenml.core.steps.preprocesser.standard_preprocesser.standard_preprocesser import StandardPreprocesser
training_pipeline.add_preprocesser(
StandardPreprocesser(
features=['times_pregnant',
'pgc',
'dbp',
'tst',
'insulin',
'bmi',
'pedigree',
'age'],
labels=['has_diabetes'],
overwrite={'has_diabetes': {
'transform': [{'method': 'no_transform',
'parameters': {}}]}}))
```
Much like the splitting process, you might run into cases where the capabilities of the `StandardPreprocesser` do not match the task at hand. In this case, you can create your own custom preprocessing step, but we will go into that topic in a different tutorial.
## Training your model
As the data is now ready, we can move onto the step **Train**, the model creation and training.
For this quickstart, we will be using the simple built-in `FeedForwardTrainer` step and as the name suggests, it represents a feedforward neural network, which is configurable through a set of variables.
```
from zenml.core.steps.trainer.tensorflow_trainers.tf_ff_trainer import FeedForwardTrainer
training_pipeline.add_trainer(FeedForwardTrainer(loss='binary_crossentropy',
last_activation='sigmoid',
output_units=1,
metrics=['accuracy'],
epochs=20))
```
Of course, not every machine learning problem is solvable by a simple feedforward neural network; most of the time, a model tailored to the problem at hand is required. That is why we created an interface where users can implement their own custom models and integrate them into a trainer step. However, this approach is not within the scope of this tutorial and you can learn more about it in our docs and the upcoming tutorials.
## Evaluation of the results
The last step to configure in our pipeline is the **Evaluate**.
For this example, we will be using the built-in `TFMAEvaluator` which uses [Tensorflow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) to compute metrics based on your results (possibly within slices).
```
from zenml.core.steps.evaluator.tfma_evaluator import TFMAEvaluator
training_pipeline.add_evaluator(
TFMAEvaluator(slices=[['has_diabetes']],
metrics={'has_diabetes': ['binary_crossentropy',
'binary_accuracy']}))
```
## Running your pipeline
Now that everything is set, go ahead and run the pipeline, and with it all of the steps you just configured.
```
training_pipeline.run()
```
With the execution of the pipeline, you should see the logs informing you about each step along the way. In more detail, you should first see that your dataset is ingested through the component *DataGen* and then split by the component *SplitGen*. Afterwards, data preprocessing will take place with the component *Transform* and lead into the main training component *Trainer*. Ultimately, the results will be evaluated by the component *Evaluator*.
## Post-training functionalities
Once the training pipeline is finished, you can check the outputs of your pipeline in different ways.
### Dataset
As the data is now ingested, you can go ahead and take a peek into your dataset. You can achieve this by simply getting the datasources registered to your repository and calling the method `sample_data`.
```
from zenml.core.repo.repo import Repository
repo = Repository.get_instance()
datasources = repo.get_datasources()
datasources[0].sample_data()
```
### Statistics
Furthermore, you can check the statistics which are yielded by your datasource and split configuration through the method `view_statistics`. By using the `magic` flag, we can even achieve this right here in this notebook.
```
training_pipeline.view_statistics(magic=True)
```
### Evaluate
On the other hand, if you want to evaluate the results of your training process, you can use the `evaluate` method of your pipeline.
Much like the `view_statistics`, if you execute `evaluate` with the `magic` flag, it will help you continue in this notebook and generate two new cells, each set up with a different evaluation tool:
1. **Tensorboard** can help you to understand the behaviour of your model during the training session
2. **TFMA** or **tensorflow_model_analysis** can help you assess your already trained model based on given metrics and slices on the evaluation dataset
*Note*: if you want to see the sliced results, uncomment the last line and adjust it according to the slicing column. In the end it should look like this:
```
tfma.view.render_slicing_metrics(evaluation, slicing_column='has_diabetes')
```
```
training_pipeline.evaluate(magic=True)
```
... and that is it for the quickstart. If you came here without a hiccup, you must have successfully installed ZenML, set up a ZenML repo, registered a new datasource, configured a training pipeline, executed it locally and evaluated the results. And this is just the tip of the iceberg of the capabilities of **ZenML**.
However, if you had a hiccup or you have some suggestions/questions regarding our framework, you can always check our [docs](https://docs.zenml.io/) or our [github](https://github.com/maiot-io/zenml) or even better join us on our [Slack](https://zenml.io/slack-invite) channel.
Cheers!
|
github_jupyter
|
pip install zenml
git init
zenml init
from zenml.core.pipelines.training_pipeline import TrainingPipeline
training_pipeline = TrainingPipeline(name='QuickstartPipeline')
from zenml.core.datasources.csv_datasource import CSVDatasource
ds = CSVDatasource(name='Pima Indians Diabetes Dataset',
path='gs://zenml_quickstart/diabetes.csv')
training_pipeline.add_datasource(ds)
from zenml.core.steps.split.random_split import RandomSplit
training_pipeline.add_split(RandomSplit(split_map={'train': 0.7,
'eval': 0.3}))
from zenml.core.steps.preprocesser.standard_preprocesser.standard_preprocesser import StandardPreprocesser
training_pipeline.add_preprocesser(
StandardPreprocesser(
features=['times_pregnant',
'pgc',
'dbp',
'tst',
'insulin',
'bmi',
'pedigree',
'age'],
labels=['has_diabetes'],
overwrite={'has_diabetes': {
'transform': [{'method': 'no_transform',
'parameters': {}}]}}))
from zenml.core.steps.trainer.tensorflow_trainers.tf_ff_trainer import FeedForwardTrainer
training_pipeline.add_trainer(FeedForwardTrainer(loss='binary_crossentropy',
last_activation='sigmoid',
output_units=1,
metrics=['accuracy'],
epochs=20))
from zenml.core.steps.evaluator.tfma_evaluator import TFMAEvaluator
training_pipeline.add_evaluator(
TFMAEvaluator(slices=[['has_diabetes']],
metrics={'has_diabetes': ['binary_crossentropy',
'binary_accuracy']}))
training_pipeline.run()
from zenml.core.repo.repo import Repository
repo = Repository.get_instance()
datasources = repo.get_datasources()
datasources[0].sample_data()
training_pipeline.view_statistics(magic=True)
tfma.view.render_slicing_metrics(evaluation, slicing_column='has_diabetes')
training_pipeline.evaluate(magic=True)
| 0.691393 | 0.988492 |
```
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import linregress
from matplotlib import rcParams
rcParams['figure.figsize'] = [18, 12]
rcParams['font.size'] = 22
data = [l.strip().split() for l in open('process/data.txt', 'r').readlines()]
data_linear = np.asarray([[float(d[0].split("_")[0]), float(d[1])] for d in data if 'interleave' not in d[0] and 'random' not in d[0]])
data_interleave = np.asarray([[float(d[0].split("_")[0]), float(d[1])] for d in data if 'interleave' in d[0]])
data_random = np.asarray([[float(d[0].split("_")[0]), float(d[1])] for d in data if 'random' in d[0]])
data_linear = data_linear[:, 0], data_linear[:, 1]
data_interleave = data_interleave[:, 0], data_interleave[:, 1]
data_random = data_random[:, 0], data_random[:, 1]
data_linear = data_linear[0][np.argsort(data_linear)[0]], data_linear[1][np.argsort(data_linear)[0]]
data_interleave = data_interleave[0][np.argsort(data_interleave)[0]], data_interleave[1][np.argsort(data_interleave)[0]]
data_random = data_random[0][np.argsort(data_random)[0]], data_random[1][np.argsort(data_random)[0]]
plt.plot(data_linear[0], data_linear[1], linestyle='None', marker='o', color='black', markersize=8, label='Linear')
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
plt.plot(data_interleave[0], data_interleave[1], linestyle='None', marker='^', color='gray', markersize=8, label='Interleave')
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
plt.plot(data_random[0], data_random[1], linestyle='None', marker='+', color='darkgray', markersize=8, label='Random')
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
plt.plot(data_linear[0], data_linear[1], linestyle='None', marker='o', color='black', markersize=10, label='Linear')
plt.plot(data_interleave[0], data_interleave[1], linestyle='None', marker='^', color='gray', markersize=10, label='Interleave')
plt.plot(data_random[0], data_random[1], linestyle='None', marker='s', color='blue', markersize=10, label='Random')
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
plt.plot(np.log(data_linear[0]), np.log(data_linear[1]), linestyle='None', marker='o', color='black', markersize=10, label='Linear')
plt.plot(np.log(data_interleave[0]), np.log(data_interleave[1]), linestyle='None', marker='^', color='gray', markersize=10, label='Interleave')
plt.plot(np.log(data_random[0]), np.log(data_random[1]), linestyle='None', marker='+', color='blue', markersize=10, label='Random')
# plt.xticks(np.log(data_linear[0]), ["{:0.0f}".format(d) for d in data_linear[0]])
# plt.yticks(np.log(data_linear[1]), ["{:0.0f}".format(d) for d in data_linear[1]])
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
plt.loglog(data_linear[0], data_linear[1], linestyle='None', marker='o', color='black', markersize=10, label='Linear')
plt.loglog(data_interleave[0], data_interleave[1], linestyle='None', marker='^', color='gray', markersize=10, label='Interleave')
plt.loglog(data_random[0], data_random[1], linestyle='None', marker='+', color='blue', markersize=10, label='Random')
# plt.xticks(np.log(data_linear[0]), ["{:0.0f}".format(d) for d in data_linear[0]])
# plt.yticks(np.log(data_linear[1]), ["{:0.0f}".format(d) for d in data_linear[1]])
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
plt.plot(np.log(data_linear[0][:-3]), np.log(data_linear[1][:-3]), linestyle='None', marker='o', color='black', markersize=10, label='Linear')
plt.plot(np.log(data_interleave[0][:-3]), np.log(data_interleave[1][:-3]), linestyle='None', marker='^', color='gray', markersize=10, label='Interleave')
plt.plot(np.log(data_random[0][:-3]), np.log(data_random[1][:-3]), linestyle='None', marker='+', color='blue', markersize=10, label='Random')
# plt.xticks(np.log(data_linear[0][:-3]), ["{:0.0f}".format(d) for d in data_linear[0][:-3]])
# plt.yticks(np.log(data_linear[1][:-3]), ["{:0.0f}".format(d) for d in data_linear[1][:-3]])
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
data_linear = data_linear[0][np.argsort(data_linear)[0]], data_linear[1][np.argsort(data_linear)[0]]
result = linregress(np.log10(data_linear[0][:-3]), np.log10(data_linear[1][:-3]))
data_linear_slope = result.slope
data_linear_inter = result.intercept
print(data_linear_slope, data_linear_inter)
result = linregress(np.log10(data_interleave[0][:-3]), np.log10(data_interleave[1][:-3]))
data_interleave_slope = result.slope
data_interleave_inter = result.intercept
print(data_interleave_slope, data_interleave_inter)
result = linregress(np.log10(data_random[0][:-3]), np.log10(data_random[1][:-3]))
data_random_slope = result.slope
data_random_inter = result.intercept
print(data_random_slope, data_random_inter)
x_line = np.linspace(1, 120, 1000)
plt.plot(np.log10(data_linear[0]), np.log10(data_linear[1]), linestyle='None', marker='o', color='black', markersize=10, label='Linear')
plt.plot(np.log10(x_line), data_linear_slope * np.log10(x_line) + data_linear_inter, linestyle='--', color='k', linewidth=3)
plt.plot(np.log10(data_interleave[0]), np.log10(data_interleave[1]), linestyle='None', marker='^', color='gray', markersize=10, label='Interleave')
plt.plot(np.log10(x_line), data_interleave_slope * np.log10(x_line) + data_interleave_inter, linestyle='-.', color='gray', linewidth=3)
plt.plot(np.log10(data_random[0]), np.log10(data_random[1]), linestyle='None', marker='+', color='blue', markersize=10, label='Random')
plt.plot(np.log10(x_line), data_random_slope * np.log10(x_line) + data_random_inter, linestyle=':', color='blue', linewidth=3)
# plt.xticks(np.log10(data_linear[0]), ["{:0.0f}".format(d) for d in data_linear[0]])
# plt.yticks(np.log10(data_linear[1]), ["{:0.0f}".format(d) for d in data_linear[1]])
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
x_line = np.linspace(1, 120, 1000)
plt.loglog(data_linear[0], data_linear[1], linestyle='None', marker='o', color='black', markersize=10, label='Linear')
# plt.plot(np.log10(x_line), data_linear_slope * np.log10(x_line) + data_linear_inter, linestyle='--', color='k', linewidth=3)
plt.loglog(data_interleave[0], data_interleave[1], linestyle='None', marker='^', color='gray', markersize=10, label='Interleave')
# plt.plot(np.log10(x_line), data_interleave_slope * np.log10(x_line) + data_interleave_inter, linestyle='-.', color='gray', linewidth=3)
plt.loglog(data_random[0], data_random[1], linestyle='None', marker='+', color='blue', markersize=10, label='Random')
# plt.plot(np.log10(x_line), data_random_slope * np.log10(x_line) + data_random_inter, linestyle=':', color='blue', linewidth=3)
# plt.xticks(np.log10(data_linear[0]), ["{:0.0f}".format(d) for d in data_linear[0]])
# plt.yticks(np.log10(data_linear[1]), ["{:0.0f}".format(d) for d in data_linear[1]])
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
```
|
github_jupyter
|
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import linregress
from matplotlib import rcParams
rcParams['figure.figsize'] = [18, 12]
rcParams['font.size'] = 22
data = [l.strip().split() for l in open('process/data.txt', 'r').readlines()]
data_linear = np.asarray([[float(d[0].split("_")[0]), float(d[1])] for d in data if 'interleave' not in d[0] and 'random' not in d[0]])
data_interleave = np.asarray([[float(d[0].split("_")[0]), float(d[1])] for d in data if 'interleave' in d[0]])
data_random = np.asarray([[float(d[0].split("_")[0]), float(d[1])] for d in data if 'random' in d[0]])
data_linear = data_linear[:, 0], data_linear[:, 1]
data_interleave = data_interleave[:, 0], data_interleave[:, 1]
data_random = data_random[:, 0], data_random[:, 1]
data_linear = data_linear[0][np.argsort(data_linear)[0]], data_linear[1][np.argsort(data_linear)[0]]
data_interleave = data_interleave[0][np.argsort(data_interleave)[0]], data_interleave[1][np.argsort(data_interleave)[0]]
data_random = data_random[0][np.argsort(data_random)[0]], data_random[1][np.argsort(data_random)[0]]
plt.plot(data_linear[0], data_linear[1], linestyle='None', marker='o', color='black', markersize=8, label='Linear')
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
plt.plot(data_interleave[0], data_interleave[1], linestyle='None', marker='^', color='gray', markersize=8, label='Interleave')
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
plt.plot(data_random[0], data_random[1], linestyle='None', marker='+', color='darkgray', markersize=8, label='Random')
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
plt.plot(data_linear[0], data_linear[1], linestyle='None', marker='o', color='black', markersize=10, label='Linear')
plt.plot(data_interleave[0], data_interleave[1], linestyle='None', marker='^', color='gray', markersize=10, label='Interleave')
plt.plot(data_random[0], data_random[1], linestyle='None', marker='s', color='blue', markersize=10, label='Random')
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
plt.plot(np.log(data_linear[0]), np.log(data_linear[1]), linestyle='None', marker='o', color='black', markersize=10, label='Linear')
plt.plot(np.log(data_interleave[0]), np.log(data_interleave[1]), linestyle='None', marker='^', color='gray', markersize=10, label='Interleave')
plt.plot(np.log(data_random[0]), np.log(data_random[1]), linestyle='None', marker='+', color='blue', markersize=10, label='Random')
# plt.xticks(np.log(data_linear[0]), ["{:0.0f}".format(d) for d in data_linear[0]])
# plt.yticks(np.log(data_linear[1]), ["{:0.0f}".format(d) for d in data_linear[1]])
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
plt.loglog(data_linear[0], data_linear[1], linestyle='None', marker='o', color='black', markersize=10, label='Linear')
plt.loglog(data_interleave[0], data_interleave[1], linestyle='None', marker='^', color='gray', markersize=10, label='Interleave')
plt.loglog(data_random[0], data_random[1], linestyle='None', marker='+', color='blue', markersize=10, label='Random')
# plt.xticks(np.log(data_linear[0]), ["{:0.0f}".format(d) for d in data_linear[0]])
# plt.yticks(np.log(data_linear[1]), ["{:0.0f}".format(d) for d in data_linear[1]])
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
plt.plot(np.log(data_linear[0][:-3]), np.log(data_linear[1][:-3]), linestyle='None', marker='o', color='black', markersize=10, label='Linear')
plt.plot(np.log(data_interleave[0][:-3]), np.log(data_interleave[1][:-3]), linestyle='None', marker='^', color='gray', markersize=10, label='Interleave')
plt.plot(np.log(data_random[0][:-3]), np.log(data_random[1][:-3]), linestyle='None', marker='+', color='blue', markersize=10, label='Random')
# plt.xticks(np.log(data_linear[0][:-3]), ["{:0.0f}".format(d) for d in data_linear[0][:-3]])
# plt.yticks(np.log(data_linear[1][:-3]), ["{:0.0f}".format(d) for d in data_linear[1][:-3]])
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
data_linear = data_linear[0][np.argsort(data_linear)[0]], data_linear[1][np.argsort(data_linear)[0]]
result = linregress(np.log10(data_linear[0][:-3]), np.log10(data_linear[1][:-3]))
data_linear_slope = result.slope
data_linear_inter = result.intercept
print(data_linear_slope, data_linear_inter)
result = linregress(np.log10(data_interleave[0][:-3]), np.log10(data_interleave[1][:-3]))
data_interleave_slope = result.slope
data_interleave_inter = result.intercept
print(data_interleave_slope, data_interleave_inter)
result = linregress(np.log10(data_random[0][:-3]), np.log10(data_random[1][:-3]))
data_random_slope = result.slope
data_random_inter = result.intercept
print(data_random_slope, data_random_inter)
x_line = np.linspace(1, 120, 1000)
plt.plot(np.log10(data_linear[0]), np.log10(data_linear[1]), linestyle='None', marker='o', color='black', markersize=10, label='Linear')
plt.plot(np.log10(x_line), data_linear_slope * np.log10(x_line) + data_linear_inter, linestyle='--', color='k', linewidth=3)
plt.plot(np.log10(data_interleave[0]), np.log10(data_interleave[1]), linestyle='None', marker='^', color='gray', markersize=10, label='Interleave')
plt.plot(np.log10(x_line), data_interleave_slope * np.log10(x_line) + data_interleave_inter, linestyle='-.', color='gray', linewidth=3)
plt.plot(np.log10(data_random[0]), np.log10(data_random[1]), linestyle='None', marker='+', color='blue', markersize=10, label='Random')
plt.plot(np.log10(x_line), data_random_slope * np.log10(x_line) + data_random_inter, linestyle=':', color='blue', linewidth=3)
# plt.xticks(np.log10(data_linear[0]), ["{:0.0f}".format(d) for d in data_linear[0]])
# plt.yticks(np.log10(data_linear[1]), ["{:0.0f}".format(d) for d in data_linear[1]])
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
x_line = np.linspace(1, 120, 1000)
plt.loglog(data_linear[0], data_linear[1], linestyle='None', marker='o', color='black', markersize=10, label='Linear')
# plt.plot(np.log10(x_line), data_linear_slope * np.log10(x_line) + data_linear_inter, linestyle='--', color='k', linewidth=3)
plt.loglog(data_interleave[0], data_interleave[1], linestyle='None', marker='^', color='gray', markersize=10, label='Interleave')
# plt.plot(np.log10(x_line), data_interleave_slope * np.log10(x_line) + data_interleave_inter, linestyle='-.', color='gray', linewidth=3)
plt.loglog(data_random[0], data_random[1], linestyle='None', marker='+', color='blue', markersize=10, label='Random')
# plt.plot(np.log10(x_line), data_random_slope * np.log10(x_line) + data_random_inter, linestyle=':', color='blue', linewidth=3)
# plt.xticks(np.log10(data_linear[0]), ["{:0.0f}".format(d) for d in data_linear[0]])
# plt.yticks(np.log10(data_linear[1]), ["{:0.0f}".format(d) for d in data_linear[1]])
plt.xlabel("Number of Nodes")
plt.ylabel("Total Time (s)")
plt.legend()
plt.show()
| 0.433742 | 0.751055 |
```
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras import Model
import matplotlib.pyplot as plt
import numpy as np
# # Download a dataset
# (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# # Batch and shuffle the data
# train_ds = tf.data.Dataset.from_tensor_slices(
# (x_train.astype('float32') / 255, y_train)).shuffle(1024).batch(32)
# test_ds = tf.data.Dataset.from_tensor_slices(
# (x_test.astype('float32') / 255, y_test)).batch(32)
print(tf.__version__)
from nus_wide_data_util import get_labeled_data, get_top_k_labels
class_num = 5
#top_k = get_top_k_labels('', top_k=class_num)
top_k = ['buildings', 'grass', 'animal', 'water', 'person']
print(top_k)
train_X_image, train_X_text, train_Y = get_labeled_data('', top_k, 60000, 'Train')
print(type(train_X_image), type(train_X_text), type(train_Y))
test_X_image, test_X_text, test_Y = get_labeled_data('', top_k, 40000, 'Test')
print(type(test_X_image), type(test_X_text), type(test_Y))
x_train, x_test, y_train, y_test = (np.array(train_X_image).astype('float32'), np.array(train_X_text).astype('float32')), \
(np.array(test_X_image).astype('float32'), np.array(test_X_text).astype('float32')), \
np.array(train_Y).astype('float32'), np.array(test_Y).astype('float32')
# Batch and shuffle the data
train_ds = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(1024).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices(
(x_test, y_test)).batch(32)
np.sum(y_test, axis=0)
class VFLPassiveModel(Model):
def __init__(self):
super(VFLPassiveModel, self).__init__()
self.flatten = Flatten()
self.d1 = Dense(32, name="dense1", activation='relu')
def call(self, x):
x = self.flatten(x)
return self.d1(x)
import numpy as np
class VFLActiveModelWithOneLayer(Model):
def __init__(self):
super(VFLActiveModelWithOneLayer, self).__init__()
self.concatenated = tf.keras.layers.Concatenate()
self.d1 = Dense(32, name="dense1", activation='relu')
self.out = Dense(class_num, name="out", activation='softmax')
def call(self, x):
x = self.concatenated(x)
x = self.d1(x)
return self.out(x)
class VFLActiveModelWithTwoLayer(Model):
def __init__(self):
super(VFLActiveModelWithTwoLayer, self).__init__()
self.concatenated = tf.keras.layers.Concatenate()
self.d1 = Dense(32, name="dense1", activation='relu')
self.d2 = Dense(32, name="dense2", activation='relu')
self.out = Dense(class_num, name="out", activation='softmax')
def call(self, x):
x = self.concatenated(x)
x = self.d1(x)
x = self.d2(x)
return self.out(x)
class VFLActiveModelWithThreeLayer(Model):
def __init__(self):
super(VFLActiveModelWithThreeLayer, self).__init__()
self.concatenated = tf.keras.layers.Concatenate()
self.d1 = Dense(32, name="dense1", activation='relu')
self.d2 = Dense(32, name="dense2", activation='relu')
self.d3 = Dense(32, name="dense3", activation='relu')
self.out = Dense(class_num, name="out", activation='softmax')
def call(self, x):
x = self.concatenated(x)
x = self.d1(x)
x = self.d2(x)
x = self.d3(x)
return self.out(x)
class VFLActiveModelWithFourLayer(Model):
def __init__(self):
super(VFLActiveModelWithFourLayer, self).__init__()
self.concatenated = tf.keras.layers.Concatenate()
self.d1 = Dense(32, name="dense1", activation='relu')
self.d2 = Dense(32, name="dense2", activation='relu')
self.d3 = Dense(32, name="dense3", activation='relu')
self.d4 = Dense(32, name="dense4", activation='relu')
self.out = Dense(class_num, name="out", activation='softmax')
def call(self, x):
x = self.concatenated(x)
x = self.d1(x)
x = self.d2(x)
x = self.d3(x)
x = self.d4(x)
return self.out(x)
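# A minimal sketch (not part of the original notebook) of how the models above compose in
# this two-party vertical FL setup: each passive party embeds its own modality, and the
# active party concatenates the embeddings and predicts one of class_num labels.
# The _demo_* names are illustrative only; shapes assume the NUS-WIDE arrays loaded above.
_demo_img, _demo_txt = VFLPassiveModel(), VFLPassiveModel()
_demo_active = VFLActiveModelWithOneLayer()
_demo_out = _demo_active([_demo_img(x_train[0][:2]), _demo_txt(x_train[1][:2])])
print(_demo_out.shape)  # expected: (2, class_num)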
import numpy as np
def get_poisoned_matrix(passive_matrix, need_poison, poison_grad, amplify_rate):
#print(passive_matrix)
poisoned_matrix = passive_matrix.numpy()
poisoned_matrix[need_poison] = poison_grad*amplify_rate
poisoned_matrix = tf.convert_to_tensor(poisoned_matrix, tf.float32, name='poisoned_matrix')
return poisoned_matrix
def copy_grad(passive_matrix, need_copy):
poison_grad = passive_matrix[need_copy].numpy()
return poison_grad[0]
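# A toy illustration (added for clarity, not from the original notebook) of the
# gradient-replacement idea behind the two helpers above: take the gradient row of the
# attacker's target sample and overwrite the rows of trigger-carrying samples with it.
_toy_grads = tf.constant([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]])
_toy_need_poison = np.array([True, False, False, True])  # samples carrying the trigger
_toy_poison_grad = _toy_grads.numpy()[2]                 # target sample's gradient row (what copy_grad returns)
print(get_poisoned_matrix(_toy_grads, _toy_need_poison, _toy_poison_grad, 1.0))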
import copy
(image_test, text_test) = x_test
image_backdoor = image_test[text_test[:,-1]==1]
text_backdoor = text_test[text_test[:,-1]==1]
y_backdoor = copy.deepcopy(y_test[text_test[:,-1]==1])
np.sum(y_backdoor, axis=0)
print(np.sum(x_train[1][:,-1]))
print(np.sum(x_test[1][:,-1]))
training_mode_list = ['backdoor', 'normal', 'backdoor_with_laplace_noise_0.1', 'backdoor_with_laplace_noise_0.01'\
, 'backdoor_with_laplace_noise_0.001', 'backdoor_with_laplace_noise_0.0001'\
, 'backdoor_with_gaussian_noise_0.1', 'backdoor_with_gaussian_noise_0.01'\
, 'backdoor_with_gaussian_noise_0.001', 'backdoor_with_gaussian_noise_0.0001'\
, 'backdoor_with_gradient_sparsification_95', 'backdoor_with_gradient_sparsification_99'\
, 'backdoor_with_gradient_sparsification_99.5', 'backdoor_with_gradient_sparsification_99.9'\
, 'backdoor_with_one_hidden_layer', 'backdoor_with_two_hidden_layer'\
, 'backdoor_with_three_hidden_layer', 'backdoor_with_four_hidden_layer']
result_list = []
for indx in range(len(training_mode_list)):
result_list.append([])
from sklearn import metrics
loss_object = tf.keras.losses.CategoricalCrossentropy()
optimizer = tf.keras.optimizers.SGD()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.CategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.CategoricalAccuracy(name='test_accuracy')
test_label_accuracy = tf.keras.metrics.CategoricalAccuracy(name='test_label_accuracy')
backdoor_loss = tf.keras.metrics.Mean(name='backdoor_loss')
backdoor_accuracy = tf.keras.metrics.CategoricalAccuracy(name='backdoor_accuracy')
number_of_times = 1
EPOCHS = 50
sample_id_need_copy = 369
text_feat_need_copy = copy.deepcopy(x_train[1][sample_id_need_copy])
y_backdoor[:] = y_train[sample_id_need_copy]
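# Attack setup, in brief: sample 369 is the attacker's target, and every sample whose last
# text feature equals 1 carries the trigger. The attacker replays a (scaled) copy of the
# target's returned gradient for triggered samples, aiming to have them predicted as the
# target's label, which y_backdoor now holds for all triggered test samples.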
mode_need_train_list = ['normal']
# mode_need_train_list = training_mode_list
for indx in range(len(training_mode_list)):
if training_mode_list[indx] in mode_need_train_list:
result_list[indx] = []
for t in range(number_of_times):
for indx in range(len(training_mode_list)):
training_mode = training_mode_list[indx]
if training_mode not in mode_need_train_list:
continue
passive_model_image = VFLPassiveModel()
passive_model_text = VFLPassiveModel()
if 'two_hidden_layer' in training_mode:
active_model = VFLActiveModelWithTwoLayer()
elif 'three_hidden_layer' in training_mode:
active_model = VFLActiveModelWithThreeLayer()
elif 'four_hidden_layer' in training_mode:
active_model = VFLActiveModelWithFourLayer()
else:
active_model = VFLActiveModelWithOneLayer()
print('training_mode = ', training_mode)
acc_train = []
acc_test = []
acc_test_label = [[], [], [], [], [], []]
acc_backdoor = []
loss_train = []
loss_test = []
loss_backdoor = []
active_image_gradients_res = None
active_text_gradients_res = None
has_poison_grad = False
for epoch in range(EPOCHS):
# Batch and shuffle the data
train_ds = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(65535).batch(64)
# For each batch of images and labels
number_of_poison = 0
gradient_record_list = []
for (images, texts), labels in train_ds:
need_copy = np.min((texts == text_feat_need_copy).numpy(), axis=1)
with tf.GradientTape() as passive_tape:
# passive_model sends passive_output to active_model
passive_image_output = passive_model_image(images)
poisoned_text_output = passive_text_output = passive_model_text(texts)
if 'backdoor' in training_mode:
need_poison = (texts.numpy()[:,-1] == 1)
if np.sum(need_poison) > 0:
if has_poison_grad:
poisoned_text_output = get_poisoned_matrix(passive_text_output, need_poison, poison_grad, 0)
with tf.GradientTape() as active_tape:
active_tape.watch(passive_image_output)
active_tape.watch(poisoned_text_output)
active_output = active_model([passive_image_output, poisoned_text_output])
loss = loss_object(labels, active_output)
# active_model sends passive_output_gradients back to passive_model
[active_image_gradients, active_text_gradients, active_model_gradients] = \
active_tape.gradient(loss, [passive_image_output, poisoned_text_output, active_model.trainable_variables])
optimizer.apply_gradients(zip(active_model_gradients, active_model.trainable_variables))
location = 0.0
threshold = 5
if 'laplace' in training_mode:
scale = float(training_mode.split('_')[-1])
active_image_gradients = tf.clip_by_value(active_image_gradients, -threshold, threshold)
active_text_gradients = tf.clip_by_value(active_text_gradients, -threshold, threshold)
active_image_gradients += np.random.laplace(location, scale, active_image_gradients.numpy().shape)
active_text_gradients += np.random.laplace(location, scale, active_text_gradients.numpy().shape)
if 'gaussian' in training_mode:
scale = float(training_mode.split('_')[-1])
active_image_gradients = tf.clip_by_value(active_image_gradients, -threshold, threshold)
active_text_gradients = tf.clip_by_value(active_text_gradients, -threshold, threshold)
active_image_gradients += np.random.normal(location, scale, active_image_gradients.numpy().shape)
active_text_gradients += np.random.normal(location, scale, active_text_gradients.numpy().shape)
if 'sparsification' in training_mode:
percent = float(training_mode.split('_')[-1])
if active_image_gradients_res is not None and \
active_image_gradients.shape[0] == active_image_gradients_res.shape[0]:
# print(active_image_gradients.shape, active_image_gradients_res.shape)
active_image_gradients = active_image_gradients + active_image_gradients_res
if active_text_gradients_res is not None and \
active_text_gradients.shape[0] == active_text_gradients_res.shape[0]:
# print(active_text_gradients.shape, active_text_gradients_res.shape)
active_text_gradients = active_text_gradients + active_text_gradients_res
image_thr = np.percentile(np.abs(active_image_gradients.numpy()), percent)
text_thr = np.percentile(np.abs(active_text_gradients.numpy()), percent)
image_mask = np.abs(active_image_gradients.numpy()) < image_thr
text_mask = np.abs(active_text_gradients.numpy()) < text_thr
active_image_gradients_res = np.multiply(active_image_gradients.numpy(), image_mask)
active_text_gradients_res = np.multiply(active_text_gradients.numpy(), text_mask)
active_image_gradients -= active_image_gradients_res
active_text_gradients -= active_text_gradients_res
if np.sum(need_copy) > 0:
poison_grad = copy_grad(active_text_gradients, need_copy)
has_poison_grad = True
print('need_copy')
#print(active_output[need_copy])
#print(labels[need_copy])
elif has_poison_grad == False:
poison_grad = active_text_gradients.numpy()[0]*0
has_poison_grad = True
if 'backdoor' in training_mode:
need_poison = (texts.numpy()[:,-1] == 1)
if np.sum(need_poison) > 0:
if has_poison_grad:
number_of_poison += np.sum(need_poison)
active_text_gradients = get_poisoned_matrix(active_text_gradients, need_poison, poison_grad, 1)  # amplify_rate of 1 (plain replacement) assumed; the original call omitted this argument
passive_image_loss = tf.multiply(passive_image_output, active_image_gradients.numpy())
passive_text_loss = tf.multiply(passive_text_output, active_text_gradients.numpy())
[passive_image_gradients, passive_text_gradients] = \
passive_tape.gradient([passive_image_loss, passive_text_loss], \
[passive_model_image.trainable_variables, passive_model_text.trainable_variables])
optimizer.apply_gradients(zip(passive_image_gradients, passive_model_image.trainable_variables))
optimizer.apply_gradients(zip(passive_text_gradients, passive_model_text.trainable_variables))
train_loss(loss)
train_accuracy(labels, active_output)
passive_output = [passive_model_image(image_backdoor), passive_model_text(text_backdoor)]
active_output = active_model(passive_output)
backdoor_loss.reset_states()
backdoor_accuracy.reset_states()
backdoor_loss(loss_object(y_backdoor, active_output))
backdoor_acc = backdoor_accuracy(y_backdoor, active_output)
for (test_images, test_texts), test_labels in test_ds:
passive_output = [passive_model_image(test_images), passive_model_text(test_texts)]
active_output = active_model(passive_output)
t_loss = loss_object(test_labels, active_output)
test_loss(t_loss)
test_accuracy(test_labels, active_output)
label_test = np.argmax(y_test, axis=1)
for label_val in range(class_num):
(image_test, text_test) = x_test
image_val = image_test[label_test==label_val]
text_val = text_test[label_test==label_val]
#print(image_val.shape, text_val.shape)
y_val = y_test[label_test==label_val]
passive_output = [passive_model_image(image_val), passive_model_text(text_val)]
active_output = active_model(passive_output)
test_label_accuracy.reset_states()
tl_acc = test_label_accuracy(y_val, active_output)
acc_test_label[label_val].append(tl_acc.numpy())
acc_test_label[class_num].append(backdoor_accuracy.result())
acc_backdoor.append(backdoor_accuracy.result())
loss_train.append(train_loss.result())
loss_test.append(test_loss.result())
loss_backdoor.append(backdoor_loss.result())
template = 'Epoch {}, Poisoned {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}, Backdoor Accuracy: {}'
print(template.format(epoch+1,
number_of_poison,
train_loss.result(),
train_accuracy.result()*100,
test_loss.result(),
test_accuracy.result()*100,
backdoor_accuracy.result()*100))
acc_train.append(train_accuracy.result())
acc_test.append(test_accuracy.result())
# Reset the metrics for the next epoch
train_loss.reset_states()
train_accuracy.reset_states()
test_loss.reset_states()
test_accuracy.reset_states()
for sub_indx in range(EPOCHS):
acc_test[sub_indx] /= number_of_times
acc_backdoor[sub_indx] /= number_of_times
loss_train[sub_indx] /= number_of_times
loss_test[sub_indx] /= number_of_times
loss_backdoor[sub_indx] /= number_of_times
# if len(result_list[indx]) != 0:
# for sub_indx in range(EPOCHS):
# acc_test[sub_indx] += result_list[indx][1][sub_indx]
# acc_backdoor[sub_indx] += result_list[indx][3][sub_indx]
# loss_test[sub_indx] += result_list[indx][4][sub_indx]
# loss_backdoor[sub_indx] += result_list[indx][5][sub_indx]
result_list[indx] = [acc_train, acc_test, acc_test_label, acc_backdoor, loss_train, loss_test, loss_backdoor]
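# For reference, a standalone sketch of the three defences applied inline in the loop above
# (gradient clipping plus Laplace/Gaussian noise, and top-k gradient sparsification).
# This helper is hypothetical, not part of the original notebook, and it omits the
# residual accumulation that the loop keeps between batches.
def defend_gradients(grads, mode='gaussian', scale=0.01, clip=5.0, keep_ratio=1.0):
    # Clip the returned gradients, optionally add noise, and optionally keep only the
    # largest-magnitude fraction keep_ratio of entries, zeroing the rest.
    g = tf.clip_by_value(grads, -clip, clip).numpy()
    if mode == 'laplace':
        g = g + np.random.laplace(0.0, scale, g.shape)
    elif mode == 'gaussian':
        g = g + np.random.normal(0.0, scale, g.shape)
    if keep_ratio < 1.0:
        thr = np.percentile(np.abs(g), 100.0 * (1.0 - keep_ratio))
        g = np.where(np.abs(g) >= thr, g, 0.0)
    return tf.convert_to_tensor(g, tf.float32)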
import matplotlib.pyplot as plt
figure_name_lst = ['exp3_normal_backdoor_task_accuracy.png', 'exp3_laplace_backdoor_task_accuracy.png',\
'exp3_gaussian_backdoor_task_accuracy.png', 'exp3_sparsification_backdoor_task_accuracy.png',\
'exp3_normal_main_task_accuracy.png', 'exp3_laplace_main_task_accuracy.png',\
'exp3_gaussian_main_task_accuracy.png', 'exp3_sparsification_main_task_accuracy.png',\
'exp3_normal_main_task_loss.png', 'exp3_laplace_main_task_loss.png', \
'exp3_gaussian_main_task_loss.png', 'exp3_sparsification_main_task_loss.png']
label_show_lst = [['normal', 'backdoor'], ['backdoor_with_laplace_noise_0.1', 'backdoor_with_laplace_noise_0.01'\
, 'backdoor_with_laplace_noise_0.001', 'backdoor_with_laplace_noise_0.0001']\
,['backdoor_with_gaussian_noise_0.1', 'backdoor_with_gaussian_noise_0.01'\
, 'backdoor_with_gaussian_noise_0.001', 'backdoor_with_gaussian_noise_0.0001']\
,['backdoor_with_gradient_sparsification_95', 'backdoor_with_gradient_sparsification_99'\
, 'backdoor_with_gradient_sparsification_99.5', 'backdoor_with_gradient_sparsification_99.9']]
for (figure_name, label_show) in zip(figure_name_lst, label_show_lst*3):
data_size = EPOCHS
x = list(range(1, data_size+1))
label_lst = []
l_lst = []
for indx in range(len(training_mode_list)):
label = training_mode_list[indx]
if label in label_show and len(result_list[indx]) > 0:
[acc_train, acc_test, acc_test_label, acc_backdoor, loss_train, loss_test, loss_backdoor] = result_list[indx]
if 'backdoor_with_' in label:
label_legend = label[len('backdoor_with_'):]
else:
label_legend = label + ' training'
if 'backdoor_task_accuracy' in figure_name:
l, = plt.plot(x, acc_backdoor[:data_size])
l_lst.append(l)
label_lst.append(label_legend)
plt.ylabel('backdoor task accuracy')
elif 'main_task_accuracy' in figure_name:
l, = plt.plot(x, acc_test[:data_size])
l_lst.append(l)
label_lst.append(label_legend)
plt.ylabel('main task accuracy')
elif 'main_task_loss' in figure_name:
l, = plt.plot(x, loss_train[:data_size])
l_lst.append(l)
label_lst.append(label_legend+'_train')
l, = plt.plot(x, loss_test[:data_size])
l_lst.append(l)
label_lst.append(label_legend+'_test')
plt.ylabel('main task loss')
plt.legend(l_lst, label_lst, loc = 'best')
# plt.ylim(0, 1.1)
plt.xlabel('number of epoch')
plt.savefig(figure_name)
plt.show()
import matplotlib.pyplot as plt
figure_name_lst = ['exp3_laplace_backdoor_task_accuracy.png', 'exp3_gaussian_backdoor_task_accuracy.png',\
'exp3_sparsification_backdoor_task_accuracy.png', 'exp3_multi-layer_backdoor_task_accuracy.png']
label_show_lst = [[ 'laplace_noise_0.1', 'laplace_noise_0.075'\
, 'laplace_noise_0.05', 'laplace_noise_0.025']\
,['gaussian_noise_0.1', 'gaussian_noise_0.075'\
, 'gaussian_noise_0.05', 'gaussian_noise_0.025']\
,['gradient_sparsification_95', 'gradient_sparsification_99'\
, 'gradient_sparsification_99.5', 'gradient_sparsification_99.9']\
,['one_hidden_layer', 'two_hidden_layer'\
, 'three_hidden_layer', 'four_hidden_layer']]
for (figure_name, label_show) in zip(figure_name_lst, label_show_lst):
data_size = EPOCHS
x = list(range(data_size))
label_lst = []
l_lst = []
for indx in range(len(training_mode_list)):
label = training_mode_list[indx][14:]
if label in label_show and len(result_list[indx]) >= 4:  # results above store seven series; use the first four
[acc_train, acc_test, acc_test_label, acc_backdoor] = result_list[indx][:4]
l, = plt.plot(x, acc_backdoor[:data_size])
l_lst.append(l)
label_lst.append(label)
plt.legend(l_lst, label_lst, loc = 'best')
plt.ylim(0, 1.0)
plt.xlabel('number of epoch')
plt.ylabel('backdoor task accuracy')
plt.savefig(figure_name)
plt.show()
import matplotlib.pyplot as plt
figure_name_lst = ['exp3_laplace_main_task_accuracy.png', 'exp3_gaussian_main_task_accuracy.png',\
'exp3_sparsification_main_task_accuracy.png', 'exp3_multi-layer_main_task_accuracy.png']
label_show_lst = [[ 'laplace_noise_0.1', 'laplace_noise_0.075'\
, 'laplace_noise_0.05', 'laplace_noise_0.025']\
,['gaussian_noise_0.1', 'gaussian_noise_0.075'\
, 'gaussian_noise_0.05', 'gaussian_noise_0.025']\
,['gradient_sparsification_95', 'gradient_sparsification_99'\
, 'gradient_sparsification_99.5', 'gradient_sparsification_99.9']\
,['one_hidden_layer', 'two_hidden_layer'\
, 'three_hidden_layer', 'four_hidden_layer']]
for (figure_name, label_show) in zip(figure_name_lst, label_show_lst):
data_size = EPOCHS
x = list(range(data_size))
label_lst = []
l_lst = []
for indx in range(len(training_mode_list)):
label = training_mode_list[indx][14:]
if label in label_show and len(result_list[indx]) >= 4:  # results above store seven series; use the first four
[acc_train, acc_test, acc_test_label, acc_backdoor] = result_list[indx][:4]
l, = plt.plot(x, acc_test[:data_size])
l_lst.append(l)
label_lst.append(label)
plt.legend(l_lst, label_lst, loc = 'best')
plt.ylim(0, 1.0)
plt.xlabel('number of epoch')
plt.ylabel('main task accuracy')
plt.savefig(figure_name)
plt.show()
from sklearn import metrics
training_mode_list = ['normal', 'A_backdoor_with_laplace_noise', 'A_backdoor_with_gaussian_noise', 'A_backdoor_with_gradient_sparsification', 'A_backdoor']
loss_object = tf.keras.losses.CategoricalCrossentropy()
optimizer = tf.keras.optimizers.SGD()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.CategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.CategoricalAccuracy(name='test_accuracy')
test_label_accuracy = tf.keras.metrics.CategoricalAccuracy(name='test_label_accuracy')
backdoor_loss = tf.keras.metrics.Mean(name='backdoor_loss')
backdoor_accuracy = tf.keras.metrics.CategoricalAccuracy(name='backdoor_accuracy')
EPOCHS = 50
sample_id_need_copy = 369
text_feat_need_copy = copy.deepcopy(x_train[1][sample_id_need_copy])
y_backdoor[:] = y_train[sample_id_need_copy]
for training_mode in training_mode_list[3:4]:
passive_model_image = VFLPassiveModel()
passive_model_text = VFLPassiveModel()
active_model = VFLActiveModelWithOneLayer()
print('training_mode = ', training_mode)
if 'normal' == training_mode:
normal_acc_train = []
normal_acc_test = []
normal_acc_test_label = [[], [], [], [], [], []]
normal_acc_backdoor = []
if 'A_backdoor' == training_mode:
A_acc_train = []
A_acc_test = []
A_acc_test_label = [[], [], [], [], [], []]
A_acc_backdoor = []
if 'A_backdoor_with_laplace_noise' == training_mode:
laplace_acc_train = []
laplace_acc_test = []
laplace_acc_test_label = [[], [], [], [], [], []]
laplace_acc_backdoor = []
if 'A_backdoor_with_gaussian_noise' == training_mode:
gaussian_acc_train = []
gaussian_acc_test = []
gaussian_acc_test_label = [[], [], [], [], [], []]
gaussian_acc_backdoor = []
if 'A_backdoor_with_gradient_sparsification' == training_mode:
sparsification_acc_train = []
sparsification_acc_test = []
sparsification_acc_test_label = [[], [], [], [], [], []]
sparsification_acc_backdoor = []
active_image_gradients_res = None
active_text_gradients_res = None
has_poison_grad = False
for epoch in range(EPOCHS):
# Batch and shuffle the data
train_ds = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(65535).batch(64)
# For each batch of images and labels
number_of_poison = 0
gradient_record_list = []
for (images, texts), labels in train_ds:
need_copy = np.min((texts == text_feat_need_copy).numpy(), axis=1)
with tf.GradientTape() as passive_tape:
# passive_model sends passive_output to active_model
passive_image_output = passive_model_image(images)
poisoned_text_output = passive_text_output = passive_model_text(texts)
if 'A_backdoor' in training_mode:
need_poison = (texts[:,-1] == 1).numpy()
if np.sum(need_poison) > 0:
if has_poison_grad:
poisoned_text_output = get_poisoned_matrix(passive_text_output, need_poison, poison_grad, 0)  # amplify_rate was omitted in the original call; 0 assumed, matching the analogous call in the experiment above
with tf.GradientTape() as active_tape:
active_tape.watch(passive_image_output)
active_tape.watch(poisoned_text_output)
active_output = active_model([passive_image_output, poisoned_text_output])
loss = loss_object(labels, active_output)
# active_model sends passive_output_gradients back to passive_model
[active_image_gradients, active_text_gradients, active_model_gradients] = \
active_tape.gradient(loss, [passive_image_output, poisoned_text_output, active_model.trainable_variables])
optimizer.apply_gradients(zip(active_model_gradients, active_model.trainable_variables))
location = 0.0
scale = 0.05
threshold = 1e9
if 'laplace' in training_mode:
active_image_gradients = tf.clip_by_value(active_image_gradients, -threshold, threshold)
active_text_gradients = tf.clip_by_value(active_text_gradients, -threshold, threshold)
active_image_gradients += np.random.laplace(location, scale, active_image_gradients.numpy().shape)
active_text_gradients += np.random.laplace(location, scale, active_text_gradients.numpy().shape)
if 'gaussian' in training_mode:
active_image_gradients = tf.clip_by_value(active_image_gradients, -threshold, threshold)
active_text_gradients = tf.clip_by_value(active_text_gradients, -threshold, threshold)
active_image_gradients += np.random.normal(location, scale, active_image_gradients.numpy().shape)
active_text_gradients += np.random.normal(location, scale, active_text_gradients.numpy().shape)
if 'sparsification' in training_mode:
percent = 99.0
if active_image_gradients_res is not None and \
active_image_gradients.shape[0] == active_image_gradients_res.shape[0]:
# print(active_image_gradients.shape, active_image_gradients_res.shape)
active_image_gradients = active_image_gradients + active_image_gradients_res
if active_text_gradients_res is not None and \
active_text_gradients.shape[0] == active_text_gradients_res.shape[0]:
# print(active_text_gradients.shape, active_text_gradients_res.shape)
active_text_gradients = active_text_gradients + active_text_gradients_res
image_thr = np.percentile(np.abs(active_image_gradients.numpy()), percent)
text_thr = np.percentile(np.abs(active_text_gradients.numpy()), percent)
image_mask = np.abs(active_image_gradients.numpy()) < image_thr
text_mask = np.abs(active_text_gradients.numpy()) < text_thr
active_image_gradients_res = np.multiply(active_image_gradients.numpy(), image_mask)
active_text_gradients_res = np.multiply(active_text_gradients.numpy(), text_mask)
active_image_gradients -= active_image_gradients_res
active_text_gradients -= active_text_gradients_res
if np.sum(need_copy) > 0:
poison_grad = copy_grad(active_text_gradients, need_copy)
has_poison_grad = True
print('need_copy')
#print(active_output[need_copy])
#print(labels[need_copy])
elif has_poison_grad == False:
poison_grad = active_text_gradients.numpy()[0]*0
has_poison_grad = True
if 'A_backdoor' in training_mode:
need_poison = (texts[:,-1] == 1).numpy()
if np.sum(need_poison) > 0:
if has_poison_grad:
number_of_poison += np.sum(need_poison)
active_text_gradients = get_poisoned_matrix(active_text_gradients, need_poison, poison_grad, 1)  # amplify_rate of 1 (plain replacement) assumed; the original call omitted this argument
passive_image_loss = tf.multiply(passive_image_output, active_image_gradients.numpy())
passive_text_loss = tf.multiply(passive_text_output, active_text_gradients.numpy())
[passive_image_gradients, passive_text_gradients] = \
passive_tape.gradient([passive_image_loss, passive_text_loss], \
[passive_model_image.trainable_variables, passive_model_text.trainable_variables])
optimizer.apply_gradients(zip(passive_image_gradients, passive_model_image.trainable_variables))
optimizer.apply_gradients(zip(passive_text_gradients, passive_model_text.trainable_variables))
train_loss(loss)
train_accuracy(labels, active_output)
passive_output = [passive_model_image(image_backdoor), passive_model_text(text_backdoor)]
active_output = active_model(passive_output)
backdoor_loss.reset_states()
backdoor_accuracy.reset_states()
backdoor_loss(loss_object(y_backdoor, active_output))
backdoor_acc = backdoor_accuracy(y_backdoor, active_output)
if 'normal' == training_mode:
normal_acc_backdoor.append(backdoor_acc.numpy())
if 'A_backdoor' == training_mode:
A_acc_backdoor.append(backdoor_acc.numpy())
if 'A_backdoor_with_laplace_noise' == training_mode:
laplace_acc_backdoor.append(backdoor_acc.numpy())
if 'A_backdoor_with_gaussian_noise' == training_mode:
gaussian_acc_backdoor.append(backdoor_acc.numpy())
if 'A_backdoor_with_gradient_sparsification' == training_mode:
sparsification_acc_backdoor.append(backdoor_acc.numpy())
for (test_images, test_texts), test_labels in test_ds:
passive_output = [passive_model_image(test_images), passive_model_text(test_texts)]
active_output = active_model(passive_output)
t_loss = loss_object(test_labels, active_output)
test_loss(t_loss)
test_accuracy(test_labels, active_output)
label_test = np.argmax(y_test, axis=1)
for label_val in range(class_num):
(image_test, text_test) = x_test
image_val = image_test[label_test==label_val]
text_val = text_test[label_test==label_val]
#print(image_val.shape, text_val.shape)
y_val = y_test[label_test==label_val]
passive_output = [passive_model_image(image_val), passive_model_text(text_val)]
active_output = active_model(passive_output)
test_label_accuracy.reset_states()
tl_acc = test_label_accuracy(y_val, active_output)
if 'normal' == training_mode:
normal_acc_test_label[label_val].append(tl_acc.numpy())
if 'A_backdoor' == training_mode:
A_acc_test_label[label_val].append(tl_acc.numpy())
if 'A_backdoor_with_laplace_noise' == training_mode:
laplace_acc_test_label[label_val].append(tl_acc.numpy())
if 'A_backdoor_with_gaussian_noise' == training_mode:
gaussian_acc_test_label[label_val].append(tl_acc.numpy())
if 'A_backdoor_with_gradient_sparsification' == training_mode:
sparsification_acc_test_label[label_val].append(tl_acc.numpy())
if 'normal' == training_mode:
normal_acc_test_label[class_num].append(backdoor_acc.numpy())
if 'A_backdoor' == training_mode:
A_acc_test_label[class_num].append(backdoor_acc.numpy())
if 'A_backdoor_with_laplace_noise' == training_mode:
laplace_acc_test_label[class_num].append(backdoor_acc.numpy())
if 'A_backdoor_with_gaussian_noise' == training_mode:
gaussian_acc_test_label[class_num].append(backdoor_acc.numpy())
if 'A_backdoor_with_gradient_sparsification' == training_mode:
sparsification_acc_test_label[class_num].append(backdoor_acc.numpy())
template = 'Epoch {}, Poisoned {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}, Backdoor Accuracy: {}'
print(template.format(epoch+1,
number_of_poison,
train_loss.result(),
train_accuracy.result()*100,
test_loss.result(),
test_accuracy.result()*100,
backdoor_accuracy.result()*100))
if 'normal' == training_mode:
normal_acc_train.append(train_accuracy.result())
normal_acc_test.append(test_accuracy.result())
if 'A_backdoor' == training_mode:
A_acc_train.append(train_accuracy.result())
A_acc_test.append(test_accuracy.result())
if 'A_backdoor_with_laplace_noise' == training_mode:
laplace_acc_train.append(train_accuracy.result())
laplace_acc_test.append(test_accuracy.result())
if 'A_backdoor_with_gaussian_noise' == training_mode:
gaussian_acc_train.append(train_accuracy.result())
gaussian_acc_test.append(test_accuracy.result())
if 'A_backdoor_with_gradient_sparsification' == training_mode:
sparsification_acc_train.append(train_accuracy.result())
sparsification_acc_test.append(test_accuracy.result())
# Reset the metrics for the next epoch
train_loss.reset_states()
train_accuracy.reset_states()
test_loss.reset_states()
test_accuracy.reset_states()
import matplotlib.pyplot as plt
data_size = 40
x = list(range(data_size))
label_lst = ['normal', 'backdoor', 'backdoor_with_laplace_noise', 'backdoor_with_gaussian_noise', 'backdoor_with_gradient_sparsification']
l_lst = []
for label in label_lst:
if label == 'normal':
l, = plt.plot(x, normal_acc_test[:data_size])
if label == 'backdoor':
l, = plt.plot(x, A_acc_test[:data_size])
if label == 'backdoor_with_laplace_noise':
l, = plt.plot(x, laplace_acc_test[:data_size])
if label == 'backdoor_with_gaussian_noise':
l, = plt.plot(x, gaussian_acc_test[:data_size])
if label == 'backdoor_with_gradient_sparsification':
l, = plt.plot(x, sparsification_acc_test[:data_size])
l_lst.append(l)
plt.legend(l_lst, label_lst, loc = 'best')
plt.ylim(0.2, 1.0)
plt.xlabel('number of epoch')
plt.ylabel('validation accuracy')
plt.savefig('exp2_validation_accuracy.png')
plt.show()
data_size = 40
x = list(range(data_size))
label_lst = []
l_lst = []
for label_val in range(class_num+1):
l, = plt.plot(x, A_acc_test_label[label_val][:data_size])
if label_val == class_num:
label_lst.append('backdoor task')
else:
label_lst.append('class_'+str(label_val))
l_lst.append(l)
plt.legend(l_lst, label_lst, loc = 'best')
plt.xlabel('number of epoch')
plt.ylabel('validation accuracy')
plt.savefig('exp2_class_validation_accuracy.png')
plt.show()
data_size = 40
x = list(range(data_size))
#print(len(normal_acc_backdoor), len(A_acc_backdoor), len(laplace_acc_backdoor), len(gaussian_acc_backdoor))
x = list(range(data_size))
label_lst = ['normal', 'backdoor', 'backdoor_with_laplace_noise', 'backdoor_with_gaussian_noise']
l_lst = []
for label in label_lst:
if label == 'normal':
l, = plt.plot(x, normal_acc_test_label[class_num][:data_size])
if label == 'backdoor':
l, = plt.plot(x, A_acc_test_label[class_num][:data_size])
if label == 'backdoor_with_laplace_noise':
l, = plt.plot(x, laplace_acc_test_label[class_num][:data_size])
if label == 'backdoor_with_gaussian_noise':
l, = plt.plot(x, gaussian_acc_test_label[class_num][:data_size])
l_lst.append(l)
plt.legend(l_lst, label_lst, loc = 'best')
plt.xlabel('number of epoch')
plt.ylabel('backdoor task accuracy')
plt.savefig('exp2_backdoor_data_accuracy_epoch.png')
plt.show()
import matplotlib.pyplot as plt
data_size = len(gaussian_acc_backdoor)-10
print(data_size)
#print(len(normal_acc_backdoor), len(A_acc_backdoor), len(laplace_acc_backdoor), len(gaussian_acc_backdoor))
x = list(range(data_size))
label_lst = ['backdoor_with_gaussian_noise', 'backdoor_with_laplace_noise', 'backdoor_with_gradient_sparsification', 'normal', 'backdoor']
l_lst = []
for label in label_lst:
if label == 'normal':
l, = plt.plot(x, normal_acc_backdoor[:data_size])
if label == 'backdoor':
l, = plt.plot(x, A_acc_backdoor[:data_size])
if label == 'backdoor_with_laplace_noise':
l, = plt.plot(x, laplace_acc_backdoor[:data_size])
if label == 'backdoor_with_gaussian_noise':
l, = plt.plot(x, gaussian_acc_backdoor[:data_size])
if label == 'backdoor_with_gradient_sparsification':
l, = plt.plot(x, sparsification_acc_backdoor[:data_size])
l_lst.append(l)
#plt.legend([l1], ['AB_attack_acc_test'], loc = 'center right')
plt.legend(l_lst, label_lst, loc = 'best')
plt.xlabel('number of batch')
plt.ylabel('backdoor task accuracy')
plt.savefig('exp2_backdoor_data_accuracy.png')
plt.show()
import matplotlib.pyplot as plt
import numpy as np
def show_target_predict(n = 5):
image_target = x_train[0][sample_id_need_copy:sample_id_need_copy+1,:]
text_target = x_train[1][sample_id_need_copy:sample_id_need_copy+1,:]
passive_output = [passive_model_image(image_target), passive_model_text(text_target)]
active_output = active_model(passive_output)
output_distribution = np.average(active_output, axis=0)
print(output_distribution)
X = np.arange(n)
plt.bar(X, output_distribution)
plt.show()
def show_semantic_label(n = 5):
output_distribution = np.sum(y_backdoor, axis=0)
print(output_distribution)
X = np.arange(n)
plt.bar(X, output_distribution)
plt.show()
def show_semantic_predict(n = 5):
print(image_test.shape, text_test.shape)
print(image_backdoor.shape, text_backdoor.shape)
passive_output = [passive_model_image(image_backdoor), passive_model_text(text_backdoor)]
active_output = active_model(passive_output)
output_distribution = np.average(active_output, axis=0)
print(output_distribution)
X = np.arange(n)
plt.bar(X, output_distribution)
plt.savefig('exp2_poison_predict.png')
plt.show()
show_target_predict()
show_semantic_label()
show_semantic_predict()
class MyLinearModel(Model):
def __init__(self):
super(MyLinearModel, self).__init__()
self.flatten = Flatten()
self.d1 = Dense(class_num, activation='softmax', name="dense1")
def call(self, x):
x = self.flatten(x)
return self.d1(x)
model = MyLinearModel()
loss_object = tf.keras.losses.CategoricalCrossentropy()  # labels above are one-hot, so the sparse variants would fail
optimizer = tf.keras.optimizers.Adam()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.CategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.CategoricalAccuracy(name='test_accuracy')
EPOCHS = 5
for epoch in range(EPOCHS):
# For each batch of images and labels
for (images, texts), labels in train_ds:  # train_ds yields ((image, text), label) batches
with tf.GradientTape() as tape:
predictions = model(images)  # assumed: a linear baseline on the image features; the original passed the whole tuple
loss = loss_object(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
print(gradients[0].shape, gradients[1].shape)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_loss(loss)
train_accuracy(labels, predictions)
for (test_images, test_texts), test_labels in test_ds:
predictions = model(test_images)
t_loss = loss_object(test_labels, predictions)
test_loss(t_loss)
test_accuracy(test_labels, predictions)
template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
print(template.format(epoch+1,
train_loss.result(),
train_accuracy.result()*100,
test_loss.result(),
test_accuracy.result()*100))
# Reset the metrics for the next epoch
train_loss.reset_states()
train_accuracy.reset_states()
test_loss.reset_states()
test_accuracy.reset_states()
with tf.GradientTape() as tape:
a = tf.constant(2.)
b = tf.constant(1.)
tape.watch(a)
tape.watch(b)
c = tf.multiply(a, b)
g = tape.gradient(c, [a, b])
print(g)
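# The toy tape above mirrors the trick used in the training loops: the passive party
# multiplies its local output by the gradient tensor received from the active party
# (passive_image_loss / passive_text_loss), so differentiating that surrogate reproduces
# the chain rule d(loss)/dw = upstream_grad * d(output)/dw. A minimal scalar check
# (the _w, _upstream names are illustrative only):
_w = tf.Variable(3.0)
_upstream = tf.constant(5.0)          # stands in for the gradient sent back by the active party
with tf.GradientTape() as _tape:
    _out = _w * 2.0                   # stands in for the passive-party output
    _surrogate = _out * _upstream     # same construction as passive_image_loss above
print(_tape.gradient(_surrogate, _w))  # 10.0 = upstream * d(out)/dw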
start = 341
print(start + np.argmax(y_train[start:,0]==1))
passive_text_output = passive_model_text(train_X_text)
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, init='pca', random_state=0, verbose=1)
tsne.fit_transform(passive_text_output.numpy())  # reduce the text embeddings to two dimensions
# a = tsne.fit_transform(data_zs)  # a is an array, equivalent to tsne.embedding_ below
# tsne = pd.DataFrame(tsne.embedding_, index=data_zs.index)  # leftover from a tutorial: data_zs is undefined here, and reassigning tsne would break the next line
import pandas as pd
tsne2 = pd.DataFrame(tsne.embedding_)  # convert the embedding into a DataFrame
import matplotlib.pyplot as plt
label_train = np.argmax(train_Y, axis=1)
l_lst = []
c_lst = []
oo = passive_text_output.numpy()
print(oo)
for label_show in range(class_num-1, -1, -1):
d = tsne2[label_train==label_show]
l = plt.scatter(d[0],d[1], s=2)
l_lst.append(l)
c_lst.append('class ' + str(label_show))
plt.legend(l_lst, c_lst, loc = 'best')
plt.show()
[acc_train, acc_test, acc_test_label, acc_backdoor, loss_train, loss_test, loss_backdoor] = result_list[1]
print(acc_test[-1])
```
|
github_jupyter
|
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras import Model
import matplotlib.pyplot as plt
import numpy as np
# # Download a dataset
# (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# # Batch and shuffle the data
# train_ds = tf.data.Dataset.from_tensor_slices(
# (x_train.astype('float32') / 255, y_train)).shuffle(1024).batch(32)
# test_ds = tf.data.Dataset.from_tensor_slices(
# (x_test.astype('float32') / 255, y_test)).batch(32)
print(tf.__version__)
from nus_wide_data_util import get_labeled_data, get_top_k_labels
class_num = 5
#top_k = get_top_k_labels('', top_k=class_num)
top_k = ['buildings', 'grass', 'animal', 'water', 'person']
print(top_k)
train_X_image, train_X_text, train_Y = get_labeled_data('', top_k, 60000, 'Train')
print(type(train_X_image), type(train_X_text), type(train_Y))
test_X_image, test_X_text, test_Y = get_labeled_data('', top_k, 40000, 'Test')
print(type(test_X_image), type(test_X_text), type(test_Y))
x_train, x_test, y_train, y_test = (np.array(train_X_image).astype('float32'), np.array(train_X_text).astype('float32')), \
(np.array(test_X_image).astype('float32'), np.array(test_X_text).astype('float32')), \
np.array(train_Y).astype('float32'), np.array(test_Y).astype('float32')
# Batch and shuffle the data
train_ds = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(1024).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices(
(x_test, y_test)).batch(32)
np.sum(y_test, axis=0)
class VFLPassiveModel(Model):
def __init__(self):
super(VFLPassiveModel, self).__init__()
self.flatten = Flatten()
self.d1 = Dense(32, name="dense1", activation='relu')
def call(self, x):
x = self.flatten(x)
return self.d1(x)
import numpy as np
class VFLActiveModelWithOneLayer(Model):
def __init__(self):
super(VFLActiveModelWithOneLayer, self).__init__()
self.concatenated = tf.keras.layers.Concatenate()
self.d1 = Dense(32, name="dense1", activation='relu')
self.out = Dense(class_num, name="out", activation='softmax')
def call(self, x):
x = self.concatenated(x)
x = self.d1(x)
return self.out(x)
class VFLActiveModelWithTwoLayer(Model):
def __init__(self):
super(VFLActiveModelWithTwoLayer, self).__init__()
self.concatenated = tf.keras.layers.Concatenate()
self.d1 = Dense(32, name="dense1", activation='relu')
self.d2 = Dense(32, name="dense2", activation='relu')
self.out = Dense(class_num, name="out", activation='softmax')
def call(self, x):
x = self.concatenated(x)
x = self.d1(x)
x = self.d2(x)
return self.out(x)
class VFLActiveModelWithThreeLayer(Model):
def __init__(self):
super(VFLActiveModelWithThreeLayer, self).__init__()
self.concatenated = tf.keras.layers.Concatenate()
self.d1 = Dense(32, name="dense1", activation='relu')
self.d2 = Dense(32, name="dense2", activation='relu')
self.d3 = Dense(32, name="dense3", activation='relu')
self.out = Dense(class_num, name="out", activation='softmax')
def call(self, x):
x = self.concatenated(x)
x = self.d1(x)
x = self.d2(x)
x = self.d3(x)
return self.out(x)
class VFLActiveModelWithFourLayer(Model):
def __init__(self):
super(VFLActiveModelWithFourLayer, self).__init__()
self.concatenated = tf.keras.layers.Concatenate()
self.d1 = Dense(32, name="dense1", activation='relu')
self.d2 = Dense(32, name="dense2", activation='relu')
self.d3 = Dense(32, name="dense3", activation='relu')
self.d4 = Dense(32, name="dense4", activation='relu')
self.out = Dense(class_num, name="out", activation='softmax')
def call(self, x):
x = self.concatenated(x)
x = self.d1(x)
x = self.d2(x)
x = self.d3(x)
x = self.d4(x)
return self.out(x)
import numpy as np
def get_poisoned_matrix(passive_matrix, need_poison, poison_grad, amplify_rate):
#print(passive_matrix)
poisoned_matrix = passive_matrix.numpy()
poisoned_matrix[need_poison] = poison_grad*amplify_rate
poisoned_matrix = tf.convert_to_tensor(poisoned_matrix, tf.float32, name='poisoned_matrix')
return poisoned_matrix
def copy_grad(passive_matrix, need_copy):
poison_grad = passive_matrix[need_copy].numpy()
return poison_grad[0]
import copy
(image_test, text_test) = x_test
image_backdoor = image_test[text_test[:,-1]==1]
text_backdoor = text_test[text_test[:,-1]==1]
y_backdoor = copy.deepcopy(y_test[text_test[:,-1]==1])
np.sum(y_backdoor, axis=0)
print(np.sum(x_train[1][:,-1]))
print(np.sum(x_test[1][:,-1]))
training_mode_list = ['backdoor', 'normal', 'backdoor_with_laplace_noise_0.1', 'backdoor_with_laplace_noise_0.01'\
, 'backdoor_with_laplace_noise_0.001', 'backdoor_with_laplace_noise_0.0001'\
, 'backdoor_with_gaussian_noise_0.1', 'backdoor_with_gaussian_noise_0.01'\
, 'backdoor_with_gaussian_noise_0.001', 'backdoor_with_gaussian_noise_0.0001'\
, 'backdoor_with_gradient_sparsification_95', 'backdoor_with_gradient_sparsification_99'\
, 'backdoor_with_gradient_sparsification_99.5', 'backdoor_with_gradient_sparsification_99.9'\
, 'backdoor_with_one_hidden_layer', 'backdoor_with_two_hidden_layer'\
, 'backdoor_with_three_hidden_layer', 'backdoor_with_four_hidden_layer']
result_list = []
for indx in range(len(training_mode_list)):
result_list.append([])
from sklearn import metrics
loss_object = tf.keras.losses.CategoricalCrossentropy()
optimizer = tf.keras.optimizers.SGD()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.CategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.CategoricalAccuracy(name='test_accuracy')
test_label_accuracy = tf.keras.metrics.CategoricalAccuracy(name='test_label_accuracy')
backdoor_loss = tf.keras.metrics.Mean(name='backdoor_loss')
backdoor_accuracy = tf.keras.metrics.CategoricalAccuracy(name='backdoor_accuracy')
number_of_times = 1
EPOCHS = 50
sample_id_need_copy = 369
text_feat_need_copy = copy.deepcopy(x_train[1][sample_id_need_copy])
y_backdoor[:] = y_train[sample_id_need_copy]
mode_need_train_list = ['normal']
# mode_need_train_list = training_mode_list
for indx in range(len(training_mode_list)):
if training_mode_list[indx] in mode_need_train_list:
result_list[indx] = []
for t in range(number_of_times):
for indx in range(len(training_mode_list)):
training_mode = training_mode_list[indx]
if training_mode not in mode_need_train_list:
continue
passive_model_image = VFLPassiveModel()
passive_model_text = VFLPassiveModel()
if 'two_hidden_layer' in training_mode:
active_model = VFLActiveModelWithTwoLayer()
elif 'three_hidden_layer' in training_mode:
active_model = VFLActiveModelWithThreeLayer()
elif 'four_hidden_layer' in training_mode:
active_model = VFLActiveModelWithFourLayer()
else:
active_model = VFLActiveModelWithOneLayer()
print('training_mode = ', training_mode)
acc_train = []
acc_test = []
acc_test_label = [[], [], [], [], [], []]
acc_backdoor = []
loss_train = []
loss_test = []
loss_backdoor = []
active_image_gradients_res = None
active_text_gradients_res = None
has_poison_grad = False
for epoch in range(EPOCHS):
# Batch and shuffle the data
train_ds = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(65535).batch(64)
# For each batch of images and labels
number_of_poison = 0
gradient_record_list = []
for (images, texts), labels in train_ds:
need_copy = np.min((texts == text_feat_need_copy).numpy(), axis=1)
with tf.GradientTape() as passive_tape:
# passive_model sends passive_output to active_model
passive_image_output = passive_model_image(images)
poisoned_text_output = passive_text_output = passive_model_text(texts)
if 'backdoor' in training_mode:
need_poison = (texts.numpy()[:,-1] == 1)
if np.sum(need_poison) > 0:
if has_poison_grad:
poisoned_text_output = get_poisoned_matrix(passive_text_output, need_poison, poison_grad, 0)
with tf.GradientTape() as active_tape:
active_tape.watch(passive_image_output)
active_tape.watch(poisoned_text_output)
active_output = active_model([passive_image_output, poisoned_text_output])
loss = loss_object(labels, active_output)
# active_model sends passive_output_gradients back to passive_model
[active_image_gradients, active_text_gradients, active_model_gradients] = \
active_tape.gradient(loss, [passive_image_output, poisoned_text_output, active_model.trainable_variables])
optimizer.apply_gradients(zip(active_model_gradients, active_model.trainable_variables))
location = 0.0
threshold = 5
if 'laplace' in training_mode:
scale = float(training_mode.split('_')[-1])
active_image_gradients = tf.clip_by_value(active_image_gradients, -threshold, threshold)
active_text_gradients = tf.clip_by_value(active_text_gradients, -threshold, threshold)
active_image_gradients += np.random.laplace(location, scale, active_image_gradients.numpy().shape)
active_text_gradients += np.random.laplace(location, scale, active_text_gradients.numpy().shape)
if 'gaussian' in training_mode:
scale = float(training_mode.split('_')[-1])
active_image_gradients = tf.clip_by_value(active_image_gradients, -threshold, threshold)
active_text_gradients = tf.clip_by_value(active_text_gradients, -threshold, threshold)
active_image_gradients += np.random.normal(location, scale, active_image_gradients.numpy().shape)
active_text_gradients += np.random.normal(location, scale, active_text_gradients.numpy().shape)
if 'sparsification' in training_mode:
percent = float(training_mode.split('_')[-1])
if active_image_gradients_res is not None and \
active_image_gradients.shape[0] == active_image_gradients_res.shape[0]:
# print(active_image_gradients.shape, active_image_gradients_res.shape)
active_image_gradients = active_image_gradients + active_image_gradients_res
if active_text_gradients_res is not None and \
active_text_gradients.shape[0] == active_text_gradients_res.shape[0]:
# print(active_text_gradients.shape, active_text_gradients_res.shape)
active_text_gradients = active_text_gradients + active_text_gradients_res
image_thr = np.percentile(np.abs(active_image_gradients.numpy()), percent)
text_thr = np.percentile(np.abs(active_text_gradients.numpy()), percent)
image_mask = np.abs(active_image_gradients.numpy()) < image_thr
text_mask = np.abs(active_text_gradients.numpy()) < text_thr
active_image_gradients_res = np.multiply(active_image_gradients.numpy(), image_mask)
active_text_gradients_res = np.multiply(active_text_gradients.numpy(), text_mask)
active_image_gradients -= active_image_gradients_res
active_text_gradients -= active_text_gradients_res
if np.sum(need_copy) > 0:
poison_grad = copy_grad(active_text_gradients, need_copy)
has_poison_grad = True
print('need_copy')
#print(active_output[need_copy])
#print(labels[need_copy])
elif has_poison_grad == False:
poison_grad = active_text_gradients.numpy()[0]*0
has_poison_grad = True
if 'backdoor' in training_mode:
need_poison = (texts.numpy()[:,-1] == 1)
if np.sum(need_poison) > 0:
if has_poison_grad:
number_of_poison += np.sum(need_poison)
active_text_gradients = get_poisoned_matrix(active_text_gradients, need_poison, poison_grad, 1)  # amplify_rate of 1 (plain replacement) assumed; the original call omitted this argument
passive_image_loss = tf.multiply(passive_image_output, active_image_gradients.numpy())
passive_text_loss = tf.multiply(passive_text_output, active_text_gradients.numpy())
[passive_image_gradients, passive_text_gradients] = \
passive_tape.gradient([passive_image_loss, passive_text_loss], \
[passive_model_image.trainable_variables, passive_model_text.trainable_variables])
optimizer.apply_gradients(zip(passive_image_gradients, passive_model_image.trainable_variables))
optimizer.apply_gradients(zip(passive_text_gradients, passive_model_text.trainable_variables))
train_loss(loss)
train_accuracy(labels, active_output)
passive_output = [passive_model_image(image_backdoor), passive_model_text(text_backdoor)]
active_output = active_model(passive_output)
backdoor_loss.reset_states()
backdoor_accuracy.reset_states()
backdoor_loss(loss_object(y_backdoor, active_output))
backdoor_acc = backdoor_accuracy(y_backdoor, active_output)
for (test_images, test_texts), test_labels in test_ds:
passive_output = [passive_model_image(test_images), passive_model_text(test_texts)]
active_output = active_model(passive_output)
t_loss = loss_object(test_labels, active_output)
test_loss(t_loss)
test_accuracy(test_labels, active_output)
label_test = np.argmax(y_test, axis=1)
for label_val in range(class_num):
(image_test, text_test) = x_test
image_val = image_test[label_test==label_val]
text_val = text_test[label_test==label_val]
#print(image_val.shape, text_val.shape)
y_val = y_test[label_test==label_val]
passive_output = [passive_model_image(image_val), passive_model_text(text_val)]
active_output = active_model(passive_output)
test_label_accuracy.reset_states()
tl_acc = test_label_accuracy(y_val, active_output)
acc_test_label[label_val].append(tl_acc.numpy())
acc_test_label[class_num].append(backdoor_accuracy.result())
acc_backdoor.append(backdoor_accuracy.result())
loss_train.append(train_loss.result())
loss_test.append(test_loss.result())
loss_backdoor.append(backdoor_loss.result())
template = 'Epoch {}, Poisoned {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}, Backdoor Accuracy: {}'
print(template.format(epoch+1,
number_of_poison,
train_loss.result(),
train_accuracy.result()*100,
test_loss.result(),
test_accuracy.result()*100,
backdoor_accuracy.result()*100))
acc_train.append(train_accuracy.result())
acc_test.append(test_accuracy.result())
# Reset the metrics for the next epoch
train_loss.reset_states()
train_accuracy.reset_states()
test_loss.reset_states()
test_accuracy.reset_states()
for sub_indx in range(EPOCHS):
acc_test[sub_indx] /= number_of_times
acc_backdoor[sub_indx] /= number_of_times
loss_train[sub_indx] /= number_of_times
loss_test[sub_indx] /= number_of_times
loss_backdoor[sub_indx] /= number_of_times
# if len(result_list[indx]) != 0:
# for sub_indx in range(EPOCHS):
# acc_test[sub_indx] += result_list[indx][1][sub_indx]
# acc_backdoor[sub_indx] += result_list[indx][3][sub_indx]
# loss_test[sub_indx] += result_list[indx][4][sub_indx]
# loss_backdoor[sub_indx] += result_list[indx][5][sub_indx]
result_list[indx] = [acc_train, acc_test, acc_test_label, acc_backdoor, loss_train, loss_test, loss_backdoor]
import matplotlib.pyplot as plt
figure_name_lst = ['exp3_normal_backdoor_task_accuracy.png', 'exp3_laplace_backdoor_task_accuracy.png',\
'exp3_gaussian_backdoor_task_accuracy.png', 'exp3_sparsification_backdoor_task_accuracy.png',\
'exp3_normal_main_task_accuracy.png', 'exp3_laplace_main_task_accuracy.png',\
'exp3_gaussian_main_task_accuracy.png', 'exp3_sparsification_main_task_accuracy.png',\
'exp3_normal_main_task_loss.png', 'exp3_laplace_main_task_loss.png', \
'exp3_gaussian_main_task_loss.png', 'exp3_sparsification_main_task_loss.png']
label_show_lst = [['normal', 'backdoor'], ['backdoor_with_laplace_noise_0.1', 'backdoor_with_laplace_noise_0.01'\
, 'backdoor_with_laplace_noise_0.001', 'backdoor_with_laplace_noise_0.0001']\
,['backdoor_with_gaussian_noise_0.1', 'backdoor_with_gaussian_noise_0.01'\
, 'backdoor_with_gaussian_noise_0.001', 'backdoor_with_gaussian_noise_0.0001']\
,['backdoor_with_gradient_sparsification_95', 'backdoor_with_gradient_sparsification_99'\
, 'backdoor_with_gradient_sparsification_99.5', 'backdoor_with_gradient_sparsification_99.9']]
for (figure_name, label_show) in zip(figure_name_lst, label_show_lst*3):
data_size = EPOCHS
x = list(range(1, data_size+1))
label_lst = []
l_lst = []
for indx in range(len(training_mode_list)):
label = training_mode_list[indx]
if label in label_show and len(result_list[indx]) > 0:
[acc_train, acc_test, acc_test_label, acc_backdoor, loss_train, loss_test, loss_backdoor] = result_list[indx]
if 'backdoor_with_' in label:
label_legend = label[len('backdoor_with_'):]
else:
label_legend = label + ' training'
if 'backdoor_task_accuracy' in figure_name:
l, = plt.plot(x, acc_backdoor[:data_size])
l_lst.append(l)
label_lst.append(label_legend)
plt.ylabel('backdoor task accuracy')
elif 'main_task_accuracy' in figure_name:
l, = plt.plot(x, acc_test[:data_size])
l_lst.append(l)
label_lst.append(label_legend)
plt.ylabel('main task accuracy')
elif 'main_task_loss' in figure_name:
l, = plt.plot(x, loss_train[:data_size])
l_lst.append(l)
label_lst.append(label_legend+'_train')
l, = plt.plot(x, loss_test[:data_size])
l_lst.append(l)
label_lst.append(label_legend+'_test')
plt.ylabel('main task loss')
plt.legend(l_lst, label_lst, loc = 'best')
# plt.ylim(0, 1.1)
plt.xlabel('number of epoch')
plt.savefig(figure_name)
plt.show()
import matplotlib.pyplot as plt
figure_name_lst = ['exp3_laplace_backdoor_task_accuracy.png', 'exp3_gaussian_backdoor_task_accuracy.png',\
'exp3_sparsification_backdoor_task_accuracy.png', 'exp3_multi-layer_backdoor_task_accuracy.png']
label_show_lst = [[ 'laplace_noise_0.1', 'laplace_noise_0.075'\
, 'laplace_noise_0.05', 'laplace_noise_0.025']\
,['gaussian_noise_0.1', 'gaussian_noise_0.075'\
, 'gaussian_noise_0.05', 'gaussian_noise_0.025']\
,['gradient_sparsification_95', 'gradient_sparsification_99'\
, 'gradient_sparsification_99.5', 'gradient_sparsification_99.9']\
,['one_hidden_layer', 'two_hidden_layer'\
, 'three_hidden_layer', 'four_hidden_layer']]
for (figure_name, label_show) in zip(figure_name_lst, label_show_lst):
data_size = EPOCHS
x = list(range(data_size))
label_lst = []
l_lst = []
for indx in range(len(training_mode_list)):
label = training_mode_list[indx][14:]
if label in label_show and len(result_list[indx]) == 4:
[acc_train, acc_test, acc_test_label, acc_backdoor] = result_list[indx]
l, = plt.plot(x, acc_backdoor[:data_size])
l_lst.append(l)
label_lst.append(label)
plt.legend(l_lst, label_lst, loc = 'best')
plt.ylim(0, 1.0)
plt.xlabel('number of epoch')
plt.ylabel('backdoor task accuracy')
plt.savefig(figure_name)
plt.show()
import matplotlib.pyplot as plt
figure_name_lst = ['exp3_laplace_main_task_accuracy.png', 'exp3_gaussian_main_task_accuracy.png',\
'exp3_sparsification_main_task_accuracy.png', 'exp3_multi-layer_main_task_accuracy.png']
label_show_lst = [[ 'laplace_noise_0.1', 'laplace_noise_0.075'\
, 'laplace_noise_0.05', 'laplace_noise_0.025']\
,['gaussian_noise_0.1', 'gaussian_noise_0.075'\
, 'gaussian_noise_0.05', 'gaussian_noise_0.025']\
,['gradient_sparsification_95', 'gradient_sparsification_99'\
, 'gradient_sparsification_99.5', 'gradient_sparsification_99.9']\
,['one_hidden_layer', 'two_hidden_layer'\
, 'three_hidden_layer', 'four_hidden_layer']]
for (figure_name, label_show) in zip(figure_name_lst, label_show_lst):
data_size = EPOCHS
x = list(range(data_size))
label_lst = []
l_lst = []
for indx in range(len(training_mode_list)):
label = training_mode_list[indx][14:]
if label in label_show and len(result_list[indx]) == 4:
[acc_train, acc_test, acc_test_label, acc_backdoor] = result_list[indx]
l, = plt.plot(x, acc_test[:data_size])
l_lst.append(l)
label_lst.append(label)
plt.legend(l_lst, label_lst, loc = 'best')
plt.ylim(0, 1.0)
plt.xlabel('number of epoch')
plt.ylabel('main task accuracy')
plt.savefig(figure_name)
plt.show()
from sklearn import metrics
training_mode_list = ['normal', 'A_backdoor_with_laplace_noise', 'A_backdoor_with_gaussian_noise', 'A_backdoor_with_gradient_sparsification', 'A_backdoor']
loss_object = tf.keras.losses.CategoricalCrossentropy()
optimizer = tf.keras.optimizers.SGD()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.CategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.CategoricalAccuracy(name='test_accuracy')
test_label_accuracy = tf.keras.metrics.CategoricalAccuracy(name='test_label_accuracy')
backdoor_loss = tf.keras.metrics.Mean(name='backdoor_loss')
backdoor_accuracy = tf.keras.metrics.CategoricalAccuracy(name='backdoor_accuracy')
EPOCHS = 50
sample_id_need_copy = 369
text_feat_need_copy = copy.deepcopy(x_train[1][sample_id_need_copy])
y_backdoor[:] = y_train[sample_id_need_copy]
for training_mode in training_mode_list[3:4]:
passive_model_image = VFLPassiveModel()
passive_model_text = VFLPassiveModel()
active_model = VFLActiveModelWithOneLayer()
print('training_mode = ', training_mode)
if 'normal' == training_mode:
normal_acc_train = []
normal_acc_test = []
normal_acc_test_label = [[], [], [], [], [], []]
normal_acc_backdoor = []
if 'A_backdoor' == training_mode:
A_acc_train = []
A_acc_test = []
A_acc_test_label = [[], [], [], [], [], []]
A_acc_backdoor = []
if 'A_backdoor_with_laplace_noise' == training_mode:
laplace_acc_train = []
laplace_acc_test = []
laplace_acc_test_label = [[], [], [], [], [], []]
laplace_acc_backdoor = []
if 'A_backdoor_with_gaussian_noise' == training_mode:
gaussian_acc_train = []
gaussian_acc_test = []
gaussian_acc_test_label = [[], [], [], [], [], []]
gaussian_acc_backdoor = []
if 'A_backdoor_with_gradient_sparsification' == training_mode:
sparsification_acc_train = []
sparsification_acc_test = []
sparsification_acc_test_label = [[], [], [], [], [], []]
sparsification_acc_backdoor = []
active_image_gradients_res = None
active_text_gradients_res = None
has_poison_grad = False
for epoch in range(EPOCHS):
# Batch and shuffle the data
train_ds = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(65535).batch(64)
# For each batch of images and labels
number_of_poison = 0
gradient_record_list = []
for (images, texts), labels in train_ds:
need_copy = np.min((texts == text_feat_need_copy).numpy(), axis=1)
with tf.GradientTape() as passive_tape:
# passive_model sends passive_output to active_model
passive_image_output = passive_model_image(images)
poisoned_text_output = passive_text_output = passive_model_text(texts)
if 'A_backdoor' in training_mode:
need_poison = (texts[:,-1] == 1).numpy()
if np.sum(need_poison) > 0:
if has_poison_grad:
poisoned_text_output = get_poisoned_matrix(passive_text_output, need_poison, poison_grad)
with tf.GradientTape() as active_tape:
active_tape.watch(passive_image_output)
active_tape.watch(poisoned_text_output)
active_output = active_model([passive_image_output, poisoned_text_output])
loss = loss_object(labels, active_output)
# active_model sends passive_output_gradients back to passive_model
[active_image_gradients, active_text_gradients, active_model_gradients] = \
active_tape.gradient(loss, [passive_image_output, poisoned_text_output, active_model.trainable_variables])
optimizer.apply_gradients(zip(active_model_gradients, active_model.trainable_variables))
location = 0.0
scale = 0.05
threshold = 1e9
if 'laplace' in training_mode:
active_image_gradients = tf.clip_by_value(active_image_gradients, -threshold, threshold)
active_text_gradients = tf.clip_by_value(active_text_gradients, -threshold, threshold)
active_image_gradients += np.random.laplace(location, scale, active_image_gradients.numpy().shape)
active_text_gradients += np.random.laplace(location, scale, active_text_gradients.numpy().shape)
if 'gaussian' in training_mode:
active_image_gradients = tf.clip_by_value(active_image_gradients, -threshold, threshold)
active_text_gradients = tf.clip_by_value(active_text_gradients, -threshold, threshold)
active_image_gradients += np.random.normal(location, scale, active_image_gradients.numpy().shape)
active_text_gradients += np.random.normal(location, scale, active_text_gradients.numpy().shape)
if 'sparsification' in training_mode:
percent = 99.0
if active_image_gradients_res is not None and \
active_image_gradients.shape[0] == active_image_gradients_res.shape[0]:
# print(active_image_gradients.shape, active_image_gradients_res.shape)
active_image_gradients = active_image_gradients + active_image_gradients_res
if active_text_gradients_res is not None and \
active_text_gradients.shape[0] == active_text_gradients_res.shape[0]:
# print(active_text_gradients.shape, active_text_gradients_res.shape)
active_text_gradients = active_text_gradients + active_text_gradients_res
image_thr = np.percentile(np.abs(active_image_gradients.numpy()), percent)
text_thr = np.percentile(np.abs(active_text_gradients.numpy()), percent)
image_mask = np.abs(active_image_gradients.numpy()) < image_thr
text_mask = np.abs(active_text_gradients.numpy()) < text_thr
active_image_gradients_res = np.multiply(active_image_gradients.numpy(), image_mask)
active_text_gradients_res = np.multiply(active_text_gradients.numpy(), text_mask)
active_image_gradients -= active_image_gradients_res
active_text_gradients -= active_text_gradients_res
if np.sum(need_copy) > 0:
poison_grad = copy_grad(active_text_gradients, need_copy)
has_poison_grad = True
print('need_copy')
#print(active_output[need_copy])
#print(labels[need_copy])
elif has_poison_grad == False:
poison_grad = active_text_gradients.numpy()[0]*0
has_poison_grad = True
if 'A_backdoor' in training_mode:
need_poison = (texts[:,-1] == 1).numpy()
if np.sum(need_poison) > 0:
if has_poison_grad:
number_of_poison += np.sum(need_poison)
active_text_gradients = get_poisoned_matrix(active_text_gradients, need_poison, poison_grad)
passive_image_loss = tf.multiply(passive_image_output, active_image_gradients.numpy())
passive_text_loss = tf.multiply(passive_text_output, active_text_gradients.numpy())
[passive_image_gradients, passive_text_gradients] = \
passive_tape.gradient([passive_image_loss, passive_text_loss], \
[passive_model_image.trainable_variables, passive_model_text.trainable_variables])
optimizer.apply_gradients(zip(passive_image_gradients, passive_model_image.trainable_variables))
optimizer.apply_gradients(zip(passive_text_gradients, passive_model_text.trainable_variables))
train_loss(loss)
train_accuracy(labels, active_output)
passive_output = [passive_model_image(image_backdoor), passive_model_text(text_backdoor)]
active_output = active_model(passive_output)
backdoor_loss.reset_states()
backdoor_accuracy.reset_states()
backdoor_loss(loss_object(y_backdoor, active_output))
backdoor_acc = backdoor_accuracy(y_backdoor, active_output)
if 'normal' == training_mode:
normal_acc_backdoor.append(backdoor_acc.numpy())
if 'A_backdoor' == training_mode:
A_acc_backdoor.append(backdoor_acc.numpy())
if 'A_backdoor_with_laplace_noise' == training_mode:
laplace_acc_backdoor.append(backdoor_acc.numpy())
if 'A_backdoor_with_gaussian_noise' == training_mode:
gaussian_acc_backdoor.append(backdoor_acc.numpy())
if 'A_backdoor_with_gradient_sparsification' == training_mode:
sparsification_acc_backdoor.append(backdoor_acc.numpy())
for (test_images, test_texts), test_labels in test_ds:
passive_output = [passive_model_image(test_images), passive_model_text(test_texts)]
active_output = active_model(passive_output)
t_loss = loss_object(test_labels, active_output)
test_loss(t_loss)
test_accuracy(test_labels, active_output)
label_test = np.argmax(y_test, axis=1)
for label_val in range(class_num):
(image_test, text_test) = x_test
image_val = image_test[label_test==label_val]
text_val = text_test[label_test==label_val]
#print(image_val.shape, text_val.shape)
y_val = y_test[label_test==label_val]
passive_output = [passive_model_image(image_val), passive_model_text(text_val)]
active_output = active_model(passive_output)
test_label_accuracy.reset_states()
tl_acc = test_label_accuracy(y_val, active_output)
if 'normal' == training_mode:
normal_acc_test_label[label_val].append(tl_acc.numpy())
if 'A_backdoor' == training_mode:
A_acc_test_label[label_val].append(tl_acc.numpy())
if 'A_backdoor_with_laplace_noise' == training_mode:
laplace_acc_test_label[label_val].append(tl_acc.numpy())
if 'A_backdoor_with_gaussian_noise' == training_mode:
gaussian_acc_test_label[label_val].append(tl_acc.numpy())
if 'A_backdoor_with_gradient_sparsification' == training_mode:
sparsification_acc_test_label[label_val].append(tl_acc.numpy())
if 'normal' == training_mode:
normal_acc_test_label[class_num].append(backdoor_acc.numpy())
if 'A_backdoor' == training_mode:
A_acc_test_label[class_num].append(backdoor_acc.numpy())
if 'A_backdoor_with_laplace_noise' == training_mode:
laplace_acc_test_label[class_num].append(backdoor_acc.numpy())
if 'A_backdoor_with_gaussian_noise' == training_mode:
gaussian_acc_test_label[class_num].append(backdoor_acc.numpy())
if 'A_backdoor_with_gradient_sparsification' == training_mode:
sparsification_acc_test_label[class_num].append(backdoor_acc.numpy())
template = 'Epoch {}, Poisoned {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}, Backdoor Accuracy: {}'
print(template.format(epoch+1,
number_of_poison,
train_loss.result(),
train_accuracy.result()*100,
test_loss.result(),
test_accuracy.result()*100,
backdoor_accuracy.result()*100))
if 'normal' == training_mode:
normal_acc_train.append(train_accuracy.result())
normal_acc_test.append(test_accuracy.result())
if 'A_backdoor' == training_mode:
A_acc_train.append(train_accuracy.result())
A_acc_test.append(test_accuracy.result())
if 'A_backdoor_with_laplace_noise' == training_mode:
laplace_acc_train.append(train_accuracy.result())
laplace_acc_test.append(test_accuracy.result())
if 'A_backdoor_with_gaussian_noise' == training_mode:
gaussian_acc_train.append(train_accuracy.result())
gaussian_acc_test.append(test_accuracy.result())
if 'A_backdoor_with_gradient_sparsification' == training_mode:
sparsification_acc_train.append(train_accuracy.result())
sparsification_acc_test.append(test_accuracy.result())
# Reset the metrics for the next epoch
train_loss.reset_states()
train_accuracy.reset_states()
test_loss.reset_states()
test_accuracy.reset_states()
import matplotlib.pyplot as plt
data_size = 40
x = list(range(data_size))
label_lst = ['normal', 'backdoor', 'backdoor_with_laplace_noise', 'backdoor_with_gaussian_noise', 'backdoor_with_gradient_sparsification']
l_lst = []
for label in label_lst:
if label == 'normal':
l, = plt.plot(x, normal_acc_test[:data_size])
if label == 'backdoor':
l, = plt.plot(x, A_acc_test[:data_size])
if label == 'backdoor_with_laplace_noise':
l, = plt.plot(x, laplace_acc_test[:data_size])
if label == 'backdoor_with_gaussian_noise':
l, = plt.plot(x, gaussian_acc_test[:data_size])
if label == 'backdoor_with_gradient_sparsification':
l, = plt.plot(x, sparsification_acc_test[:data_size])
l_lst.append(l)
plt.legend(l_lst, label_lst, loc = 'best')
plt.ylim(0.2, 1.0)
plt.xlabel('number of epoch')
plt.ylabel('validation accuracy')
plt.savefig('exp2_validation_accuracy.png')
plt.show()
data_size = 40
x = list(range(data_size))
label_lst = []
l_lst = []
for label_val in range(class_num+1):
l, = plt.plot(x, A_acc_test_label[label_val][:data_size])
if label_val == class_num:
label_lst.append('backdoor task')
else:
label_lst.append('class_'+str(label_val))
l_lst.append(l)
plt.legend(l_lst, label_lst, loc = 'best')
plt.xlabel('number of epoch')
plt.ylabel('validation accuracy')
plt.savefig('exp2_class_validation_accuracy.png')
plt.show()
data_size = 40
x = list(range(data_size))
#print(len(normal_acc_backdoor), len(A_acc_backdoor), len(laplace_acc_backdoor), len(gaussian_acc_backdoor))
x = list(range(data_size))
label_lst = ['normal', 'backdoor', 'backdoor_with_laplace_noise', 'backdoor_with_gaussian_noise']
l_lst = []
for label in label_lst:
if label == 'normal':
l, = plt.plot(x, normal_acc_test_label[class_num][:data_size])
if label == 'backdoor':
l, = plt.plot(x, A_acc_test_label[class_num][:data_size])
if label == 'backdoor_with_laplace_noise':
l, = plt.plot(x, laplace_acc_test_label[class_num][:data_size])
if label == 'backdoor_with_gaussian_noise':
l, = plt.plot(x, gaussian_acc_test_label[class_num][:data_size])
l_lst.append(l)
plt.legend(l_lst, label_lst, loc = 'best')
plt.xlabel('number of epoch')
plt.ylabel('backdoor task accuracy')
plt.savefig('exp2_backdoor_data_accuracy_epoch.png')
plt.show()
import matplotlib.pyplot as plt
data_size = len(gaussian_acc_backdoor)-10
print(data_size)
#print(len(normal_acc_backdoor), len(A_acc_backdoor), len(laplace_acc_backdoor), len(gaussian_acc_backdoor))
x = list(range(data_size))
label_lst = ['backdoor_with_gaussian_noise', 'backdoor_with_laplace_noise', 'backdoor_with_gradient_sparsification', 'normal', 'backdoor']
l_lst = []
for label in label_lst:
if label == 'normal':
l, = plt.plot(x, normal_acc_backdoor[:data_size])
if label == 'backdoor':
l, = plt.plot(x, A_acc_backdoor[:data_size])
if label == 'backdoor_with_laplace_noise':
l, = plt.plot(x, laplace_acc_backdoor[:data_size])
if label == 'backdoor_with_gaussian_noise':
l, = plt.plot(x, gaussian_acc_backdoor[:data_size])
if label == 'backdoor_with_gradient_sparsification':
l, = plt.plot(x, sparsification_acc_backdoor[:data_size])
l_lst.append(l)
#plt.legend([l1], ['AB_attack_acc_test'], loc = 'center right')
plt.legend(l_lst, label_lst, loc = 'best')
plt.xlabel('number of batch')
plt.ylabel('backdoor task accuracy')
plt.savefig('exp2_backdoor_data_accuracy.png')
plt.show()
import matplotlib.pyplot as plt
import numpy as np
def show_target_predict(n = 5):
image_target = x_train[0][sample_id_need_copy:sample_id_need_copy+1,:]
text_target = x_train[1][sample_id_need_copy:sample_id_need_copy+1,:]
passive_output = [passive_model_image(image_target), passive_model_text(text_target)]
active_output = active_model(passive_output)
output_distribution = np.average(active_output, axis=0)
print(output_distribution)
X = np.arange(n)
plt.bar(X, output_distribution)
plt.show()
def show_semantic_label(n = 5):
output_distribution = np.sum(y_backdoor, axis=0)
print(output_distribution)
X = np.arange(n)
plt.bar(X, output_distribution)
plt.show()
def show_semantic_predict(n = 5):
print(image_test.shape, text_test.shape)
print(image_backdoor.shape, text_backdoor.shape)
passive_output = [passive_model_image(image_backdoor), passive_model_text(text_backdoor)]
active_output = active_model(passive_output)
output_distribution = np.average(active_output, axis=0)
print(output_distribution)
X = np.arange(n)
plt.bar(X, output_distribution)
plt.savefig('exp2_poison_predict.png')
plt.show()
show_target_predict()
show_semantic_label()
show_semantic_predict()
class MyLinearModel(Model):
def __init__(self):
super(MyLinearModel, self).__init__()
self.flatten = Flatten()
self.d1 = Dense(class_num, activation='softmax', name="dense1")
def call(self, x):
x = self.flatten(x)
return self.d1(x)
model = MyLinearModel()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
EPOCHS = 5
for epoch in range(EPOCHS):
# For each batch of images and labels
for images, labels in train_ds:
with tf.GradientTape() as tape:
predictions = model(images)
loss = loss_object(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
print(gradients[0].shape, gradients[1].shape)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_loss(loss)
train_accuracy(labels, predictions)
for test_images, test_labels in test_ds:
predictions = model(test_images)
t_loss = loss_object(test_labels, predictions)
test_loss(t_loss)
test_accuracy(test_labels, predictions)
template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
print(template.format(epoch+1,
train_loss.result(),
train_accuracy.result()*100,
test_loss.result(),
test_accuracy.result()*100))
# Reset the metrics for the next epoch
train_loss.reset_states()
train_accuracy.reset_states()
test_loss.reset_states()
test_accuracy.reset_states()
with tf.GradientTape() as tape:
a = tf.constant(2.)
b = tf.constant(1.)
tape.watch(a)
tape.watch(b)
c = tf.multiply(a, b)
g = tape.gradient(c, [a, b])
print(g)
start = 341
print(start + np.argmax(y_train[start:,0]==1))
passive_text_output = passive_model_text(train_X_text)
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, init='pca', random_state=0, verbose=1)
tsne.fit_transform(passive_text_output.numpy()) # reduce the text embeddings to two dimensions
#a=tsne.fit_transform(data_zs) #a is an array, equivalent to tsne.embedding_ used below
import pandas as pd
tsne2 = pd.DataFrame(tsne.embedding_) # convert the 2-D embedding into a DataFrame for plotting
import matplotlib.pyplot as plt
label_train = np.argmax(train_Y, axis=1)
l_lst = []
c_lst = []
oo = passive_text_output.numpy()
print(oo)
for label_show in range(class_num-1, -1, -1):
d = tsne2[label_train==label_show]
l = plt.scatter(d[0],d[1], s=2)
l_lst.append(l)
c_lst.append('class ' + str(label_show))
plt.legend(l_lst, c_lst, loc = 'best')
plt.show()
[acc_train, acc_test, acc_test_label, acc_backdoor, loss_train, loss_test, loss_backdoor] = result_list[1]
print(acc_test[-1])
# **KNN Classification**
The k-nearest neighbors (KNN) algorithm is a simple, supervised machine learning algorithm that can be used for both:
- classification and
- regression problems.
KNN works by finding the distances between a query and all the examples in the data, selecting the specified number of examples (K) closest to the query, and then voting for the most frequent label (in the case of classification) or averaging the labels (in the case of regression).
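For intuition, the neighbor-voting step can be sketched in a few lines of plain NumPy; the points, labels and value of K below are made up purely for illustration and are not part of the movie dataset.
```
# Tiny illustration of the KNN voting step on made-up 1-D data (values are illustrative only).
import numpy as np
from collections import Counter

X_known = np.array([1.0, 2.0, 3.0, 10.0, 11.0])            # stored training examples
y_known = np.array(["low", "low", "low", "high", "high"])  # their labels
query, K = 2.5, 3

distances = np.abs(X_known - query)          # distance from the query to every stored example
nearest = np.argsort(distances)[:K]          # indices of the K closest examples
prediction = Counter(y_known[nearest]).most_common(1)[0][0]  # majority vote among the neighbors
print(prediction)                            # -> "low"
```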
In this project we will classify movies based on various features using the following steps:
- Step 1: Data Preprocessing & Exploration.
- Step 2: Data Visualization.
- Step 3: Data Training & Model Creation.
- Step 4: Performance Evaluation.
We will first preprocess and explore the data and check for any null or missing values; if any are found, we will clean the data.
Then we will visualize it for a better understanding and draw insights from it.
Next we will prepare the data for training, i.e. split it into training and testing sets.
We will then import and initialize the classifier, fit it on the training data, and make predictions for the test data.
At last, we will evaluate the performance of the algorithm with an error check and an accuracy check.
- For the dataset being used in this project [click here](https://www.kaggle.com/balakrishcodes/others?select=Movie_classification.csv)
### **Step 1: Data Preprocessing & Exploration.**
```
import pandas as pd
data=pd.read_csv('/content/Movie_classification.csv')
data
data.head()
data.tail()
data.shape
data.columns
data.info()
data.describe()
data.isnull().sum()
data['Time_taken'].isnull().sum()
data['Time_taken'].mean()
data['Time_taken'].fillna(157, inplace=True) # fill missing values with the approximate mean, as a number rather than a string
data['Time_taken'].isnull().sum()
#Again check for null values
data.isnull().sum()
data.isnull().sum().sum()
```
There are no null or missing values i.e. we can now proceed with further steps.
### **Step 2: Data Visualization**
```
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.boxplot(x=data['Marketing expense'])
sns.boxplot(x=data['Production expense'])
sns.boxplot(x=data['Multiplex coverage'])
count=data['3D_available'].value_counts()
plt.figure(figsize=(8,6))
sns.barplot(count.index,count.values, alpha=0.8)
plt.title("3D Available for the movie", fontsize=20)
plt.ylabel('Number', fontsize=15)
plt.xlabel("Response", fontsize=15)
plt.show()
count=data['Avg_age_actors'].value_counts()
plt.figure(figsize=(12,6))
sns.barplot(count.index,count.values, alpha=0.8)
plt.title("Average age of actors of the movie", fontsize=20)
plt.ylabel('Number', fontsize=15)
plt.xlabel("Age", fontsize=15)
plt.show()
count=data['Genre'].value_counts()
plt.figure(figsize=(12,6))
sns.barplot(count.index,count.values, alpha=0.8)
plt.title("Genre of the movie", fontsize=20)
plt.ylabel('Number', fontsize=15)
plt.xlabel("Genre", fontsize=15)
plt.show()
data.info()
data1=data.drop(['3D_available','Time_taken','Genre'],axis=1)
data1.info()
#converting data into int datatype to avoid errors below.
prepareddata=data1.astype(int)
prepareddata.head()
```
### **Step 3: Data Training & Model Creation**
```
# Import train_test_split from sklearn.model_selection
from sklearn.model_selection import train_test_split
# Here, x is the data which will have features for classification and y will have our target.
x=prepareddata.drop(['Start_Tech_Oscar'],axis=1)
y=prepareddata['Start_Tech_Oscar']
# Split data into training data and testing data.
#Ratio used for splitting training and testing data is 8:2 respectively
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2,random_state=100)
```
Model Creation using KNN algorithm:
```
#importing knn model
from sklearn.neighbors import KNeighborsClassifier
clfr=KNeighborsClassifier(n_neighbors=20)
#Fitting data into the model.
clfr.fit(x_train, y_train)
# Making predictions
pred= clfr.predict(x_test)
pred
```
### **Step 4: Performance Evaluation**
```
import numpy as np
from sklearn.metrics import mean_squared_error
print("Model\t\t\t RootMeanSquareError \t\t Accuracy of the model")
print("""KNN \t\t {:.4f} \t \t\t {:.4f}""".format( np.sqrt(mean_squared_error(y_test, pred)), clfr.score(x_train,y_train)))
```
Conclusions drawn:
- For this movie classification project we used the KNN classification algorithm, which is one of the simplest classification algorithms.
- The accuracy of the KNN model (computed on the training set) is 60.40%.
- The root mean square error of the KNN model on the test set is 0.6493.
```
OUT_DIR = '/tmp/'
NUM_WORKERS = 16
BATCH_SIZE = 512
```
# Barebones starter example
### Imports
```
from robustness import model_utils, datasets, train, defaults
from robustness.datasets import CIFAR
import torch as ch
# We use cox (http://github.com/MadryLab/cox) to log, store and analyze
# results. Read more at https://cox.readthedocs.io.
from cox.utils import Parameters
import cox.store
```
### Make dataset and loaders
```
# Hard-coded dataset, architecture, batch size, workers
ds = CIFAR('/tmp/')
m, _ = model_utils.make_and_restore_model(arch='resnet18', dataset=ds)
train_loader, val_loader = ds.make_loaders(batch_size=BATCH_SIZE, workers=NUM_WORKERS)
```
### Make a cox store for logging
```
# Create a cox store for logging
out_store = cox.store.Store(OUT_DIR)
```
### Set up training arguments
```
# Hard-coded base parameters
train_kwargs = {
'out_dir': "train_out",
'adv_train': 1,
'constraint': '2',
'eps': 0.5,
'attack_lr': 0.1,
'attack_steps': 7,
'epochs': 5
}
train_args = Parameters(train_kwargs)
# Fill whatever parameters are missing from the defaults
train_args = defaults.check_and_fill_args(train_args,
defaults.TRAINING_ARGS, CIFAR)
train_args = defaults.check_and_fill_args(train_args,
defaults.PGD_ARGS, CIFAR)
```
### Train Model
```
# Train a model
train.train_model(train_args, m, (train_loader, val_loader), store=out_store)
pass
```
# Customizations
## Custom loss
```
train_crit = ch.nn.CrossEntropyLoss()
def custom_train_loss(logits, targ):
probs = ch.ones_like(logits) * 0.5
logits_to_multiply = ch.bernoulli(probs) * 9 + 1
return train_crit(logits_to_multiply * logits, targ)
adv_crit = ch.nn.CrossEntropyLoss(reduction='none').cuda()
def custom_adv_loss(model, inp, targ):
logits = model(inp)
probs = ch.ones_like(logits) * 0.5
logits_to_multiply = ch.bernoulli(probs) * 9 + 1
new_logits = logits_to_multiply * logits
return adv_crit(new_logits, targ), new_logits
train_args.custom_train_loss = custom_train_loss
train_args.custom_adv_loss = custom_adv_loss
train.train_model(train_args, m, (train_loader, val_loader), store=out_store)
```
## Custom data loaders
### Using LambdaLoader
```
from robustness.loaders import LambdaLoader
def label_noiser(ims, labels):
label_noise = ch.randint_like(labels, high=9)
probs = ch.ones_like(label_noise) * 0.1
labels_to_noise = ch.bernoulli(probs.float()).long()
new_labels = (labels + label_noise * labels_to_noise) % 10
return ims, new_labels
train_loader = LambdaLoader(train_loader, label_noiser)
train.train_model(train_args, m, (train_loader, val_loader), store=out_store)
pass
```
### Using TransformedLoader
```
from robustness.loaders import TransformedLoader
from robustness.data_augmentation import TRAIN_TRANSFORMS_DEFAULT
def make_rand_labels(ims, targs):
new_targs = ch.randint(0, high=10,size=targs.shape).long()
return ims, new_targs
train_loader_transformed = TransformedLoader(train_loader,
make_rand_labels,
TRAIN_TRANSFORMS_DEFAULT(32),
workers=8,
batch_size=BATCH_SIZE,
do_tqdm=True)
train.train_model(train_args, m, (train_loader_transformed, val_loader), store=out_store)
pass
```
## Custom per-iteration logging
```
CUSTOM_SCHEMA = {'iteration': int, 'weight_norm': float }
out_store.add_table('custom', CUSTOM_SCHEMA)
from torch.nn.utils import parameters_to_vector as flatten
def log_norm(mod, it, loop_type, inp, targ):
if loop_type == 'train':
curr_params = flatten(mod.parameters())
log_info_custom = { 'iteration': it,
'weight_norm': ch.norm(curr_params).detach().cpu().numpy() }
out_store['custom'].append_row(log_info_custom)
train_args.iteration_hook = log_norm
train.train_model(train_args, m, (train_loader, val_loader), store=out_store)
pass
```
## Custom architecture
```
from torch import nn
from robustness.model_utils import make_and_restore_model
class MLP(nn.Module):
# Must implement the num_classes argument
def __init__(self, num_classes=10):
super().__init__()
self.fc1 = nn.Linear(32*32*3, 1000)
self.relu1 = nn.ReLU()
self.fc2 = nn.Linear(1000, num_classes)
def forward(self, x, *args, **kwargs):
out = x.view(x.shape[0], -1)
out = self.fc1(out)
out = self.relu1(out)
return self.fc2(out)
new_model = MLP(num_classes=10)
new_model, _ = make_and_restore_model(arch=new_model, dataset=ds)
train.train_model(train_args, new_model, (train_loader, val_loader), store=out_store)
pass
```
# Importing what matters (haha)
```
from freeSpace import *
from collections import defaultdict
import numpy as np
import math
import pandas as pd
from geopy.distance import vincenty
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from geopy.distance import great_circle
from sklearn.neighbors import KNeighborsRegressor
import matplotlib.pyplot as plt
from scipy.interpolate import griddata
```
# Getting the Training Values
```
# ERBs (base stations)
csvErbs = pd.read_csv("erbs.csv")
erbsLidas = csvErbs[['lat','lon']].values
erbs_posicao = dict()
for i in range (1,len(erbsLidas)+1):
erbs_posicao[i] = (erbsLidas[i-1])
#Measurements: training data
csvMed = pd.read_csv("medicoes.csv")
medidas_posicao = csvMed[['lat','lon']].values # values as (lat, lon) pairs
medidas_potencia = csvMed[["RSSI_1","RSSI_2","RSSI_3","RSSI_4","RSSI_5","RSSI_6"]].values
```
# Interpolation and creation of the grid points
```
#ver link https://docs.scipy.org/doc/scipy/reference/interpolate.html
#ver link https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html
resolucao = 40 # grid resolution. Note: it has no relation to meters
# the grid only accepts positive values for the points
# grid intervals
lat_in = 8.08
lat_fim = 8.0605
lon_in = 34.91
lon_fim = 34.887
desloc_lat = (lat_in - lat_fim)/resolucao # latitude step of the grid cells, positive
desloc_lon = (lon_in - lon_fim)/resolucao # longitude step of the grid cells
# grid construction
grid_lat, grid_lon = np.mgrid[lat_fim:lat_in:desloc_lat, lon_fim:lon_in:desloc_lon]
np.shape(grid_lat)
#grid_lon
# taking the known points for the regression, one set per ERB
pontos_teste = -medidas_posicao # the grid only accepts positive coordinates
medidas_ref_erb1 = medidas_potencia[:,0] # reference measurements used to fit the regression model
medidas_ref_erb2 = medidas_potencia[:,1]
medidas_ref_erb3 = medidas_potencia[:,2]
medidas_ref_erb4 = medidas_potencia[:,3]
medidas_ref_erb5 = medidas_potencia[:,4]
medidas_ref_erb6 = medidas_potencia[:,5]
#grid_z0 = griddata(points, values, (grid_x, grid_y), method='nearest')
# interpolation, performed independently for each ERB
grid_z1 = griddata(pontos_teste, medidas_ref_erb1, (grid_lat, grid_lon), method='linear')
grid_z2 = griddata(pontos_teste, medidas_ref_erb2, (grid_lat, grid_lon), method='linear')
grid_z3 = griddata(pontos_teste, medidas_ref_erb3, (grid_lat, grid_lon), method='linear')
grid_z4 = griddata(pontos_teste, medidas_ref_erb4, (grid_lat, grid_lon), method='linear')
grid_z5 = griddata(pontos_teste, medidas_ref_erb5, (grid_lat, grid_lon), method='linear')
grid_z6 = griddata(pontos_teste, medidas_ref_erb6, (grid_lat, grid_lon), method='linear')
#np.shape(medidas_teste)
np.shape(grid_z1)
#grid_z1[0:10][0:10]
#grid_z1
(grid_lat[9][26], grid_lon[9][26])
not(math.isnan(grid_z1[0][0]))
# add the valid interpolated points to the KNN database, creating a new dataframe
# create a list to receive the new coordinates
coord_interpol = []
coord_interpol
for i in range(resolucao):
for j in range(resolucao):
if not(math.isnan(grid_z1[i][j])) and not(math.isnan(grid_z2[i][j])) and not(math.isnan(grid_z3[i][j])) and not(math.isnan(grid_z4[i][j])) and not(math.isnan(grid_z5[i][j])) and not(math.isnan(grid_z6[i][j])):
coord_interpol.append([-grid_lat[i][j] ,-grid_lon[i][j],grid_z1[i][j], grid_z2[i][j], grid_z3[i][j], grid_z4[i][j], grid_z5[i][j], grid_z6[i][j]])
#np.shape(coord_interpol)
#type(coord_interpol)
# create a dataframe and concatenate it with the KNN database
dados_iterpol = pd.DataFrame(coord_interpol,columns=['lat','lon','RSSI_1','RSSI_2','RSSI_3','RSSI_4','RSSI_5','RSSI_6'])
csvMed = pd.concat([csvMed, dados_iterpol])
np.shape(coord_interpol)
csvMed
```
# Building the Model
```
#medidas_potencia.values
medidas_posicao_lat = csvMed['lat'].values
medidas_posicao_lon = csvMed['lon'].values
medidas_potencia = csvMed[["RSSI_1","RSSI_2","RSSI_3","RSSI_4","RSSI_5","RSSI_6"]].values
# model latitude and longitude separately
neigh_lat = KNeighborsRegressor(n_neighbors=10, weights = 'distance')
neigh_lat.fit(medidas_potencia,medidas_posicao_lat )
neigh_lon = KNeighborsRegressor(n_neighbors=10, weights = 'distance')
neigh_lon.fit(medidas_potencia,medidas_posicao_lon )
medidas_potencia[0:5]
```
# Getting the Test Values
```
csvMedTest = pd.read_csv("testLoc.csv")
medidas_posicao_teste = csvMedTest[['lat','lon']].values # values as (lat, lon) pairs
medidas_potencia_teste = csvMedTest[["RSSI_1","RSSI_2","RSSI_3","RSSI_4","RSSI_5","RSSI_6"]].values
np.shape(medidas_posicao_teste)
```
# Making the Prediction
```
predicao_lat = neigh_lat.predict(medidas_potencia_teste)
predicao_lon = neigh_lon.predict(medidas_potencia_teste)
```
# Computing the Error
```
vet_err_lat = []
vet_err_lon = []
for i in range(0,200):
vet_err_lat.append(predicao_lat[i] - medidas_posicao_teste[i][0])
vet_err_lon.append(predicao_lon[i] - medidas_posicao_teste[i][1])
err_geral = []
for i in range(len(predicao_lat)):
err_geral.append(vincenty((predicao_lat[i],predicao_lon[i]), (medidas_posicao_teste[i][0], medidas_posicao_teste[i][1])).kilometers)
print("MEDIA LAT = " + str(np.mean(vet_err_lat)))
print("MEDIA LON = " + str(np.mean(vet_err_lon)))
print("MEDIA KM = " + str(np.mean(err_geral)))
print("STD = " + str(np.std(err_geral)))
plt.plot(range(200),err_geral, range(200),[np.mean(err_geral)for i in range(200)])
plt.show()
plt.clf()
bins = np.linspace(-0.01, 1, 100)
plt.hist(err_geral,bins)
plt.show()
X = medidas_posicao_lat
Y = medidas_posicao_lon
rssi1 = csvMed["RSSI_1"].values
Z = rssi1
plt.clf()
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(X,Y,Z)
plt.show()
plt.clf()
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X,Y,Z)
plt.show()
plt.clf()
fig = plt.figure(figsize=(20,5))
axes = fig.add_axes([0.1,0.1,1,1])
plt.plot(X,Z,X,)
plt.show()
plt.clf()
fig = plt.figure(figsize=(20,5))
axes = fig.add_axes([0.1,0.1,1,1])
plt.plot(Y,Z)
plt.show()
plt.clf()
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(111, projection='3d')
surf = ax.plot_trisurf(X, Y, Z, cmap=cm.jet, linewidth=0)
fig.colorbar(surf)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
```
```
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
import pyperclip
import urllib.request
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
from pprint import pprint
path = r'C:\Users\재욱\Downloads\chromedriver.exe'
driver = webdriver.Chrome(path)
search_url = "https://en.wikipedia.org/wiki/List_of_companies_of_Greece"
driver.get(search_url)
xpath = '//*[@id="mw-content-text"]/div[1]/table[2]/tbody'
division = driver.find_element_by_xpath(xpath)
name_list = division.find_elements_by_tag_name('a')
print(division.text)
table_html = driver.page_source#division.get_attribute('innerHTML')
tables = pd.read_html(table_html)
print(tables[1])
print(tables[1].Name)
greece_company_series = tables[1] # company table parsed from the Wikipedia page
greece_company_series.to_csv("company_list.csv", columns = ['Name'])
df = pd.read_csv("company_list.csv", index_col = 0)
df
target_table = tables[1]
target_table['img_url'] = None # create the img_url column
target_table['file_path'] = None # create the file_path column
for idx, data in target_table.iterrows():
image_path = 'https://en.wikipedia.org/wiki/' + data.Name
target_table.loc[idx, 'img_url'] = image_path
pprint(target_table)
img_folder_path = './greece_company_img/'
os.makedirs(img_folder_path, exist_ok=True)
image_xpath = [
'//*[@id="mw-content-text"]/div[1]/table[1]/tbody',
'//*[@id="mw-content-text"]/div[1]/table[2]/tbody'
]
image_url_list = []
path = r'C:\Users\재욱\Downloads\chromedriver.exe'
driver = webdriver.Chrome(path)
question_mark = 'https://upload.wikimedia.org/wikipedia/en/thumb/9/99/Question_book-new.svg/50px-Question_book-new.svg.png'
exclamation_mark = 'https://upload.wikimedia.org/wikipedia/en/thumb/b/b4/Ambox_important.svg/40px-Ambox_important.svg.png'
for idx, data in target_table.iterrows():
driver.get(data.img_url)
#greece_company_series
for each in image_xpath:
try:
tbody = driver.find_element_by_xpath(each)
images = tbody.find_elements_by_tag_name('img')
# image_url_list.append(images[0].get_attribute('src'))
if (
images[0].get_attribute('src') == question_mark or
images[0].get_attribute('src') == exclamation_mark
):
target_table.loc[idx, 'file_path'] = None
raise ValueError()
target_table.loc[idx, 'file_path'] = images[0].get_attribute('src')
break
except Exception as e:
print(e)
pprint(target_table)
i = 0
for file_path in target_table.file_path:
try:
with urllib.request.urlopen(file_path) as f:
with open(img_folder_path + tables[1].Name[i] +'.jpg','wb') as h:
image = f.read()
h.write(image)
except Exception as e:
print(e)
i+=1
# count of None entries in file_path
count=0
for file_path in target_table.file_path:
if file_path == None:
count+=1
print(count)
```
# ElasticNet with Standard Scaler & Power Transformer
This code template is for regression tasks using ElasticNet, a regularized linear regression technique, together with StandardScaler and the PowerTransformer feature transformation in a single pipeline.
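As a quick preview of what this template assembles, a minimal sketch of such a pipeline might look like the code below; the file name, feature names and target column are placeholders, not part of the template, and should be replaced with your own values.
```
# Minimal sketch of the ElasticNet pipeline described above
# (file name, feature names and target name are placeholders).
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PowerTransformer
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

df = pd.read_csv("your_data.csv")                  # placeholder CSV path
X = df[["feature_1", "feature_2"]]                 # placeholder feature columns
y = df["target"]                                   # placeholder target column
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=12)

# scale -> make features more Gaussian-like -> fit the regularized linear model
model = make_pipeline(StandardScaler(), PowerTransformer(), ElasticNet(random_state=12))
model.fit(X_train, y_train)
print("R^2 on the test split:", model.score(X_test, y_test))
```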
### Required Packages
```
import warnings as wr
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler,PowerTransformer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error, r2_score,mean_absolute_error
wr.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path) #reading file
df.head()
```
### Data Preprocessing
Since most machine learning models in the sklearn library do not handle string categorical data or null values, we have to explicitly replace them. The snippet below defines functions that fill null values (numeric columns with the mean, other columns with the mode) and encode string categorical columns as dummy/one-hot variables.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
plt.figure(figsize = (15, 10))
corr = df.corr()
mask = np.triu(np.ones_like(corr, dtype = bool))
sns.heatmap(corr, mask = mask, linewidths = 1, annot = True, fmt = ".2f")
plt.show()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used to lower the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
#spliting data into X(features) and Y(Target)
X=df[features]
Y=df[target]
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
# we can choose random_state and test_size as per our requirement
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 12) #performing datasplitting
```
### Data Rescaling
StandardScaler standardizes features by removing the mean and scaling to unit variance
The standard score of a sample x is calculated as:
z = (x - u) / s
where u is the mean of the training samples or zero if with_mean=False, and s is the standard deviation of the training samples or one if with_std=False.
Refer [API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) for parameters
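As a quick added illustration (not part of the original template), the snippet below applies StandardScaler to a small toy array; after scaling, each column has mean ≈ 0 and standard deviation ≈ 1.
```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# toy data with very different scales per column
toy = np.array([[1.0, 200.0],
                [2.0, 300.0],
                [3.0, 400.0]])

scaled = StandardScaler().fit_transform(toy)
print(scaled.mean(axis=0))  # approximately [0. 0.]
print(scaled.std(axis=0))   # approximately [1. 1.]
```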
### Feature Transformation
Apply a power transform featurewise to make data more Gaussian-like.
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
Refer [API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html) for parameters
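For intuition, here is a small added example (synthetic data, not from the template) showing PowerTransformer on a right-skewed feature; with the default Yeo-Johnson method and standardization, the transformed values come out roughly zero-mean, unit-variance and much more symmetric.
```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.RandomState(0)
skewed = rng.exponential(scale=2.0, size=(1000, 1))  # right-skewed synthetic feature

pt = PowerTransformer()  # method='yeo-johnson', standardize=True by default
transformed = pt.fit_transform(skewed)

print(round(float(skewed.mean()), 2))       # around 2.0
print(round(float(transformed.mean()), 2))  # around 0.0
print(round(float(transformed.std()), 2))   # around 1.0
```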
### Model
Elastic Net first emerged as a result of critique of Lasso, whose variable selection can be too dependent on the data and thus unstable. The solution is to combine the penalties of Ridge regression and Lasso to get the best of both worlds.
**Features of ElasticNet Regression-**
* It combines the L1 and L2 approaches.
* It performs a more efficient regularization process.
* It has two parameters to be set, λ and α (exposed in scikit-learn as `alpha` and `l1_ratio`); the objective they enter is shown below.
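For reference, the objective minimized by scikit-learn's ElasticNet (as described in its documentation) is

$$\min_{w}\; \frac{1}{2\, n_{\text{samples}}} \lVert y - Xw \rVert_2^2 \;+\; \alpha\,\rho\, \lVert w \rVert_1 \;+\; \frac{\alpha\,(1-\rho)}{2}\, \lVert w \rVert_2^2$$

where α is the `alpha` parameter and ρ is `l1_ratio`.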
#### Model Tuning Parameters
1. alpha : float, default=1.0
> Constant that multiplies the penalty terms. Defaults to 1.0. See the notes for the exact mathematical meaning of this parameter. alpha = 0 is equivalent to an ordinary least square, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised. Given this, you should use the LinearRegression object.
2. l1_ratio : float, default=0.5
> The ElasticNet mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.
3. normalize : bool, default=False
>This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
4. max_iter : int, default=1000
>The maximum number of iterations.
5. tol : float, default=1e-4
>The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
6. selection : {‘cyclic’, ‘random’}, default=’cyclic’
>If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4.
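As a sketch of how these parameters could be set explicitly (the values below are arbitrary, illustrative choices rather than recommendations), they can be passed to ElasticNet inside the same kind of pipeline used in this notebook:
```python
# illustrative (hypothetical) parameter values, reusing the imports and the
# train/test split created in the cells above
tuned_model = make_pipeline(
    StandardScaler(),
    PowerTransformer(),
    ElasticNet(alpha=0.5, l1_ratio=0.3, max_iter=5000,
               tol=1e-4, selection='random', random_state=42)
)
tuned_model.fit(X_train, y_train)
print(tuned_model.score(X_test, y_test))  # R² on the held-out split
```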
```
model = make_pipeline(StandardScaler(),PowerTransformer(), ElasticNet(random_state = 42))
model.fit(X_train, y_train)
```
#### Model Accuracy
For a regression pipeline, the score() method returns the coefficient of determination R² of the prediction on the given test data and labels (the cell below reports it as a percentage).
```
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
#prediction on testing set
prediction=model.predict(X_test)
```
### Model Evaluation
**r2_score:** The r2_score function computes the coefficient of determination, i.e. the proportion of the variance in the target that is explained by the model.
**MAE:** The mean absolute error is the average absolute difference between the true values and the predicted values.
**MSE:** The mean squared error averages the squared differences, penalizing the model more heavily for large errors.
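In formula form (standard definitions, added here for reference, with $y_i$ the true values and $\hat{y}_i$ the predictions):

$$\text{MAE} = \frac{1}{n}\sum_{i=1}^{n} \lvert y_i - \hat{y}_i \rvert, \qquad \text{MSE} = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2, \qquad R^2 = 1 - \frac{\sum_{i}(y_i - \hat{y}_i)^2}{\sum_{i}(y_i - \bar{y})^2}$$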
```
print('Mean Absolute Error:', mean_absolute_error(y_test, prediction))
print('Mean Squared Error:', mean_squared_error(y_test, prediction))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, prediction)))
print("R-squared score : ",r2_score(y_test,prediction))
```
#### Prediction Plot
First, we plot the first 20 actual test observations, with the record number on the x-axis and the true target value on the y-axis.
Then we overlay the model's predictions for the same 20 records so the two curves can be compared.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Akshar Nerkar , Github: [Profile](https://github.com/Akshar777)
|
github_jupyter
|
import warnings as wr
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler,PowerTransformer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error, r2_score,mean_absolute_error
wr.filterwarnings('ignore')
#filepath
file_path= ""
#x_values
features=[]
#y_value
target=''
df=pd.read_csv(file_path) #reading file
df.head()
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
plt.figure(figsize = (15, 10))
corr = df.corr()
mask = np.triu(np.ones_like(corr, dtype = bool))
sns.heatmap(corr, mask = mask, linewidths = 1, annot = True, fmt = ".2f")
plt.show()
#spliting data into X(features) and Y(Target)
X=df[features]
Y=df[target]
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
# we can choose random_state and test_size as per our requirement
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 12) #performing datasplitting
model = make_pipeline(StandardScaler(),PowerTransformer(), ElasticNet(random_state = 42))
model.fit(X_train, y_train)
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
#prediction on testing set
prediction=model.predict(X_test)
print('Mean Absolute Error:', mean_absolute_error(y_test, prediction))
print('Mean Squared Error:', mean_squared_error(y_test, prediction))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, prediction)))
print("R-squared score : ",r2_score(y_test,prediction))
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
| 0.343232 | 0.977219 |
# Exercício: análise de sentimento em reviews
**Objetivo:** predizer o sentimento (positivo / negativo) de um review textual
## O que é NLP?
NLP (ou _Natural Language Processing_ - a sigla em português é PLN, _Processamento de Linguagem Natural_, mas essa sigla também é usada para falar de _Programação Neurolinguística_, então, vamos continuar usando NLP, ok?) é a área relacionada a técnicas para o entendimento de linguagem humana (a linguagem natural). A aplicação de técnicas de NLP tem sido vista em muitos domínios, desde a correção de palavras e sugestão de palavras na barra de busca de sites como o _Google_, até em sistemas mais "sofisticados" como os tradutores automáticos e os _home assistant_ e _smart assistants_, como Siri, Alexa ou Google Home, que podem auxiliar na execução de certas tarefas, como agendar compromissos, embora eles ainda [estejam longe de ser à prova de erros](https://www.nytimes.com/2018/05/25/business/amazon-alexa-conversation-shared-echo.html). Com o crescimento de conteúdo (mais artigos e mais fontes espalhadas), a importância do NLP também tem crescido, pois tarefas como sumarização automática e classificação automática de conteúdo (por ex., para checagem contra _fake news_) são cada vez mais necessárias em nossas vidas.
Neste exercício, vamos focar no uso de algumas técnicas de NLP para processamento de texto. Entretanto, é importante lembrar que NLP não se restringe a textos escritos, podendo ser aplicado também para processamento de fala, como é o caso dos _smart assistants_.
Diferentes tarefas de NLP, em geral, têm como passos iniciais as seguintes etapas:
* pré-processamento do texto (que pode ser uma combinação de diferentes processamentos, envolvendo modificações a nível de palavra e identificação de entidades/funções sintáticas/etc.)
* transformação do texto em quantidades numéricas (tipicamente vetores de números inteiros ou reais)
```
# ! pip install matplotlib nltk pandas seaborn scikit-learn
import pandas as pd
## bibliotecas para visualização de dados
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
```
## Leitura de dados
Esse conjunto de dados é composto de reviews de produtos coletados do site `Americanas.com` entre janeiro e maio de 2018.
Mais informações podem ser encontradas no [repositório b2w-reviews01](https://github.com/b2wdigital/b2w-reviews01).
```
selected_cols = ['product_name', 'product_brand', 'site_category_lv1', 'site_category_lv2',
'review_title', 'review_text', 'recommend_to_a_friend']
path = 'data/datasets'
reviews_df = pd.read_csv('https://github.com/b2wdigital/b2w-reviews01/raw/master/B2W-Reviews01.csv', sep=';', low_memory=False)[selected_cols].fillna('')
reviews_df.head()
```
## Criação de novas colunas
* `text`: coluna que contém o review completo (título + corpo)
```
reviews_df['text'] = reviews_df['review_title'] + ' ' + reviews_df['review_text']
reviews_df.iloc[0]['text']
```
* Cheque se não há nada com texto vazio
```
reviews_df[reviews_df['text'].str.len() == 0]
```
### Formato da coluna target
A coluna `target` é a polaridade / o sentimento do review. Para calculá-lo, vamos considerar a coluna `recommend_to_a_friend` e transformá-la de forma que:
* se `recommend_to_a_friend` = `Yes`, então, `target` = 1
* se `recommend_to_a_friend` = `No`, então, `target` = 0
```
reviews_df['target'] = (reviews_df['recommend_to_a_friend'] == 'Yes').astype(int)
```
### Novas colunas
Como podemos ver, agora temos o texto do review e a polaridade da sentença.
```
reviews_df[['text', 'target']].head()
```
## Inspeção de algumas colunas
### Coluna `target`
```
reviews_df[['target']].value_counts()
```
### Coluna `text`
* distribuição de número de caracteres
* inspeção visual do texto
```
reviews_df['text_nchars'] = reviews_df['text'].str.len()
reviews_df[['text_nchars']].describe()
fig = plt.figure(figsize=(12, 4), dpi=120)
ax = fig.add_subplot(111)
sns.boxplot(x="text_nchars", data=reviews_df, palette="rainbow", ax=ax)
sns.despine()
fig = plt.figure(figsize=(10, 4), dpi=120)
ax = fig.add_subplot(111)
sns.histplot(x="text_nchars", data=reviews_df, palette="rainbow", ax=ax)
sns.despine()
```
Nosso objetivo é construir um **classificador de sentimentos**, que recebe uma sentença (referente a um review de produto) e é capaz de predizer se o review é positivo (o usuário recomendaria a um amigo) ou negativo (o usuário **não** recomendaria a um amigo).
```
reviews_df.sample(n=10)['text'].tolist()
```
**Nos exemplos de frases acima, podemos ver que as sentenças incluem pontuações, acentos, letras maiúsculas e minúsculas... Seria ideal que conseguíssemos _normalizar_ o texto, de forma a diminuir a quantidade de palavras diferentes.**
## Normalização do texto
Técnicas comuns:
* remoção de acentos
* remoção de palavras muito comuns (_stopwords_)
* remoção de pontuação
* remoção de dígitos
* padronização para letras minúsculas (ou maiúsculas)
* _stemming_ / _lematização_ - redução de palavras relacionadas a uma forma mínima comum (ex. `construirá -> construir`, `construção -> construir`)
* correção de palavras escritas incorretamente (uso de spellchecker)
**Leia mais:**
* sobre a diferença entre stemming e lematização em [uma discussão no StackOverflow](https://stackoverflow.com/questions/1787110/what-is-the-true-difference-between-lemmatization-vs-stemming);
* sobre a situação atual em relação à lematização para a língua portuguesa nesse [blog post](https://lars76.github.io/nlp/lemmatize-portuguese/).
**Atenção:** todas essas técnicas envolvem operações com `strings`. Como estamos trabalhando com um dataframe Pandas, quando possível, vamos usar os métodos descritos [na documentação do Pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/text.html).
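Before walking through each of these steps individually in the subsections below, here is a minimal added sketch (assuming only the Python standard library, independent of the pandas string methods used in this notebook) that chains several of the normalizations into one helper function:
```python
import string
from unicodedata import normalize

def basic_normalize(text: str) -> str:
    """Remove accents and digits, lowercase, and map punctuation to spaces."""
    text = normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8')
    text = text.lower()
    text = ''.join(ch for ch in text if not ch.isdigit())
    table = str.maketrans({p: ' ' for p in string.punctuation})
    return text.translate(table)

basic_normalize('Adorei!!! Chegou em 2 dias, ótimo custo-benefício.')
# -> accents, digits and punctuation removed; repeated spaces may remain
```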
#### Remoção de acentos
```
reviews_df['norm_text'] = reviews_df['text'].str.normalize('NFKD').str.encode('ascii', 'ignore').str.decode('utf-8')
```
Forma alternativa:
>```python
> from unicodedata import normalize
>
> def remove_accents(text):
return normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8')
>
> reviews_df['norm_text'] = reviews_df['text'].apply(remove_accents)
>```
```
reviews_df[['text', 'norm_text']].head()
```
#### Padronização para letras minúsculas
```
reviews_df['norm_text'] = reviews_df['norm_text'].str.lower()
reviews_df[['text', 'norm_text']].head()
```
#### Remoção de dígitos
Aqui, para facilitar, vamos usar as chamadas [expressões regulares](https://docs.python.org/pt-br/3.8/howto/regex.html).
Para mais informações, você pode consultar essa [página](https://kmee.github.io/treinamento-python/google/regular-expressions.html).
```
reviews_df['norm_text'] = reviews_df['norm_text'].str.replace(r'[0-9]', '', regex=True)
```
Forma alternativa **sem** usar expressões regulares:
> ```python
> for digit in range(10):
reviews_df['norm_text'] = reviews_df['norm_text'].str.replace(str(digit), '')
> ```
Forma alternativa usando expressões regulares, explicitamente usando a biblioteca [re](https://docs.python.org/3.6/library/re.html):
> ```python
> import re
>
> def replace_digits(text):
return re.sub(r'\d', '', text)
>
> reviews_df['norm_text'] = reviews_df['norm_text'].apply(replace_digits)
> ```
```
reviews_df[reviews_df['text'].str.contains('0')][['text', 'norm_text']].head()
```
#### Remoção de pontuação
Para remover a pontuação, podemos usar o próprio módulo `string` do python, que já tem mapeadas as pontuações de texto possíveis.
```
import string
translation_table = str.maketrans({key: ' ' for key in string.punctuation})
def remove_punctuation(text):
return text.translate(translation_table)
```
* vamos testar a função?
```
print('Quando ela olhou, gritei bem alto: \n - "Não me engana, não, hein?!"')
remove_punctuation('Quando ela olhou, gritei bem alto: \n - "Não me engana, não, hein?!"')
reviews_df['norm_text'] = reviews_df['norm_text'].apply(remove_punctuation)
reviews_df[reviews_df['text'].str.contains('!')][['text', 'norm_text']].head()
```
#### Remoção de stopwords
```
import nltk
nltk.download('stopwords')
' / '.join(nltk.corpus.stopwords.words('portuguese'))
```
**_Ponto importante_:** Como nossa tarefa é uma análise de sentimentos, seria muito ruim perder certas _stopwords_ como palavras de negação, afinal, "Eu **não** gosto disso" é muito diferente de "Eu gosto disso"!
Assim, vamos manter algumas das palavras que podem ser essenciais para a classificação:
* `não`
* `mas`
Ou seja, gostaríamos de remover as palavras `não` e `mas` da lista de stopwords fornecida pelo `nltk`. Além disso, note que essa lista contém acentos. Vamos retirá-los.
```
from unicodedata import normalize
def remove_accents(text):
return normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8')
```
**Tarefa:**
Crie uma lista chamada `stopwords` que contém todas as palavras de `nltk.corpus.stopwords.words('portuguese')`, exceto as palavras `não` e `mas`. A cada palavra deve ser aplicada a função `remove_accents`.
1. defina uma lista chamada `not_allowed` com as palavras "não" e "mas".
2. crie uma lista vazia chamada stopwords
3. faça um loop para percorrer cada uma das palavras de `nltk.corpus.stopwords.words('portuguese')`
* a cada loop, você deve checar se a palavra atual está contida dentro de `not_allowed`
* se **não** estiver, você deve incluir a palavra à lista de stopwords
<!--
not_allowed = ["não", "mas"]
stopwords = []
for word in nltk.corpus.stopwords.words('portuguese'):
if word not in not_allowed:
stopwords.append(remove_accents(word))
-->
```
not_allowed = ["não", "mas"]
stopwords = []
for word in nltk.corpus.stopwords.words('portuguese'):
if word not in not_allowed:
stopwords.append(remove_accents(word))
' / '.join(stopwords)
```
* note que há palavras repetidas, por termos removido os acentos. Vamos retirá-las transformando `stopwords` em um _set_ (uma estrutura de dados de conjunto, que automaticamente remove repetições)
```
stopwords = set(stopwords)
' / '.join(stopwords)
```
## Transformação do texto em features numéricas - como funciona?
Antes de treinar o modelo, precisamos transformar o texto em _features_ numéricas.
A maneira mais simples de transformar um texto em um vetor de números é usando o método comumente chamado de _Bag of words_.
Como o nome já nos diz, a ideia por trás do _Bag of words_ é representar cada sentença como um "saco" de palavras. Isso significa que a ordem das palavras **não** importa.
Para representar cada sentença com relação às palavras que a compõem, é necessário primeiro definir um vocabulário. Por exemplo, imagine que eu tenha como vocabulário a lista:
```python
vocabulario = ["verde", "manga", "ovo", "almoço", "garfo"]
```
Então, a sentença `Comi uma manga verde no almoço` poderia ser representada por `[1, 1, 0, 1, 0]`. O tamanho do vetor deve ser o tamanho do vocabulário (no caso, 5). As palavras que não estão no vocabulário são ignoradas.
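As a tiny added check of this hand-worked example (redefining the illustrative `vocabulario` list above), the representation can be computed directly:
```python
vocabulario = ["verde", "manga", "ovo", "almoço", "garfo"]
frase = "Comi uma manga verde no almoço".lower().split()
vetor = [int(palavra in frase) for palavra in vocabulario]
print(vetor)  # [1, 1, 0, 1, 0]
```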
Mas como definir qual é o vocabulário e calcular isso para todas as sentenças? Para aplicar o _Bag of words_, é comum usar o [`CountVectorizer` do scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html). Alguns parâmetros importantes no uso dele são:
* `max_features`: quantidade máxima de `features` - quantidade máxima de palavras no vocabulário;
* `binary`: se `True`, então, só revela ausência (0) ou presença das palavras do vocabulário em cada sentença. Se `False`, conta a quantidade de cada uma das palavras na sentença.
O parâmetro `max_features` é em parte responsável pela escolha do vocabulário. Outros parâmetros que controlam as frequências mínima e máxima permitidas de cada palavra no corpus (o conjunto de sentenças do dataset de treino) também influenciam o vocabulário final e, consequentemente, a representação do texto.
```
from sklearn.feature_extraction.text import CountVectorizer
examples_for_bow = [
'camisa preta',
'botao feito linha preta',
'considera-se caro preco botao camisa botao',
'linha costurar botão mesma camisa',
'costurar linha camisa mesma botao'
]
cv = CountVectorizer(max_features=5, strip_accents='unicode', binary=True)
bow_matrix = cv.fit_transform(examples_for_bow)
bow_matrix
```
* Matriz
```
bow_matrix.todense()
```
* Vocabulário
```
cv.vocabulary_
pd.DataFrame(bow_matrix.todense(), columns=list(zip(*sorted(cv.vocabulary_.items(), key=lambda item: item[1])))[0])
```
Note que os exemplos `3` e `4` têm a mesma representação numérica, mesmo que a ordem das palavras não seja a mesma! Essa é uma característica desse método.
## Preparação dos dados
**Tarefa:** Divida o dataframe `reviews_df` em um dataframe de treino (`train_df`) e um de teste (`test_df`)
1. crie uma variável chamada `target_vals` com os valores da coluna `target` em uma lista
2. use a função do sklearn `train_test_split` para dividir o dataframe. Lembre-se de usar o parâmetro `stratify`, passandro para ele a variável `target_vals`.
<!--
target_vals = reviews_df['recommend_to_a_friend'].values
train_df, test_df = train_test_split(reviews_df, test_size=0.2, stratify=target_vals)
-->
```
from sklearn.model_selection import train_test_split
target_vals = reviews_df['target'].values
train_df, test_df = train_test_split(reviews_df, test_size=0.2, stratify=target_vals)
len(train_df), len(test_df)
train_df[['norm_text', 'target']].head()
test_df[['norm_text', 'target']].head()
```
## Construção do modelo
Usamos aqui como exemplo uma árvore de decisão.
Uma vantagem de usar a árvore de decisão é que ela é interpretável: ao final do treino, poderemos ver quais são as palavras que mais importam e explorar um pouco as regras de decisão para cada classificação.
**Leia mais:** Se quiser um exemplo usando outro tipo de algoritmo, você pode ver esse [post no Medium](https://medium.com/@minbaekim/text-mining-preprocess-and-naive-bayes-classifier-da0000f633b2).
**Aprofunde-se:** Veja a diferença entre utilizar SVM (Support Vector Machines) e Decision Trees para classificação de texto [aqui](https://www.codementor.io/blog/text-classification-6mmol0q8oj).
```
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
steps = [
('vect', CountVectorizer(max_features=200, stop_words=stopwords)),
('clf', DecisionTreeClassifier(min_samples_leaf=10, class_weight='balanced'))
]
pipeline = Pipeline(steps)
X_train = train_df['norm_text'].values
y_train = train_df['target'].values
sentiment_analyzer = pipeline.fit(X_train, y_train)
```
**Tarefa:** Teste seu classificador usando um texto de exemplo. Veja que você pode usar tanto o método `predict` como o método `predict_proba`.
1. crie uma variável `text` com qualquer texto ou com o texto de uma das avaliações do dataset
2. faça uma chamada com `sentiment_analyzer.predict` ou `sentiment_analyzer.predict_proba` passando como parâmetro uma lista contendo a variável `text`: `[text]`
<!--
text = test_df.iloc[0]['norm_text']
sentiment_analyzer.predict([text])
sentiment_analyzer.predict_proba([text])
-->
```
text = test_df.iloc[0]['norm_text']
sentiment_analyzer.predict([text])
sentiment_analyzer.predict_proba([text])
```
### Plotando as features mais importantes
```
from sklearn import tree
vect = sentiment_analyzer.named_steps['vect']
features = vect.get_feature_names()
sorted_features = sorted(zip(features, sentiment_analyzer.named_steps['clf'].feature_importances_), key=lambda elem: elem[1], reverse=True)
plt.rcParams['axes.axisbelow'] = True
fig = plt.figure(figsize=(7, 5), dpi=120)
ax = fig.add_subplot(111)
plt.barh(*zip(*sorted_features[:20][::-1]), color='skyblue', edgecolor='w')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.grid()
plt.title('Top 20 features mais importantes')
plt.show()
```
* profundidade da árvore
```
sentiment_analyzer.named_steps['clf'].get_depth()
```
* note o parâmetro `max_depth=2`, que indica que vamos plotar apenas até o nível de profundidade 2 da árvore
```
fig = plt.figure(figsize=(12, 6), dpi=120)
ax = fig.add_subplot(111)
_ = tree.plot_tree(sentiment_analyzer.named_steps['clf'], max_depth=2, label='root', proportion=False,
impurity=False, feature_names=features, class_names=['not_recommend', 'recommend'], ax=ax, fontsize=10)
```
## Avaliação do classificador
**Tarefa:** Faça a predição da coluna `norm_text` e compare o resultado com o vetor target (coluna `target`).
1. crie a variável `X_test` (dica: veja como criamos a variável `X_train`. Deve ser a mesma coisa, utilizando agora o dataframe `test_df`)
2. crie a variável `y_test` (dica: veja como criamos a variável `y_train`. Deve ser a mesma coisa, utilizando agora o dataframe `test_df`)
3. crie a variável `y_pred` com as predições do modelo `sentiment_analyzer` em `X_test`
4. imprima o [classification_report](https://scikit-learn.org/0.19/modules/generated/sklearn.metrics.classification_report.html#sklearn.metrics.classification_report) usando como parâmetros `y_test`, `y_pred` e como `target_names` uma lista `['negative', 'positive']`.
<!--
X_test = test_df['norm_text'].values
y_test = test_df['target'].values
y_pred = sentiment_analyzer.predict(X_test)
print(classification_report(y_test, y_pred, target_names=['negative', 'positive']))
-->
```
from sklearn.metrics import classification_report, confusion_matrix
X_test = test_df['norm_text'].values
y_test = test_df['target'].values
y_pred = sentiment_analyzer.predict(X_test)
print(classification_report(y_test, y_pred, target_names=['negative', 'positive']))
```
#### Comparação com um classificador _naive_
Como exemplo de classificador _naive_, vamos criar um classificador que identifica como negativo os textos que contêm `nao` e positivo os textos que **não** contém `nao`
```
naive_pred = [int('nao' in text) for text in X_test]
print(classification_report(y_test, naive_pred, target_names=['negative', 'positive']))
```
### Matriz de confusão
| | pred_0| pred_1|
|------|-------|-------|
| 0 | TN | FP |
| 1 | FN | TP |
Legenda:
* `TN`: verdadeiros negativos (predição está correta e o sentimento verdadeiro é negativo)
* `FP`: falsos positivos (predição está incorreta e o sentimento verdadeiro é negativo)
* `FN`: falsos negativos (predição está incorreta e o sentimento verdadeiro é positivo)
* `TP`: verdadeiros positivos (predição está correta e o sentimento verdadeiro é positivo)
```
pd.DataFrame(confusion_matrix(y_test, y_pred), columns=['pred_0', 'pred_1'])
```
**Pergunta final:** O que você achou do classificador? Ele é bom ou ruim?
|
github_jupyter
|
# ! pip install matplotlib nltk pandas seaborn scikit-learn
import pandas as pd
## bibliotecas para visualização de dados
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
selected_cols = ['product_name', 'product_brand', 'site_category_lv1', 'site_category_lv2',
'review_title', 'review_text', 'recommend_to_a_friend']
path = 'data/datasets'
reviews_df = pd.read_csv('https://github.com/b2wdigital/b2w-reviews01/raw/master/B2W-Reviews01.csv', sep=';', low_memory=False)[selected_cols].fillna('')
reviews_df.head()
reviews_df['text'] = reviews_df['review_title'] + ' ' + reviews_df['review_text']
reviews_df.iloc[0]['text']
reviews_df[reviews_df['text'].str.len() == 0]
reviews_df['target'] = (reviews_df['recommend_to_a_friend'] == 'Yes').astype(int)
reviews_df[['text', 'target']].head()
reviews_df[['target']].value_counts()
reviews_df['text_nchars'] = reviews_df['text'].str.len()
reviews_df[['text_nchars']].describe()
fig = plt.figure(figsize=(12, 4), dpi=120)
ax = fig.add_subplot(111)
sns.boxplot(x="text_nchars", data=reviews_df, palette="rainbow", ax=ax)
sns.despine()
fig = plt.figure(figsize=(10, 4), dpi=120)
ax = fig.add_subplot(111)
sns.histplot(x="text_nchars", data=reviews_df, palette="rainbow", ax=ax)
sns.despine()
reviews_df.sample(n=10)['text'].tolist()
reviews_df['norm_text'] = reviews_df['text'].str.normalize('NFKD').str.encode('ascii', 'ignore').str.decode('utf-8')
> from unicodedata import normalize
>
> def remove_accents(text):
return normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8')
>
> reviews_df['norm_text'] = reviews_df['text'].apply(remove_accents)
>```
#### Padronização para letras minúsculas
#### Remoção de dígitos
Aqui, para facilitar, vamos usar as chamadas [expressões regulares](https://docs.python.org/pt-br/3.8/howto/regex.html).
Para mais informações, você pode consultar essa [página](https://kmee.github.io/treinamento-python/google/regular-expressions.html).
Forma alternativa **sem** usar expressões regulares:
> ```python
> for digit in range(10):
reviews_df['norm_text'] = reviews_df['norm_text'].str.replace(str(digit), '')
> ```
Forma alternativa usando expressões regulares, explicitamente usando a biblioteca [re](https://docs.python.org/3.6/library/re.html):
> ```python
> import re
>
> def replace_digits(text):
return re.sub(r'\d', '', text)
>
> reviews_df['norm_text'] = reviews_df['norm_text'].apply(replace_digits)
> ```
#### Remoção de pontuação
Para remover a pontuação, podemos usar o próprio módulo `string` do python, que já tem mapeadas as pontuações de texto possíveis.
* vamos testar a função?
#### Remoção de stopwords
**_Ponto importante_:** Como nossa tarefa é uma análise de sentimentos, seria muito ruim perder certas _stopwords_ como palavras de negação, afinal, "Eu **não** gosto disso" é muito diferente de "Eu gosto disso"!
Assim, vamos manter algumas das palavras que podem ser essenciais para a classificação:
* `não`
* `mas`
Ou seja, gostaríamos de remover as palavras `não` e `mas` da lista de stopwords fornecida pelo `nltk`. Além disso, note que essa lista contém acentos. Vamos retirá-los.
**Tarefa:**
Crie uma lista chamada `stopwords` que contém todas as palavras de `nltk.corpus.stopwords.words('portuguese')`, exceto as palavras `não` e `mas`. A cada palavra deve ser aplicada a função `remove_accents`.
1. defina uma lista chamada `not_allowed` com as palavras "não" e "mas".
2. crie uma lista vazia chamada stopwords
3. faça um loop para percorrer cada uma das palavras de `nltk.corpus.stopwords.words('portuguese')`
* a cada loop, você deve checar se a palavra atual está contida dentro de `not_allowed`
* se **não** estiver, você deve incluir a palavra à lista de stopwords
<!--
not_allowed = ["não", "mas"]
stopwords = []
for word in nltk.corpus.stopwords.words('portuguese'):
if word not in not_allowed:
stopwords.append(remove_accents(word))
-->
* note que há palavras repetidas, por termos removido os acentos. Vamos retirá-las transformando `stopwords` em um _set_ (uma estrutura de dados de conjunto, que automaticamente remove repetições)
## Transformação do texto em features numéricas - como funciona?
Antes de treinar o modelo, precisamos transformar o texto em _features_ numéricas.
A maneira mais simples de transformar um texto em um vetor de números é usando o método comumente chamado de _Bag of words_.
Como o nome já nos diz, a ideia por trás do _Bag of words_ é representar cada sentença como um "saco" de palavras. Isso significa que a ordem das palavras **não** importa.
Para representar cada sentença com relação às palavras que a compõem, é necessário primeiro definir um vocabulário. Por exemplo, imagine que eu tenha como vocabulário a lista:
Então, a sentença `Comi uma manga verde no almoço` poderia ser representada por `[1, 1, 0, 1, 0]`. O tamanho do vetor deve ser o tamanho do vocabulário (no caso, 5). As palavras que não estão no vocabulário são ignoradas.
Mas como definir qual é o vocabulário e calcular isso para todas as sentenças? Para aplicar o _Bag of words_, é comum usar o [`CountVectorizer` do scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html). Alguns parâmetros importantes no uso dele são:
* `max_features`: quantidade máxima de `features` - quantidade máxima de palavras no vocabulário;
* `binary`: se `True`, então, só revela ausência (0) ou presença das palavras do vocabulário em cada sentença. Se `False`, conta a quantidade de cada uma das palavras na sentença.
O parâmetro `max_features` é em parte responsável pela escolha do vocabulário. Outros parâmetros que controlam as frequências mínima e máxima permitidas de cada palavra no corpus (o conjunto de sentenças do dataset de treino) também influenciam o vocabulário final e, consequentemente, a representação do texto.
* Matriz
* Vocabulário
Note que os exemplos `3` e `4` têm a mesma representação numérica, mesmo que a ordem das palavras não seja a mesma! Essa é uma característica desse método.
## Preparação dos dados
**Tarefa:** Divida o dataframe `reviews_df` em um dataframe de treino (`train_df`) e um de teste (`test_df`)
1. crie uma variável chamada `target_vals` com os valores da coluna `target` em uma lista
2. use a função do sklearn `train_test_split` para dividir o dataframe. Lembre-se de usar o parâmetro `stratify`, passandro para ele a variável `target_vals`.
<!--
target_vals = reviews_df['recommend_to_a_friend'].values
train_df, test_df = train_test_split(reviews_df, test_size=0.2, stratify=target_vals)
-->
## Construção do modelo
Usamos aqui como exemplo uma árvore de decisão.
Uma vantagem de usar a árvore de decisão é que ela é interpretável: ao final do treino, poderemos ver quais são as palavras que mais importam e explorar um pouco as regras de decisão para cada classificação.
**Leia mais:** Se quiser um exemplo usando outro tipo de algoritmo, você pode ver esse [post no Medium](https://medium.com/@minbaekim/text-mining-preprocess-and-naive-bayes-classifier-da0000f633b2).
**Aprofunde-se:** Veja a diferença entre utilizar SVM (Support Vector Machines) e Decision Trees para classificação de texto [aqui](https://www.codementor.io/blog/text-classification-6mmol0q8oj).
**Tarefa:** Teste seu classificador usando um texto de exemplo. Veja que você pode usar tanto o método `predict` como o método `predict_proba`.
1. crie uma variável `text` com qualquer texto ou com o texto de uma das avaliações do dataset
2. faça uma chamada com `sentiment_analyzer.predict` ou `sentiment_analyzer.predict_proba` passando como parâmetro uma lista contendo a variável `text`: `[text]`
<!--
text = test_df.iloc[0]['norm_text']
sentiment_analyzer.predict([text])
sentiment_analyzer.predict_proba([text])
-->
### Plotando as features mais importantes
* profundidade da árvore
* note o parâmetro `max_depth=2`, que indica que vamos plotar apenas até o nível de profundidade 2 da árvore
## Avaliação do classificador
**Tarefa:** Faça a predição da coluna `norm_text` e compare o resultado com o vetor target (coluna `target`).
1. crie a variável `X_test` (dica: veja como criamos a variável `X_train`. Deve ser a mesma coisa, utilizando agora o dataframe `test_df`)
2. crie a variável `y_test` (dica: veja como criamos a variável `y_train`. Deve ser a mesma coisa, utilizando agora o dataframe `test_df`)
3. crie a variável `y_pred` com as predições do modelo `sentiment_analyzer` em `X_test`
4. imprima o [classification_report](https://scikit-learn.org/0.19/modules/generated/sklearn.metrics.classification_report.html#sklearn.metrics.classification_report) usando como parâmetros `y_test`, `y_pred` e como `target_names` uma lista `['negative', 'positive']`.
<!--
X_test = test_df['norm_text'].values
y_test = test_df['target'].values
y_pred = sentiment_analyzer.predict(X_test)
print(classification_report(y_test, y_pred, target_names=['negative', 'positive']))
-->
#### Comparação com um classificador _naive_
Como exemplo de classificador _naive_, vamos criar um classificador que identifica como negativo os textos que contêm `nao` e positivo os textos que **não** contém `nao`
### Matriz de confusão
| | pred_0| pred_1|
|------|-------|-------|
| 0 | TN | FP |
| 1 | FN | TP |
Legenda:
* `TN`: verdadeiros negativos (predição está correta e o sentimento verdadeiro é negativo)
* `FP`: falsos positivos (predição está incorreta e o sentimento verdadeiro é negativo)
* `FN`: falsos negativos (predição está incorreta e o sentimento verdadeiro é positivo)
* `TP`: verdadeiros positivos (predição está correta e o sentimento verdadeiro é positivo)
| 0.512449 | 0.933249 |