# Playing around with spaCy
[spaCy](https://honnibal.github.io/spaCy/quickstart.html)
Using the basic introduction to spaCy, then playing with it. Let's load spaCy's English model.
```
from __future__ import unicode_literals # If Python 2
import spacy.en
from spacy.tokens import Token
from spacy.parts_of_speech import ADV
nlp = spacy.en.English()
# Find log probability of Nth most frequent word
probs = [lex.prob for lex in nlp.vocab]
probs.sort()
words = [w for w in nlp.vocab if w.has_repvec]
```
spaCy tokenizes words, then treats each token as a Token object. Each token has an integer and string representation. Each token also has things like:
* **orth**
* The form of the word with no string normalization or processing, as it appears in the string, without trailing whitespace, e.g. " Frank " -> "Frank"
* **head**
* The Token that is the immediate syntactic head of the word. If the word is the root of the dependency tree, the same word is returned.
* **lemma**
* The “base” of the word, with no inflectional suffixes, e.g. the lemma of “developing” is “develop”, the lemma of “geese” is “goose”, etc. Note that derivational suffixes are not stripped, e.g. the lemma of “institutions” is “institution”, not “institute”. Lemmatization is performed using the WordNet data, but extended to also cover closed-class words such as pronouns. By default, the WN lemmatizer returns “hi” as the lemma of “his”. We assign pronouns the lemma -PRON-. (See the quick check after this list.)
* **prob**
* The unigram log-probability of the word, estimated from counts from a large corpus, smoothed using Simple Good Turing estimation.
* **cluster**
* The Brown cluster ID of the word. These are often useful features for linear models. If you’re using a non-linear model, particularly a neural net or random forest, consider using the real-valued word representation vector, in Token.repvec, instead.
* **repvec**
* A “word embedding” representation: a dense real-valued vector that supports similarity queries between words. By default, spaCy currently loads vectors produced by the Levy and Goldberg (2014) dependency-based word2vec model.
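To make the lemma behavior described above concrete, here is a quick check (an illustrative sketch that reuses the `nlp` object loaded above; exact results may differ across spaCy versions):
```
# Illustrative check of the lemma behavior described above (not in the original notebook);
# assumes the legacy spacy.en English model is loaded as `nlp`.
print(nlp(u'geese')[0].lemma_)  # expected: goose
print(nlp(u'his')[0].lemma_)    # expected: -PRON-
```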
```
tokens = nlp(u'"I ran to the wall quickly," Frank explained to the robot.')
ran = tokens[2]
quickly = tokens[6]
run = nlp(ran.lemma_)[0]
# the integer and string representations of "ran" and its head
print (ran.orth, ran.orth_, ran.head.lemma, ran.head.lemma_)
print (quickly.orth, quickly.orth_, quickly.lemma, quickly.lemma_,)
print (quickly.head.orth_, quickly.head.lemma_)
print (ran.prob, run.prob, quickly.prob)
print (ran.cluster, run.cluster, quickly.cluster)
```
Given a test sentence (in this case: **"I ran to the wall quickly," Frank explained to the robot.**), we can highlight parts of speech (e.g. adverbs):
```
is_adverb = lambda tok: tok.pos == ADV and tok.prob < probs[-1000]
str_ = u'"I ran to the wall quickly," Frank explained to the robot.'
tokens = nlp(str_)
print(u''.join(tok.string.upper() if is_adverb(tok) else tok.string for tok in tokens))
quickly = tokens[6]
```
Find similar words to 'quickly' via [cosine similarity](http://en.wikipedia.org/wiki/Cosine_similarity):
```
from numpy import dot
from numpy.linalg import norm
cosine = lambda v1, v2: dot(v1, v2) / (norm(v1) * norm(v2))
words.sort(key=lambda w: cosine(w.repvec, quickly.repvec))
words.reverse()
print('1-20:')
print('\n'.join(w.orth_ for w in words[0:20]))
print('\n50-60:')
print('\n'.join(w.orth_ for w in words[50:60]))
print('\n100-110:')
print('\n'.join(w.orth_ for w in words[100:110]))
print('\n1000-1010:')
print('\n'.join(w.orth_ for w in words[1000:1010]))
print('\n50000-50010:')
print('\n'.join(w.orth_ for w in words[50000:50010]))
```
We can focus on one meaning of *quickly* and find similar words if we average over related words:
```
say_adverbs = ['quickly', 'swiftly', 'speedily', 'rapidly']
say_vector = sum(nlp.vocab[adverb].repvec for adverb in say_adverbs) / len(say_adverbs)
words.sort(key=lambda w: cosine(w.repvec, say_vector))
words.reverse()
print('1-20:')
print('\n'.join(w.orth_ for w in words[0:20]))
print('\n50-60:')
print('\n'.join(w.orth_ for w in words[50:60]))
print('\n1000-1010:')
print('\n'.join(w.orth_ for w in words[1000:1010]))
```
Let's look at other parts of speech from our original sentence:
```
from spacy.parts_of_speech import NOUN
is_noun = lambda tok: tok.pos == NOUN and tok.prob < probs[-1000]
print(u''.join(tok.string.upper() if is_noun(tok) else tok.string for tok in tokens))
nouns = [tok for tok in tokens if is_noun(tok)]
```
How closely does one test noun match each noun found in our sentence? That is, if we say "barrier", is it closer to "wall," "Frank", or "robot"? How about "car" or "android"?
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
barrier = nlp('barrier')[0]
car = nlp('car')[0]
agent = nlp('android')[0]
test_nouns = nouns + [barrier] + [car] + [agent]
n = len(test_nouns)
barrier_relations = np.zeros(n)
car_relations = np.zeros(n)
agent_relations = np.zeros(n)
for i, noun in enumerate(test_nouns):
barrier_relations[i] = cosine(barrier.repvec, noun.repvec)
car_relations[i] = cosine(car.repvec, noun.repvec)
agent_relations[i] = cosine(agent.repvec, noun.repvec)
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n)
bar_width = 0.2
opacity = 0.4
rects1 = plt.bar(index, barrier_relations, bar_width,
alpha=opacity,
color='b',
label=barrier.orth_)
rects2 = plt.bar(index + bar_width, car_relations, bar_width,
alpha=opacity,
color='r',
label=car.orth_)
rects3 = plt.bar(index + 2 * bar_width, agent_relations, bar_width,
alpha=opacity,
color='g',
label=agent.orth_)
labels = [tok.orth_ for tok in test_nouns]
plt.xlabel('Test Word')
plt.ylabel('Similarity')
plt.title('Similarity of words')
plt.xticks(index + bar_width, labels)
plt.legend()
from IPython.core.display import HTML
# Borrowed style from Probabilistic Programming and Bayesian Methods for Hackers
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn as sk
# Generate a unique seed
my_code = "Соколов"
seed_limit = 2 ** 32
my_seed = int.from_bytes(my_code.encode(), "little") % seed_limit
np.random.seed(my_seed)
# Generate a normally distributed random sample
N = 10000
sample = np.random.normal(0, 1, N)
plt.hist(sample, bins=100)
plt.show()
# Build the array of target class labels: 0 if the value in sample is less than t, 1 otherwise
t = 0
target_labels = np.array([0 if i < t else 1 for i in sample])
plt.hist(target_labels, bins=100)
plt.show()
# Using the provided stubs below (or not, if you prefer),
# implement functions that compute accuracy, precision, recall and F1
def confusion_matrix(target_labels, model_labels) :
tp = 0
tn = 0
fp = 0
fn = 0
for i in range(len(target_labels)) :
if target_labels[i] == 1 and model_labels[i] == 1 :
tp += 1
if target_labels[i] == 0 and model_labels[i] == 0 :
tn += 1
if target_labels[i] == 0 and model_labels[i] == 1 :
fp += 1
if target_labels[i] == 1 and model_labels[i] == 0 :
fn += 1
return tp, tn, fp, fn
def accuracy (target_labels, model_labels) :
tp, tn, fp, fn = confusion_matrix(target_labels, model_labels)
if (tp+fp+tn+fn)!=0:
acc=(tp+tn)/(tp+fp+tn+fn)
else: acc='-'
return acc
def precision (target_labels, model_labels) :
tp, tn, fp, fn = confusion_matrix(target_labels, model_labels)
if (tp+fp)!=0:
prec=tp/(tp+fp)
else:
prec='-'
return prec
def recall (target_labels, model_labels) :
tp, tn, fp, fn = confusion_matrix(target_labels, model_labels)
if (tp+fn)!=0:
rec=tp/(tp+fn)
else: rec='-'
return rec
def F1 (target_labels, model_labels) :
    # F1 is the harmonic mean of precision and recall: 2 * P * R / (P + R)
    prec = precision(target_labels, model_labels)
    rec = recall(target_labels, model_labels)
    if prec != '-' and rec != '-' and (prec + rec) != 0:
        fone = 2 * prec * rec / (prec + rec)
    else:
        fone = '-'
    return fone
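# (Added note, not part of the original assignment) The same confusion-matrix counts can be
# computed without an explicit Python loop using vectorized numpy comparisons, e.g.:
#   tp = np.sum((target_labels == 1) & (model_labels == 1))
#   tn = np.sum((target_labels == 0) & (model_labels == 0))
#   fp = np.sum((target_labels == 0) & (model_labels == 1))
#   fn = np.sum((target_labels == 1) & (model_labels == 0))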
# First experiment: t = 0, the model returns 0 and 1 with 50% probability each
t = 0
target_labels = np.array([0 if i < t else 1 for i in sample])
model_labels = np.random.randint(2, size=N)
print(accuracy(target_labels, model_labels))
print(precision(target_labels, model_labels))
print(recall(target_labels, model_labels))
print(F1(target_labels, model_labels))
# Compute and print the accuracy, precision, recall and F1 metrics.
# Second experiment: t = 0, the model returns 0 with probability 25% and 1 with probability 75%
t = 0
target_labels = np.array([0 if i < t else 1 for i in sample])
labels = np.random.randint(4, size=N)
model_labels = np.array([0 if i == 0 else 1 for i in labels])
np.random.shuffle(model_labels)
print(accuracy(target_labels, model_labels))
print(precision(target_labels, model_labels))
print(recall(target_labels, model_labels))
print(F1(target_labels, model_labels))
# Compute and print the accuracy, precision, recall and F1 metrics.
# Analyze which of the metrics are applicable in the first and second experiments.
# All of the metrics
# Third experiment: t = 2, the model returns 0 and 1 with 50% probability each
t = 2
target_labels = np.array([0 if i < t else 1 for i in sample])
model_labels = np.random.randint(2, size=N)
print(accuracy(target_labels, model_labels))
print(precision(target_labels, model_labels))
print(recall(target_labels, model_labels))
print(F1(target_labels, model_labels))
# Compute and print the accuracy, precision, recall and F1 metrics.
# Fourth experiment: t = 2, the model always returns 0
t = 2
target_labels = np.array([0 if i < t else 1 for i in sample])
model_labels = np.zeros(N)
print(accuracy(target_labels, model_labels))
print(precision(target_labels, model_labels))
print(recall(target_labels, model_labels))
print(F1(target_labels, model_labels))
# Compute and print the accuracy, precision, recall and F1 metrics.
# Analyze which of the metrics are applicable in the third and fourth experiments.
# accuracy and recall
```
# Clustering Methods Comparison of P300 ERP EEG Data
We can explore machine learning methods for classification and clustering of the EEG data.
## Setup
Clone the repository and install the packages.
```
!git clone https://github.com/NeuroTechX/eeg-notebooks
%cd eeg-notebooks
!pip install -e .
from collections import OrderedDict
from eegnb.analysis.utils import load_data, plot_conditions
from eegnb.datasets import fetch_dataset
from functools import partial
from itertools import cycle, islice
from matplotlib import pyplot as plt
from mne import Epochs, find_events
from mne.decoding import Vectorizer
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import os
from pyriemann.classification import MDM
from pyriemann.embedding import Embedding
from pyriemann.estimation import XdawnCovariances
from pyriemann.tangentspace import TangentSpace
from pyriemann.utils.viz import plot_confusion_matrix
import seaborn as sns
from sklearn import cluster, mixture
from sklearn.manifold import LocallyLinearEmbedding, Isomap, MDS, SpectralEmbedding, TSNE
from sklearn.neighbors import kneighbors_graph
from sklearn.preprocessing import StandardScaler
from time import time
import warnings
warnings.filterwarnings('ignore')
```
Load the data.
```
eegnb_data_path = os.path.join(os.path.expanduser('~/'),'.eegnb', 'data')
p300_data_path = os.path.join(eegnb_data_path, 'visual-P300', 'eegnb_examples')
# If dataset hasn't been downloaded yet, download it
if not os.path.isdir(p300_data_path):
fetch_dataset(data_dir=eegnb_data_path, experiment='visual-P300', site='eegnb_examples');
subject = 1
session = 1
p300raw = load_data(subject,session,
experiment='visual-P300', site='eegnb_examples', device_name='muse2016',
data_dir = eegnb_data_path)
```
Filter the data.
```
p300raw.filter(1,30, method='iir')
```
Visualize.
```
p300raw.plot_psd(fmin=1, fmax=30);
```
Perform the epoching.
```
# Create an array containing the timestamps and type of each event/stimulus.
p300events = find_events(p300raw)
p300event_id = {'Non-Target': 1, 'Target': 2}
# Create an MNE Epochs object representing all the epochs around stimulus presentation
p300epochs = Epochs(p300raw, events=p300events, event_id=p300event_id,
tmin=-0.1, tmax=0.8, baseline=None,
reject={'eeg': 100e-6}, preload=True,
verbose=False, picks=[0,1,2,3])
p300epochs
print('sample drop %: ', (1 - len(p300epochs.events)/len(p300events)) * 100)
# Get data from Epochs
p300X = p300epochs.get_data()
p300y = p300epochs.events[:, -1]
p300conditions = OrderedDict()
p300conditions['Non-target'] = [1]
p300conditions['Target'] = [2]
fig, ax = plot_conditions(p300epochs, conditions=p300conditions,
ci=97.5, n_boot=1000, title='',
diff_waveform=(1, 2))
```
## Comparison of Clustering Methods
Compare different clustering methods on the epoched P300 data.
```
plt.figure(figsize=(9 * 2 + 3, 12.5))
plt.subplots_adjust(left=.02, right=.98, bottom=.001, top=.96, wspace=.05,
hspace=.01)
plot_num = 1
default_base = {'quantile': .3,
'eps': .3,
'damping': .9,
'preference': -200,
'n_neighbors': 281,
'n_clusters': 2,
'min_samples': 20,
'xi': 0.05,
'min_cluster_size': 0.1}
p300 = [p300X, p300y]
datasets = [
(p300, {'damping': .77, 'preference': -240,
'quantile': .2, 'n_clusters': 2,
'min_samples': 20, 'xi': 0.25})]
for i_dataset, (dataset, algo_params) in enumerate(datasets):
# Update the parameters with dataset-specific values.
params = default_base.copy()
params.update(algo_params)
X, y = dataset
# Reshape the input data to two dimensions.
nsamples, nx, ny = X.shape
d2X = X.reshape((nsamples,nx*ny))
# Normalize the dataset for easier parameter selection.
X = StandardScaler().fit_transform(d2X)
# Estimate bandwidth for mean shift.
bandwidth = cluster.estimate_bandwidth(X, quantile=params['quantile'])
# Create the connectivity matrix for structured Ward.
connectivity = kneighbors_graph(
X, n_neighbors=params['n_neighbors'], include_self=False)
    # Make the connectivity matrix symmetric.
connectivity = 0.5 * (connectivity + connectivity.T)
# Create cluster objects.
ms = cluster.MeanShift(bandwidth=bandwidth, bin_seeding=True)
two_means = cluster.MiniBatchKMeans(n_clusters=params['n_clusters'])
ward = cluster.AgglomerativeClustering(
n_clusters=params['n_clusters'], linkage='ward',
connectivity=connectivity)
spectral = cluster.SpectralClustering(
n_clusters=params['n_clusters'], eigen_solver='arpack',
affinity="nearest_neighbors")
dbscan = cluster.DBSCAN(eps=params['eps'])
optics = cluster.OPTICS(min_samples=params['min_samples'],
xi=params['xi'],
min_cluster_size=params['min_cluster_size'])
affinity_propagation = cluster.AffinityPropagation(
damping=params['damping'], preference=params['preference'])
average_linkage = cluster.AgglomerativeClustering(
linkage="average", affinity="cityblock",
n_clusters=params['n_clusters'], connectivity=connectivity)
birch = cluster.Birch(n_clusters=params['n_clusters'])
gmm = mixture.GaussianMixture(
n_components=params['n_clusters'], covariance_type='full')
clustering_algorithms = (
('MiniBatchKMeans', two_means),
('AffinityPropagation', affinity_propagation),
('MeanShift', ms),
('SpectralClustering', spectral),
('Ward', ward),
('AgglomerativeClustering', average_linkage),
# ('DBSCAN', dbscan),
# ('OPTICS', optics),
# ('Birch', birch),
# ('GaussianMixture', gmm)
)
for name, algorithm in clustering_algorithms:
t0 = time()
# Catch warnings related to kneighbors_graph.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message="the number of connected components of the " +
"connectivity matrix is [0-9]{1,2}" +
" > 1. Completing it to avoid stopping the tree early.",
category=UserWarning)
warnings.filterwarnings(
"ignore",
message="Graph is not fully connected, spectral embedding" +
" may not work as expected.",
category=UserWarning)
algorithm.fit(X)
t1 = time()
if hasattr(algorithm, 'labels_'):
            y_pred = algorithm.labels_.astype(int)
else:
y_pred = algorithm.predict(X)
plt.subplot(len(datasets), len(clustering_algorithms), plot_num)
if i_dataset == 0:
plt.title(name, size=18)
colors = np.array(list(islice(cycle(['#377eb8', '#ff7f00', '#4daf4a',
'#f781bf', '#a65628', '#984ea3',
'#999999', '#e41a1c', '#dede00']),
int(max(y_pred) + 1))))
# Add black color for outliers (if any).
colors = np.append(colors, ["#000000"])
plt.scatter(X[:, 0], X[:, 1], s=10, color=colors[y_pred])
plt.xlim(-2.5, 2.5)
plt.ylim(-2.5, 2.5)
plt.xticks(())
plt.yticks(())
plt.text(.99, .01, ('%.2fs' % (t1 - t0)).lstrip('0'),
transform=plt.gca().transAxes, size=15,
horizontalalignment='right')
plot_num += 1
plt.show()
```
<a href="1. FIPS Code and Population Data.ipynb"><- Back to previous notebook</a>
# Step 2: Sourcing sales data.
In this section we'll generate some fake sales data. Normally you would get this from some enterprise sales system, partners if you're using resellers, etc. Let's say we want some data that has the amount of the sale, the date/time the purchase was made, the location (county/FIPS), and some unique transaction ID.
<img src="images/sample-sales.png">
### Data quality concern: data in context
Immediately, though, we have some questions:
- Amount: is that in US dollars? Local currency if sold outside the US? If we have to convert it, what conversion rate do we use - today's, or the one at the time of purchase? When rounding, do you round up or truncate? Accuracy is critical, especially when there's money involved.
- Date/time: is that the date/time of the purchase in local time? Daylight Saving Time? What timezone was the purchase in? (A short sketch after this list illustrates both of these concerns.)
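To make the rounding and timezone concerns concrete, here is a small illustrative sketch (the amount, the rounding target, and the timezone offset are made up, not taken from the sales system):
```
from decimal import Decimal, ROUND_HALF_UP, ROUND_DOWN
from datetime import datetime, timezone, timedelta

# Hypothetical post-conversion amount of 9.995 USD
raw_usd = Decimal("9.995")
print(raw_usd.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 10.00 (round half up)
print(raw_usd.quantize(Decimal("0.01"), rounding=ROUND_DOWN))     # 9.99  (truncate)

# Timestamps are easiest to reason about when stored in UTC:
local_purchase = datetime(2020, 6, 1, 18, 30, tzinfo=timezone(timedelta(hours=-6)))
print(local_purchase.astimezone(timezone.utc))  # 2020-06-02 00:30:00+00:00
```
The two rounding modes differ by a cent here, which is exactly the kind of ambiguity the catalog entry below resolves.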
### The Data Catalog
In the examples above, it's really important for a Subject Matter Expert (SME) who's familiar with the sales data to clearly define each of these fields and what they represent. This can be recorded in a data catalog entry for the data source. For example, the catalog entry for this table might look like this:
| Column | Type | Description |
| :-- | :-- | :---- |
| amount | decimal(8, 2) | Amount in USD, rounded to the nearest cent. Conversion from non-USD is done at the time of transaction with the conversion rate at midnight UTC of the date of purchase. |
| trans_time | <a href="https://www.postgresql.org/docs/current/datatype-datetime.html">timestamp</a> | Date/time of purchase in UTC |
| id | <a href="https://www.postgresql.org/docs/current/datatype-uuid.html">uuid</a> | A GUID representing a globally unique identifier for the transaction. |
| fips | varchar(5) | The FIPS code (county/state) where the purchase was made |
A data catalog entry might also contain information about data stewards or subject matter experts, the lineage of the table (e.g. joins with other tables), sample data, and more.
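For illustration, such an entry could be captured as a simple structured record (a hypothetical sketch; the field names are made up and do not come from any specific catalog tool):
```
# Hypothetical shape for a richer catalog entry; real catalog tools have their own schemas.
sales_catalog_entry = {
    "table": "sales",
    "stewards": ["sales-data-sme@example.com"],
    "lineage": ["joins fips on sales.fips = fips.fipstxt"],
    "columns": {
        "amount": "decimal(8,2), USD, rounded to the nearest cent",
        "trans_time": "timestamp, UTC",
        "id": "uuid, globally unique transaction id",
        "fips": "varchar(5), county/state FIPS code of the purchase",
    },
}
```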
## 2.1 Generating the sales data
We'll use <a href="https://faker.readthedocs.io/en/master/">faker</a>, an excellent Python library, to generate a bunch of fake sales information. To make the example more interesting, though, we'll want to make sure that we weight our fake "purchases" more into the top 100 sales regions (FIPS codes) we have, similar to what would likely happen in real life. So, 50% of the time, we'll pick at random one of the top 100 FIPS codes we inserted in the last notebook. The other 50% of the time we'll look up a random record in the fips table.
## 2.2 Read FIPS codes into lists for easy/fast retrieval
### Data quality concern: the FIPS table has state and county data intermingled
We don't want the state-level data for this next section, so we'll filter out anything that ends in '000' except for Washington DC.
Another way we could have handled this would have been to delete that data when we imported it, but it's better this way -- if anyone else in my organization ever wants to reuse this data set, the entire set of data is there for them to use. We can make a note of the state and county info being in there in the data catalog entry for this data set, if we have one.
```
# First, let's read the top 100 FIPS codes into a list in memory. Pandas makes this extremely easy:
from my_connect import my_connect
import pandas
connection = my_connect()
# In this case, we only want counties; the WHERE clause filters out state-level data appropriately
q = """
SELECT fipstxt FROM fips
WHERE NOT(fipstxt LIKE '%000' AND state <> 'DC')
ORDER BY pop_estimate_2019 DESC LIMIT 100
"""
df = pandas.io.sql.read_sql_query(q, connection)
top_fips = df['fipstxt'].values.tolist()
# Now we'll get all valid FIPS codes.
q = "SELECT fipstxt FROM fips WHERE NOT(fipstxt LIKE '%000' AND state <> 'DC')"
df = pandas.io.sql.read_sql_query(q, connection)
all_fips = df['fipstxt'].values.tolist()
!pip install faker
```
## 2.3 Create the 'sales' table
```
connection = my_connect()
cursor = connection.cursor()
q = """
CREATE TABLE IF NOT EXISTS sales (
id UUID PRIMARY KEY,
trans_time TIMESTAMP,
amount DECIMAL(8, 2),
fips VARCHAR(5)
)
"""
cursor.execute(q)
connection.commit()
```
## 2.4 Quick helper function for inserting each sales row
```
def insert_sale(connection, id, trans_time, amount, fips):
cursor = connection.cursor()
q = sql.SQL("INSERT INTO sales (id, trans_time, amount, fips) VALUES ({}, {}, {}, {});")
cursor.execute(q.format(sql.Literal(str(id)), sql.Literal(trans_time), sql.Literal(amount), sql.Literal(fips)))
connection.commit()
```
## 2.5 Generate fake sales data and insert it into the database
```
import random
from faker import Faker
import uuid
import psycopg2.sql as sql
connection = my_connect()
cursor = connection.cursor()
random.seed()
fake = Faker()
# Zero out the table before starting
cursor.execute("DELETE FROM sales;")
connection.commit()
TOTAL_RECORDS = 50000
for i in range(TOTAL_RECORDS):
id = uuid.uuid4()
trans_time = fake.date_time_between(start_date='-1y', end_date='-1d')
amount = fake.pyfloat(left_digits=4, right_digits=2, positive=True, min_value=10, max_value=1500)
# 50% chance of picking a FIPS from the top FIPS to help skew our fake sales into heavily populated regions
    if random.choice(["Top", "Random"]) == "Top":
fips = random.choice(top_fips)
else:
fips = random.choice(all_fips)
insert_sale(connection, id, trans_time, amount, fips)
# Print a status message every 5000 rows
if (i % 5000) == 0:
print("%s records inserted" % i)
print("Done")
```
## 2.6 Aggregation: show total sales for the top 10 states
Now that we have some sales data in there, we can start to get a little value out of it. Let's say we want to aggregate the data by state and see which states have the highest sales. Here's an example query:
```
SELECT SUM(sales.amount) AS total, fips.state AS state FROM sales
INNER JOIN fips ON sales.fips = fips.fipstxt
GROUP BY (fips.state)
ORDER BY total DESC LIMIT 10;
```
Here's an example of what the result will look like:
```
total state
0 3312283.72 TX
1 3173485.84 CA
2 2113674.22 NY
3 2017619.26 FL
4 1627246.43 GA
5 1380399.77 IL
6 1106723.17 OH
7 1036475.05 MA
8 1023290.36 MO
9 1003630.76 MI
```
Here's the actual query:
```
import pandas
connection = my_connect()
q = """
SELECT SUM(sales.amount) AS total, fips.state AS state FROM sales
INNER JOIN fips ON sales.fips = fips.fipstxt
GROUP BY (fips.state)
ORDER BY total DESC LIMIT 10;
"""
df = pandas.io.sql.read_sql_query(q, connection)
print(df.head(10))
```
# Next notebook: adding salespeople
Next we will add in salesperson information so we can see who the top salespeople are.
<a href="3. Generate Salespeople.ipynb">Go to the next notebook -></a>
*Contents © Copyright 2020 HP Development Company, L.P. SPDX-License-Identifier: MIT*
```
from beakerx import *
Plot(title="test title",
xLabel="x label",
yLabel="y label")
plot1 = Plot()
plot1.add(Bars(displayName="Bar",
x=[20,40,60],
y=[100, 120, 90],
width=10))
plot2 = Plot()
plot2.add(Line(x=[1, 5, 3], y=[1, 2, 6]))
plot3 = Plot()
plot3.add(Points(y=[1, 3, 6, 3, 1],
x=[1, 2, 3, 4, 5],
size=10,
shape=ShapeType.DIAMOND))
plot4 = Plot();
plot4.add(Stems(y= [1.5, 1, 6, 5]))
plot5 = Plot(crosshair = Crosshair())
plot5.add(Area(x = [0, 1, 2, 3], y = [3, 5, 2, 3]))
Plot().add(ConstantLine(y=0.1)).add(ConstantLine(x=0.3, y=0.4, color=Color.gray, showLabel=True))
Plot().add(Line(y=[-3, 1, 3, 4, 5])).add(ConstantBand(x=[1, 2], y=[1, 3]))
from beakerx.plot import Text as BeakerxText
plot = Plot()
xs = [1, 2, 3, 4]
ys = [8.6, 6.1, 7.4, 2.5]
for i in range(0, 4):
plot.add(BeakerxText(x= xs[i], y= ys[i], text= 'test'))
plot.add(Line(x= xs, y= ys))
import pandas as pd
tableRows = pd.read_csv('../../../doc/resources/data/interest-rates.csv')
pp1 = Plot()
pp1.add(Bars(y=tableRows.y1))
pp2 = Plot()
pp2.add(Line(
x=pd.Series([10, 20, 30, 40, 50, 60, 70]),
y=pd.Series([0, 60, 10, 50, 20, 40, 30]),
width=5))
y1 = [1,5,3,2,3]
y2 = [1,2,4,1,3]
p = Plot()
a1 = Area(y=y1, displayName='y1')
a2 = Area(y=y2, displayName='y2')
stacker = XYStacker()
p.add(stacker.stack([a1, a2]))
SimpleTimePlot(tableRows, ["y1", "y10"], # column names
timeColumn="time", # time is default value for a timeColumn
displayNames=["1 Year", "10 Year"])
import time
millis = 1507541201624;
hour = round(1000 * 60 * 60);
xs = [];
ys = [];
for i in range(11):
xs.append(millis + hour * i);
ys.append(i);
plot = TimePlot(timeZone="America/New_York")
# list of milliseconds
plot.add(Points(x=xs, y=ys))
millis = 1507541201624;
nanos = millis * 1000 * 1000
xs = []
ys = []
for i in range(11):
xs.append(nanos + 7 * i)
ys.append(i);
np = NanoPlot()
np.add(Points(x=xs, y=ys))
sp = Plot()
sp.add(YAxis(label= "Test y axis"))
sp.add(Line(
x=pd.Series([10, 20, 30, 40, 50, 60, 70]),
y=pd.Series([0, 60, 10, 50, 20, 40, 30])))
sp.add(Line(
x=pd.Series([5, 15, 25, 35, 45, 55, 65]),
y=pd.Series([5, 65, 15, 55, 25, 45, 35]), yAxis= "Test y axis"))
import math
points = 100;
xs = [];
for i in range(0, points):
xs.append(i)
cplot = CombinedPlot(xLabel= "CombinedPlot");
linearPlot = Plot(title= "Linear x, Linear y");
linearPlot.add(Line(x= xs, y= xs));
cplot.add(linearPlot, 3);
logYPlot = Plot(logY= True, logX=False, title= "Linear x, Log y");
logYPlot.add(Line(x= xs, y= xs));
cplot.add(logYPlot, 3);
logXPlot = Plot(logY= False, logX=True, title= "Log x, Linear y");
logXPlot.add(Line(x= xs, y= xs));
cplot.add(logXPlot, 3);
cplot
```
# Attention Mechanism
In :numref:chapter_seq2seq, we encode the source sequence input information in the recurrent unit state and then pass it to the decoder to generate the target sequence. A token in the target sequence may closely relate to some tokens in the source sequence instead of the whole source sequence. For example, when translating "Hello world." to "Bonjour le monde.", "Bonjour" maps to "Hello" and "monde" maps to "world". In the seq2seq model, the decoder may implicitly select the corresponding information from the state passed by the encoder. The attention mechanism, however, makes this selection explicit.
Attention is a generalized pooling method with bias alignment over inputs. The core component in the attention mechanism is the attention layer, or called attention for simplicity. An input of the attention layer is called a query. For a query, the attention layer returns the output based on its memory, which is a set of key-value pairs. To be more specific, assume a query $\mathbf{q}\in\mathbb R^{d_q}$, and the memory contains $n$ key-value pairs, $(\mathbf{k}_1, \mathbf{v}_1), \ldots, (\mathbf{k}_n, \mathbf{v}_n)$, with $\mathbf{k}_i\in\mathbb R^{d_k}$, $\mathbf{v}_i\in\mathbb R^{d_v}$. The attention layer then returns an output $\mathbf o\in\mathbb R^{d_v}$ with the same shape as a value.
```
from IPython.display import SVG
SVG('./img/attention.svg')
```
To compute the output, we first assume there is a score function $\alpha$ which measures the similarity between the query and a key. Then we compute all $n$ scores $a_1, \ldots, a_n$ by
$$a_i = \alpha(\mathbf q, \mathbf k_i).$$
Next we use softmax to obtain the attention weights
$$b_1, \ldots, b_n = \textrm{softmax}(a_1, \ldots, a_n).$$
The output is then a weighted sum of the values
$$\mathbf o = \sum_{i=1}^n b_i \mathbf v_i.$$
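As a tiny worked example of these three steps (an illustrative sketch, not part of the original text; it uses a plain dot product as the score function, which is introduced formally below):
```
import torch

q = torch.tensor([1.0, 0.0])                   # one query, d_q = 2
K = torch.tensor([[1.0, 0.0], [0.0, 1.0]])     # n = 2 keys
V = torch.tensor([[10.0, 0.0], [0.0, 10.0]])   # n = 2 values
a = K @ q                                      # scores a_i = alpha(q, k_i) with a dot-product score
b = torch.softmax(a, dim=0)                    # attention weights b_i
o = (b.unsqueeze(-1) * V).sum(dim=0)           # output o: weighted sum of the values
print(b)  # ~ tensor([0.7311, 0.2689])
print(o)  # ~ tensor([7.3106, 2.6894])
```
Because the first key aligns with the query, most of the weight goes to its value, so the output lies close to that value.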
Different choices of the score function lead to different attention layers. We will discuss two commonly used attention layers in the rest of this section. Before diving into the implementation, we first introduce a masked version of the softmax operator and explain a specialized batched dot product operator, torch.bmm.
```
import math
import torch
import torch.nn as nn
```
The masked softmax takes a 3-dim input and allows us to filter out some elements by specifying valid lengths for the last dimension. (Refer to :numref:chapter_machine_translation for the definition of a valid length.)
```
def SequenceMask(X, X_len,value=0):
maxlen = X.size(1)
mask = torch.arange((maxlen),dtype=torch.float)[None, :] < X_len[:, None]
X[~mask]=value
return X
# Save to the d2l package.
def masked_softmax(X, valid_length):
# X: 3-D tensor, valid_length: 1-D or 2-D tensor
    softmax = nn.Softmax(dim=-1)
if valid_length is None:
return softmax(X)
else:
shape = X.shape
if valid_length.dim() == 1:
valid_length = torch.FloatTensor(valid_length.numpy().repeat(shape[1], axis=0))
else:
valid_length = valid_length.reshape((-1,))
# fill masked elements with a large negative, whose exp is 0
X = SequenceMask(X.reshape((-1, shape[-1])), valid_length)
return softmax(X).reshape(shape)
```
Construct two examples, where each example is a 2-by-4 matrix, as the input. If we specify the valid length for the first example to be 2, then only the first two columns of this example are used to compute softmax.
```
masked_softmax(torch.rand((2,2,4),dtype=torch.float), torch.FloatTensor([2,3]))
```
The operator torch.bmm takes two inputs $X$ and $Y$ with shapes $(b, n, m)$ and $(b, m, k)$, respectively, and computes $b$ matrix products, with $Z[i,:,:] = X[i,:,:]\,Y[i,:,:]$ for $i = 1, \ldots, b$.
```
torch.bmm(torch.ones((2,1,3), dtype = torch.float), torch.ones((2,3,2), dtype = torch.float))
```
# Dot Product Attention
The dot product assumes the query has the same dimension as the keys, namely $\mathbf q, \mathbf k_i \in \mathbb R^{d}$ for all $i$. It computes the score by an inner product between the query and a key, often divided by $\sqrt{d}$ to make the scores less sensitive to the dimension $d$. In other words,
$$\alpha(\mathbf q, \mathbf k) = \langle \mathbf q, \mathbf k \rangle / \sqrt{d}.$$
Assume $\mathbf Q \in \mathbb R^{m\times d}$ contains $m$ queries and $\mathbf K \in \mathbb R^{n\times d}$ has all $n$ keys. We can compute all $mn$ scores by
$$\alpha(\mathbf Q, \mathbf K) = \mathbf Q \mathbf K^T / \sqrt{d}.$$
Now let's implement this layer that supports a batch of queries and key-value pairs. In addition, it supports randomly dropping some attention weights as a regularization.
```
# Save to the d2l package.
class DotProductAttention(nn.Module):
def __init__(self, dropout, **kwargs):
super(DotProductAttention, self).__init__(**kwargs)
self.dropout = nn.Dropout(dropout)
# query: (batch_size, #queries, d)
# key: (batch_size, #kv_pairs, d)
# value: (batch_size, #kv_pairs, dim_v)
# valid_length: either (batch_size, ) or (batch_size, xx)
def forward(self, query, key, value, valid_length=None):
d = query.shape[-1]
        # transpose(1, 2) swaps the last two dimensions of key
scores = torch.bmm(query, key.transpose(1,2)) / math.sqrt(d)
attention_weights = self.dropout(masked_softmax(scores, valid_length))
return torch.bmm(attention_weights, value)
```
Now we create two batches, and each batch has one query and 10 key-value pairs. We specify through valid_length that for the first batch, we will only pay attention to the first 2 key-value pairs, while for the second batch, we will check the first 6 key-value pairs. Therefore, though both batches have the same query and key-value pairs, we obtain different outputs.
```
atten = DotProductAttention(dropout=0.5)
keys = torch.ones((2,10,2),dtype=torch.float)
values = torch.arange((40), dtype=torch.float).view(1,10,4).repeat(2,1,1)
atten(torch.ones((2,1,2),dtype=torch.float), keys, values, torch.FloatTensor([2, 6]))
```
# Multilayer Perceptron Attention
In multilayer perceptron attention, we first project both the query and the keys into $\mathbb R^{h}$.
To be more specific, assume learnable parameters $\mathbf W_k \in \mathbb R^{h\times d_k}$, $\mathbf W_q \in \mathbb R^{h\times d_q}$, and $\mathbf v \in \mathbb R^{h}$. Then the score function is defined by
$$\alpha(\mathbf k, \mathbf q) = \mathbf v^T \tanh(\mathbf W_k \mathbf k + \mathbf W_q \mathbf q).$$
Intuitively, this concatenates the key and query in the feature dimension and feeds them into a single-hidden-layer perceptron with hidden layer size $h$ and output layer size $1$. The hidden layer activation function is $\tanh$ and no bias is applied.
```
# Save to the d2l package.
class MLPAttention(nn.Module):
def __init__(self, units, dropout, **kwargs):
super(MLPAttention, self).__init__(**kwargs)
        # nn.Linear acts on the last dimension, so the 3-D shapes of query and key are kept.
        # The input size of 2 matches the feature dimension of the toy queries/keys used below.
        self.W_k = nn.Linear(2, units, bias=False)
        self.W_q = nn.Linear(2, units, bias=False)
        self.v = nn.Linear(units, 1, bias=False)
self.dropout = nn.Dropout(dropout)
def forward(self, query, key, value, valid_length):
        query, key = self.W_q(query), self.W_k(key)
# expand query to (batch_size, #querys, 1, units), and key to
# (batch_size, 1, #kv_pairs, units). Then plus them with broadcast.
features = query.unsqueeze(2) + key.unsqueeze(1)
scores = self.v(features).squeeze(-1)
attention_weights = self.dropout(masked_softmax(scores, valid_length))
return torch.bmm(attention_weights, value)
```
Despite MLPAttention containing an additional MLP model, given the same inputs with identical keys, we obtain the same output as for DotProductAttention: since all keys are identical, every key receives the same score, the attention weights are uniform over the valid key-value pairs, and the output is simply the average of the attended values.
```
atten = MLPAttention(units = 8, dropout=0.1)
atten(torch.ones((2,1,2), dtype = torch.float), keys, values, torch.FloatTensor([2, 6]))
```
# Summary
- An attention layer explicitly selects related information.
- An attention layer's memory consists of key-value pairs, so its output is close to the values whose keys are similar to the query.
```
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import torch
print(torch.__version__)
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data_utils
from torch.utils.data import DataLoader, Dataset, Sampler
from torch.utils.data.dataloader import default_collate
from torch.utils.tensorboard import SummaryWriter
from pytorch_lightning.metrics import Accuracy
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
INPUT_SIZE = 36
HIDDEN_SIZE = 25
OUTPUT_SIZE = 5
LEARNING_RATE = 1e-2
EPOCHS = 400
BATCH_SIZE = 256
EMBEDDING_SIZE = 5
class CustomDataset(Dataset):
    # Constructor: read and preprocess the dataset
def __init__(self):
X = pd.read_csv('./data/X_cat.csv', sep='\t', index_col=0)
target = pd.read_csv('./data/y_cat.csv', sep='\t', index_col=0, names=['status']) # header=-1,
weekday_columns = ['Weekday_0', 'Weekday_1', 'Weekday_2',
'Weekday_3', 'Weekday_4', 'Weekday_5', 'Weekday_6']
weekdays = np.argmax(X[weekday_columns].values, axis=1)
X.drop(weekday_columns, axis=1, inplace=True)
X['Weekday_cos'] = np.cos((2 * np.pi / 7.) * weekdays)
X['Weekday_sin'] = np.sin((2 * np.pi / 7.) * weekdays)
X['Hour_cos'] = np.cos((2 * np.pi / 24.) * X['Hour'].values)
X['Hour_sin'] = np.sin((2 * np.pi / 24.) * X['Hour'].values)
X['Month_cos'] = np.cos((2 * np.pi / 12.) * X['Month'].values)
X['Month_sin'] = np.sin((2 * np.pi / 12.) * X['Month'].values)
X['Gender'] = np.argmax(X[['Sex_Female', 'Sex_Male', 'Sex_Unknown']].values, axis=1)
X.drop(['Sex_Female', 'Sex_Male', 'Sex_Unknown'], axis=1, inplace=True)
print(X.shape)
print(X.head())
target = target.iloc[:, :].values
target[target == 'Died'] = 'Euthanasia'
le = LabelEncoder()
self.y = le.fit_transform(target)
self.X = X.values
self.columns = X.columns.values
self.embedding_column = 'Gender'
self.nrof_emb_categories = 3
self.numeric_columns = ['IsDog', 'Age', 'HasName', 'NameLength', 'NameFreq', 'MixColor', 'ColorFreqAsIs',
'ColorFreqBase', 'TabbyColor', 'MixBreed', 'Domestic', 'Shorthair', 'Longhair',
'Year', 'Day', 'Breed_Chihuahua Shorthair Mix', 'Breed_Domestic Medium Hair Mix',
'Breed_Domestic Shorthair Mix', 'Breed_German Shepherd Mix', 'Breed_Labrador Retriever Mix',
'Breed_Pit Bull Mix', 'Breed_Rare',
'SexStatus_Flawed', 'SexStatus_Intact', 'SexStatus_Unknown',
'Weekday_cos', 'Weekday_sin', 'Hour_cos', 'Hour_sin',
'Month_cos', 'Month_sin']
return
def __len__(self):
return len(self.X)
    # Override the method that returns a single
    # observation from the dataset by index
def __getitem__(self, idx):
row = self.X[idx, :]
row = {col: torch.tensor(row[i]) for i, col in enumerate(self.columns)}
return row, self.y[idx]
class MLPNet(nn.Module):
def __init__(self, input_size, hidden_size, output_size, nrof_cat, emb_dim,
emb_columns, numeric_columns):
super(MLPNet, self).__init__()
self.emb_columns = emb_columns
self.numeric_columns = numeric_columns
self.emb_layer = torch.nn.Embedding(nrof_cat, emb_dim)
self.feature_bn = torch.nn.BatchNorm1d(input_size)
self.linear1 = torch.nn.Linear(input_size, hidden_size)
self.linear1.apply(self.init_weights)
self.bn1 = torch.nn.BatchNorm1d(hidden_size)
self.linear2 = torch.nn.Linear(hidden_size, hidden_size)
self.linear2.apply(self.init_weights)
self.bn2 = torch.nn.BatchNorm1d(hidden_size)
self.linear3 = torch.nn.Linear(hidden_size, output_size)
def init_weights(self, m):
if type(m) == nn.Linear:
            torch.nn.init.xavier_uniform_(m.weight)
# m.bias.data.fill_(0.001)
def forward(self, x):
emb_output = self.emb_layer(torch.tensor(x[self.emb_columns], dtype=torch.int64))
numeric_feats = torch.tensor(pd.DataFrame(x)[self.numeric_columns].values, dtype=torch.float32)
concat_input = torch.cat([numeric_feats, emb_output], dim=1)
output = self.feature_bn(concat_input)
output = self.linear1(output)
output = self.bn1(output)
output = torch.relu(output)
output = self.linear2(output)
output = self.bn2(output)
output = torch.relu(output)
output = self.linear3(output)
        # Return raw logits: nn.CrossEntropyLoss applies log-softmax internally,
        # so an explicit softmax here would amount to a double softmax.
        return output
def run_train(model, train_loader):
step = 0
for epoch in range(EPOCHS):
model.train()
for features, label in train_loader:
# Reset gradients
optimizer.zero_grad()
output = model(features)
# Calculate error and backpropagate
loss = criterion(output, label)
loss.backward()
acc = accuracy(output, label).item()
# Update weights with gradients
optimizer.step()
step += 1
if step % 100 == 0:
print('EPOCH %d STEP %d : train_loss: %f train_acc: %f' %
(epoch, step, loss.item(), acc))
return step
animal_dataset = CustomDataset()
train_loader = data_utils.DataLoader(dataset=animal_dataset,
batch_size=BATCH_SIZE, shuffle=True)
model = MLPNet(INPUT_SIZE, HIDDEN_SIZE, OUTPUT_SIZE, animal_dataset.nrof_emb_categories,
EMBEDDING_SIZE,
animal_dataset.embedding_column, animal_dataset.numeric_columns)
criterion = nn.CrossEntropyLoss()
accuracy = Accuracy()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
step = run_train(model, train_loader)
```
# Setup
```
from google.colab import drive
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import roc_auc_score
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation
from keras.layers import Bidirectional, GlobalMaxPool1D
from keras.models import Model
from keras import initializers, regularizers, constraints, optimizers, layers
%matplotlib inline
from google.colab import drive
drive.mount('/content/drive')
```
## Read Data
```
train = pd.read_csv('./drive/My Drive/Data-X: GGWP Toxic Behavior Public Data/data/train[1].csv')
train.loc[train['obscene'] == 1, :].head()
train.head()
train.shape
```
# EDA
# Class Imbalance
```
clean_comments = train.iloc[:, 2:].sum(axis=1) == 0
clean_comments
print(f"Clean comments make up {round(clean_comments.sum() / len(train)* 100, 2)} % of the Training Data")
clean_series = pd.Series(clean_comments.sum(), index=['clean'])
tags = train.iloc[:, 2:].sum()
tags = tags.append(clean_series)
tags = tags.sort_values()
fig, ax = plt.subplots()
ax = sns.barplot(x=tags.index, y=tags)
ax.tick_params(axis='x', rotation=45)
ax.set(xlabel='Tags', ylabel='Number of Comments', title='Distribution of Tags')
plt.show()
```
# LSTM Baseline Model
## Basic Preprocessing
```
x_train = train.loc[:, 'comment_text']
y_train = train.loc[:, 'toxic':'identity_hate'].values
max_features = 20000
# Convert Words into Index using Tokenizer. Basically, every word is an index number
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(x_train))
x_train = tokenizer.texts_to_sequences(x_train)
```
## Choose best length of each vector
```
plt.hist([len(comment) for comment in x_train], bins=np.linspace(0, 400, 50))
plt.show()
```
Choosing a maximum sequence length of 50 tokens seems reasonable; most comments are shorter than that.
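One quick way to back this up (a small sketch reusing the tokenized `x_train` from above) is to check what fraction of comments fit within 50 tokens:
```
# Fraction of tokenized comments that fit within 50 tokens
frac_within_50 = np.mean([len(comment) <= 50 for comment in x_train])
print(f"{frac_within_50:.1%} of comments have <= 50 tokens")
```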
```
maxlen = 50
# Makes the vector lengths the same regardless of how many words in comments
x_train = pad_sequences(x_train, maxlen=maxlen)
```
## Setting up the Model Architecture
Hyperparameters for Architecture
```
embed_size = 128
```
1. Input Layer / Tensor
```
inp = Input(shape=(maxlen,))
```
2. Embedding Layer
```
x = Embedding(max_features, embed_size)(inp)
```
3. LSTM Layer
```
x = LSTM(60, return_sequences=True, name='lstm_layer')(x)
```
4. Global Max Pooling
```
x = GlobalMaxPool1D()(x) # Used to convert the 3D tensor into a 2D one
```
5. Dense
```
x = Dense(50, activation='relu')(x)
```
6. Dropout
```
x = Dropout(0.1)(x)
```
7. Dense
```
x = Dense(6, activation='sigmoid')(x)
```
## Train Model
```
model = Model(inputs=inp, outputs=x)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```
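Before configuring training, it can be useful to double-check the stacked layers and parameter counts:
```
# Print layer shapes and parameter counts of the compiled model
model.summary()
```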
### Model Configurations
```
batch_size = 32
epochs = 2
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)
x_predict = model.predict(x_train, batch_size=1024, verbose=1)
x_predict[:5, ]
```
# Get Validation Score
```
val_data = pd.read_csv('./drive/My Drive/Data-X: GGWP Toxic Behavior Public Data/data/combined.csv')
x_val = val_data.loc[:, 'text']
y_val = val_data.loc[:, 'toxic':'identity_hate'].values
# Vectorize
x_val = tokenizer.texts_to_sequences(x_val)
# Pad
x_val = pad_sequences(x_val, maxlen=maxlen) # Makes the vector lengths the same regardless of how many words in comments
y_pred_val = model.predict(x_val, batch_size=1024, verbose=1)
x_val_score = roc_auc_score(y_val, y_pred_val)
x_val_score
```
## Add LSTM Results to Model Results CSV
```
#model_results = pd.read_csv('./drive/My Drive/Data-X: GGWP Toxic Behavior Public Data/models/model_results.csv', index_col='Unnamed: 0')
#model_results = pd.concat([model_results, pd.DataFrame({'Model': ['LSTM'], 'val_auc_score': [x_val_score]})])
#model_results.to_csv('./drive/My Drive/Data-X: GGWP Toxic Behavior Public Data/models/model_results.csv')
```
# Integration with External Verilog
This guide is targeted towards users working with existing Verilog designs (as
opposed to Python/Magma circuits).
The approach relies on Magma's external Verilog integration features to
construct a Magma circuit representation that can then be used with a
`fault.Tester` object. This approach enables the use of most of fault's
features, except for logic that relies on descending into a design hierarchy
(e.g. peeking and poking sub-instance ports).
The first, simplest approach is to use `m.define_from_verilog` (which
takes a string containing verilog code) or
`m.define_from_verilog_file` (which takes a path to a file that contains verilog code)
to import a verilog design into Magma.
Here's an example using `m.define_from_verilog`
```
import magma as m
import logging
logging.basicConfig(level=logging.INFO)
import fault
# NOTE: define_from_verilog returns a list of modules
# (since there could be multiple), so in this case we
# simply index the first and only module with `[0]`
foo = m.define_from_verilog("""\
module foo(input I, output O);
assign O = I;
endmodule
""", target_modules=["foo"])[0]
print(f"Imported as magma circuit: {foo}")
tester = fault.Tester(foo)
tester.circuit.I = 1
tester.eval()
tester.circuit.O.expect(1)
tester.compile_and_run("verilator")
```
Here's an example using `m.define_from_verilog_file`
```
# write verilog string to a file
with open("foo.v", "w") as f:
f.write("""\
module foo(input I, output O);
assign O = I;
endmodule
""")
foo = m.define_from_verilog_file("foo.v", target_modules=["foo"])[0]
print(f"Imported as magma circuit: {foo}")
tester = fault.Tester(foo)
tester.circuit.I = 1
tester.eval()
tester.circuit.O.expect(1)
tester.compile_and_run("verilator")
```
An alternative to using `define_from_verilog` is to use `declare_from_verilog` to import a module interface, and provide the implementation to the simulator by copying the source verilog file into the simulation directory. This is useful when the source file contains code that is not supported by Magma's Verilog parser (e.g. advanced system verilog features), or when parsing takes a long time (e.g. a post-synthesis netlist file).
```
with open("foo_stub.v", "w") as f:
f.write("""\
module foo(input I, output O);
endmodule
""")
# You can similarly use declare_from_verilog with a Verilog string
foo = m.declare_from_verilog_file("foo_stub.v", target_modules=["foo"])[0]
print(f"Imported as magma circuit: {foo}")
tester = fault.Tester(foo)
tester.circuit.I = 1
tester.eval()
tester.circuit.O.expect(1)
import tempfile
import shutil
with tempfile.TemporaryDirectory() as dir_:
# Copy actual implementation to test directory
shutil.copy("foo.v", dir_)
# Set test directory with directory= kwarg
# Note: we also tell magma to skip compilation (skip_compile=True)
# since the verilog file is already present in the test directory
# (copied in the previous line)
tester.compile_and_run("verilator", directory=dir_, skip_compile=True)
```
A similar approach is to declare the interface using magma. This has the advantage of letting you write a sophisticated interface generator for your external module (e.g. if you're integrating with an external generator framework). In doing this, you may find it useful to first define/declare a Circuit for the verilog module (as done above) using the basic corresponding types. Then, in a magma circuit wrapper, you can use more complex types and wire them up to the underlying verilog ports.
```
# Declare magma circuit with the same name and equivalent interface
# to a verilog circuit
class foo(m.Circuit):
io = m.IO(I=m.In(m.Bit), O=m.Out(m.Bit))
tester = fault.Tester(foo)
tester.circuit.I = 1
tester.eval()
tester.circuit.O.expect(1)
import tempfile
import shutil
with tempfile.TemporaryDirectory() as dir_:
# Copy actual implementation to test directory
shutil.copy("foo.v", dir_)
# Set test directory with directory= kwarg
tester.compile_and_run("verilator", directory=dir_, skip_compile=True)
```
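For example, a wrapper circuit around the declared `foo` module might look roughly like the sketch below. This is only an illustration of the wiring pattern described above: the `FooWrapper` name is made up, and the exact wiring syntax can vary between magma versions.
```
# Hypothetical wrapper: instantiate the declared verilog module and wire
# its ports to the wrapper's IO (which could use richer magma types).
class FooWrapper(m.Circuit):
    io = m.IO(I=m.In(m.Bit), O=m.Out(m.Bit))
    foo_inst = foo()
    m.wire(io.I, foo_inst.I)
    m.wire(foo_inst.O, io.O)
```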
In this notebook, I show the results of using the equilibrium-penalized network in SAM. Following some previous work, I am not applying the neural network's output within a few hundred kilometers of the poleward boundaries.
```
import xarray as xr
import matplotlib.pyplot as plt
import holoviews as hv
hv.extension('bokeh')
%opts Image[width=600, height=400, colorbar=True](cmap='viridis')
%opts Curve[width=400]
```
# Loading the data
```
run_path = "../data/runs/2018-11-09-model188-equilibriation-penalty"
files_3d = f"{run_path}/OUT_3D/*.nc"
files_2d = f"{run_path}/OUT_2D/CASE__1.2Dbin_1.nc"
ds_2d = xr.open_dataset(files_2d)
ds = xr.open_mfdataset(files_3d)
```
# Some basic plots
Here is the precipitable water:
```
%%opts Image{+framewise}
hv.Dataset(ds_2d.PW[::20]).to.image(["x", "y"]).redim.range(PW=(0, 50))
```
The PW time series is actually pretty promising, but it does diverge around day 112 for some reason.
We can see this in the precipitable water time series:
```
%%opts Curve[logy=True]
hv.Curve(ds_2d.PW[:,32,0])
```
And the W500 time series diverges:
```
hv.Curve(ds_2d.W500[:,32,0])
```
I don't think the neural network is causing this problem. Here is the $Q_2$ field at day 112.8, which is right before:
```
time_divergence = 112.88
def plot_images_at_time(da, time):
opts = "Image{+framewise}"
return hv.Dataset(da.sel(time=time, method='nearest')).to.image(["x", "y"]).opts(opts)
plot_images_at_time(ds.FQTNN[:,:20:2], time_divergence)
```
There is a large cluster of very strong drying around 10000 km by 6000 km, but this is not that unusual in the data. Notice that $Q_2$ is zero near the meridional boundaries. On the other hand, the vertical velocity field shows some strong artifacts near the boundaries.
```
%%opts Image[colorbar=True, width=600]
w_around_blow_up = ds_2d.W500.sel(time=slice(time_divergence-1, time_divergence + .3))[::2]
hv.Dataset(w_around_blow_up).to.image(["x", "y"], dynamic=True).redim.range(W500=(-.2 ,.2))
```
A couple of things stand out:
1. There is a very fast Gibbs-like wave propagating west along the southern boundary.
2. There is also a $2\Delta x$-type ringing near the northern boundary, which is probably causing the blow-up.
It is also kind of interesting to look at the absolute vorticity field.
```
from uwnet.thermo import coriolis_ngaqua
def vorticity(u, v):
f = coriolis_ngaqua(u.y)
psi = u.differentiate('y') - v.differentiate('x')
psi.name = 'Vorticity'
return psi
vort = vorticity(ds_2d.U850, ds_2d.V850)
%%opts Image(cmap='RdBu_r')
hv.Dataset(vort[::15]).to.image(["x", "y"], label="Relative Vort")\
.redim.range(Vorticity=(-1e-4, 1e-4))
```
One thing I note is that all of the cyclones are flung apart at the very start of the simulation, and they are blurred out very quickly by the hyperdiffusion. Then a ton of vorticity accumulates at the southern boundary. Here is the absolute vorticity. Indeed, there is a reversal at the southern boundary.
```
abs_vort = vort + coriolis_ngaqua(vort.y)
abs_vort[140].mean('x').plot()
plt.ylabel("Absolute Vorticity (1/s)");
```
# Imbalance of the model
## Drift in PW Over the simulation
There is also a substantial drift in the mean moisture. Here is the zonal mean change in PW from the initial condition for three time points.
```
pw0 = ds_2d.PW.mean('x')[0]
pw_anom = ds_2d.PW.mean('x') - pw0
pw_anom[[10, 50, 140]].plot(hue='time')
```
This shows that the tropics are drying out and the subtropics getting much moister.
## Semi-prognostic imbalance
By the way, the model is somewhat imbalanced even when evaluated in semi-prognostic mode:
```
import torch
model = torch.load("../models/188/5.pkl")
ds = xr.open_dataset("../data/processed/training.nc")
output = model.call_with_xr(ds.isel(time=slice(0,None,20), step=0))
qt_mu = output.QT.mean(['x', 'time'])
fqt_mu = ds.FQT.mean(['x', 'time'])*86400
(fqt_mu+qt_mu).plot()
imbalance = ((fqt_mu+qt_mu)*ds.layer_mass).sum('z')/1000
```
I really should figure out how to make the scheme predict the right thing in the upper atmosphere.
```
imbalance.plot()
```
This net imbalance field lines up pretty well with the moistening/drying pattern we saw [above](#Drift-in-PW-Over-the-simulation).
```
import numpy as np
import os
import csv
from pickle import load, dump
with open('game_data_public.STX.PremierDraft.csv', newline='') as f:
reader = csv.reader(f)
data = list(reader)
if not os.path.exists("data"):
os.mkdir("data")
x = 0
card_dict = {}
card_index = {}
drafted_cards = []
for index, word in enumerate(data[0]):
if word.startswith('deck_'):
card = word.split("deck_")[1]
print(card)
card_dict[card] = x
card_index[x] = card
x = x + 1
drafted_cards.append(index)
if word.startswith("sideboard_"):
drafted_cards.append(index)
print(x)
dump(card_dict, open('data/cardtoindex.pkl', 'wb'))
dump(card_index, open('data/indextocard.pkl', 'wb'))
# Aggregate game rows into one record per draft: count wins/losses and
# collect maindeck ("main_") and total drafted ("draft_") card counts per draft ID.
draftID = "0x"
clean_data = []
wins = 0
losses = 0
for index, value in enumerate(data[1:], 1):
if value[2] != draftID:
draftID = value[2]
wins = 0
losses = 0
draft_data = {}
if value[2] == draftID:
if value[15] == "True":
wins = wins + 1
else:
losses = losses + 1
if index == len(data) - 1 or data[index + 1][2] != draftID:
draft_data = {"DraftID": draftID}
for c_index in drafted_cards:
if int(value[c_index]) > 0:
card_name = card_index[(c_index - 702) % 342]
if card_name == "Mountain" or card_name == "Island" or card_name == "Plains" or card_name == "Swamp" or card_name == "Forest":
continue
if c_index < 1045:
draft_data["main_" + card_name] = value[c_index]
if "draft_" + card_name in draft_data:
draft_data["draft_" + card_name] = int(draft_data["draft_" + card_name]) + int(value[c_index])
else:
draft_data["draft_" + card_name] = value[c_index]
draft_data["Wins"] = wins
draft_data["Losses"] = losses
clean_data.append(draft_data)
dump(clean_data, open('data/clean_data.pkl', 'wb'))
winning_data = []
for values in clean_data:
if values["Wins"] > 2:
winning_data.append(values)
dump(winning_data, open('data/winning_data.pkl', 'wb'))
def datasetup(draft):
cards = 0
alldrafted = np.zeros(343, np.int8)
maindeck = np.zeros(343, np.int8)
for key, value in draft.items():
if key.startswith("main_"):
cards = cards + int(value)
card = key.split("main_")[1]
maindeck[int(card_dict[card])] = value
if key.startswith("draft_"):
card = key.split("draft_")[1]
alldrafted[int(card_dict[card])] = value
return cards, alldrafted, maindeck
for index, deck in enumerate(winning_data): # TODO add weighting based on number of wins
cardcount, alldrafted, removedmain = datasetup(deck)
y = removedmain
X1 = alldrafted
np.save('data/' + str(index) + '.npy', np.array(([X1], [y])))
```
<a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 300, align = "center"></a>
<h1 align=center><font size = 5>Lab: Connect to Db2 database on Cloud using Python</font></h1>
# Introduction
This notebook illustrates how to access a DB2 database on Cloud using Python by following the steps below:
1. Import the `ibm_db` Python library
1. Enter the database connection credentials
1. Create the database connection
1. Close the database connection
__Note:__ Please follow the instructions given in the first Lab of this course to Create a database service instance of Db2 on Cloud and retrieve your database Service Credentials.
## Import the `ibm_db` Python library
The `ibm_db` [API ](https://pypi.python.org/pypi/ibm_db/) provides a variety of useful Python functions for accessing and manipulating data in an IBM® data server database, including functions for connecting to a database, preparing and issuing SQL statements, fetching rows from result sets, calling stored procedures, committing and rolling back transactions, handling errors, and retrieving metadata.
We first import the ibm_db library into our Python Application
Execute the following cell by clicking within it and then
press `Shift` and `Enter` keys simultaneously
```
import ibm_db
```
When the command above completes, the `ibm_db` library is loaded in your notebook.
## Identify the database connection credentials
Connecting to dashDB or DB2 database requires the following information:
* Driver Name
* Database name
* Host DNS name or IP address
* Host port
* Connection protocol
* User ID (or username)
* User Password
__Notice:__ To obtain credentials please refer to the instructions given in the first Lab of this course
Now enter your database credentials below and execute the cell with `Shift` + `Enter`
All the details from the IBM Cloud service credentials:
{
"hostname": "dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net",
"password": "5145b^cqrwp9nz0t",
"https_url": "https://dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net:8443",
"port": 50000,
"ssldsn": "DATABASE=BLUDB;HOSTNAME=dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net;PORT=50001;PROTOCOL=TCPIP;UID=gnn77376;PWD=5145b^cqrwp9nz0t;Security=SSL;",
"host": "dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net",
"jdbcurl": "jdbc:db2://dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net:50000/BLUDB",
"uri": "db2://gnn77376:5145b%5Ecqrwp9nz0t@dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net:50000/BLUDB",
"db": "BLUDB",
"dsn": "DATABASE=BLUDB;HOSTNAME=dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net;PORT=50000;PROTOCOL=TCPIP;UID=gnn77376;PWD=5145b^cqrwp9nz0t;",
"username": "gnn77376",
"ssljdbcurl": "jdbc:db2://dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net:50001/BLUDB:sslConnection=true;"
}
```
#Replace the placeholder values with your actual Db2 hostname, username, and password:
dsn_hostname = "dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net" # e.g.: "dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net"
dsn_uid = "gnn77376" # e.g. "abc12345"
dsn_pwd = "5145b^cqrwp9nz0t" # e.g. "7dBZ3wWt9XN6$o0J"
dsn_driver = "{IBM DB2 ODBC DRIVER}"
dsn_database = "BLUDB" # e.g. "BLUDB"
dsn_port = "50000" # e.g. "50000"
dsn_protocol = "TCPIP" # i.e. "TCPIP"
```
## Create the DB2 database connection
The ibm_db API uses the IBM Data Server Driver for ODBC and CLI APIs to connect to IBM DB2 and Informix.
Let's build the dsn connection string using the credentials you entered above.
```
#DO NOT MODIFY THIS CELL. Just RUN it with Shift + Enter
#Create the dsn connection string
dsn = (
"DRIVER={0};"
"DATABASE={1};"
"HOSTNAME={2};"
"PORT={3};"
"PROTOCOL={4};"
"UID={5};"
"PWD={6};").format(dsn_driver, dsn_database, dsn_hostname, dsn_port, dsn_protocol, dsn_uid, dsn_pwd)
#print the connection string to check correct values are specified
print(dsn)
```
Now establish the connection to the database
```
#DO NOT MODIFY THIS CELL. Just RUN it with Shift + Enter
#Create database connection
try:
conn = ibm_db.connect(dsn, "", "")
print ("Connected to database: ", dsn_database, "as user: ", dsn_uid, "on host: ", dsn_hostname)
except:
print ("Unable to connect: ", ibm_db.conn_errormsg() )
```
Congratulations if you were able to connect successfully. Otherwise check the error and try again.
```
#Retrieve Metadata for the Database Server
server = ibm_db.server_info(conn)
print ("DBMS_NAME: ", server.DBMS_NAME)
print ("DBMS_VER: ", server.DBMS_VER)
print ("DB_NAME: ", server.DB_NAME)
#Retrieve Metadata for the Database Client / Driver
client = ibm_db.client_info(conn)
print ("DRIVER_NAME: ", client.DRIVER_NAME)
print ("DRIVER_VER: ", client.DRIVER_VER)
print ("DATA_SOURCE_NAME: ", client.DATA_SOURCE_NAME)
print ("DRIVER_ODBC_VER: ", client.DRIVER_ODBC_VER)
print ("ODBC_VER: ", client.ODBC_VER)
print ("ODBC_SQL_CONFORMANCE: ", client.ODBC_SQL_CONFORMANCE)
print ("APPL_CODEPAGE: ", client.APPL_CODEPAGE)
print ("CONN_CODEPAGE: ", client.CONN_CODEPAGE)
```
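With the connection open, the same API can also be used to issue SQL statements and fetch rows from result sets, as described earlier in this notebook. Here is a minimal sketch; the table name `DEPARTMENTS` is just a placeholder, so substitute a table that exists in your schema:
```
#Issue a query and fetch rows (DEPARTMENTS is a placeholder table name)
try:
    stmt = ibm_db.exec_immediate(conn, "SELECT * FROM DEPARTMENTS")
    row = ibm_db.fetch_assoc(stmt)
    while row:
        print(row)
        row = ibm_db.fetch_assoc(stmt)
except:
    print("Query failed: ", ibm_db.stmt_errormsg())
```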
## Close the Connection
We free all resources by closing the connection. Remember that it is always important to close connections so that we can avoid unused connections taking up resources.
```
ibm_db.close(conn)
```
## Summary
In this tutorial you established a connection to a DB2 database on Cloud database from a Python notebook using ibm_db API.
Copyright © 2017 [cognitiveclass.ai](cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
# Randomized Hilbert-Schmidt Independence Criterion (RHSIC) From Scratch
```
import sys
sys.path.insert(0, '/home/emmanuel/code/kernellib')
import numba
import numpy as np
from sklearn.utils import check_random_state
from kernellib.dependence import get_sample_data, HSIC, RHSIC
from kernellib.kernels import estimate_length_scale, rbf_kernel
from kernellib.kernels.kernel_approximation import RFF
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
plt.style.use('fivethirtyeight')
my_colors = [
'#000000',
'#ff0000',
]
cmap=LinearSegmentedColormap.from_list('mycmap', my_colors)
%matplotlib inline
%load_ext autoreload
%autoreload 2
X, Y = get_sample_data(dataset='hh', num_points=1000, seed=1234, noise=0.1)
n_samples, d_dimensions = X.shape
fig, ax = plt.subplots()
ax.scatter(X, Y)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_title('Data: High Correlation, High Dependence')
plt.show()
```
## RHSIC Algorithm
### Step I: Compute Kernel
#### RFF Kernel
```
# Estimate Length_scale Parameters
sub_sample = 1000
method = 'median'
seed = 1234
factor = 1 / (n_samples - 1)**2
sigma_x = estimate_length_scale(
X,
sub_sample=sub_sample,
method=method,
random_state=seed
)
sigma_y = estimate_length_scale(
Y,
sub_sample=sub_sample,
method=method,
random_state=seed
)
print(f"\u03BB_x = {sigma_x:.4f}")
print(f"\u03BB_y = {sigma_y:.4f}")
# Number of random features
n_features = 500
random_state = 1234
rng = check_random_state(random_state)
# ======================
# Kernel Matrix: X
# ======================
# Generate n_components iid samples (Random Projection Matrix)
Wx = (1 / sigma_x) * rng.randn(d_dimensions, n_features)
# Explicitly project the features
Zx = (1 / np.sqrt(n_features)) * np.exp(1j * X @ Wx)
# ======================
# Kernel Matrix: Y
# ======================
rng = check_random_state(random_state)
# Generate n_components iid samples (Random Projection Matrix)
Wy = (1 / sigma_y) * rng.randn(d_dimensions, n_features)
# Explicitly project the features
Zy = (1 / np.sqrt(n_features)) * np.exp(1j * Y @ Wy)
```
### Step II: Center Kernels
```
# Remove the Mean
Zxc = Zx - np.mean(Zx, axis=0)[None, :]
Zyc = Zy - np.mean(Zy, axis=0)[None, :]
fig, ax = plt.subplots(nrows=1, ncols=2)
p0 = ax[0].imshow(np.real(Zxc), )
p1 = ax[1].imshow(np.real(Zyc), )
plt.show()
```
### RFF Kernel Approximation Class
```
rff_model = RFF(
n_components=n_features,
length_scale=None,
method='median',
center=False,
random_state=1234
)
Zx_ = rff_model.fit_transform(X)
np.testing.assert_array_almost_equal(Zx, Zx_)
rff_model = RFF(
n_components=n_features,
length_scale=None,
method='median',
center=True,
random_state=1234
)
Zxc_ = rff_model.fit_transform(X)
np.testing.assert_array_almost_equal(Zxc, Zxc_)
```
### Step III - Compute HSIC Value
$$RHSIC = \text{factor} \cdot \text{tr}\left(\tilde{Z}_x\tilde{Z}_x^{H} \tilde{Z}_y\tilde{Z}_y^{H} \right)$$
$$RHSIC = \text{factor} \cdot \text{tr}\left(\tilde{Z}_x^{H}\tilde{Z}_y \tilde{Z}_y^{H}\tilde{Z}_x \right)$$
```
Rxy = np.matrix.getH(Zxc).dot(Zyc)
rh = factor * np.real(np.einsum('ij,ji->', Rxy, np.matrix.getH(Rxy)))
print(rh)
```
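Equivalently, since the trace above is just the squared Frobenius norm of $\tilde{Z}_x^{H}\tilde{Z}_y$, we can double-check the value directly using the arrays computed above:
```
# Same RHSIC value via the squared Frobenius norm of Zx^H Zy
rh_fro = factor * np.linalg.norm(Rxy, 'fro')**2
print(rh_fro)
```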
### Step I - Data & Parameters
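Note: the `rhsic_data` dictionary used in the rest of this notebook holds reference arrays exported from a MATLAB implementation. It is assumed to be loaded from a `.mat` file, e.g. with `scipy.io.loadmat`; the file name below is a placeholder.
```
# Load the MATLAB reference arrays (file name is a placeholder)
from scipy.io import loadmat
rhsic_data = loadmat('rhsic_reference.mat')
```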
```
X = rhsic_data['x']
Y = rhsic_data['y']
n, d = X.shape
n_features = rhsic_data['D'][0][0]
fig, ax = plt.subplots()
ax.scatter(X, Y)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_title('Original Data')
plt.show()
```
## MATLAB Calculation
```
factor = 1 / (n - 1)**2
factor_matlab = rhsic_data['factor'][0][0]
print(f"factor: {factor}")
print(f'factor (matlab): {factor_matlab}')
zx_matlab = rhsic_data['zx']
Wx_matlab = rhsic_data['Wx']
Zx_matlab = rhsic_data['Zx']
Zxc_matlab = rhsic_data['Zxc']
zy_matlab = rhsic_data['zy']
Wy_matlab = rhsic_data['Wy']
Zy_matlab = rhsic_data['Zy']
Zyc_matlab = rhsic_data['Zyc']
hsic_matlab = rhsic_data['hsic'][0][0]
print(f"MATLAB HSIC: {hsic_matlab}")
# Calculate RHSIC from the MATLAB feature matrices
Rxy = Zxc_matlab.T @ Zyc_matlab
Rxy_matlab = rhsic_data['Rxy']
# np.testing.assert_array_almost_equal(Rxy, Rxy_matlab, decimal=1)
print(Zxc_matlab.shape, Zyc_matlab.shape)
print(Rxy_matlab.shape, Rxy.shape)
fig, ax = plt.subplots(nrows=1, ncols=2)
ax[0].imshow(np.real(Rxy))
ax[1].imshow(np.real(Rxy_matlab))
ax[0].set_title('Python Multi')
ax[1].set_title('MATLAB Multi')
plt.show()
Rxy_matlab[:2, :2]
np.transpose(Rxy_matlab[:2, :2])
hsic = factor * np.real(np.trace( Rxy @ Rxy.T))
hsic_matlab = factor * np.real(np.trace( Rxy_matlab @ np.matrix.getH(Rxy_matlab)))
print(f'HSIC Py (I): {hsic}')
print(f'HSIC Matlab (I): {hsic_matlab}')
# Calculate RHSIC
Zxx_matlab = Zx_matlab @ Zxc_matlab.T
Zyy_matlab = Zy_matlab @ Zyc_matlab.T
hsic_matlab = (1 / (n - 1)**2) * np.real(np.trace(Zxx_matlab * Zyy_matlab))
print(f'HSIC Matlab (II): {hsic_matlab}')
```
### Step II - Estimate Kernel Parameters
```
# Use matlab kernel parameters
sigma_x = rhsic_data['sigmax'][0][0]
sigma_y = rhsic_data['sigmay'][0][0]
print(f'Sigma_x: {sigma_x}')
print(f'Sigma_y: {sigma_y}')
```
### Generate Random normal vector
$z_x \sim \mathcal{N}(0, 1) \in \mathbb{R}^{D \times F}$
where:
* $D$ is the number of dimensions
* $F$ is the number of features
```
# set random generator
rng = check_random_state(1234)
n, d = X.shape
n_features = rhsic_data['D'][0][0]
# generate random samples
zx = rng.randn(d, n_features)
zx_matlab = rhsic_data['zx']
print(zx.shape, zx_matlab.shape)
```
### Random Features Kernel
```
Wx = (1 / sigma_x) * zx
Wx_matlab = (1 / sigma_x) * zx_matlab
```
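To continue the comparison, the explicit feature map from Step I can then be applied with the MATLAB projection matrix. A rough sketch:
```
# Explicit feature map using the MATLAB projection matrix (as in Step I)
Zx_from_matlab_W = (1 / np.sqrt(n_features)) * np.exp(1j * X @ Wx_matlab)
print(Zx_from_matlab_W.shape)
```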
## RVT default module
The rvt.default module is meant to quickly calculate or save any rvt visualization (it is suitable for beginner Python users).
To calculate a visualization we need visualization parameters (e.g. hillshade sun azimuth). The default module has a class rvt.default.DefaultValues() where all visualization parameters are stored as attributes. This class also contains methods to get (calculate) the numpy array of a specific visualization, or to calculate and save a specific visualization as a GeoTIFF; all methods use the class attributes (the set parameters). For the get methods we need a DEM numpy array, for the save methods we need a DEM path. If you call a save method for a specific visualization (e.g. default.save_hillshade()) it will be saved in the DEM (dem_path) directory; to change the output directory you have to pass the output directory as a string in custom_dir (a save-method parameter). Save methods also have two boolean parameters, save_float and save_8bit. If save_float is True the method will save the visualization as float, and if save_8bit is True it will bytescale the visualization (0-255) and save it. Both can be True to save both.
Let's import modules:
```
import matplotlib.pyplot as plt
import rvt.default
```
To get a visualization array we also need the input DEM as a numpy array. We will use the default module function get_raster_arr() to read it.
```
dem_path = r"../test_data/TM1_564_146.tif" # set path to your dem
dict_dem = rvt.default.get_raster_arr(dem_path)
dem_arr = dict_dem["array"] # numpy array of DEM
dem_resolution = dict_dem["resolution"]
dem_res_x = dem_resolution[0] # resolution in X direction
dem_res_y = dem_resolution[1] # resolution in Y direction
dem_no_data = dict_dem["no_data"]
plt.imshow(dem_arr, cmap='gray') # show DEM
```
Create an instance of the rvt.default.DefaultValues() class:
```
default = rvt.default.DefaultValues() # we created instance of class and stored it in default variable
```
Our DEM (example DEM: TM1_564_146.tif) doesn't have any noData pixels. This is why we set fill_no_data and keep_original_no_data to False. We also won't define no_data in any of the get visualization methods.
```
default.fill_no_data = False
default.keep_original_no_data = False
```
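If your DEM does contain noData pixels, the idea above is reversed: enable the noData handling attributes and pass the noData value to the get methods. The sketch below follows the parameter names used in this notebook; treat the commented call as an assumption if your rvt version differs.
```
# Hypothetical settings for a DEM that does contain noData pixels
default.fill_no_data = True
default.keep_original_no_data = True
# ...and pass the noData value explicitly to the get methods, e.g.:
# slope_arr = default.get_slope(dem_arr=dem_arr, resolution_x=dem_res_x,
#                               resolution_y=dem_res_y, no_data=dem_no_data)
```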
### Slope Gradient
Set parameters:
```
default.slp_output_units = "degree"
```
Calculate numpy array:
```
slope_arr = default.get_slope(dem_arr=dem_arr, resolution_x=dem_res_x, resolution_y=dem_res_y)
plt.imshow(slope_arr, cmap='gray')
```
Calculate and save as GeoTIFF in DEM directory:
```
default.save_slope(dem_path=dem_path, custom_dir=None, save_float=True, save_8bit=True)
```
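As described in the introduction, passing a directory string via `custom_dir` saves the GeoTIFF somewhere other than the DEM folder (the output path below is just an example):
```
# Save slope into a custom output directory instead of the DEM directory
# (the path below is an example)
default.save_slope(dem_path=dem_path, custom_dir=r"../test_data/output",
                   save_float=True, save_8bit=False)
```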
### Hillshade
Set parameters:
```
default.hs_sun_el = 35
default.hs_sun_azi = 315
```
Calculate numpy array:
```
hillshade_arr = default.get_hillshade(dem_arr=dem_arr, resolution_x=dem_res_x, resolution_y=dem_res_y)
plt.imshow(hillshade_arr, cmap='gray')
```
Calculate and save as GeoTIFF in DEM directory:
```
default.save_hillshade(dem_path=dem_path, custom_dir=None, save_float=True, save_8bit=True)
```
### Multiple directions hillshade
Set parameters:
```
default.mhs_nr_dir = 16
default.mhs_sun_el = 35
```
Calculate numpy array:
```
mhs_arr = default.get_multi_hillshade(dem_arr=dem_arr, resolution_x=dem_res_x, resolution_y=dem_res_y)
```
Calculate and save as GeoTIFF in DEM directory:
```
default.save_multi_hillshade(dem_path=dem_path, custom_dir=None, save_float=True, save_8bit=True)
```
### Simple local relief model
Set parameters:
```
default.slrm_rad_cell = 20
```
Calculate numpy array:
```
slrm_arr = default.get_slrm(dem_arr=dem_arr)
plt.imshow(slrm_arr, cmap='gray')
```
Calculate and save as GeoTIFF in DEM directory:
```
default.save_slrm(dem_path=dem_path, custom_dir=None, save_float=True, save_8bit=True)
```
### Multi-scale relief model
Set parameters:
```
default.msrm_feature_min = 1
default.msrm_feature_max = 5
default.msrm_scaling_factor = 3
```
Calculate numpy array:
```
msrm_arr = default.get_msrm(dem_arr=dem_arr, resolution=dem_res_x)
plt.imshow(msrm_arr, cmap='gray')
```
Calculate and save as GeoTIFF in DEM directory:
```
default.save_msrm(dem_path=dem_path, custom_dir=None, save_float=True, save_8bit=True)
```
### Sky-view factor, Anisotropic Sky-view factor, Positive - Openness
Set parameters:
```
# parameters for all three
default.svf_n_dir = 16
default.svf_r_max = 10
default.svf_noise = 0
# parameters for asvf
default.asvf_dir = 315
default.asvf_level = 1
```
Calculate numpy array:
```
svf_asvf_opns_dict = default.get_sky_view_factor(dem_arr=dem_arr, resolution=dem_res_x,
compute_svf=True, compute_asvf=True, compute_opns=True)
svf_arr = svf_asvf_opns_dict["svf"]
plt.imshow(svf_arr, cmap='gray')
asvf_arr = svf_asvf_opns_dict["asvf"]
plt.imshow(asvf_arr, cmap='gray')
opns_arr = svf_asvf_opns_dict["opns"]
plt.imshow(opns_arr, cmap='gray')
```
Calculate and save as GeoTIFF in DEM directory:
```
default.save_sky_view_factor(dem_path=dem_path, save_svf=True, save_asvf=True, save_opns=True,
custom_dir=None, save_float=True, save_8bit=True)
```
### Negative - Openness
Set parameters (svf_parameters):
```
default.svf_n_dir = 16
default.svf_r_max = 10
default.svf_noise = 0
```
Calculate numpy array:
```
neg_opns_arr = default.get_neg_opns(dem_arr=dem_arr, resolution=dem_res_x)
plt.imshow(neg_opns_arr, cmap='gray')
```
Calculate and save as GeoTIFF in DEM directory:
```
default.save_neg_opns(dem_path=dem_path, custom_dir=None, save_float=True, save_8bit=True)
```
### Local dominance
Set parameters:
```
default.ld_min_rad = 10
default.ld_max_rad = 20
default.ld_rad_inc = 1
default.ld_anglr_res = 15
default.ld_observer_h = 1.7
```
Calculate numpy array:
```
local_dom_arr = default.get_local_dominance(dem_arr=dem_arr)
plt.imshow(local_dom_arr, cmap='gray')
```
Calculate and save as GeoTIFF in DEM directory:
```
default.save_local_dominance(dem_path=dem_path, custom_dir=None, save_float=True, save_8bit=True)
```
### Sky illumination
Set parameters:
```
default.sim_sky_mod = "overcast"
default.sim_compute_shadow = 0
default.sim_shadow_dist = 100
default.sim_nr_dir = 32
default.sim_shadow_az = 315
default.sim_shadow_el = 35
```
Calculate numpy array:
```
sky_illum_arr = default.get_sky_illumination(dem_arr=dem_arr, resolution=dem_res_x)
plt.imshow(sky_illum_arr, cmap='gray')
```
Calculate and save as GeoTIFF in DEM directory:
```
default.save_sky_illumination(dem_path=dem_path, custom_dir=None, save_float=True, save_8bit=True)
```
# Word2vec
References:
* https://en.wikipedia.org/wiki/Word2vec
* http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/
* https://www.tensorflow.org/tutorials/word2vec
```
# windows only hack for graphviz path
import os
for path in os.environ['PATH'].split(os.pathsep):
if path.endswith("Library\\bin"):
os.environ['PATH']+=os.pathsep+os.path.join(path, 'graphviz')
# set environment variables to control keras and theano
os.environ['KERAS_BACKEND']="tensorflow"
os.environ['THEANO_FLAGS']="floatX=float32, device=cuda"
import numpy as np
```
## Download the data
```
import os
import urllib
from urllib.request import urlretrieve
# the download is about 26 MB
dataset ="text8.bz2"
origin_url = "https://github.com/tjwei/tf-play/raw/master/text8.bz2"
def reporthook(a,b,c):
print("\rdownloading: %5.1f%%"%(a*b*100.0/c), end="")
if not os.path.isfile(dataset):
print('Downloading data from %s' % origin_url)
urlretrieve(origin_url, dataset, reporthook=reporthook)
import bz2
with bz2.open(dataset, "rt") as text8_file:
words = text8_file.read().split()
print('Data size', len(words))
words[:20]
```
### Remove rare words first
```
# how many distinct words are there?
len(set(words))
```
We only keep the 50,000 most frequent words; all other words are replaced with UNK.
```
import collections
# count word frequencies first
counter = collections.Counter(words)
# take a look at the contents of counter
# the 20 most common words
counter.most_common(20)
vocabulary_size = 50000
wordfreq = counter.most_common(vocabulary_size-1)
# build the index -> word lookup table
num2word = ['UNK'] + [w for (w, _) in wordfreq]
freq = np.array([0]+[n for (_, n) in wordfreq], dtype="float64")
freq[0] = len(words) - freq.sum()
freq = (np.sqrt(freq/0.001)+1)*9.001/freq
# build a word -> index lookup table
word2num = {w: i for i, w in enumerate(num2word)}
# convert words into their corresponding indices
data = np.array([word2num.get(word, 0) for word in words])
# words is no longer needed
del words
del wordfreq
freq[:10]
```
Take a look at the current state of the data.
```
print(data[:20])
print(" - ".join(num2word[n] for n in data[:20]))
```
Generate training batches for the skip-gram model.
Keywords: skip-gram, CBOW, n-gram
```
import keras.backend as K
from keras.layers import Embedding, Dense, Flatten, Input
from keras.models import Sequential, Model
import keras.backend as K
import tensorflow as tf
# dimensionality of the word vectors
embedding_size = 128
# this is just a linear map, but the input is an integer instead of a one-hot vector, so it amounts to a table lookup
word2vec = Sequential()
word2vec.add(Embedding(vocabulary_size, embedding_size, input_length=1))
word2vec.add(Flatten())
train_input = word2vec.inputs[0]
embeddings = word2vec.layers[0].embeddings
# the corresponding context words
train_labels = Input(shape=(1,), dtype="int32")
# use tensorflow's nce_loss here
nce_W = K.variable(K.random_normal((vocabulary_size, embedding_size),stddev=(embedding_size)**-0.5))
loss = K.mean(tf.nn.nce_loss(
weights=nce_W,
biases=K.zeros((vocabulary_size,)),
labels=train_labels,
inputs=word2vec.output,
num_sampled=64, # Number of negative examples to sample.
num_classes=vocabulary_size))
# use a tensorflow optimizer
optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
# examples used later for validation
valid_examples = np.array([word2num[x] for x in ["five", "many", "american", "time", "see", "war", "history", "he"]])
valid_size = len(valid_examples)
valid_dataset = K.constant(valid_examples[:, None], "int32")
valid_embeddings = word2vec(valid_dataset)
# normalize embeddings for nearest-neighbor search
normalized_embeddings = K.l2_normalize(embeddings, 1)
similarity = K.dot(valid_embeddings, K.transpose(normalized_embeddings))
# Add variable initializer.
init = tf.global_variables_initializer()
def skipgram_batch(data, batch_size, num_skips, skip_window):
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
context_length = skip_window*2+1
X = np.ndarray(shape=batch_size, dtype=np.int32)
Y = np.ndarray(shape=batch_size, dtype=np.int32)
idx = 0
while True:
for i in range(0, batch_size, num_skips):
X[i:i+num_skips] = data[idx+skip_window]
context = data[idx:idx+context_length][np.arange(context_length) != skip_window]
            # subsampling probabilities
#p = np.ones(2*skip_window)/2/skip_window
Y[i:i+num_skips] = np.random.choice(context, size=num_skips, replace=False)
idx = (idx+1)%(len(data)-context_length)
yield X[:, None], Y
# quick test
X,Y = next(skipgram_batch(data, 20, 4, 3))
for x,y in zip(X, Y):
print("{} -> {}".format(num2word[x[0]], num2word[y]) )
import time
t0 = time.time()
batch_gen = skipgram_batch(data, batch_size=128, num_skips=4, skip_window=3)
with tf.Session() as sess:
sess.run(init)
average_loss = 0
for step in range(0,200001):
X,Y = next(batch_gen)
feed_dict = {train_input: X, train_labels: Y[:, None]}
_, loss_val = sess.run([optimizer, loss], feed_dict=feed_dict)
average_loss += loss_val
if step >0 and step %10000 == 0:
print(step, "average loss", average_loss/2000, time.time()-t0)
average_loss = 0
if step % 50000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = num2word[valid_examples[i]]
nearest = (-sim[i, :]).argsort()[1:8 + 1]
print(valid_word, [num2word[x] for x in nearest])
final_embeddings = normalized_embeddings.eval()
def find_sim(v, num=10):
if isinstance(v, str):
v = w2v(v)
return [num2word[x] for x in (final_embeddings @ v).argsort()[-num-1:-1][::-1]]
def w2v(w):
return final_embeddings[word2num.get(w, 0)]
find_sim('dog')
find_sim('car')
find_sim('king')
find_sim(w2v('king')-w2v('men')+w2v("women"))
# reduce dimensionality with t-SNE
from sklearn.manifold import TSNE
samples = 500
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
low_dim_embs = tsne.fit_transform(final_embeddings[:samples])
labels = num2word[:samples]
# plot the result
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(20,20))
plt.scatter(low_dim_embs[:, 0], low_dim_embs[:, 1])
for i, label in enumerate(labels):
x, y = low_dim_embs[i]
plt.annotate(label,
xy=(x, y),
xytext=(5, 2),
textcoords='offset points',
fontsize=14,
ha='right',
va='bottom')
```
<img src="../images/aeropython_logo.png" alt="AeroPython" style="width: 300px;"/>
# Importing NumPy ...
# ... and other libraries
Python is a highly modular language: it is split into __libraries that perform specific tasks__. To use them we must import them. We can import things from the [standard library](https://docs.python.org/3.4/library/), from packages we have downloaded (or that come with [our distribution](http://docs.continuum.io/anaconda/pkg-docs.html)), or from modules that we build ourselves.
## `import ______`
There are several ways to import:
import numpy
Every time we want to access a NumPy function, we will have to write:
numpy.sin(5)
numpy.linspace(0,100,50)
## `import _____ as __`
Since this can get tedious, a __namespace__ alias is normally used; the one recommended in the official documentation, and the one we will use in this course, is:
import numpy as np
Now we can call functions by writing:
np.sin(5)
np.linspace(0,100,50)
## `from _____ import ___, ___, ___`
We could also import specific functions from the package we want to use, for example:
```
from numpy import linspace, sin
```
## `from _____ import *`
If this still seems like too much typing, you can do (__strongly discouraged__):
from numpy import *
The asterisk means _EVERYTHING_. This causes several problems:
* __It will import a large number of functions and classes that you may not need__.
* The names of these functions may clash with names from another module you have already imported, "clobbering" them, so __ambiguities will arise__.
## Example: why not do from numpy import * ?
__The sine function provided by math is not the same as NumPy's__. Both return the sine of a number (obviously the same result for the same number), but one accepts lists and the other does not. After the second import, NumPy's sine function has been replaced by math's, and the very same statement now raises an error. This can drive you a bit crazy if your code is large, or drive someone else crazy if they use your code.
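A minimal, self-contained sketch of the problem (this snippet is not part of the original notebook and uses explicit imports so it runs without crashing):
```
import math
import numpy

print(numpy.sin([1, 2, 3]))    # works: numpy.sin accepts lists and arrays

try:
    math.sin([1, 2, 3])        # math.sin only accepts a single number
except TypeError as err:
    print("math.sin failed:", err)

# After `from numpy import *` followed by `from math import *`, the name `sin`
# points at math.sin, so sin([1, 2, 3]) raises exactly this TypeError.
```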
Enough? Now you know why you will __always__ have to write `np.whatever`.
---
___We have learned:___
* How to import libraries in Python
* Why we should import `numpy` as `np`
---
<br/>
#### <h4 align="right">Follow us on Twitter!
<br/>
###### <a href="https://twitter.com/AeroPython" class="twitter-follow-button" data-show-count="false">Follow @AeroPython</a> <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>
<br/>
###### This notebook was created by: Juan Luis Cano and Álex Sáez
<br/>
##### <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">Curso AeroPython</span> by <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName">Juan Luis Cano Rodriguez and Alejandro Sáez Mollejo</span> is distributed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es">Creative Commons Attribution 4.0 International License</a>.
---
_The following cells contain the Notebook configuration_
_To display and use the Twitter links, the notebook must be run as [trusted](http://ipython.org/ipython-doc/dev/notebook/security.html)_
File > Trusted Notebook
```
# This cell sets the notebook style
from IPython.core.display import HTML
css_file = '../styles/aeropython.css'
HTML(open(css_file, "r").read())
```
# Machine Learning with Support Vector Machines and Parameter Tuning
In this short micro-project, we'll work on classifying flowers from the famous Iris data set into different categories.
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
```
## Data
Fisher's Iris data set is a multivariate data set introduced by Sir Ronald Fisher in 1936 as an example of discriminant analysis.
The iris dataset contains measurements for 150 iris flowers from three different species.
The three classes in the Iris dataset:

* Iris-setosa (n=50)
* Iris-versicolor (n=50)
* Iris-virginica (n=50)

The four features of the Iris dataset:

* sepal length in cm
* sepal width in cm
* petal length in cm
* petal width in cm
The dataset is built into seaborn, so we can use the library to import the data.
```
iris = sns.load_dataset('iris')
```
## Exploratory Analysis
Let's check out the dataset.
```
iris.head()
sns.pairplot(iris,hue='species')
```
A quick look at the pairplot, and we can see that the setosa species seems to be the most separable of the three.
# Model Building
We'll begin by splitting the data into training and test sets.
```
from sklearn.model_selection import train_test_split
X = iris.drop('species',axis=1)
y = iris['species']
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.3)
```
Now it's time to train a Support Vector Machine classifier.
```
from sklearn.svm import SVC
sv = SVC()
sv.fit(X_train,y_train)
```
## Predictions and Evaluations
```
preds = sv.predict(X_test)
from sklearn.metrics import classification_report,confusion_matrix
print(confusion_matrix(y_test,preds))
print(classification_report(y_test,preds))
```
And it seems like our model did pretty well!
We can try and improve the results by tuning the parameters for the classifier.
Scikit's inbuilt 'GridSearch' module lets us do that automatically, to an extent. Let's try and use that.
## Parameter Tuning using GridSearch
```
from sklearn.model_selection import GridSearchCV
#Defining the initial parameter grid to search in
param_grid = {'C': [0.1,1, 10, 100], 'gamma': [1,0.1,0.01,0.001]}
grid = GridSearchCV(SVC(),param_grid,refit=True)
grid.fit(X_train,y_train)
```
### New Predictions and Results
```
grid_predictions = grid.predict(X_test)
print(confusion_matrix(y_test,grid_predictions))
print(classification_report(y_test,grid_predictions))
```
A little better this time, with only one point that we weren't able to classify correctly. This might be a good thing in real-world applications, as we don't want a model that completely overfits the training set.
This concludes our micro-project!
# Introduction to Copulas
## Probability Review
Let's start by reviewing some basic probability concepts.
We'll focus specifically on continuous random variables, which is what the Copulas library is primarily intended to support.
### Probability Density Function
A probability density function $f(x)$ captures the likelihood that a random sample from the distribution is equal to $x$. For example, the probability density function for the standard normal distribution is given by
\begin{equation}
f(x) = \frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\end{equation}
Note that the probability density function does **not** return a probability but rather a "relative likelihood" which can take on values in the interval $[0, \infty)$; however, the integral over the probability density function from $-\infty$ to $\infty$ must be equal to one.
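As a quick numerical sanity check of that normalization (a small self-contained snippet, not part of the original notebook):
```
import numpy as np
from scipy import stats
from scipy.integrate import quad

# integrate the standard normal pdf over the whole real line; the result is ~1.0
area, _ = quad(stats.norm.pdf, -np.inf, np.inf)
print(area)
```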
### Cumulative Distribution Function
In many cases, the probability density function can be hard to work with directly. Instead, we will use the cumulative distribution function $F(x)$ which is defined as the integral of the probability density function
\begin{equation}
F(x) = \int_{-\infty}^{x} f(t)\,dt
\end{equation}
The figure below shows the probability density function $f(x)$ and the cumulative distribution function $F(x)$ for a standard normal distribution with mean $0$ and variance $1$.
```
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from matplotlib import pyplot as plt
from scipy import stats
def plot_cdf_pdf():
# Generate 10000 evenly distributed values from -1 to 1
x = np.linspace(-4.0, 4.0, 10000)
# Compute their Probability Densities and Cumulative Distributions
pdf = stats.norm.pdf(x)
cdf = stats.norm.cdf(x)
figure = plt.figure(figsize=(16, 4))
figure.add_subplot(1, 2, 1)
plt.plot(x, pdf)
plt.title("Probability Density Function")
plt.xlabel("x")
plt.ylabel("f(x)")
figure.add_subplot(1, 2, 2)
plt.plot(x, cdf)
plt.title("Cumulative Density Function")
plt.xlabel("x")
plt.ylabel("F(x)")
plot_cdf_pdf()
```
### Probability Integral Transform
The probability integral transform is a key component in our toolkit for working with probability distributions. Suppose we have a random variable $X$ that comes from a distribution with cumulative density function $F(X)$. Then, we can define a random variable $Y$ as
\begin{equation}
Y = F(X)
\end{equation}
and prove that $Y$ follows a uniform distribution over the interval $[0.0, 1.0]$.
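A one-line sketch of why this holds (assuming $F$ is continuous and strictly increasing): for any $y \in [0, 1]$,

\begin{equation}
P(Y \le y) = P(F(X) \le y) = P(X \le F^{-1}(y)) = F(F^{-1}(y)) = y,
\end{equation}

which is exactly the CDF of the uniform distribution on $[0, 1]$.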
The figure below shows an example of this. We sample some data from a normal distribution and plot it on the left. Then, we use the CDF of the normal distribution to transform the data, plot it on the right, and observe that it resembles an uniform distribution.
```
from scipy import stats
from matplotlib import pyplot as plt
X = stats.norm.rvs(size=10000)
X_pit = stats.norm.cdf(X)
fig = plt.figure(figsize=(16, 4))
fig.add_subplot(1, 2, 1)
plt.hist(X, density=True, bins=10)
plt.title("Samples")
plt.xlabel("x")
fig.add_subplot(1, 2, 2)
plt.hist(X_pit, density=True, bins=10)
plt.title("Transformed Samples")
plt.xlabel("x")
plt.show()
```
## Copulas
The key intuition underlying copula functions is the idea that marginal distributions can be modeled independently from the joint distribution. For example, consider a dataset with two columns containing age and income. A copula-based modeling approach would:
1. Model age and income independently, transforming them into uniform distributions using the *probability integral transform* explained above.
2. Model the relationship between the transformed variables using the copula function.
In this section, we demonstrate a simplified example of a Gaussian copula.
```
from copulas.datasets import sample_bivariate_age_income
df = sample_bivariate_age_income()
df.head()
from copulas.visualization import scatter_2d
scatter_2d(df)
```
Here's what the age and income variables look like separately.
```
from copulas.visualization import hist_1d, side_by_side
side_by_side(hist_1d, {'Age': df['age'], 'Income': df['income']})
```
To model this using a Gaussian copula, we can simply run the following:
```
from copulas.multivariate import GaussianMultivariate
copula = GaussianMultivariate()
copula.fit(df)
```
The GaussianMultivariate class will automatically transform the columns using the best available distribution; let's take a look at what the transformed age and income variables look like.
```
age_cdf = copula.univariates[0].cdf(df['age'])
inc_cdf = copula.univariates[1].cdf(df['income'])
side_by_side(hist_1d, {'Age': age_cdf, 'Income': inc_cdf})
```
Note that this transformed data looks much more uniform than the original values. Using this transformed data, we can then model the relationship between age and income more easily and generate some synthetic data.
```
synthetic = copula.sample(len(df))
synthetic.head()
from copulas.visualization import compare_2d
compare_2d(df, synthetic)
```
# AutoGluon Tabular Example
>__NOTE:__ Make sure to use the Python 3 (Data Science) Jupyter Kernel.
## Prerequisites
### Installing the Image Build CLI
```
%%capture
import sys
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
!{sys.executable} -m pip install -U pip sagemaker-studio-image-build
```
### Configuring the AutoGluon Training/Testing Script
```
%%writefile train.py
import os
import json
import boto3
import json
import warnings
import numpy as np
import pandas as pd
from autogluon.tabular import TabularDataset, TabularPredictor
warnings.filterwarnings("ignore", category=DeprecationWarning)
prefix = "/opt/ml"
input_path = os.path.join(prefix, "input/data")
output_path = os.path.join(prefix, "output")
model_path = os.path.join(prefix, "model")
param_path = os.path.join(prefix, 'input/config/hyperparameters.json')
def train(params):
label = params["label"]
channel_name = "training"
training_path = os.path.join(input_path, channel_name)
training_dataset = TabularDataset(os.path.join(training_path, "training.csv"))
predictor = TabularPredictor(label=label, path=model_path).fit(training_dataset)
with open(os.path.join(model_path, "Fit_Summary.txt"), "w") as f:
print(predictor.fit_summary(), file=f)
return predictor
def test(params, predictor):
label = params["label"]
channel_name = "testing"
testing_path = os.path.join(input_path, channel_name)
testing_dataset = TabularDataset(os.path.join(testing_path, "testing.csv"))
ground_truth = testing_dataset[label]
testing_data = testing_dataset.drop(columns=label)
predictions = predictor.predict(testing_data)
with open(os.path.join(model_path, "Model_Evaluation.txt"), "w") as f:
print(
json.dumps(
predictor.evaluate_predictions(
y_true=ground_truth,
y_pred=predictions,
auxiliary_metrics=True
),
indent=4
),
file=f
)
leaderboard = predictor.leaderboard(testing_dataset, silent=True)
leaderboard.to_csv(os.path.join(model_path, "Leaderboard.csv"))
if __name__ == "__main__":
print("Loading Parameters\n")
with open(param_path) as f:
params = json.load(f)
print("Training Models\n")
predictor = train(params)
print("Testig Models\n")
test(params, predictor)
print("AutoGluon Job Complete")
```
### Container Image Build Instructions (Dockerfile)
```
%%writefile Dockerfile
ARG REGION
FROM 763104351884.dkr.ecr.${REGION}.amazonaws.com/autogluon-training:0.3.1-cpu-py37-ubuntu18.04
RUN pip install -U pip
RUN pip install bokeh==2.0.1
RUN mkdir -p /opt/program
RUN mkdir -p /opt/ml
COPY train.py /opt/program
WORKDIR /opt/program
ENTRYPOINT ["python", "train.py"]
```
### Container Build Process
```
import boto3
import sagemaker
aws_region = sagemaker.Session().boto_session.region_name
!sm-docker build --build-arg REGION={aws_region} .
```
---
## AutoGluon Experiment
### Download the Abalone Data
```
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
column_names = ["sex", "length", "diameter", "height", "whole_weight", "shucked_weight", "viscera_weight", "shell_weight", "rings"]
abalone_data = pd.read_csv("http://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data", names=column_names)
training_data, testing_data = train_test_split(abalone_data, test_size=0.1)
training_data.to_csv("training.csv")
testing_data.to_csv("testing.csv")
```
### Experiment Parameters
>__NOTE:__ Update the `image_uri` parameter with the _Image URI_ output from the __Container Build Process__.
```
import sagemaker
import datetime
image_uri = "<Enter the Image URI from the sm-docker output>"
role = sagemaker.get_execution_role()
session = sagemaker.session.Session()
bucket = session.default_bucket()
job_version = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S-%f')[:-3]
job_name = f"abalone-autogluon-{job_version}"
```
### Create the AutoGluon Estimator
```
from sagemaker.estimator import Estimator
autogluon = Estimator(
image_uri=image_uri,
role=role,
output_path=f"s3://{bucket}/{job_name}",
base_job_name=job_name,
instance_count=1,
instance_type="ml.m5.xlarge",
hyperparameters={
"label": "rings",
"bucket": bucket,
"training_job": job_name
},
volume_size=20
)
```
### Execute the Experiment
```
autogluon.fit(
inputs={
"training": session.upload_data(
"training.csv",
bucket=bucket,
key_prefix=f"{job_name}/input"
),
"testing": session.upload_data(
"testing.csv",
bucket=bucket,
key_prefix=f"{job_name}/input"
)
}
)
```
### Experiment Results
#### Download Model Artifacts
```
!mkdir extract
sagemaker.s3.S3Downloader.download(autogluon.model_data, "./")
!tar xfz ./model.tar.gz -C extract
```
#### Review Model Leaderboard
```
df = pd.read_csv("./extract/Leaderboard.csv")
df = df.filter(["model","score_test", "score_val"]).sort_values(by="score_val", ascending=False).reset_index().drop(columns="index")
df
```
#### Plot Model Comparison
```
import IPython
IPython.display.HTML(filename="./extract/SummaryOfModels.html")
```
|
github_jupyter
|
%%capture
import sys
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
!{sys.executable} -m pip install -U pip sagemaker-studio-image-build
%%writefile train.py
import os
import json
import boto3
import json
import warnings
import numpy as np
import pandas as pd
from autogluon.tabular import TabularDataset, TabularPredictor
warnings.filterwarnings("ignore", category=DeprecationWarning)
prefix = "/opt/ml"
input_path = os.path.join(prefix, "input/data")
output_path = os.path.join(prefix, "output")
model_path = os.path.join(prefix, "model")
param_path = os.path.join(prefix, 'input/config/hyperparameters.json')
def train(params):
label = params["label"]
channel_name = "training"
training_path = os.path.join(input_path, channel_name)
training_dataset = TabularDataset(os.path.join(training_path, "training.csv"))
predictor = TabularPredictor(label=label, path=model_path).fit(training_dataset)
with open(os.path.join(model_path, "Fit_Summary.txt"), "w") as f:
print(predictor.fit_summary(), file=f)
return predictor
def test(params, predictor):
label = params["label"]
channel_name = "testing"
testing_path = os.path.join(input_path, channel_name)
testing_dataset = TabularDataset(os.path.join(testing_path, "testing.csv"))
ground_truth = testing_dataset[label]
testing_data = testing_dataset.drop(columns=label)
predictions = predictor.predict(testing_data)
with open(os.path.join(model_path, "Model_Evaluation.txt"), "w") as f:
print(
json.dumps(
predictor.evaluate_predictions(
y_true=ground_truth,
y_pred=predictions,
auxiliary_metrics=True
),
indent=4
),
file=f
)
leaderboard = predictor.leaderboard(testing_dataset, silent=True)
leaderboard.to_csv(os.path.join(model_path, "Leaderboard.csv"))
if __name__ == "__main__":
print("Loading Parameters\n")
with open(param_path) as f:
params = json.load(f)
print("Training Models\n")
predictor = train(params)
print("Testig Models\n")
test(params, predictor)
print("AutoGluon Job Complete")
%%writefile Dockerfile
ARG REGION
FROM 763104351884.dkr.ecr.${REGION}.amazonaws.com/autogluon-training:0.3.1-cpu-py37-ubuntu18.04
RUN pip install -U pip
RUN pip install bokeh==2.0.1
RUN mkdir -p /opt/program
RUN mkdir -p /opt/ml
COPY train.py /opt/program
WORKDIR /opt/program
ENTRYPOINT ["python", "train.py"]
import boto3
import sagemaker
aws_region = sagemaker.Session().boto_session.region_name
!sm-docker build --build-arg REGION={aws_region} .
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
column_names = ["sex", "length", "diameter", "height", "whole_weight", "shucked_weight", "viscera_weight", "shell_weight", "rings"]
abalone_data = pd.read_csv("http://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data", names=column_names)
training_data, testing_data = train_test_split(abalone_data, test_size=0.1)
training_data.to_csv("training.csv")
testing_data.to_csv("testing.csv")
import sagemaker
import datetime
image_uri = "<Enter the Image URI from the sm-docker output>"
role = sagemaker.get_execution_role()
session = sagemaker.session.Session()
bucket = session.default_bucket()
job_version = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S-%f')[:-3]
job_name = f"abalone-autogluon-{job_version}"
from sagemaker.estimator import Estimator
autogluon = Estimator(
image_uri=image_uri,
role=role,
output_path=f"s3://{bucket}/{job_name}",
base_job_name=job_name,
instance_count=1,
instance_type="ml.m5.xlarge",
hyperparameters={
"label": "rings",
"bucket": bucket,
"training_job": job_name
},
volume_size=20
)
autogluon.fit(
inputs={
"training": session.upload_data(
"training.csv",
bucket=bucket,
key_prefix=f"{job_name}/input"
),
"testing": session.upload_data(
"testing.csv",
bucket=bucket,
key_prefix=f"{job_name}/input"
)
}
)
!mkdir extract
sagemaker.s3.S3Downloader.download(autogluon.model_data, "./")
!tar xfz ./model.tar.gz -C extract
df = pd.read_csv("./extract/Leaderboard.csv")
df = df.filter(["model","score_test", "score_val"]).sort_values(by="score_val", ascending=False).reset_index().drop(columns="index")
df
import IPython
IPython.display.HTML(filename="./extract/SummaryOfModels.html")
| 0.338952 | 0.723639 |
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# http://rp5.ua/archive.php?wmo_id=34300&lang=ru
# lt_kh - local Kharkov Time, a30 - end
COLUM_NAME = [
"lt_kh","T","Po","P","Pa","U","DD","Ff","ff10","ff3","N","WW","W1","W2","Tn","Tx","Cl","Nh","H","Cm","Ch","VV","Td","RRR","tR","E","Tg","E'","sss","a30"
]
def show_extrems(df, freq):
    dataf = df.groupby(pd.Grouper(key="lt_kh", freq=freq)).mean()
plt.plot(dataf['T'])
plt.show()
plt.plot(dataf['Ff'])
plt.show()
plt.plot(dataf['tR'])
plt.show()
##print(dataf)
dataf_max = np.max(dataf,axis=0)
print("freq", freq, "data frame max ----->\n", dataf_max)
dataf_min = np.min(dataf,axis=0)
print("freq", freq, "data frame min ---->\n", dataf_min)
if __name__ == "__main__":
d = pd.read_csv(
'34300.01.01.2016.01.01.2017.1.0.0.ru.csv.gz',
comment='#',
parse_dates=[0, ],
skiprows=[0, 1, 2, 3 , 4, 5, 6],
delimiter=';',
names = COLUM_NAME,
compression='gzip',
error_bad_lines=False
)
print(d)
print("np.max(d,axis=0)[1]", np.max(d,axis=0)[1])
print("np.min(d,axis=0)[1]", np.min(d,axis=0)[1])
show_extrems(d, '1M')
show_extrems(d, '1W')
show_extrems(d, '1D')
def show_extrems_column(df, freq, column, maximum):
dataframe = df.groupby(pd.Grouper(key="lt_kh", freq=freq)).mean()
if maximum:
data = np.max(dataframe,axis=0)
else:
data = np.min(dataframe,axis=0)
arr = dataframe[column]
d = np.where(arr==data[column])
try:
res1, res2 = int(d[-1]), data[column]
except TypeError:
res1, res2 = d, data[column]
return res1, res2
# 1. Find the windiest month - (month and mean wind speed)
# 2. Find the coldest month - (month and mean temperature)
# 3. Find the coldest day - (day and mean temperature)
# 4. Find the warmest month - (month and mean temperature)
# 5. Find the warmest day - (day and mean temperature)
# 6. Find the rainiest week - (period and total precipitation)
if __name__ == "__main__":
d = pd.read_csv(
'34300.01.01.2016.01.01.2017.1.0.0.ru.csv.gz',
comment='#',
parse_dates=[0, ],
skiprows=[0, 1, 2, 3 , 4, 5, 6],
delimiter=';',
names = COLUM_NAME,
compression='gzip',
error_bad_lines=False
)
print("#1. Найти самый ветреный месяц - (месяц и средняя скорость ветра)", show_extrems_column(d, '1M', 'Ff', maximum = True))
print("#2. Найти самый холодный месяц - (месяц и средняя температура)", show_extrems_column(d, '1M', 'T', maximum = False))
print("#3. Найти самый холодный день - (день и средняя температура)", show_extrems_column(d, '1D', 'T', maximum = False))
print("#4. Найти самый тёплый месяц - (месяц и средняя температура)", show_extrems_column(d, '1M', 'T', maximum = True))
print("#5. Найти самый тёплый день - (день и средняя температура)", show_extrems_column(d, '1D', 'T', maximum = True))
print("#6. Найти самую дождливую неделю - (период и количество осадков)", show_extrems_column(d, '1W', 'tR', maximum = True))
```
## Create a function to update the model from the HTML form output format
Example form output:
`spec1_amt=10.0&spec2_amt=10.0&spec3_amt=10.0&reac1_expr=koff%20%2B%2028*kval&reac2_expr=`
Note: here we develop and test against the model imported as a module.
```
from app.molybdenum import ModelRepresentation
```
Define an example model that will be updated with values from the form. Note that every value in the form should simply overwrite the corresponding value in the model; there is no need to compare anything, just overwrite it as long as it has the proper type.
```
example_mbmodel = {'species': {'spec1': {'name': 'E', 'amt': 5e-21, 'fixed': False},
'spec2': {'name': 'S', 'amt': 1e-20, 'fixed': False},
'spec3': {'name': 'ES', 'amt': 0.0, 'fixed': False},
'spec4': {'name': 'P', 'amt': 0.0, 'fixed': False}},
'reactions': {'reac5': {'name': 'veq',
'reagents': ['E', 'S'],
'products': ['ES'],
'expression': '(kon*E*S-koff*ES)'},
'reac6': {'name': 'vcat',
'reagents': ['ES'],
'products': ['P'],
'expression': 'kcat*ES'}},
'params': {'param1': {'name': 'koff', 'val': 0.2},
'param2': {'name': 'kon', 'val': 10000000.0},
'param3': {'name': 'kcat', 'val': 0.1}},
'sim_params': {}}
mb_model = ModelRepresentation()
mb_model.loadm(example_mbmodel)
```
### current approach
The form is passed in as an ImmutableMultiDict, which I convert to a list that looks like this:
`[('spec1_amt', ['2333']), ('spec1_fixed', ['True']), ('spec2_amt', ['3']), ('spec2_fixed', ['True']), ('reac1_expression', ['ae koff +x'])]`
```
form_list = [('spec1_amt', ['10.0']),
('spec1_fixed', ['True']),
('spec2_amt', ['5']),
('spec3_amt', ['42']),
('reac5_expression', ['']),
('reac6_expression', ['koff + 2*kon'])]
for form_input in form_list:
try:
info, value = form_input
except:
raise ValueError(f'Invalid form input: "{form_input}" should be a two-element tuple')
# info further divided into component id and assigned attribute
try:
comp_id, att = info.split('_')
except:
raise ValueError(f'Invalid component or attribute: "{info}" has zero or more than one "_" signs, exactly one required')
# value should only have one element
if len(value) != 1:
raise ValueError(f'Invalid form input: {value} should only be one element')
value = value[0]
# find component_id in species, reactions or params, find attribute and assign or raise error
print(comp_id, att, value)
# same as in previous approach
```
## Previous approach
This works directly on the form string; it is more complex, less pythonic and more prone to errors.
Example form update
```
form_str = 'spec1_amt=10.0&spec2_amt=5&spec1_fixed=True&spec3_amt=42&reac6_expression=koff%20%2B%2028*kval&reac5_expression='
mb_model.update_from_form(form_str)
mb_model.todict()
```
Start breaking the string into each parameter using the '&' symbol
```
form_str.split('&')
```
For each one, split on the equals sign separating the information from the value.
Previous code:
```python
from urllib.parse import unquote
# Split string into each input using the '&' symbol
for form_input in form_str.split('&'):
try:
info, value = form_input.split('=')
except:
raise ValueError(f'Invalid form input: "{form_input}" has zero or more than one "=" signs, exactly one required')
# info further divided into component id and assigned attribute
try:
comp_id, att = info.split('_')
except:
raise ValueError(f'Invalid component or attribute: "{info}" has zero or more than one "_" signs, exactly one required')
# find component_id in species, reactions or params, find attribute and assign or raise error
if comp_id in self.species.keys():
try:
if att == 'amt':
self.species[comp_id][att] = float(value)
elif att == 'fixed':
self.species[comp_id][att] = bool(value)
else:
raise ValueError(f'Unrecognized attribute {att} in {form_input}')
except:
raise TypeError(f'Attribute {att} in {form_input} does not match with expected type')
elif comp_id in self.reactions.keys():
try:
if att == 'expr':
# expression contains html characters for symbols and spaces, decode with unquote
self.reactions[comp_id][att] = str(unquote(value))
else:
raise ValueError(f'Unrecognized attribute {att} in {form_input}')
except:
raise TypeError(f'Attribute {att} in {form_input} does not match with expected type')
elif comp_id in self.params.keys():
try:
if att == 'name':
self.params[comp_id][att] = str(value)
elif att == 'val':
self.params[comp_id][att] = float(value)
else:
raise ValueError(f'Unrecognized attribute {att} in {form_input}')
except:
raise TypeError(f'Attribute {att} in {form_input} does not match with expected type')
else:
raise ValueError(f'Component {comp_id} not found in species, reactions or params')
```
### 1. Write a Python program to check if the given number is a Disarium Number?
A disarium number is a number in which the sum of its digits, each raised to the power of its respective position, is equal to the number itself (positions are counted from left to right starting from 1). For example, 135 is a disarium number because 1^1 + 3^2 + 5^3 = 135.
```
def disarium(n):
temp = n
sum = 0
while n > 0:
order = len(str(n))
digit = n%10
sum += digit**order
order -= 1
n = n//10
if sum == temp:
print(f'{temp} is a Disarium Number')
else:
print(f'{temp} is not a Disarium Number')
n = int(input('Enter a number: '))
disarium(n)
```
### 2. Write a Python program to print all disarium numbers between 1 to 100?
```
def disarium(n):
sum = 0
while n > 0:
order = len(str(n))
digit = n%10
sum += digit**order
order -= 1
n = n//10
return sum
print("Disarium numbers in range 1 to 100")
disarium_number = []
for i in range(1,101):
sum = disarium(i)
if sum == i:
disarium_number.append(i)
print(disarium_number)
```
### 3. Write a Python program to check if the given number is Happy Number?
A number is called happy if it leads to 1 after a sequence of steps wherein each step number is replaced by the sum of squares of its digit that is if we start with Happy Number and keep replacing it with digits square sum, we reach 1.
Input: n = 19
Output: True
19 is Happy Number,
1^2 + 9^2 = 82
8^2 + 2^2 = 68
6^2 + 8^2 = 100
1^2 + 0^2 + 0^2 = 1
As we reached to 1, 19 is a Happy Number.
```
def Happy_number(n):
sum = 0
while n > 0:
digit = n%10
sum += digit**2
n = n//10
return sum
n = int(input('Please enter a number: '))
result = n
while (result != 1 and result != 4):
result = Happy_number(result)
if result == 1:
print(f"{n} is a Happy Number")
else:
print(f"{n} is not a Happy Number")
```
### 4. Write a Python program to print all happy numbers between 1 and 100?
```
def Happy_number(n):
sum = 0
while n > 0:
digit = n%10
sum += digit**2
n = n//10
return sum
print("Happy Numbers in range 1 to 100")
result=num=i=0
HappyNumber = []
for i in range(1,101):
result = i
while (result != 1 and result != 4):
result = Happy_number(result)
if result == 1:
HappyNumber.append(i)
print(HappyNumber)
```
### 5. Write a Python program to determine whether the given number is a Harshad Number?
A positive integer that is divisible by the sum of its digits is called a Harshad Number. For example, 18 is a Harshad number because 1 + 8 = 9 divides 18.
```
def harshad(n):
sum = 0
while n > 0:
digit = n % 10
sum += digit
n = n//10
return sum
n = int(input('Please enter a number: '))
sum = harshad(n)
if n % sum == 0:
print(f"{n} is a Harshad Number")
else:
print(f"{n} is not a Harshad Number")
```
### 6. Write a Python program to print all pronic numbers between 1 and 100?
A pronic number is a number that is the product of two consecutive integers, that is, a number of the form n(n + 1); for example, 12 = 3 × 4.
```
def pronic(n):
PronicNum = False
for i in range(1, n+1):
if i*(i+1) == n:
PronicNum = True
break
return PronicNum
print('Pronic Numbers in range between 1 and 100: ')
pronic_number = []
for i in range(1,101):
if pronic(i):
pronic_number.append(i)
print(pronic_number)
```
```
%matplotlib notebook
import control as c
import ipywidgets as w
import numpy as np
from IPython.display import display, HTML
import matplotlib.pyplot as plt
import matplotlib.animation as animation
display(HTML('<script> $(document).ready(function() { $("div.input").hide(); }); </script>'))
```
## Control of a first-order system with a discrete PID
In the following example, we explore the effect of discrete-time sampling on a first-order system with a PID controller. The controller is designed and tuned with continuous-time parameters, but it is then converted to discrete time either with a zero-order hold (ZOH) or with the "pole matching" technique. The controller signal is brought back to continuous time using another zero-order hold and is then fed to the system.
<br>
$$G(s)=\frac{1}{\tau s + 1}$$
<br>
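As a quick sanity check, the zero-order-hold discretization of this plant with sampling time $T_s$ (the dts slider below) has the closed form
$$G(z)=\frac{1-a}{z-a},\qquad a=e^{-T_s/\tau}$$
which you can compare against the discretized transfer function printed by the next cell.
<br>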
<img src="Images/discrete.png" width="40%" />
<br>
<b>Choose a time constant for the system and a sampling time for the discretization!</b>
```
# Figure definition
fig1, ((f1_ax1), (f1_ax2)) = plt.subplots(2, 1)
fig1.set_size_inches((9.8, 5))
fig1.set_tight_layout(True)
f1_line1, = f1_ax1.plot([], [])
f1_line2, = f1_ax2.plot([], [])
f1_ax1.grid(which='both', axis='both', color='lightgray')
f1_ax2.grid(which='both', axis='both', color='lightgray')
f1_ax1.autoscale(enable=True, axis='both', tight=True)
f1_ax2.autoscale(enable=True, axis='both', tight=True)
f1_ax1.set_title('Diagramma del modulo', fontsize=11)
f1_ax1.set_xscale('log')
f1_ax1.set_xlabel(r'$f\/$[Hz]', labelpad=0, fontsize=10)
f1_ax1.set_ylabel(r'$A\/$[dB]', labelpad=0, fontsize=10)
f1_ax1.tick_params(axis='both', which='both', pad=0, labelsize=8)
f1_ax2.set_title('Diagramma della fase', fontsize=11)
f1_ax2.set_xscale('log')
f1_ax2.set_xlabel(r'$f\/$[Hz]', labelpad=0, fontsize=10)
f1_ax2.set_ylabel(r'$\phi\/$[°]', labelpad=0, fontsize=10)
f1_ax2.tick_params(axis='both', which='both', pad=0, labelsize=8)
# System model
def system_model(T1, dts):
W_syscont = c.tf([1], [T1, 1])
W_sys = c.sample_system(W_syscont, dts, method='zoh')
# Zero Order Hold conversion to include continuous system in discrete model
print('Funzione di trasferimento del sistema:')
print(W_syscont)
print('\nFunzione di trasferimento discretizzata:')
print(W_sys)
# System analysis
poles_cont = c.pole(W_syscont) # Poles
poles = c.pole(W_sys)
print('\nPoli del sistema:')
print(poles_cont)
print('\nPoli del sistema discretizzato:')
print(poles)
global f1_line1, f1_line2
f1_ax1.lines.remove(f1_line1)
f1_ax2.lines.remove(f1_line2)
mag, phase, omega = c.bode_plot(W_sys, Plot=False) # Bode-plot
f1_line1, = f1_ax1.plot(omega/2/np.pi, 20*np.log10(mag), lw=1, color='blue')
f1_line2, = f1_ax2.plot(omega/2/np.pi, phase*180/np.pi, lw=1, color='blue')
f1_ax1.relim()
f1_ax2.relim()
f1_ax1.autoscale_view()
f1_ax2.autoscale_view()
# GUI widgets
T1_slider = w.FloatLogSlider(value=0.1, base=10, min=-4, max=1, description='T1 [s] :', continuous_update=False,
layout=w.Layout(width='75%'))
dts_slider = w.FloatLogSlider(value=0.1, base=10, min=-4, max=0, description='dts [s] :', continuous_update=False,
layout=w.Layout(width='75%'))
input_data = w.interactive_output(system_model, {'T1':T1_slider, 'dts':dts_slider})
display(w.HBox([T1_slider, dts_slider]), input_data)
```
After observing the characteristics of the system, <b>choose a controller type!</b>
```
#Controller type select
typeSelect = w.ToggleButtons(
options=[('P (matched)', 0), ('PI (ZOH)', 1), ('PD (matched)', 2), ('PD Reale (ZOH)', 3), ('PID Reale (ZOH)', 4)],
description='Controller: ', style={'description_width':'15%'})
display(typeSelect)
```
<b>Adjust the controller values so that the rise/settling time, the overshoot, or the remaining error is minimized!</b><br>
It is not possible to achieve the best result for every metric with a single configuration. Create several solutions, one for each controller type!
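In time-constant (parallel) form, the controller assembled in the next cell corresponds to
$$C(s)=K_p\left(1+\frac{1}{T_i s}+\frac{T_d s}{1+\frac{T_d}{F_d}s}\right)$$
with the integral and filtered-derivative terms included only for the controller types that use them; the result is then discretized with the method shown next to each type (matched or ZOH).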
```
# PID control
# Figure definition
fig2, ((f2_ax1, f2_ax2, f2_ax3), (f2_ax4, f2_ax5, f2_ax6)) = plt.subplots(2, 3)
fig2.set_size_inches((9.8, 5))
fig2.set_tight_layout(True)
f2_line1, = f2_ax1.plot([], [])
f2_line2, = f2_ax2.plot([], [])
f2_line3, = f2_ax3.plot([], [])
f2_line4, = f2_ax4.plot([], [])
f2_line5, = f2_ax5.plot([], [])
f2_line6, = f2_ax6.plot([], [])
f2_ax1.grid(which='both', axis='both', color='lightgray')
f2_ax2.grid(which='both', axis='both', color='lightgray')
f2_ax3.grid(which='both', axis='both', color='lightgray')
f2_ax4.grid(which='both', axis='both', color='lightgray')
f2_ax5.grid(which='both', axis='both', color='lightgray')
f2_ax6.grid(which='both', axis='both', color='lightgray')
f2_ax1.autoscale(enable=True, axis='both', tight=True)
f2_ax2.autoscale(enable=True, axis='both', tight=True)
f2_ax3.autoscale(enable=True, axis='both', tight=True)
f2_ax4.autoscale(enable=True, axis='both', tight=True)
f2_ax5.autoscale(enable=True, axis='both', tight=True)
f2_ax6.autoscale(enable=True, axis='both', tight=True)
f2_ax1.set_title('Risposta al gradino in anello chiuso', fontsize=9)
f2_ax1.set_xlabel(r'$t\/$[s]', labelpad=0, fontsize=8)
f2_ax1.set_ylabel(r'$x\/$[m]', labelpad=0, fontsize=8)
f2_ax1.tick_params(axis='both', which='both', pad=0, labelsize=6)
f2_ax2.set_title('Diagramma di Nyquist', fontsize=9)
f2_ax2.set_xlabel(r'Re', labelpad=0, fontsize=8)
f2_ax2.set_ylabel(r'Im', labelpad=0, fontsize=8)
f2_ax2.tick_params(axis='both', which='both', pad=0, labelsize=6)
f2_ax3.set_title('Diagramma del modulo', fontsize=9)
f2_ax3.set_xscale('log')
f2_ax3.set_xlabel(r'$f\/$[Hz]', labelpad=0, fontsize=8)
f2_ax3.set_ylabel(r'$A\/$[dB]', labelpad=0, fontsize=8)
f2_ax3.tick_params(axis='both', which='both', pad=0, labelsize=6)
f2_ax4.set_title('Risposta impulsiva in anello chiuso', fontsize=9)
f2_ax4.set_xlabel(r'$t\/$[s]', labelpad=0, fontsize=8)
f2_ax4.set_ylabel(r'$x\/$[m]', labelpad=0, fontsize=8)
f2_ax4.tick_params(axis='both', which='both', pad=0, labelsize=6)
f2_ax5.set_title('Risposta al gradino in anello aperto', fontsize=9)
f2_ax5.set_xlabel(r'$t\/$[s]', labelpad=0, fontsize=8)
f2_ax5.set_ylabel(r'$x\/$[m]', labelpad=0, fontsize=8)
f2_ax5.tick_params(axis='both', which='both', pad=0, labelsize=6)
f2_ax6.set_title('Diagramma della fase', fontsize=9)
f2_ax6.set_xscale('log')
f2_ax6.set_xlabel(r'$f\/$[Hz]', labelpad=0, fontsize=8)
f2_ax6.set_ylabel(r'$\phi\/$[°]', labelpad=0, fontsize=8)
f2_ax6.tick_params(axis='both', which='both', pad=0, labelsize=6)
def pid_control(Kp, Ti, Td, Fd, type_select, T1, dts):
W_syscont = c.tf([1], [T1, 1])
W_sys = c.sample_system(W_syscont, dts, method='zoh')
# Zero Order Hold conversion to include continuous system in discrete model
if type_select in (1, 4):
Ti0 = 1
else:
Ti0 = 0
if type_select in (2, 3, 4):
Td0 = 1
else :
Td0 = 0
if type_select in (3, 4):
Fd0 = 1
else:
Fd0 = 0
if type_select in (0, 2):
convmet = "matched"
else:
convmet = "zoh"
# PID Controller
P = Kp # Proportional term
I = Kp / Ti # Integral term
D = Kp * Td # Derivative term
Td_f = Td / Fd # Derivative term filter
W_PID_cont = c.parallel(c.tf([P], [1]),
c.tf([I * Ti0], [1 * Ti0, 1 * (not Ti0)]),
c.tf([D * Td0, 0], [Td_f * Td0 * Fd0, 1])) # PID controller in time constant format
W_PID = c.sample_system(W_PID_cont, dts, method=convmet) # PID discretization
W_open = c.series(W_PID, W_sys) # Open loop
W_closed = c.feedback(W_open, 1, -1) # Closed loop with negative feedback
# Display
global f2_line1, f2_line2, f2_line3, f2_line4, f2_line5, f2_line6
try:
f2_ax1.lines.remove(f2_line1)
f2_ax2.lines.remove(f2_line2)
f2_ax3.lines.remove(f2_line3)
f2_ax4.lines.remove(f2_line4)
f2_ax5.lines.remove(f2_line5)
f2_ax6.lines.remove(f2_line6)
except:
pass
tin = np.arange(0, 10*T1, dts)
if tin.size > 1:
tout, yout = c.step_response(W_closed, tin)
maxint = min(tout.size, yout.size)
f2_line1, = f2_ax1.plot(tout[0:maxint], yout[0:maxint], lw=1, color='blue')
_, _, ob = c.nyquist_plot(W_open, Plot=False) # Small resolution plot to determine bounds
real, imag, freq = c.nyquist_plot(W_open, omega=np.logspace(np.log10(ob[0]), np.log10(ob[-1]), 1000), Plot=False)
f2_line2, = f2_ax2.plot(real, imag, lw=1, color='blue')
mag, phase, omega = c.bode_plot(W_open, Plot=False)
f2_line3, = f2_ax3.plot(omega/2/np.pi, 20*np.log10(mag), lw=1, color='blue')
f2_line6, = f2_ax6.plot(omega/2/np.pi, phase*180/np.pi, lw=1, color='blue')
tout, yout = c.impulse_response(W_closed, tin)
maxint = min(tout.size, yout.size)
f2_line4, = f2_ax4.plot(tout[0:maxint], yout[0:maxint], lw=1, color='blue')
tout, yout = c.step_response(W_open, tin)
maxint = min(tout.size, yout.size)
f2_line5, = f2_ax5.plot(tout[0:maxint], yout[0:maxint], lw=1, color='blue')
f2_ax1.relim()
f2_ax2.relim()
f2_ax3.relim()
f2_ax4.relim()
f2_ax5.relim()
f2_ax6.relim()
f2_ax1.autoscale_view()
f2_ax2.autoscale_view()
f2_ax3.autoscale_view()
f2_ax4.autoscale_view()
f2_ax5.autoscale_view()
f2_ax6.autoscale_view()
# GUI widgets
def draw_controllers(type_select):
global Kp_slider
global Ti_slider
global Td_slider
global Fd_slider
Kp_slider = w.FloatLogSlider(value=0.5, base=10, min=-1, max=4, description='Kp:', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'))
if type_select in (1, 4):
Ti_slider = w.FloatLogSlider(value=0.0035, base=10, min=-4, max=1, description='Ti:', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'))
else:
Ti_slider = w.FloatLogSlider(value=0.0035, base=10, min=-4, max=1, description='Ti:', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=True)
if type_select in (2, 3, 4):
Td_slider = w.FloatLogSlider(value=1, base=10, min=-4, max=1, description='Td:', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'))
else:
Td_slider = w.FloatLogSlider(value=1, base=10, min=-4, max=1, description='Td:', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=True)
if type_select in (3, 4):
Fd_slider = w.FloatLogSlider(value=1, base=10, min=0, max=3, description='Fd:', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'))
else:
Fd_slider = w.FloatLogSlider(value=1, base=10, min=0, max=3, description='Fd:', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=True)
input_data = w.interactive_output(pid_control, {'Kp': Kp_slider, 'Ti': Ti_slider, 'Td': Td_slider,
'Fd': Fd_slider, 'type_select':typeSelect, 'T1':T1_slider, 'dts':dts_slider})
display(w.HBox([Kp_slider, Ti_slider, Td_slider, Fd_slider]), input_data)
w.interactive_output(draw_controllers, {'type_select':typeSelect})
```
You can test the reference-tracking capability of the controlled system using the simulation.<br>
<b>Modify the controller so that it can follow a sine wave acceptably!</b>
```
# Simulation data
anim_fig = plt.figure()
anim_fig.set_size_inches((9.8, 4))
anim_fig.set_tight_layout(True)
anim_ax1 = anim_fig.add_subplot(111)
frame_count=1000
scope_rounds=4
l1 = anim_ax1.plot([], [], lw=1, color='blue')
l2 = anim_ax1.plot([], [], lw=2, color='red')
line1 = l1[0]
line2 = l2[0]
anim_ax1.legend(l1+l2, ['Riferimento', 'Uscita'], loc=1)
anim_ax1.set_title('Simulazione', fontsize=12)
anim_ax1.set_xlabel(r'$t\/$[s]', labelpad=0, fontsize=10)
anim_ax1.set_ylabel(r'$y\/$[/]', labelpad=0, fontsize=10)
anim_ax1.tick_params(axis='both', which='both', pad=0, labelsize=8)
anim_ax1.grid(which='both', axis='both', color='lightgray')
T_plot = []
X_plot = []
R_plot = []
#Simulation function
def simulation(Kp, Ti, Td, Fd, type_select, T1, dts, T, X, Xf, Xa):
W_syscont = c.tf([1], [T1, 1])
W_sys = c.sample_system(W_syscont, dts, method='zoh')
# Zero Order Hold conversion to include continuous system in discrete model
if type_select in (1, 4):
Ti0 = 1
else:
Ti0 = 0
if type_select in (2, 3, 4):
Td0 = 1
else :
Td0 = 0
if type_select in (3, 4):
Fd0 = 1
else:
Fd0 = 0
if type_select in (0, 2):
convmet = "matched"
else:
convmet = "zoh"
# Controller
P = Kp # Proportional term
I = Kp / Ti # Integral term
D = Kp * Td # Derivative term
Td_f = Td / Fd # Derivative term filter (kept consistent with pid_control above)
W_PID_cont = c.parallel(c.tf([P], [1]),
c.tf([I * Ti0], [1 * Ti0, 1 * (not Ti0)]),
c.tf([D * Td0, 0], [Td_f * Td0 * Fd0, 1])) # PID controller in time constant format
# Model
W_PID = c.sample_system(W_PID_cont, dts, method=convmet) # PID discretization
W_open = c.series(W_PID, W_sys) # Open loop
W_closed = c.feedback(W_open, 1, -1) # Closed loop with negative feedback
# Reference and disturbance signals
T_sim = np.arange(0, T, dts, dtype=np.float64)
if X == 0: # Sine wave reference
X_sim = np.sin(2 * np.pi * Xf * T_sim) * Xa
elif X == 1: # Square wave reference
X_sim = np.sign(np.sin(2 * np.pi * Xf * T_sim)) * Xa
# System response
Tx, youtx, xoutx = c.forced_response(W_closed, T_sim, X_sim)
# Display
XR_max = max(np.amax(np.absolute(np.concatenate((X_sim, youtx)))), Xa)
if not np.isnan(XR_max):
anim_ax1.set_ylim((-1.2 * XR_max, 1.2 * XR_max))
global T_plot, X_plot, R_plot
T_plot = np.linspace(0, T, frame_count*(scope_rounds+1), dtype=np.float32)
X_plot = np.interp(T_plot, T_sim, X_sim)
R_plot = np.interp(T_plot, T_sim, youtx)
def anim_init():
line1.set_data([], [])
line2.set_data([], [])
anim_ax1.set_xlim((0, T_plot[frame_count-1]))
return (line1, line2, anim_ax1,)
def animate(i):
line1.set_data(T_plot[scope_rounds*i:scope_rounds*i+frame_count-1], X_plot[scope_rounds*i:scope_rounds*i+frame_count-1])
line2.set_data(T_plot[scope_rounds*i:scope_rounds*i+frame_count-1], R_plot[scope_rounds*i:scope_rounds*i+frame_count-1])
anim_ax1.set_xlim((T_plot[i*scope_rounds], T_plot[i*scope_rounds+frame_count-1]))
return (line1, line2, anim_ax1,)
anim = animation.FuncAnimation(anim_fig, animate, init_func=anim_init,
frames=frame_count, interval=10, blit=True,
repeat=True)
# Controllers
T_slider = w.FloatLogSlider(value=10, base=10, min=-0.7, max=1, step=0.01,
description='Durata [s]:', continuous_update=False,
orientation='vertical', layout=w.Layout(width='auto', height='auto', flex='1 1 auto'))
X_type = w.Dropdown(options=[('Sinusoide', 0), ('Onda quadra', 1)], value=1,
description='Riferimento: ', continuous_update=False, layout=w.Layout(width='auto', flex='3 3 auto'))
Xf_slider = w.FloatLogSlider(value=0.5, base=10, min=-2, max=2, step=0.01,
description='Frequenza [Hz]:', continuous_update=False,
orientation='vertical', layout=w.Layout(width='auto', height='auto', flex='1 1 auto'))
Xa_slider = w.FloatLogSlider(value=1, base=10, min=-2, max=2, step=0.01,
description='Ampiezza [/]:', continuous_update=False,
orientation='vertical', layout=w.Layout(width='auto', height='auto', flex='1 1 auto'))
input_data = w.interactive_output(simulation, {'Kp': Kp_slider, 'Ti': Ti_slider, 'Td': Td_slider,'Fd': Fd_slider,
'type_select': typeSelect, 'T1': T1_slider, 'dts':dts_slider,
'T': T_slider,
'X': X_type, 'Xf': Xf_slider, 'Xa': Xa_slider})
display(w.HBox([w.HBox([T_slider], layout=w.Layout(width='25%')),
w.Box([], layout=w.Layout(width='5%')),
w.VBox([X_type, w.HBox([Xf_slider, Xa_slider])], layout=w.Layout(width='30%')),
w.Box([], layout=w.Layout(width='5%'))],
layout=w.Layout(width='100%', justify_content='center')), input_data)
```
The duration parameter controls the simulated time span and does not affect the running time of the animation.
<img align="left" src="https://ithaka-labs.s3.amazonaws.com/static-files/images/tdm/tdmdocs/CC_BY.png"><br />
Created by [Nathan Kelber](http://nkelber.com) and Ted Lawless for [JSTOR Labs](https://labs.jstor.org/) under [Creative Commons CC BY License](https://creativecommons.org/licenses/by/4.0/)<br />
For questions/comments/improvements, email nathan.kelber@ithaka.org.<br />
___
# Creating a Stopwords List
**Description:**
This [notebook](https://docs.constellate.org/key-terms/#jupyter-notebook) explains what a stopwords list is and how to create one. The following processes are described:
* Loading the NLTK stopwords list
* Modifying the stopwords list in Python
* Saving a stopwords list to a .csv file
* Loading a stopwords list from a .csv file
**Use Case:** For Learners (Detailed explanation, not ideal for researchers)
[Take me to **Research Version** of this notebook ->](./creating-stopwords-list-for-research.ipynb)
**Difficulty:** Intermediate
**Completion time:** 20 minutes
**Knowledge Required:**
* Python Basics Series ([Start Python Basics I](./python-basics-1.ipynb))
**Knowledge Recommended:** None
**Data Format:** CSV files
**Libraries Used:**
* **[nltk](https://docs.constellate.org/key-terms/#nltk)** to create an initial stopwords list
* **csv** to read and write the stopwords to a file
**Research Pipeline:** None
___
## The Purpose of a Stopwords List
Many text analytics techniques are based on counting the occurrence of words in a given text or set of texts (called a corpus). The most frequent words can reveal general textual patterns, but the most frequent words for any given text in English tend to look very similar to this:
|Word|Frequency|
|---|---|
|the| 1,160,276|
|of|906,898|
|and|682,419|
|in|461,328|
|to|418,017|
|a|334,082|
|is|214,663|
|that|204,277|
|by|181,605|
|as|177,774|
There are many [function words](https://docs.constellate.org/key-terms/#function-words), words like "the", "in", and "of" that are grammatically important but do not carry as much semantic meaning in comparison to [content words](https://docs.constellate.org/key-terms/#content-words), such as nouns and verbs.
For this reason, many analysts remove common [function words](https://docs.constellate.org/key-terms/#function-words) using a [stopwords](https://docs.constellate.org/key-terms/#stop-words) list. There are many sources for stopwords lists. (We'll use the Natural Language Toolkit stopwords list in this lesson.) **There is no official, standardized stopwords list for text analysis.**
An effective stopwords list depends on:
* the texts being analyzed
* the purpose of the analysis
Even if we remove all common function words, there are often formulaic repetitions in texts that may be counter-productive for the research goal. **The researcher is responsible for making educated decisions about whether or not to include any particular stopword, given the research context.**
Here are a few examples where additional stopwords may be necessary:
* A corpus of law books is likely to have formulaic, archaic repetition, such as, "hereunto this law is enacted..."
* A corpus of dramatic plays is likely to have speech markers for each line, leading to an over-representation of character names (Hamlet, Gertrude, Laertes, etc.)
* A corpus of emails is likely to have header language (to, from, cc, bcc), technical language (attached, copied, thread, chain), and salutations (best, dear, cheers, etc.)
Because every research project may require unique stopwords, it is important for researchers to learn to create and modify stopwords lists.
## Examining the NLTK Stopwords List
The Natural Language Toolkit Stopwords list is well-known and a natural starting point for creating your own list. Let's take a look at what it contains before learning to make our own modifications.
We will store our stopwords in a Python list variable called `stop_words`.
```
# Creating a stop_words list from the NLTK. We could also use the set of stopwords from Spacy or Gensim.
from nltk.corpus import stopwords # Import stopwords from nltk.corpus
stop_words = stopwords.words('english') # Create a list `stop_words` that contains the English stop words list
```
If you're curious what is in our stopwords list, we can use the `print()` or `list()` functions to find out.
```
list(stop_words) # Show each string in our stopwords list
```
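Since `stop_words` is an ordinary Python list, it can be modified in place before it is ever saved. Below is a minimal sketch; the added and removed words are arbitrary examples and should be chosen to fit your own corpus and research question:
```
# Add domain-specific stopwords and remove ones we want to keep
stop_words.extend(['abstract', 'copyright'])  # extra words to ignore (arbitrary examples)
for keeper in ['no', 'not']:                  # negations we may want to keep (arbitrary examples)
    if keeper in stop_words:
        stop_words.remove(keeper)
len(stop_words)
```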
## Storing Stopwords in a CSV File
Storing the stopwords list in a variable like `stop_words` is useful for analysis, but we will likely want to keep the list even after the session is over for future changes and analyses. We can store our stop words list in a CSV file. A CSV, or "Comma-Separated Values" file, is a plain-text file with commas separating each entry. The file can be opened and modified with a text editor or spreadsheet software such as Excel or Google Sheets.
Here's what our NLTK stopwords list will look like as a CSV file opened in a plain text editor.

Let's create an example CSV using the `csv` module.
```
# Create a CSV file to store a set of stopwords
import csv # Import the csv module to work with csv files
with open('data/stop_words.csv', 'w', newline='') as f:
writer = csv.writer(f)
writer.writerow(stop_words)
```
We have created a new file called data/stop_words.csv that you can open and modify using a basic text editor. Go ahead and make a change to your data/stop_words.csv (either adding or subtracting words) using a text editor. Remember, there are no spaces between words in the CSV file. If you want to edit the CSV right inside Jupyter Lab, right-click on the file and select "Open With > Editor."

Now go ahead and add in a new word. Remember a few things:
* Each word is separated from the next word by a comma.
* There are no spaces between the words.
* You must save changes to the file if you're using a text editor, Excel, or the Jupyter Lab editor.
* You can reopen the file to make sure your changes were saved.
Now let's read our CSV file back and overwrite our original `stop_words` list variable.
## Reading in a Stopwords CSV
```
# Open the CSV file and list the contents
with open('data/stop_words.csv', 'r') as f:
stop_words = f.read().strip().split(",")
stop_words[-10:]
```
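To confirm the reloaded list behaves as expected, the sketch below filters the tokens of an arbitrary sample sentence:
```
# Filter the tokens of a sample sentence with our stopwords list
tokens = "the quick brown fox jumps over the lazy dog".split()
[token for token in tokens if token not in stop_words]
```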
Refining a stopwords list for your analysis can take time. It depends on:
* What you are hoping to discover (for example, are function words important?)
* The material you are analyzing (for example, journal articles may repeat words like "abstract")
If your results are not satisfactory, you can always come back and adjust the stopwords. You may need to run your analysis many times to refine a good stopword list.
# 10 Minutes to cuDF and CuPy
This notebook provides introductory examples of how you can use cuDF and CuPy together to take advantage of CuPy array functionality (such as advanced linear algebra operations).
```
import time
from numba import cuda
import cupy as cp
import cudf
```
### Converting a cuDF DataFrame to a CuPy Array
If we want to convert a cuDF DataFrame to a CuPy ndarray, there are multiple ways to do it:
1. The best way is to use the [dlpack](https://github.com/dmlc/dlpack) interface.
2. We can also convert via the [CUDA array interface](https://numba.pydata.org/numba-doc/dev/cuda/cuda_array_interface.html) by using cuDF's `as_gpu_matrix` and CuPy's `asarray` functionality. Because CuPy arrays have a single dtype, each column in our DataFrame must have the same dtype, regardless of which method we use.
```
nelem = 10000
df = cudf.DataFrame({'a':range(nelem),
'b':range(500, nelem + 500),
'c':range(1000, nelem + 1000)}
)
%time arr_cupy = cp.fromDlpack(df.to_dlpack())
arr_cupy
cp.asarray(df.as_gpu_matrix())
```
### Converting a cuDF Series to a CuPy Array
There are multiple ways to convert a cuDF Series to a CuPy array:
1. **Easiest & Preferred**: You can convert a cuDF Series to a CuPy array by passing the Series to `cupy.asarray`, as a cuDF Series exposes [`__cuda_array_interface__`](https://docs-cupy.chainer.org/en/stable/reference/interoperability.html)
2. By passing the underlying Numba DeviceNDArray to `cupy.asarray`.
3. We can also leverage the dlpack interface via `to_dlpack()`.
```
col = 'a'
%time cola_cupy = cp.asarray(df[col])
%time cola_cupy = cp.asarray(df[col].data)
%time cola_cupy = cp.fromDlpack(df[col].to_dlpack())
type(cola_cupy)
```
From here, we can proceed with normal CuPy workflows, such as reshaping the array, getting the diagonal, or calculating the norm.
```
reshaped_arr = cola_cupy.reshape(50, 200)
reshaped_arr
reshaped_arr.diagonal()
cp.linalg.norm(reshaped_arr)
```
### Converting a CuPy Array to a cuDF DataFrame
We can also convert a CuPy ndarray to a cuDF DataFrame. As above, we can use either the dlpack interface or the CUDA array interface with cuDF's `from_gpu_matrix`. Either way, we'll need to make sure that our CuPy array is Fortran contiguous in memory (if it's not already). We can either transpose the array or simply coerce it to be Fortran contiguous beforehand.
We can check whether our array is Fortran contiguous by using `cupy.isfortran` or looking at the [flags](https://docs-cupy.chainer.org/en/stable/reference/generated/cupy.ndarray.html#cupy.ndarray.flags) of the array.
```
cp.isfortran(reshaped_arr)
```
In this case, we'll need to convert it before going to a cuDF DataFrame. In the next two cells, we create the DataFrame by leveraging dlpack and the CUDA array interface, respectively.
```
reshaped_arr = cp.asfortranarray(reshaped_arr)
reshaped_df = cudf.from_dlpack(reshaped_arr.toDlpack())
reshaped_df.head()
reshaped_df = cudf.DataFrame.from_gpu_matrix(reshaped_arr)
reshaped_df.head()
```
### Converting a CuPy Array to a cuDF Series
To convert an array to a Series, we can directly pass the array to the constructor. We just need to make sure that the array is stored in contiguous memory. If it's not, we need to create a contiguous array with `ascontiguousarray`. We could also use `asfortranarray`, but it won't matter in the case of this one-dimensional array.
```
diag_data = cp.ascontiguousarray(reshaped_arr.diagonal())
cudf.Series(diag_data).head()
```
### Interweaving CuDF and CuPy for Smooth PyData Workflows
RAPIDS libraries and the entire GPU PyData ecosystem are developing quickly, but sometimes one library may not have the functionality you need. One example of this might be taking the row-wise sum (or mean) of a DataFrame, which is straightforward in Pandas. cuDF's support for row-wise operations isn't mature, so you'd need to either transpose the DataFrame or write a UDF and explicitly calculate the sum across each row. Transposing could lead to hundreds of thousands of columns (which cuDF wouldn't perform well with) depending on your data's shape, and writing a UDF can be time intensive.
By leveraging the interoperability of the GPU PyData ecosystem, this operation becomes very easy. Let's take the row-wise sum of our previously reshaped cuDF DataFrame.
```
reshaped_df.head()
```
We can just transform it into a CuPy array via dlpack and use the `axis` argument of `sum`.
```
new_arr = cp.fromDlpack(reshaped_df.to_dlpack())
new_arr.sum(axis=1)
```
With just that single line, we're able to seamlessly move between data structures in this ecosystem, giving us enormous flexibility without sacrificing speed.
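If the row-wise result needs to go back into cuDF, for example as a new column, the same conversion pattern works in the other direction. A minimal sketch (the column name `row_sum` is arbitrary):
```
row_sums = new_arr.sum(axis=1)  # one value per row of reshaped_df
reshaped_df['row_sum'] = cudf.Series(cp.ascontiguousarray(row_sums))
reshaped_df.head()
```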
### Converting a cuDF DataFrame to a CuPy Sparse Matrix
We can also convert a DataFrame or Series to a CuPy sparse matrix. We might want to do this if downstream processes expect CuPy sparse matrices as an input.
The sparse matrix data structure is defined by three dense arrays, which we could create manually from an existing cuDF DataFrame or Series. Luckily, we don't need to do that. We can simply leverage dlpack again. We'll define a small helper function for cleanliness.
```
def cudf_to_cupy_sparse_matrix(data, sparseformat='column'):
"""Converts a cuDF object to a CuPy Sparse Column matrix.
"""
if sparseformat not in ('row', 'column',):
raise ValueError("Let's focus on column and row formats for now.")
_sparse_constructor = cp.sparse.csc_matrix
if sparseformat == 'row':
_sparse_constructor = cp.sparse.csr_matrix
return _sparse_constructor(cp.fromDlpack(data.to_dlpack()))
```
We can define a sparsely populated DataFrame to illustrate this conversion to either sparse matrix format.
```
df = cudf.DataFrame()
nelem = 10000
nonzero = 1000
for i in range(20):
arr = cp.random.normal(5, 5, nelem)
arr[cp.random.choice(arr.shape[0], nelem-nonzero, replace=False)] = 0
df['a' + str(i)] = cp.ascontiguousarray(arr)
df.head()
sparse_data = cudf_to_cupy_sparse_matrix(df)
sparse_data
```
From here, we could continue our workflow with a CuPy sparse matrix.
For a full list of the functionality built into these libraries, we encourage you to check out the API docs for [cuDF](https://docs.rapids.ai/api/cudf/nightly/) and [CuPy](https://docs-cupy.chainer.org/en/stable/index.html).
```
%matplotlib inline
```
Compile Tensorflow Models
=========================
This article is an introductory tutorial to deploy tensorflow models with TVM.
To get started, the tensorflow Python module needs to be installed.
Please refer to https://www.tensorflow.org/install
```
# tvm, relay
import tvm
from tvm import te
from tvm import relay
# os and numpy
import numpy as np
import os.path
# Tensorflow imports
import tensorflow as tf
# Ask tensorflow to limit its GPU memory to what's actually needed
# instead of gobbling everything that's available.
# https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth
# This way this tutorial is a little more friendly to sphinx-gallery.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
try:
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
print("tensorflow will use experimental.set_memory_growth(True)")
except RuntimeError as e:
print("experimental.set_memory_growth option is not available: {}".format(e))
try:
tf_compat_v1 = tf.compat.v1
except ImportError:
tf_compat_v1 = tf
# Tensorflow utility functions
import tvm.relay.testing.tf as tf_testing
# Base location for model related files.
repo_base = "https://github.com/dmlc/web-data/raw/main/tensorflow/models/InceptionV1/"
# Test image
img_name = "elephant-299.jpg"
image_url = os.path.join(repo_base, img_name)
```
Tutorials
---------
Please refer to docs/frontend/tensorflow.md for more details on various models
from tensorflow.
```
model_name = "classify_image_graph_def-with_shapes.pb"
model_url = os.path.join(repo_base, model_name)
# Image label map
map_proto = "imagenet_2012_challenge_label_map_proto.pbtxt"
map_proto_url = os.path.join(repo_base, map_proto)
# Human readable text for labels
label_map = "imagenet_synset_to_human_label_map.txt"
label_map_url = os.path.join(repo_base, label_map)
# Target settings
# Use these commented settings to build for cuda.
# target = tvm.target.Target("cuda", host="llvm")
# layout = "NCHW"
# dev = tvm.cuda(0)
target = tvm.target.Target("llvm", host="llvm")
layout = None
dev = tvm.cpu(0)
```
Download required files
-----------------------
Download files listed above.
```
from tvm.contrib.download import download_testdata
img_path = download_testdata(image_url, img_name, module="data")
model_path = download_testdata(model_url, model_name, module=["tf", "InceptionV1"])
map_proto_path = download_testdata(map_proto_url, map_proto, module="data")
label_path = download_testdata(label_map_url, label_map, module="data")
```
Import model
------------
Creates tensorflow graph definition from protobuf file.
```
with tf_compat_v1.gfile.GFile(model_path, "rb") as f:
graph_def = tf_compat_v1.GraphDef()
graph_def.ParseFromString(f.read())
graph = tf.import_graph_def(graph_def, name="")
# Call the utility to import the graph definition into default graph.
graph_def = tf_testing.ProcessGraphDefParam(graph_def)
# Add shapes to the graph.
with tf_compat_v1.Session() as sess:
graph_def = tf_testing.AddShapesToGraphDef(sess, "softmax")
```
Decode image
------------
<div class="alert alert-info"><h4>Note</h4><p>The tensorflow frontend import doesn't support preprocessing ops like JpegDecode.
JpegDecode is bypassed (it simply returns the source node).
Hence we supply the decoded frame to TVM instead.</p></div>
```
from PIL import Image
image = Image.open(img_path).resize((299, 299))
x = np.array(image)
```
Import the graph to Relay
-------------------------
Import tensorflow graph definition to relay frontend.
Results:
mod: relay module for the given tensorflow protobuf.
params: params converted from tensorflow params (tensor protobuf).
```
shape_dict = {"DecodeJpeg/contents": x.shape}
dtype_dict = {"DecodeJpeg/contents": "uint8"}
mod, params = relay.frontend.from_tensorflow(graph_def, layout=layout, shape=shape_dict)
print("Tensorflow protobuf imported to relay frontend.")
```
Relay Build
-----------
Compile the graph to llvm target with given input specification.
Results:
lib: the compiled module (bundling the final graph, the final params and the target library) which can be deployed on the target with the TVM runtime.
```
with tvm.transform.PassContext(opt_level=3):
lib = relay.build(mod, target, params=params)
```
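The compiled module can also be saved to disk and reloaded later, which is handy when compilation and deployment happen on different machines. A minimal sketch (the file name is arbitrary); the reloaded module can be used with the graph executor exactly as in the next section:
```
# Optionally save the compiled module and reload it later
lib.export_library("compiled_inception.so")
loaded_lib = tvm.runtime.load_module("compiled_inception.so")
```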
Execute the portable graph on TVM
---------------------------------
Now we can try deploying the compiled model on target.
```
from tvm.contrib import graph_executor
dtype = "uint8"
m = graph_executor.GraphModule(lib["default"](dev))
# set inputs
m.set_input("DecodeJpeg/contents", tvm.nd.array(x.astype(dtype)))
# execute
m.run()
# get outputs
tvm_output = m.get_output(0, tvm.nd.empty(((1, 1008)), "float32"))
```
Process the output
------------------
Process the model output into human-readable text for InceptionV1.
```
predictions = tvm_output.numpy()
predictions = np.squeeze(predictions)
# Creates node ID --> English string lookup.
node_lookup = tf_testing.NodeLookup(label_lookup_path=map_proto_path, uid_lookup_path=label_path)
# Print top 5 predictions from TVM output.
top_k = predictions.argsort()[-5:][::-1]
for node_id in top_k:
human_string = node_lookup.id_to_string(node_id)
score = predictions[node_id]
print("%s (score = %.5f)" % (human_string, score))
```
Inference on tensorflow
-----------------------
Run the corresponding model on tensorflow
```
def create_graph():
"""Creates a graph from saved GraphDef file and returns a saver."""
# Creates graph from saved graph_def.pb.
with tf_compat_v1.gfile.GFile(model_path, "rb") as f:
graph_def = tf_compat_v1.GraphDef()
graph_def.ParseFromString(f.read())
graph = tf.import_graph_def(graph_def, name="")
# Call the utility to import the graph definition into default graph.
graph_def = tf_testing.ProcessGraphDefParam(graph_def)
def run_inference_on_image(image):
"""Runs inference on an image.
Parameters
----------
image: String
Image file name.
Returns
-------
Nothing
"""
if not tf_compat_v1.gfile.Exists(image):
tf.logging.fatal("File does not exist %s", image)
image_data = tf_compat_v1.gfile.GFile(image, "rb").read()
# Creates graph from saved GraphDef.
create_graph()
with tf_compat_v1.Session() as sess:
softmax_tensor = sess.graph.get_tensor_by_name("softmax:0")
predictions = sess.run(softmax_tensor, {"DecodeJpeg/contents:0": image_data})
predictions = np.squeeze(predictions)
# Creates node ID --> English string lookup.
node_lookup = tf_testing.NodeLookup(
label_lookup_path=map_proto_path, uid_lookup_path=label_path
)
# Print top 5 predictions from tensorflow.
top_k = predictions.argsort()[-5:][::-1]
print("===== TENSORFLOW RESULTS =======")
for node_id in top_k:
human_string = node_lookup.id_to_string(node_id)
score = predictions[node_id]
print("%s (score = %.5f)" % (human_string, score))
run_inference_on_image(img_path)
```
```
# Day_03_02_gan.py
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
import os
mnist = input_data.read_data_sets('mnist', one_hot=True)
ph_x = tf.placeholder(tf.float32, shape=[None, 784])
ph_z = tf.placeholder(tf.float32, shape=[None, 128])
def generator(noises):
with tf.variable_scope('generator'):
hidden = tf.layers.dense(noises, 256, tf.nn.relu)
output = tf.layers.dense(hidden, 784, tf.nn.sigmoid)
return output
# reuse means the previously created variables are shared instead of creating new ones
def discriminator(inputs, reuse=None):
with tf.variable_scope('discriminator') as scope:
if reuse:
scope.reuse_variables()
hidden = tf.layers.dense(inputs, 256, tf.nn.relu)
output = tf.layers.dense(hidden, 1, tf.nn.sigmoid) # single probability that the input is real
return output
def get_noises(batch_size, n_noise):
# function that generates random noise vectors
return np.float32(np.random.normal(size=[batch_size, n_noise]))
K = generator(ph_z)
D = discriminator(ph_x)
G = discriminator(K, reuse=True)
D_loss = tf.reduce_mean(tf.log(D) + tf.log(1-G))
G_loss = tf.reduce_mean(tf.log(G))
D_opt = tf.train.AdamOptimizer(0.0002)
G_opt = tf.train.AdamOptimizer(0.0002)
D_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='discriminator')
G_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='generator')
D_train = D_opt.minimize(-D_loss, var_list=D_vars)
G_train = G_opt.minimize(-G_loss, var_list=G_vars)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
epochs = 30
batch_size = 100 # number of samples processed per training step
n_iters = mnist.train.num_examples // batch_size # 550
samples = []
sample_count = 7
# main training loop over the epochs
for i in range(epochs):
D_cost, G_cost = 0, 0
for j in range(n_iters):
xx, _ = mnist.train.next_batch(batch_size)
noises = get_noises(batch_size, 128)
_, c1 = sess.run([D_train, D_loss], feed_dict={ph_x: xx, ph_z:noises})
_, c2 = sess.run([G_train, G_loss], feed_dict={ph_z: noises})
D_cost += c1
G_cost += c2
print('[{}] {:5.3f} : {:5.3f}'.format(i, D_cost / n_iters, G_cost /n_iters))
noises = get_noises(sample_count, 128)
samples.append(sess.run(K, {ph_z: noises}))
_, ax = plt.subplots(epochs, sample_count, figsize=(sample_count, epochs))
for i in range(epochs):
for j in range(sample_count):
ax[i, j].set_axis_off()
ax[i, j].imshow(np.reshape(samples[i][j], [28, 28]), cmap='gray')
plt.tight_layout()
plt.show()
sess.close()
```
### Random Seeds
The `random` module provides a variety of functions related to (pseudo) random numbers.
The problem when you use random numbers in your code is that it can be difficult to debug, because the random number sequence is not the same from run to run of your program. If your code fails somewhere in the middle of a run, it is difficult to make the problem **repeatable**. Debugging intermittent and non-repeatable failures is one of the worst things to have to do!
Fortunately, when using the `random` module, we can set the `seed` for the underlying random number generator.
Random numbers are not truly random - they are generated in such a way that the numbers *appear* random and evenly distributed, but in fact they are being generated using a specific algorithm.
That algorithm depends on a **seed** value. That seed value will determine the exact sequence of randomly generated numbers (so as you can see, it's not truly random). Setting different seeds will result in different random sequences, but setting the seed to the same value will result in the same sequence being generated.
By default, the seed uses the system time, hence every time you run your program a different seed is set. But we can easily set the seed to something specific - very useful for debugging purposes.
```
import random
for _ in range(10):
print(random.randint(10, 20), random.random())
for _ in range(10):
print(random.randint(10, 20), random.random())
```
As you can see the sequence of numbers is not the same (and even restarting the kernel will result in different numbers).
We can set the **seed** as follows:
```
random.seed(0)
for i in range(10):
print(random.randint(10, 20), random.random())
```
If we now run the loop again without resetting the seed, the generator continues from where it left off, so the sequence will be different:
```
for i in range(10):
print(random.randint(10, 20), random.random())
```
Instead what we have to do is reset the seed (which happens if you set the seed to a specific number at the start of running your program - then every random number generated will be repeatable from run to run).
Here, we just need to reset the seed before running that loop to get the same effect:
```
random.seed(0)
for i in range(20):
print(random.randint(10, 20), random.random())
random.seed(0)
for i in range(20):
print(random.randint(10, 20), random.random())
```
As you can see, the sequence of random numbers generated is now the same every time.
What's interesting is that even functions like `shuffle` will shuffle in the same order!
Let's see this:
```
def generate_random_stuff(seed=None):
random.seed(seed)
results = []
# randint will generate the same sequence (for same seed)
for _ in range(5):
results.append(random.randint(0, 5))
# even shuffling generates in the same way (for same seed)
characters = ['a', 'b', 'c']
random.shuffle(characters)
results.append(characters)
# same with the Gaussian distribution
for _ in range(5):
results.append(random.gauss(0, 1))
return results
print(generate_random_stuff())
print(generate_random_stuff())
```
Now let's use a seed value:
```
print(generate_random_stuff(0))
print(generate_random_stuff(0))
```
As long as we use the same seed value the results are repeatable. But if we set different seed values the sequences will be different (but still be the same for the same seed):
```
print(generate_random_stuff(100))
print(generate_random_stuff(100))
```
Lastly let's see how we would calculate the frequency of randomly generated integers, just to see how even the distribution is.
Basically, given a sequence of random integers, we are going to create a dictionary that contains the integers as keys, and the values will be the frequency of each:
```
def freq_analysis(lst):
return {k: lst.count(k) for k in set(lst)}
lst = [random.randint(0, 10) for _ in range(100)]
print(lst)
random.seed(0)
freq_analysis(lst)
random.seed(0)
freq_analysis([random.randint(0, 10) for _ in range(1_000_000)])
```
Of course, it usually pays to know what's in the standard library :-)
The collections library has a Counter class that can be used to do this precise thing!
```
from collections import Counter
random.seed(0)
Counter([random.randint(0, 10) for _ in range(1_000_000)])
```
# Probability distributions
```
# Import the libraries used in all the simulations
import matplotlib.pyplot as plt
import numpy as np
from itertools import cycle # library for creating cycles
import scipy.stats as st # statistics library
from math import factorial as fac # import the factorial operation
%matplotlib inline
```
## 1. Uniform probability distribution
$X\sim U(a,b)$ Parameters: $a,b \rightarrow$ the interval
$$\textbf{Probability density function}\\f(x)=\begin{cases}\frac{1}{b-a} & a\leq x \leq b\\0& \text{otherwise}\end{cases}$$
$$ \textbf{Cumulative distribution function}\\F(x)=\begin{cases}0& x<a\\\frac{x-a}{b-a} & a\leq x \leq b\\1& x\geq b\end{cases}$$

### Usage in Python
```
a,b=1,2 # Interval
U = np.random.uniform(a,b)
U
```
## 2. Normal distribution
$X\sim N(\mu,\sigma^2)$ Parameters: mean $\mu$ and variance $\sigma^2$
$$ \textbf{Probability density function}\\ f(x)= \frac{1}{\sigma\sqrt{2\pi}}e^{\frac{-(x-\mu)^2}{2\sigma^2}}$$
$$ \textbf{Cumulative distribution function}\\ F(x)= \frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{x}e^{\frac{-(v-\mu)^2}{2\sigma^2}}dv$$

### Properties

### Standardizing normal random variables
Because the normal density is symmetric about $\mu$, every normal random variable can be related to the standard normal distribution.
If $X\sim N(\mu ,\sigma ^{2})$, then
$$Z = \frac{X - \mu}{\sigma}$$
is a standard normal random variable: $Z\sim N(0,1)$.
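As a quick sanity check of this rule, the sketch below (with assumed example values $\mu=10$, $\sigma=2$) standardizes simulated normal samples and verifies that the result has mean close to 0 and standard deviation close to 1.
```
import numpy as np

mu, sigma = 10, 2                      # assumed example parameters
x = np.random.normal(mu, sigma, 100_000)
z = (x - mu) / sigma                   # Z = (X - mu) / sigma
print(z.mean(), z.std())               # expected: approximately 0 and 1
```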
### The Central Limit Theorem
The Central Limit Theorem states that, under certain conditions (for example, independent and identically distributed variables with finite variance), the sum of a large number of random variables is approximately normally distributed. **(Discuss why this is so useful.)**
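A minimal simulation of this statement (the number of summands and the sample size below are arbitrary illustration values): summing independent uniform variables, which are far from normal individually, produces an approximately normal histogram.
```
import numpy as np
import matplotlib.pyplot as plt

n_summands, n_samples = 30, 100_000            # arbitrary illustration values
sums = np.random.uniform(0, 1, (n_samples, n_summands)).sum(axis=1)
plt.hist(sums, bins=60, density=True)
plt.title('Sum of 30 U(0,1) variables')
plt.show()
```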
### When it occurs
When a phenomenon is suspected to involve a large number of small causes acting additively and independently, it is reasonable to expect the observations to be "normal". **(Because of the CLT)**
Some causes may act multiplicatively (rather than additively). In that case the normality assumption is not justified, and it is the logarithm of the variable in question that would be normally distributed. **(log-normal)**
### Application example
In finance, the Black-Scholes model, which is used to estimate the present value of a European option to buy (Call) or sell (Put) shares at a future date, assumes normality in some economic variables. See https://es.wikipedia.org/wiki/Modelo_de_Black-Scholes for additional information.
> Reference: https://es.wikipedia.org/wiki/Distribuci%C3%B3n_normal
### Usage in Python
```
mu, sigma = 0, 0.1 # mean and standard deviation
N = np.random.normal(mu, sigma,5)
N
st.norm
```
## 3. Exponential distribution
$X\sim Exp(\beta)$ Parameters: mean $\beta>0$ or rate $\lambda = 1/\beta$
$$\textbf{Probability density function}\\f(x) = \frac{1}{\beta} e^{-\frac{x}{\beta}}$$
$$\textbf{Cumulative distribution function}\\F(x) = 1-e^{-\frac{x}{\beta}}$$

### Examples
A typical use of the exponential distribution is as **the distribution of the lengths of the intervals of a continuous variable that elapse between two events**, where the events occur according to a Poisson distribution.
- The time elapsed in a call center until the first call of the day is received could be modeled as an exponential.
- The time interval between earthquakes (of a given magnitude) follows an exponential distribution.
- Consider a machine that produces wire: the number of meters of wire produced until a defect is found could be modeled as an exponential.
- In system reliability, a device with a constant failure rate follows an exponential distribution.
### Relations
The sum of k independent exponentially distributed random variables with parameter $\lambda$ is a random variable with an Erlang distribution.
> Reference: https://en.wikipedia.org/wiki/Exponential_distribution
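This relation can be checked empirically; in the sketch below the values $k=3$ and $\lambda=2$ are assumed for illustration, and the sum of $k$ exponentials is compared with samples drawn directly from the corresponding gamma/Erlang distribution.
```
import numpy as np
import scipy.stats as st

k, lam = 3, 2.0                                   # assumed example parameters
n = 100_000
sum_of_exp = np.random.exponential(1/lam, (n, k)).sum(axis=1)
erlang_direct = st.gamma.rvs(a=k, scale=1/lam, size=n)
print(sum_of_exp.mean(), erlang_direct.mean())    # both should be close to k/lam = 1.5
```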
### Usage in Python
```
beta = 4
E = np.random.exponential(beta,1)
E
st.expon
```
## 4. Erlang distribution
Parameters: shape $k \in \mathbb{N}$, scale $\beta$ (rate $\frac{1}{\beta}$)
$$\textbf{Probability density function}\\f(x)=x^{k-1}\frac{e^{-x/\beta}}{\beta^k\Gamma(k)}\equiv x^{k-1}\frac{e^{-x/\beta}}{\beta^k(k-1)!}$$
$$\textbf{Cumulative distribution function}\\F(x)=1-\sum_{n=0}^{k-1}\frac{1}{n!}e^{-\frac{1}{\beta}x}\big(\frac{x}{\beta}\big)^n$$

### Simplifications
With shape $k=1$ the Erlang distribution reduces to an exponential distribution. It is the distribution of the sum of $k$ exponential variables, each with mean $\beta$.
### Occurrence
**Waiting times**
Events that occur independently at some average rate are modeled with a Poisson process. The waiting times until k occurrences of the event are Erlang distributed. (The related question of the number of events in a given amount of time is described by a Poisson distribution.)
The Erlang formulas have also been used in business economics to describe the times between purchases of an asset.
> Reference: https://en.wikipedia.org/wiki/Erlang_distribution
### Usage in Python
```
from scipy.stats import erlang
N = 10000 # number of samples
k,scale = 3,1/4 # distribution parameters
E1 = erlang.rvs(k,scale=scale,size=N)
E2 = np.random.gamma(k,scale,N) # Erlang as a particular case of the gamma distribution
plt.figure(1,figsize=[12,4])
plt.subplot(121)
plt.hist(E1,50,density=True,label='Usando Lib. scipy')
plt.legend()
plt.subplot(122)
plt.hist(E2,50,density=True,label='Usando Lib. numpy')
plt.legend()
plt.show()
```
## 5. Binomial distribution
$X\sim B(n,p)$ Parameters: $n$ and $p$
$$\textbf{Probability mass function}\\p_i=P(X=i)={n \choose i}p^i(1-p)^{n-i}= \frac{n!}{i!(n-i)!}p^i(1-p)^{n-i},\quad i=0,1,\cdots,n$$
>Recall:$$p_{i+1}=\frac{n-i}{i+1}\frac{p}{1-p} p_i $$
$$\textbf{Cumulative distribution function}\\F(x)=\sum_{i=0}^{k-1}\frac{n!}{i!(n-i)!}p^i(1-p)^{n-i}$$
## Vectorized method
```
# Optimized function that computes the binomial probabilities using the recurrence relation
def proba_binomial(n:'number of trials',p:'probability of success',
N:'number of points to evaluate'):
Pr = np.zeros(N)
Pr[0] = (1-p)**n
def pr(i):
nonlocal Pr
c = p/(1-p)
Pr[i+1]=(c*(n-i)/(i+1))*Pr[i]
# Fill the Pr vector using a list comprehension
[pr(i) for i in range(N-1)]
return Pr
# Check of the function we created
# Different parameters for plotting the binomial distribution
n = [50,100,150]
# Parameter p of the distribution
p = 0.5
# Result using the conventional method
P = list(map(lambda x,n: proba_binomial(n,p,100),range(len(n)),n))
P = np.asmatrix(P)
print(P.shape)
def grafica_distribucion_prob(P:'matrix of binomial probabilities'):
# Plot of the probability mass function
fig,(ax1,ax2) = plt.subplots(1,2)
fig.set_figwidth(10)
ax1.plot(P.T,'o',markersize=3)
ax1.legend(['n=50','n=100','n=150'])
ax1.set_title('Densidad de probabilidad')
# ax1.show()
# Cumulative probability
F = np.cumsum(P,axis=1)
# plt.figure(2)
ax2.plot(F.T,'o',markersize=3)
ax2.legend(['n=%d'%n[0],'n=%d'%n[1],'n=%d'%n[2]])
ax2.set_title('Distribución acumulada')
plt.show()
# Plot for the conventional and vectorized methods
grafica_distribucion_prob(P)
```
### Characteristics
The binomial distribution is a discrete probability distribution that counts the number of successes in a sequence of **n mutually independent Bernoulli trials**, with a `fixed` probability p of success in each trial. The outcome called "success" occurs with probability p and the other, "failure", with probability q = 1 - p. In the binomial setting the experiment is repeated n times, independently, and $X$ denotes the variable that counts the number of successes obtained in the n experiments.
Under these circumstances, the variable $X$ is said to follow a binomial probability distribution, denoted $X\sim B(n,p)$.
### Example
Suppose a (6-sided) die is rolled 51 times and we want to know the probability that the number 3 comes up 20 times. In this case we have $X \sim B(51, 1/6)$ and the probability is $P(X=20)$:
$$P(X=20)={51 \choose 20}(1/6)^{20}(1-1/6)^{51-20} $$
```
n = 51; p=1/6; X=20
print('P(X=20)=',st.binom(n,p).pmf(X))
```
### Relations with other random variables
If n tends to infinity and p is such that the product of the two parameters tends to $\lambda$, then the distribution of the binomial random variable tends to a Poisson distribution with parameter $\lambda$.
Finally, when $p =0.5$ and n is very large (usually $n\geq 30$ is required), the binomial distribution can be approximated by the normal distribution with parameters $\mu=np,\sigma^2=np(1-p)$.
> Reference: https://en.wikipedia.org/wiki/Binomial_distribution
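The Poisson limit mentioned above can also be verified numerically; the sketch below uses the assumed values $n=1000$ and $p=0.005$, so that $np=\lambda=5$, and compares the two probability mass functions.
```
import numpy as np
import scipy.stats as st

n, p = 1000, 0.005                      # assumed example values, so np = 5
lam = n * p
k = np.arange(0, 16)
diff = np.abs(st.binom(n, p).pmf(k) - st.poisson(lam).pmf(k))
print(diff.max())                       # the two pmfs are very close
```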
```
p = .5; n = 30
mu = n*p; sigma = np.sqrt(n*p*(1-p))
# Using the function we created
Bi = proba_binomial(n,p,n)
plt.figure(1,figsize=[10,5])
plt.subplot(121)
plt.plot(Bi,'o')
plt.title('Distribución binomial n=%i,p=%0.2f'%(n,p))
# Using scipy to plot the normal approximation
x = np.arange(0,n)
Bi_norm = st.norm.pdf(x,loc=mu,scale=sigma)
plt.subplot(122)
plt.plot(Bi_norm,'o')
plt.title('Distribución~normal(np,np(1-p))')
plt.show()
```
## 6. Poisson distribution
Parameters: mean $\lambda>0 \in \mathbb{R}$, number of occurrences $k$
- k is the number of occurrences of the event or phenomenon (the function gives the probability that the event occurs exactly k times).
- λ is a positive parameter that represents the <font color ='red'>**number of times the phenomenon is expected to occur during a given interval**</font>. For example, if the event under study happens on average 4 times per minute and we are interested in the probability of it occurring k times within a 10-minute interval, we use a Poisson model with λ = 10×4 = 40.
$$\textbf{Probability mass function}\\p(k)=\frac{\lambda^k e^{-\lambda}}{k!},\quad k\in \mathbb{N}$$
### Application
The number of events in a given time interval is a Poisson-distributed random variable, where $\lambda$ is the mean number of events in that interval.
### Relation to the Erlang or Gamma distribution
The time until the k-th event occurs in a Poisson process with intensity $\lambda$ is a random variable with a gamma distribution or (equivalently) an Erlang distribution with $ \beta =1/\lambda $.
### Normal approximation
As a consequence of the central limit theorem, for large values of $\lambda$ a Poisson random variable X can be approximated by a normal one with parameters $\mu=\sigma^2=\lambda$. Moreover, the quotient
$$Y=\frac{X-\lambda}{\sqrt{\lambda}}$$
converges to a normal distribution with mean 0 and variance 1.
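A quick empirical check of this approximation (with the assumed value $\lambda=100$): the standardized Poisson samples should have mean close to 0 and variance close to 1.
```
import numpy as np

lam = 100                                # assumed example value
x = np.random.poisson(lam, 100_000)
y = (x - lam) / np.sqrt(lam)
print(y.mean(), y.var())                 # expected: approximately 0 and 1
```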
### Example
If 2% of the books bound in a certain workshop have a defective binding, then to obtain the probability that 5 out of 400 books bound in this workshop have defective bindings we use the Poisson distribution. In this specific case k is 5 and λ, the expected number of defective books, is 2% of 400, i.e. 8. The desired probability is therefore
$$P(5;8)={\frac {8^{5}e^{-8}}{5!}}=0.092$$
> Reference: https://es.wikipedia.org/wiki/Distribuci%C3%B3n_de_Poisson
```
k=5; Lamda = 8
print('P(5;8)=',st.poisson(Lamda).pmf(k))
```
## Using the statistical package `stats`
```
# Parameters
Lamda = [8,20]
k=np.arange(0,40);
# Probability mass function
P = np.array([st.poisson(Lamda[i]).pmf(k) for i in range(len(Lamda))])
# Cumulative distribution
P_acum = np.array([st.poisson(Lamda[i]).cdf(k) for i in range(len(Lamda))])
fig,[ax1,ax2] = plt.subplots(1,2,sharey=False,figsize=[12,4])
ax1.plot(P.T,'o',markersize=3)
ax1.legend(['$\lambda$=%d'%i for i in Lamda])
ax1.title.set_text('Distribución de probabilidad')
ax2.plot(P_acum.T,'o',markersize=3)
[ax2.hlines(P_acum[i,:],range(len(k)),range(1,len(k)+1)) for i in range(len(Lamda))]
plt.legend(['$\lambda$=%d'%i for i in Lamda])
ax2.title.set_text('Distribución de probabilidad')
plt.show()
# P_acum.shape
```
## Using the mathematical expressions
```
import scipy.special as sps
p = lambda k,l:(l**k*np.exp(-l))/sps.gamma(k+1)
k = np.arange(0,50)
l = [1,10,20,30]
P = np.asmatrix(list(map(lambda x:p(k,x*np.ones(len(k))),l))).T
print(P.shape)
plt.figure(1,figsize=[12,4])
plt.subplot(121)
plt.plot(P,'o',markersize=3)
plt.legend(['$\lambda$=%d'%i for i in l])
plt.title('Distribución de probabilidad')
# Cumulative probability
P_ac = np.cumsum(P,axis=0)
plt.subplot(122)
plt.plot(P_ac,'o',markersize=3)
[plt.hlines(P_ac[:,i],range(len(P_ac)),range(1,len(P_ac)+1)) for i in range(len(l))]
plt.legend(['$\lambda$=%d'%i for i in l])
plt.title('Distribución de probabilidad acumulada')
plt.show()
```

## 7. Triangular distribution
Parameters:
- a : $a\in (-\infty ,\infty)$
- b : $b > a$
- c : $a\leq c\leq b$
- Support: $a\leq x\leq b$
$$\textbf{Probability density function}\\f(x|a,b,c)={\begin{cases}{\frac {2(x-a)}{(b-a)(c-a)}}&{\text{for }}a\leq x<c,\\[4pt]{\frac {2}{b-a}}&{\text{for }}x=c,\\[4pt]{\frac {2(b-x)}{(b-a)(b-c)}}&{\text{for }}c<x\leq b,\\[4pt]0&{\text{otherwise}}\end{cases}}$$
$$\textbf{Cumulative distribution function}\\F(x|a,b,c)={\begin{cases}{0}&{\text{for }}x\leq a,\\[4pt]{\frac {(x-a)^2}{(b-a)(c-a)}}&{\text{for }}a< x\leq c,\\[4pt]{1-\frac{(b-x)^2}{(b-a)(b-c)}}&{\text{for }}c<x< b,\\[4pt]1&{\text{for }}b\leq x\end{cases}}$$

### Use of the triangular distribution
The triangular distribution is commonly used as a subjective description of a population for which only a limited amount of sample data is available, and especially when the relationship between variables is known but **data are scarce** (possibly because collecting them is expensive). It is based on knowledge of the minimum, the maximum, and the modal value. For these reasons the triangular distribution has been called the "lack of precision" or lack-of-information distribution.
> Reference: https://en.wikipedia.org/wiki/Triangular_distribution
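For reference, NumPy also provides a direct sampler for this distribution; the sketch below uses the assumed example parameters $a=1$, $c=3$ (mode) and $b=5$, which are not taken from the exercise that follows.
```
import numpy as np
import matplotlib.pyplot as plt

a, c, b = 1, 3, 5                                  # assumed example values: min, mode, max
samples = np.random.triangular(a, c, b, 100_000)   # numpy signature: (left, mode, right, size)
plt.hist(samples, bins=60, density=True)
plt.show()
```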
# <font color ='red'> Exercise (Optional)
Generate random values for the following probability distribution
$$f(x)=\begin{cases}\frac{2}{(c-a)(b-a)}(x-a), & a\leq x \leq b\\ \frac{-2}{(c-a)(c-b)}(x-c),& b\leq x \leq c \end{cases}$$ with a=1; b=2; c=5
1. Using the inverse transform method.
2. Using the acceptance-rejection method.
3. In the library `import scipy.stats as st` there is a function that generates triangular random variables, `st.triang.pdf(x, c, loc, scale)`, where "c, loc, scale" are the parameters of this distribution (similar to the ones that in our function are called a, b, c, BUT NOT EQUAL). Explore the Python help to find the equivalence between the parameters "c, loc, scale" and the parameters "a, b, c" of our function. The expected solution looks like this:

4. Generate 1000 random variables using the function created in point 2 and using the function `st.triang.rvs`, and plot the histogram of each of the generated sets of random variables in two separate figures. Something like this is expected:

### I am leaving this as optional because it may appear on a quiz or an exam.
# <font color ='red'>Exercise (not required):</font>
The assignment must be done in groups, which are listed in the following table. The assignment consists of editing the page that corresponds to your group; for example, if you are group 1, you must edit the page that corresponds to your group and none of the other pages. On that page I will ask you to answer each of the following questions by next Friday, October 9, researching each of the assigned probability distributions. What I need you to research is:
1. An explanation of the use of each probability distribution.
2. Use audiovisual resources such as videos, tables, GIFs, images, external links, etc. (which can be embedded from the Canvas platform) to explain, in the friendliest and simplest way possible, the applications and uses of the assigned probability distributions.
3. Look up in books, on the internet, and in applications how to use these distributions, why to use them, and possible applications in finance.
4. You may also include the mathematical description of these distributions. Note that you can enter LaTeX code in order to write equations and so on.
5. Include screenshots of the code and results showing how to use the probability distribution in Python. Something like this:
> Poisson distribution
> - Probability distribution and cumulative distribution (using the statistical package stats)
> 
> - Plots for different parameters, in this case $\lambda = [8,30]$
> 
6. How could you answer the question P(X<b)? For the Poisson distribution, explore the command `st.poisson(Lamda).ppf(b)`
The grade will be based on creativity and on how well you handle each of your probability distributions during the presentation.
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Oscar David Jaramillo Zuluaga
</footer>
```
%matplotlib inline
import pandas as pd
import pandas_profiling
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
#imputation
from sklearn.preprocessing import Imputer, LabelEncoder, OneHotEncoder
# train_test_split
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
# feature selection
from sklearn.feature_selection import RFE
# classification models
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
# evaluation metrics
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score, roc_curve
import warnings
warnings.filterwarnings("ignore")
missing_values = ["n/a", "na", "--"]
df = pd.read_csv("D:/DS/zs/zs_data.csv",index_col = 0, na_values = missing_values)
#pandas_profiling.ProfileReport(df)
df.describe()
df.info()
df['shot_id_number'] = df.index + 1
df['shot_id_number'].isnull().any()
df['knockout_match'].value_counts()
df['match_event_id'].value_counts().head()
df['power_of_shot'].value_counts()
correlation = df.corr()
sns.heatmap(correlation, cmap = 'viridis')
df2 = df.loc[df['is_goal'].isnull() == True]
df['is_goal'].isnull().sum()
df1 = df.drop(df[df['is_goal'].isnull()].index)
df1.shape
df2.shape
df.shape
df1['is_goal'].isnull().any()
df2['is_goal'].value_counts()
# which implies all are NULL values
```
Hence,
* df is the main dataset
* df1 is the df with non NULL is_goal values
* df2 is the df with NULL is_goal values
```
submit = df2[['shot_id_number', 'is_goal']].copy()
submit.shape
```
Checking for NULL values in other features of df1
```
print(df1.isnull().any())
df1.head()
df1.count()
df1.shape
```
By comparing the above two cells, we find that apart from 'match_id', 'team_id', 'shot_id_number' & 'is_goal', all other features have NULL values.
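A small helper (not part of the original analysis) that double-checks this claim by listing only the columns of `df1` that still contain nulls:
```
# columns of df1 (from the cells above) that contain at least one missing value
cols_with_nulls = df1.columns[df1.isnull().any()].tolist()
print(cols_with_nulls)
```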
```
df1 = df1.drop(['match_event_id', 'game_season', 'date_of_game','team_name', 'lat/lng', 'match_id', 'team_id'], axis = 1)
df1.head()
```
Categorical Variables:
* area_of_shot
* shot_basics
* home/away
* type_of_shot
* type_of_combined_shot
Dropping knockout_match.1, remaining_min.1, power_of_shot.1 since they contain weird values
```
df1 = df1.drop(['knockout_match.1', 'power_of_shot.1', 'remaining_min.1'], axis = 1)
df1.head()
df1['remaining_min'].value_counts()
df1['remaining_min'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['power_of_shot'].value_counts()
df1['power_of_shot'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['knockout_match'].value_counts()
df1['knockout_match'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['remaining_sec'].value_counts().head()
df1['remaining_sec'].value_counts().plot.barh(color=sns.color_palette('viridis_r',10))
df1['remaining_sec.1'].value_counts().head()
df1['remaining_sec.1'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['distance_of_shot'].value_counts().head()
df1['distance_of_shot'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['distance_of_shot.1'].value_counts().head()
df1['distance_of_shot.1'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['area_of_shot'].value_counts()
df1['area_of_shot'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['shot_basics'].value_counts()
df1['shot_basics'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['range_of_shot'].value_counts()
df1['range_of_shot'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['home/away'].value_counts().head()
```
HOME/AWAY - Plot after Imputation and Variable Conversion
```
df1['type_of_shot'].value_counts().head()
df1['type_of_shot'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['type_of_combined_shot'].value_counts()
df1['type_of_combined_shot'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
sns.catplot(y="distance_of_shot", x="remaining_sec", data=df1);
```
Numeric Variables:
* remaining_min
* power_of_shot
* knockout_match
* remaining_sec
* remaining_sec.1
* distance_of_shot
* distance_of_shot.1
Categorical Variables:
* area_of_shot
* shot_basics
* range_of_shot
* home/away
* type_of_shot
* type_of_combined_shot
```
df1.head()
```
#### Dealing with HOME/AWAY
```
df1['home/away'] = df1['home/away'].fillna(method='ffill')
df1['home/away'].isnull().any()
ha = np.asarray(df1[['home/away']])
ha.size
df1['home/away'] = df1['home/away'].str.contains("@", regex = True)
df1['home/away'].size
df1['home/away'].head(20)
df1.isnull().sum()
```
Since none of the features is missing more than 70% of its entries, the features will be imputed rather than dropped.
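The missing-value percentages behind this decision can be computed with a one-liner, again assuming the `df1` dataframe from the cells above:
```
# percentage of missing entries per column of df1, most-missing first
missing_pct = df1.isnull().mean().sort_values(ascending=False) * 100
print(missing_pct.head(10))
```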
### IMPUTATION USING LINEAR REGRESSION
```
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
numeric_df1 = df1[['power_of_shot', 'knockout_match', 'remaining_sec', 'distance_of_shot', 'remaining_min']]
numeric_df1.isnull().sum()
numeric_df1.shape
numeric_df1_remmin_notnull = numeric_df1.loc[numeric_df1['remaining_min'].isnull() == False]
numeric_df1_remmin_notnull.isnull().sum()
numeric_df1_remmin_notnull.shape
median = numeric_df1_remmin_notnull['power_of_shot'].median()
numeric_df1_remmin_notnull['power_of_shot'] = numeric_df1_remmin_notnull['power_of_shot'].fillna(median)
numeric_df1_remmin_notnull['knockout_match'] = numeric_df1_remmin_notnull['knockout_match'].fillna(method='ffill')
m1 = numeric_df1_remmin_notnull['remaining_sec'].median()
numeric_df1_remmin_notnull['remaining_sec'] = numeric_df1_remmin_notnull['remaining_sec'].fillna(m1)
m1 = numeric_df1_remmin_notnull['distance_of_shot'].median()
numeric_df1_remmin_notnull['distance_of_shot'] = numeric_df1_remmin_notnull['distance_of_shot'].fillna(m1)
numeric_df1_remmin_notnull.shape
numeric_df1_remmin_notnull.isnull().any()
X = numeric_df1_remmin_notnull.copy()
y = X['remaining_min'].copy()
X = X.drop(['remaining_min'], axis = 1)
X = np.asarray(X)
y = np.asarray(y)
1244/23185
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.05365538063403062, random_state = 42)
linreg.fit(X_train, y_train)
y_pred = linreg.predict(X_test)
y_pred = pd.DataFrame(y_pred)
y_pred.min()
numeric_df1.loc[numeric_df1['remaining_min'].isnull() == True] = y_pred
numeric_df1['remaining_min'].isnull().sum()
```
### IMPUTATION
```
len(df1) - df1.count()
df1.shape
```
#### Imputation for remaining_min
```
df1['remaining_min'] = df1['remaining_min'].fillna(method='ffill')
df1['remaining_min'].isnull().any()
df1['remaining_min'].count()
```
#### Imputation of location_x and location_y
```
median = df1['location_x'].median()
df1['location_x'] = df1['location_x'].fillna(median)
df1['location_x'].isnull().any()
median = df1['location_y'].median()
df1['location_y'] = df1['location_y'].fillna(median)
df1['location_y'].isnull().any()
```
#### Imputation for power_of_shot
```
df1['power_of_shot'].value_counts()
median = df1['power_of_shot'].median()
df1['power_of_shot'] = df1['power_of_shot'].fillna(median)
df1['power_of_shot'].isnull().any()
```
#### Imputation of knockout_match
```
df1['knockout_match'] = df1['knockout_match'].fillna(method='ffill')
df1['knockout_match'].isnull().any()
```
#### Imputation of remaining_sec & remaining_sec.1
```
m1 = df1['remaining_sec'].median()
df1['remaining_sec'] = df1['remaining_sec'].fillna(m1)
m2 = df1['remaining_sec.1'].median()
df1['remaining_sec.1'] = df1['remaining_sec.1'].fillna(m2)
df1['remaining_sec'].isnull().any()
df1['remaining_sec.1'].isnull().any()
```
#### Imputation of distance_of_shot & distance_of_shot.1
```
m1 = df1['distance_of_shot'].median()
df1['distance_of_shot'] = df1['distance_of_shot'].fillna(m1)
m2 = df1['distance_of_shot.1'].median()
df1['distance_of_shot.1'] = df1['distance_of_shot'].fillna(m2)
df1['distance_of_shot'].isnull().any()
df1['distance_of_shot.1'].isnull().any()
```
#### Imputing Categorical Variables
With most frequent value
```
df1['area_of_shot'] = df1['area_of_shot'].fillna(df1['area_of_shot'].value_counts().index[0])
df1['shot_basics'] = df1['shot_basics'].fillna(df1['shot_basics'].value_counts().index[0])
df1['range_of_shot'] = df1['range_of_shot'].fillna(df1['range_of_shot'].value_counts().index[0])
df1['home/away'] = df1['home/away'].fillna(df1['home/away'].value_counts().index[0])
df1['type_of_shot'] = df1['type_of_shot'].fillna(df1['type_of_shot'].value_counts().index[0])
df1['type_of_combined_shot'] = df1['type_of_combined_shot'].fillna(df1['type_of_combined_shot'].value_counts().index[0])
df1.isnull().any()
```
### Encoding Categorical Data
Categorical Variables:
* area_of_shot
* shot_basics
* range_of_shot
* home/away
* type_of_shot
* type_of_combined_shot
```
#ONEHOT ENCODING
df1['area_of_shot'] = pd.Categorical(df1['area_of_shot'])
df_aos_onehot = pd.get_dummies(df1['area_of_shot'], prefix = 'AOS')
df1['shot_basics'] = pd.Categorical(df1['shot_basics'])
df_sb_onehot = pd.get_dummies(df1['shot_basics'], prefix = 'SB')
df1['range_of_shot'] = pd.Categorical(df1['range_of_shot'])
df_ros_onehot = pd.get_dummies(df1['range_of_shot'], prefix = 'ROS')
df1['home/away'] = pd.Categorical(df1['home/away'])
df_ha_onehot = pd.get_dummies(df1['home/away'], prefix = 'HA')
df1['type_of_combined_shot'] = pd.Categorical(df1['type_of_combined_shot'])
df_tocs_onehot = pd.get_dummies(df1['type_of_combined_shot'], prefix = 'TOCS')
df1 = pd.concat([df1, df_aos_onehot], axis=1)
df1 = pd.concat([df1, df_sb_onehot], axis=1)
df1 = pd.concat([df1, df_ros_onehot], axis=1)
df1 = pd.concat([df1, df_ha_onehot], axis=1)
df1 = pd.concat([df1, df_tocs_onehot], axis=1)
correlation = df1.corr()
sns.heatmap(correlation, cmap = 'viridis')
```
```
le = LabelEncoder()
df1['area_of_shot'] = le.fit_transform(df1['area_of_shot'])
df1['shot_basics'] = le.fit_transform(df1['shot_basics'])
df1['range_of_shot'] = le.fit_transform(df1['range_of_shot'])
df1['home/away'] = le.fit_transform(df1['home/away'])
df1['type_of_shot'] = le.fit_transform(df1['type_of_shot'])
df1['type_of_combined_shot'] = le.fit_transform(df1['type_of_combined_shot'])
```
## Feature Engineering on Numeric Features
```
df1.head()
X = df1.copy()
X.head()
y = X['is_goal'].copy()
X = X.drop(['is_goal', 'shot_id_number'], axis = 1)
#X = X.drop(['area_of_shot', 'shot_basics', 'range_of_shot', 'home/away','type_of_shot', 'type_of_combined_shot'], axis = 1)
X.head()
y = pd.DataFrame(y)
y.head()
X.shape
y.shape
```
```
from sklearn.preprocessing import StandardScaler
X = pd.DataFrame(StandardScaler().fit_transform(X))
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
#apply SelectKBest class to extract top 10 best features
bestfeatures = SelectKBest(score_func=chi2, k=10)
fit = bestfeatures.fit(X,y)
dfscores = pd.DataFrame(fit.scores_)
dfcolumns = pd.DataFrame(X.columns)
#concat two dataframes for better visualization
featureScores = pd.concat([dfcolumns,dfscores],axis=1)
featureScores.columns = ['Specs','Score'] #naming the dataframe columns
print(featureScores.nlargest(10,'Score')) #print 10 best features
```
```
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=True)
X = poly.fit_transform(X)
X = np.asarray(X)
y = np.asarray(y)
```
```
#skf = StratifiedKFold(n_splits=5, random_state = 42, shuffle = True)
"""for train_index, test_index in skf.split(X, y):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]"""
```
```
6268/24429
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
X = pca.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25658029391, random_state = 42)
data = [X_train, X_test]
for dataset in data:
dataset['XY']= dataset['location_x']* dataset['location_y']
for dataset in data:
dataset['PD'] = dataset['power_of_shot']/dataset['distance_of_shot']
for dataset in data:
dataset['WTF'] = (dataset['area_of_shot']+dataset['range_of_shot'])*dataset['shot_basics']
X_train.head()
from xgboost import XGBClassifier
model = XGBClassifier()
model.fit(X_train, y_train)
print(model.feature_importances_)
rfm = RandomForestClassifier(n_jobs = 10, random_state = 42)
rfm.fit(X_train, y_train)
y_pred_rfm = rfm.predict(X_test)
rfm.score(X_test, y_test)
knn = KNeighborsClassifier(n_neighbors = 15)
knn.fit(X_train, y_train)
y_pred_knn = knn.predict(X_test)
knn.score(X_test, y_test)
sgd = SGDClassifier(loss = 'modified_huber', shuffle = True, random_state = 101)
sgd.fit(X_train, y_train)
y_pred_sgd = sgd.predict(X_test)
sgd.score(X_test, y_test)
nb = GaussianNB()
nb.fit(X_train, y_train)
y_pred_nb = nb.predict(X_test)
nb.score(X_test, y_test)
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
y_pred_log = logreg.predict(X_test)
y_pred_prob = logreg.predict_proba(X_test)
logreg.score(X_test, y_test)
dtree = DecisionTreeClassifier(max_depth = 10, random_state = 10, max_features = None, min_samples_leaf = 15)
dtree.fit(X_train, y_train)
y_pred_dt = dtree.predict(X_test)
dtree.score(X_test, y_test)
print(classification_report(y_test, y_pred_dt))
dtree_roc_auc = roc_auc_score(y_test, y_pred_dt)
fpr, tpr, thresholds = roc_curve(y_test, dtree.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='Decision Tree (area = %0.2f)' % dtree_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
```
#### Hyperparameter Optimization for Random Forest Classifier
```
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold
n_estimators = [int(x) for x in np.linspace(start = 1, stop = 20, num = 10)]
max_depth = [int(x) for x in np.linspace(1, 50, num = 10)]
max_depth.append(None)
max_features = ['auto', 'sqrt']
param_grid = {
'n_estimators': n_estimators,
'max_depth': max_depth,
'max_features': max_features,
}
estimator = RandomForestClassifier(random_state = 69)
cv_test = KFold(n_splits=5)
gscv = GridSearchCV(estimator, param_grid, n_jobs = -1,
scoring = 'roc_auc', cv = cv_test,
verbose = 2)
gscv.fit(X_train, y_train)
gscv.best_params_
best_model = gscv.best_estimator_
best_model.score(X_test,y_test)
rf2_pred = best_model.predict(X_test)
rf2_prob = best_model.predict_proba(X_test)[:, 1]
rf2_roc_auc = roc_auc_score(y_test, rf2_pred)
fpr, tpr, thresholds = roc_curve(y_test, rf2_prob)
plt.figure()
plt.plot(fpr, tpr, label='Random Forest (Model Tuned) (area = %0.2f)' % rf2_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([-0.04, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
y_pred_dt.size
prob = dtree.predict_proba(X_test)[:,1]
prob.size
prob = pd.DataFrame(prob)
prob.tail()
prob['is_goal'] = prob[0]
prob.head()
prob = prob.drop([0], axis = 1)
prob['is_goal'].isnull().any()
df2.shape
prob.shape
submit3 = df2[['shot_id_number']].copy()
submit3.shape
submit3 = submit3.reset_index()
submit3.isnull().any()
submit3['is_goal'] = prob[['is_goal']].copy()
submit3.isnull().any()
submit3.to_csv(r'D:/DS/zs4_submit.csv')
!jupyter nbconvert --to script zs3.ipynb
```
# --------------------------------------------------
# KERAS
|
github_jupyter
|
%matplotlib inline
import pandas as pd
import pandas_profiling
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
#imputation
from sklearn.preprocessing import Imputer, LabelEncoder, OneHotEncoder
# train_test_split
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
# feature selection
from sklearn.feature_selection import RFE
# classification models
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
# evaluation metrics
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score, roc_curve
import warnings
warnings.filterwarnings("ignore")
missing_values = ["n/a", "na", "--"]
df = pd.read_csv("D:/DS/zs/zs_data.csv",index_col = 0, na_values = missing_values)
#pandas_profiling.ProfileReport(df)
df.describe()
df.info()
df['shot_id_number'] = df.index + 1
df['shot_id_number'].isnull().any()
df['knockout_match'].value_counts()
df['match_event_id'].value_counts().head()
df['power_of_shot'].value_counts()
correlation = df.corr()
sns.heatmap(correlation, cmap = 'viridis')
df2 = df.loc[df['is_goal'].isnull() == True]
df['is_goal'].isnull().sum()
df1 = df.drop(df[df['is_goal'].isnull()].index)
df1.shape
df2.shape
df.shape
df1['is_goal'].isnull().any()
df2['is_goal'].value_counts()
# which imples all are NULL values
submit = df2[['shot_id_number', 'is_goal']].copy()
submit.shape
print(df1.isnull().any())
df1.head()
df1.count()
df1.shape
df1 = df1.drop(['match_event_id', 'game_season', 'date_of_game','team_name', 'lat/lng', 'match_id', 'team_id'], axis = 1)
df1.head()
df1 = df1.drop(['knockout_match.1', 'power_of_shot.1', 'remaining_min.1'], axis = 1)
df1.head()
df1['remaining_min'].value_counts()
df1['remaining_min'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['power_of_shot'].value_counts()
df1['power_of_shot'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['knockout_match'].value_counts()
df1['knockout_match'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['remaining_sec'].value_counts().head()
df1['remaining_sec'].value_counts().plot.barh(color=sns.color_palette('viridis_r',10))
df1['remaining_sec.1'].value_counts().head()
df1['remaining_sec.1'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['distance_of_shot'].value_counts().head()
df1['distance_of_shot'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['distance_of_shot.1'].value_counts().head()
df1['distance_of_shot.1'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['area_of_shot'].value_counts()
df1['area_of_shot'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['shot_basics'].value_counts()
df1['shot_basics'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['range_of_shot'].value_counts()
df1['range_of_shot'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['home/away'].value_counts().head()
df1['type_of_shot'].value_counts().head()
df1['type_of_shot'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
df1['type_of_combined_shot'].value_counts()
df1['type_of_combined_shot'].value_counts().plot.barh(width=0.9,color=sns.color_palette('viridis_r',10))
sns.catplot(y="distance_of_shot", x="remaining_sec", data=df1);
df1.head()
df1['home/away'] = df1['home/away'].fillna(method='ffill')
df1['home/away'].isnull().any()
ha = np.asarray(df1[['home/away']])
ha.size
df1['home/away'] = df1['home/away'].str.contains("@", regex = True)
df1['home/away'].size
df1['home/away'].head(20)
df1.isnull().sum()
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
numeric_df1 = df1[['power_of_shot', 'knockout_match', 'remaining_sec', 'distance_of_shot', 'remaining_min']]
numeric_df1.isnull().sum()
numeric_df1.shape
numeric_df1_remmin_notnull = numeric_df1.loc[numeric_df1['remaining_min'].isnull() == False]
numeric_df1_remmin_notnull.isnull().sum()
numeric_df1_remmin_notnull.shape
median = numeric_df1_remmin_notnull['power_of_shot'].median()
numeric_df1_remmin_notnull['power_of_shot'] = numeric_df1_remmin_notnull['power_of_shot'].fillna(median)
numeric_df1_remmin_notnull['knockout_match'] = numeric_df1_remmin_notnull['knockout_match'].fillna(method='ffill')
m1 = numeric_df1_remmin_notnull['remaining_sec'].median()
numeric_df1_remmin_notnull['remaining_sec'] = numeric_df1_remmin_notnull['remaining_sec'].fillna(m1)
m1 = numeric_df1_remmin_notnull['distance_of_shot'].median()
numeric_df1_remmin_notnull['distance_of_shot'] = numeric_df1_remmin_notnull['distance_of_shot'].fillna(m1)
numeric_df1_remmin_notnull.shape
numeric_df1_remmin_notnull.isnull().any()
X = numeric_df1_remmin_notnull.copy()
y = X['remaining_min'].copy()
X = X.drop(['remaining_min'], axis = 1)
X = np.asarray(X)
y = np.asarray(y)
1244/23185
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.05365538063403062, random_state = 42)
linreg.fit(X_train, y_train)
y_pred = linreg.predict(X_test)
y_pred = pd.DataFrame(y_pred)
y_pred.min()
numeric_df1.loc[numeric_df1['remaining_min'].isnull() == True] = y_pred
numeric_df1['remaining_min'].isnull().sum()
len(df1) - df1.count()
df1.shape
df1['remaining_min'] = df1['remaining_min'].fillna(method='ffill')
df1['remaining_min'].isnull().any()
df1['remaining_min'].count()
median = df1['location_x'].median()
df1['location_x'] = df1['location_x'].fillna(median)
df1['location_x'].isnull().any()
median = df1['location_y'].median()
df1['location_y'] = df1['location_y'].fillna(median)
df1['location_y'].isnull().any()
df1['power_of_shot'].value_counts()
median = df1['power_of_shot'].median()
df1['power_of_shot'] = df1['power_of_shot'].fillna(median)
df1['power_of_shot'].isnull().any()
df1['knockout_match'] = df1['knockout_match'].fillna(method='ffill')
df1['knockout_match'].isnull().any()
m1 = df1['remaining_sec'].median()
df1['remaining_sec'] = df1['remaining_sec'].fillna(m1)
m2 = df1['remaining_sec.1'].median()
df1['remaining_sec.1'] = df1['remaining_sec.1'].fillna(m2)
df1['remaining_sec'].isnull().any()
df1['remaining_sec.1'].isnull().any()
m1 = df1['distance_of_shot'].median()
df1['distance_of_shot'] = df1['distance_of_shot'].fillna(m1)
m2 = df1['distance_of_shot.1'].median()
df1['distance_of_shot.1'] = df1['distance_of_shot'].fillna(m2)
df1['distance_of_shot'].isnull().any()
df1['distance_of_shot.1'].isnull().any()
df1['area_of_shot'] = df1['area_of_shot'].fillna(df1['area_of_shot'].value_counts().index[0])
df1['shot_basics'] = df1['shot_basics'].fillna(df1['shot_basics'].value_counts().index[0])
df1['range_of_shot'] = df1['range_of_shot'].fillna(df1['range_of_shot'].value_counts().index[0])
df1['home/away'] = df1['home/away'].fillna(df1['home/away'].value_counts().index[0])
df1['type_of_shot'] = df1['type_of_shot'].fillna(df1['type_of_shot'].value_counts().index[0])
df1['type_of_combined_shot'] = df1['type_of_combined_shot'].fillna(df1['type_of_combined_shot'].value_counts().index[0])
df1.isnull().any()
le = LabelEncoder()
df1['area_of_shot'] = le.fit_transform(df1['area_of_shot'])
df1['shot_basics'] = le.fit_transform(df1['shot_basics'])
df1['range_of_shot'] = le.fit_transform(df1['range_of_shot'])
df1['home/away'] = le.fit_transform(df1['home/away'])
df1['type_of_shot'] = le.fit_transform(df1['type_of_shot'])
df1['type_of_combined_shot'] = le.fit_transform(df1['type_of_combined_shot'])
q
df1.head()
X = df1.copy()
X.head()
y = X['is_goal'].copy()
X = X.drop(['is_goal', 'shot_id_number'], axis = 1)
#X = X.drop(['area_of_shot', 'shot_basics', 'range_of_shot', 'home/away','type_of_shot', 'type_of_combined_shot'], axis = 1)
X.head()
y = pd.DataFrame(y)
y.head()
X.shape
y.shape
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=True)
X = poly.fit_transform(X)
X = np.asarray(X)
y = np.asarray(y)
6268/24429
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
X = pca.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25658029391, random_state = 42)
data = [X_train, X_test]
for dataset in data:
dataset['XY']= dataset['location_x']* dataset['location_y']
for dataset in data:
dataset['PD'] = dataset['power_of_shot']/dataset['distance_of_shot']
for dataset in data:
dataset['WTF'] = (dataset['area_of_shot']+dataset['range_of_shot'])*dataset['shot_basics']
X_train.head()
from xgboost import XGBClassifier
model = XGBClassifier()
model.fit(X_train, y_train)
print(model.feature_importances_)
rfm = RandomForestClassifier(n_jobs = 10, random_state = 42)
rfm.fit(X_train, y_train)
y_pred_rfm = rfm.predict(X_test)
rfm.score(X_test, y_test)
knn = KNeighborsClassifier(n_neighbors = 15)
knn.fit(X_train, y_train)
y_pred_knn = knn.predict(X_test)
knn.score(X_test, y_test)
sgd = SGDClassifier(loss = 'modified_huber', shuffle = True, random_state = 101)
sgd.fit(X_train, y_train)
y_pred_sgd = sgd.predict(X_test)
sgd.score(X_test, y_test)
nb = GaussianNB()
nb.fit(X_train, y_train)
y_pred_nb = nb.predict(X_test)
nb.score(X_test, y_test)
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
y_pred_log = logreg.predict(X_test)
y_pred_prob = logreg.predict_proba(X_test)
logreg.score(X_test, y_test)
dtree = DecisionTreeClassifier(max_depth = 10, random_state = 10, max_features = None, min_samples_leaf = 15)
dtree.fit(X_train, y_train)
y_pred_dt = dtree.predict(X_test)
dtree.score(X_test, y_test)
print(classification_report(y_test, y_pred_dt))
dtree_roc_auc = roc_auc_score(y_test, y_pred_dt)
fpr, tpr, thresholds = roc_curve(y_test, dtree.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='Decision Tree (area = %0.2f)' % dtree_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold
n_estimators = [int(x) for x in np.linspace(start = 1, stop = 20, num = 10)]
max_depth = [int(x) for x in np.linspace(1, 50, num = 10)]
max_depth.append(None)
max_features = ['auto', 'sqrt']
param_grid = {
'n_estimators': n_estimators,
'max_depth': max_depth,
'max_features': max_features,
}
estimator = RandomForestClassifier(random_state = 69)
cv_test = KFold(n_splits=5)
gscv = GridSearchCV(estimator, param_grid, n_jobs = -1,
scoring = 'roc_auc', cv = cv_test,
verbose = 2)
gscv.fit(X_train, y_train)
gscv.best_params_
best_model = gscv.best_estimator_
best_model.score(X_test,y_test)
rf2_pred = best_model.predict(X_test)
rf2_prob = best_model.predict_proba(X_test)[:, 1]
rf2_roc_auc = roc_auc_score(y_test, rf2_pred)
fpr, tpr, thresholds = roc_curve(y_test, rf2_prob)
plt.figure()
plt.plot(fpr, tpr, label='Random Forest (Model Tuned) (area = %0.2f)' % rf2_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([-0.04, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
y_pred_dt.size
prob = dtree.predict_proba(X_test)[:,1]
prob.size
prob = pd.DataFrame(prob)
prob.tail()
prob['is_goal'] = prob[0]
prob.head()
prob = prob.drop([0], axis = 1)
prob['is_goal'].isnull().any()
df2.shape
prob.shape
submit3 = df2[['shot_id_number']].copy()
submit3.shape
submit3 = submit3.reset_index()
submit3.isnull().any()
submit3['is_goal'] = prob[['is_goal']].copy()
submit3.isnull().any()
submit3.to_csv(r'D:/DS/zs4_submit.csv')
!jupyter nbconvert --to script zs3.ipynb
Name : Sravani Suravajhula
UTA ID : 1001778007
#### Abstract:
Understanding sentences and interpreting them in a meaningful way is increasingly important with current machine learning tools. In this project we try to predict the rating of a board game review using some of the models available in scikit-learn.
#### Introduction:
BoardGameGeek (BGG) is an online forum for board gaming hobbyists and a game database that holds reviews, images and videos for over 101,000 different tabletop games, including European-style board games, wargames, and card games. In addition to the game database, the site allows users to rate games on a 1–10 scale and publishes a ranked list of board games
In this project we analyze the reviews for the different games from the BoardGameGeek data and train a model that can predict the rating from a review. Once we have trained the different models available in scikit-learn, we select the best-fitting model based on our results and use it in our web application, where a user can enter a review and our model will predict the rating.
#### Overview:
My process follows a sequence of steps. I first read the data into pandas DataFrames. After that I removed some of the unnecessary data and kept the columns that will be helpful for further analysis. Then I cleaned the data by removing all the unnecessary words that won't aid us in determining the ratings. Finally, once the data was cleaned up, I used several machine learning models to determine the best fit for our requirement. At every stage of this process I provide some additional details as well as visualizations to help interpret the results.
During this process I referred to several documents and user guides to achieve a good run time and decent results. All credits and citations are included in the References section at the bottom.
Let's begin with the actual process.
##### Import Libraries:
As a starting point, we first import all the required Python modules.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import re
import itertools
import copy
import random
import pickle
from collections import defaultdict,Counter
from sklearn.datasets import load_files
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from nltk.corpus import stopwords,words
from sklearn.linear_model import LinearRegression ,LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from wordcloud import WordCloud
```
##### Read Data:
The function below reads the data from the respective files. All the required files are stored under a folder called `data` inside the current folder of this notebook.
After defining the function, we use it to load all the data into pandas DataFrames and print the head of each DataFrame to get a sense of the columns.
```
def read_data():
raw_additional_df=pd.read_csv('data/games_detailed_info.csv',index_col=0)
raw_reviews_df=pd.read_csv('data/bgg-15m-reviews.csv')
raw_score_df=pd.read_csv('data/2020-08-19.csv')
return raw_additional_df,raw_reviews_df,raw_score_df
raw_additional_df,raw_reviews_df,raw_score_df=read_data()
raw_reviews_df.head()
raw_additional_df.head()
raw_score_df.head()
def preprocess_data():
additional_df=raw_additional_df.copy()
additional_df.drop(columns=['thumbnail','image','boardgameintegration','boardgamecompilation','Party Game Rank','Abstract Game Rank','Thematic Rank','War Game Rank','Customizable Rank','Children\'s Game Rank','RPG Item Rank','Accessory Rank','Video Game Rank','Amiga Rank','Commodore 64 Rank','Arcade Rank','Atari ST Rank'],axis=1,inplace=True)
reviews_df=raw_reviews_df[['user','rating','comment','ID','name']].copy()
reviews_df.dropna(subset=['comment'],inplace=True)
reviews_df.reset_index(drop=True,inplace=True)
score_df=raw_score_df[['ID','Name','Year','Rank','Average','Bayes average','Users rated']].copy()
return additional_df,reviews_df,score_df
additional_df,reviews_df,score_df=preprocess_data()
score_df.head()
#score_df['Average'].plot(kind='hist')
#score_df['Average'].plot.hist()
#score_df['Average'].hist(bins=10,grid=False,legend=True)
plt.title('Histogram based on Average ratings')
plt.hist(score_df['Average'],rwidth=0.8)
plt.xlabel('Average Rating Value')
plt.ylabel('No of Games')
#plt.grid(b=None)
plt.show()
```
##### Cleanup Data:
The function below removes punctuation and other extra characters from a review. We then apply this function to every comment, split each review into separate words, and store the result in a new column `comment_split`.
```
def cleanup_data(data):
pattern1=re.compile("[!#$%&'()*+,\'\"-./:;<=>?@[\]^_`{|}~]")
pattern2=re.compile("(<br\s*/><br\s*/>)|(\-)|(\/)")
data=re.sub(pattern1, '', data)
#data=re.sub(pattern2, ' ', data).lower()
return data
reviews_df['comment']=reviews_df['comment'].apply(cleanup_data)
reviews_df['comment_split']=reviews_df['comment'].apply(lambda line:line.lower().split())
reviews_df.head()
```
##### Remove Unnecessary Words:
A simple analysis of the words in the comments shows that the data is huge and contains a lot of unnecessary words. Those words include the common stop words defined in NLTK, numeric tokens, non-English words, and a few additional common words. Along with these, there are many words that do not appear in at least 5 reviews.
Removing all of those words improves the performance of the analysis as well as the quality of the results.
```
flatten = itertools.chain.from_iterable
complete_data=list(flatten(reviews_df['comment_split']))
complete_counter=Counter(complete_data)
sorted_words=dict(sorted(complete_counter.items(),key=lambda i:i[1],reverse=True))
stop_words=set(stopwords.words('english'))
additional_words=['game','games','play','player','players','get','would','also','get','got','played','playing']
for word in list(sorted_words):
if sorted_words[word] <5 or word=='br' or word in stop_words or not word.isalpha() or word in additional_words:
del sorted_words[word]
required_words=list(sorted_words.keys())
print('Total words ',len(complete_data))
print('No of unique words',len(complete_counter))
print('No of required words',len(required_words))
reviews_df['comment_split']=reviews_df['comment_split'].apply(lambda line:[word for word in line if word in sorted_words])
reviews_df['rating']=reviews_df['rating'].round()
reviews_df.head()
```
#### Histogram based on the reviews:
The histogram below shows the number of reviews for each rating value.
```
plt.title('Histogram based on ratings across Reviews')
plt.hist(reviews_df['rating'],rwidth=0.8)
plt.xlabel('Ratings Value')
plt.ylabel('No of Reviews')
plt.show()
```
##### High Frequency Words:
The word cloud below shows the words with the highest frequency in the reviews after cleanup.
```
word_cloud = WordCloud(width=400, height=350,colormap='plasma',background_color='white').generate_from_frequencies(sorted_words)
plt.figure(figsize=(10, 8))
plt.imshow(word_cloud, interpolation='bilinear')
plt.axis("off")
plt.title('Common Words', fontsize=20)
plt.show()
reviews_df['comment_joined']=reviews_df['comment_split'].apply(' '.join)
```
##### Save reviews data:
Finally, with the complete dataset cleaned up, we store the processed reviews in a pickle file.
```
reviews_final=reviews_df[['comment_joined','rating']].copy()
with open('reviews_final.pkl', 'wb') as f_reviews:
pickle.dump(reviews_final, f_reviews)
```
##### Split Data
We split the data into training and test datasets, each containing 10% of the full data.
```
random.seed(0)
temp=random.randint(0,100)
train_data_x,test_data_x,train_data_y,test_data_y=train_test_split(reviews_df['comment_joined'],reviews_df['rating'],train_size=0.1,test_size=0.1,random_state=temp)
print('Train Data comments Shape : ',train_data_x.shape)
print('Test Data comments Shape : ',test_data_x.shape)
print('Train Data rating Shape : ',train_data_y.shape)
print('Test Data ratings Shape : ',test_data_y.shape)
```
##### Train Data:
The histogram below gives an overview of how the training reviews are distributed across the 10 possible ratings.
```
plt.title('Histogram based on Train data ratings across Reviews')
plt.hist(train_data_y,rwidth=0.8)
plt.xlabel('Ratings Value')
plt.ylabel('No of Reviews')
plt.show()
```
##### Test Data:
The histogram below gives an overview of how the test reviews are distributed across the 10 possible ratings.
```
plt.title('Histogram based on Test data ratings across Reviews')
plt.hist(test_data_y,rwidth=0.8)
plt.xlabel('Ratings Value')
plt.ylabel('No of Reviews')
plt.show()
```
##### Models:
So far we have done all the required cleanup of the given data. Now we will use the cleaned training data to train the models. In this process I will use 4 different models, listed below:
1. Linear Regression
2. Logistic Regression
3. Naive Bayes
4. Random Forest Classifier
##### Linear Regression:
Linear regression is a linear approach to modelling the relationship between a scalar response and one or more explanatory (independent) variables. The model is fitted using the least-squares approach.
Linear regression can be interpreted as below:
Y = a0 + a1*X1 + a2*X2 + a3*X3 + ...
##### Logistic Regression:
Logistic regression is a model in which the log-odds of the response are a linear function of the variables. Unlike linear regression, whose output is continuous, logistic regression output is discrete in nature (a class label, derived from a predicted probability).
Logistic regression can be interpreted as below:
log(p/(1-p)) = a0 + a1*X1 + a2*X2 + a3*X3 + ...
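Solving this log-odds relationship for the probability gives the familiar sigmoid form (generic logistic-regression notation, not tied to this dataset):

$$\log\frac{p}{1-p} = a_0 + a_1 X_1 + a_2 X_2 + \cdots \;\Longleftrightarrow\; p = \frac{1}{1 + e^{-(a_0 + a_1 X_1 + a_2 X_2 + \cdots)}}$$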
##### Naive Bayes:
Naive Bayes classifier is a probabilistic classifier based on applying Bayes' theorem with naive independence assumption between the features. Naive Bayes classifier is highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression which takes linear time, rather than by expensive iterative approximation as used for many other types of classifiers.
By Bayes' theorem, the conditional probability can be interpreted as below:
p(C|X) = p(C) * p(X|C)/p(X)
##### Random Forest Classifier:
A random forest classifier is an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes predicted by the individual trees. By using random forests we can reduce the overfitting on the training data that is normally seen with an individual decision tree.
A random forest uses only a random subset of the features when splitting nodes of a decision tree, rather than using all the features to create each tree.
```
vector=TfidfVectorizer()
train_vector_x=vector.fit_transform(train_data_x)
test_vector_x=vector.transform(test_data_x)
with open('vocabulary.pkl', 'wb') as f_vectorizer:
pickle.dump(vector.vocabulary_, f_vectorizer)
```
##### Linear Regression
Linear regression fits a linear model with coefficients chosen to minimize the residual sum of squares between the observed targets in the dataset and the targets predicted by the linear approximation.
```
lin_reg=LinearRegression(n_jobs=-1)
lin_reg.fit(train_vector_x,train_data_y)
lin_reg_train_model=lin_reg.predict(train_vector_x)
accuracy_lin_train=accuracy_score(lin_reg_train_model.round(),train_data_y)
print('Linear Regression Train Accuracy',round(accuracy_lin_train*100,2))
lin_reg_test_model=lin_reg.predict(test_vector_x)
accuracy_lin_test=accuracy_score(lin_reg_test_model.round(),test_data_y)
print('Linear Regression Test Accuracy',round(accuracy_lin_test*100,2))
with open('lin_reg.pkl', 'wb') as f_lin_reg:
pickle.dump(lin_reg, f_lin_reg)
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(15,8))
sns.histplot(train_data_y.values,label='Original',kde=True,color='green',ax=ax1)
sns.histplot(lin_reg_train_model.round(),label='Predicted',kde=True,color='red',ax=ax1)
ax1.set_xlim(0,10)
ax1.set_xlabel('Rating values')
ax1.set_title('Linear Regression Original vs Predicted for Train Data')
ax1.legend()
sns.histplot(test_data_y.values,label='Original',kde=True,color='green',ax=ax2)
sns.histplot(lin_reg_test_model.round(),label='Predicted',kde=True,color='red',ax=ax2)
ax2.set_xlim(0,10)
ax2.set_xlabel('Rating values')
ax2.set_title('Linear Regression Original vs Predicted for Test Data')
ax2.legend()
```
##### Logistic Regression
Logistic regression is a linear model for classification rather than regression. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function.
```
log_reg=LogisticRegression(n_jobs=-1,verbose=3)
log_reg.fit(train_vector_x,train_data_y)
log_reg_model=log_reg.predict(train_vector_x)
accuracy_log_train=accuracy_score(log_reg_model.round(),train_data_y)
log_reg_test_model=log_reg.predict(test_vector_x)
accuracy_log_test=accuracy_score(log_reg_test_model.round(),test_data_y)
print('Logarithmic Regression Train Accuracy',round(accuracy_log_train*100,2))
print('Logarithmic Regression Test Accuracy',round(accuracy_log_test*100,2))
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(15,8))
sns.histplot(train_data_y.values,label='Original',kde=True,color='green',ax=ax1)
sns.histplot(log_reg_model.round(),label='Predicted',kde=True,color='red',ax=ax1)
ax1.set_xlim(0,10)
ax1.set_xlabel('Rating values')
ax1.set_title('Logistic Regression Original vs Predicted for Train Data')
ax1.legend()
sns.histplot(test_data_y.values,label='Original',kde=True,color='green',ax=ax2)
sns.histplot(log_reg_test_model.round(),label='Predicted',kde=True,color='red',ax=ax2)
ax2.set_xlim(0,10)
ax2.set_xlabel('Rating values')
ax2.set_title('Logistic Regression Original vs Predicted for Test Data')
ax2.legend()
```
##### Logistic Regression with 1000 Iterations
This is the same as the earlier logistic regression model, except that here we set the `max_iter` parameter to 1000. This gives the solver more iterations to converge, which trains the model better, and the improvement shows up in the accuracy scores as well.
```
log_reg=LogisticRegression(max_iter=1000,n_jobs=-1)
log_reg.fit(train_vector_x,train_data_y)
log_reg_model=log_reg.predict(train_vector_x)
accuracy_log_train=accuracy_score(log_reg_model.round(),train_data_y)
log_reg_test_model=log_reg.predict(test_vector_x)
accuracy_log_test=accuracy_score(log_reg_test_model.round(),test_data_y)
print('Logarithmic Regression Train Accuracy (1000 iterations)',round(accuracy_log_train*100,2))
print('Logarithmic Regression Test Accuracy (1000 iterations)',round(accuracy_log_test*100,2))
with open('log_reg.pkl', 'wb') as f_log_reg:
pickle.dump(log_reg, f_log_reg)
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(15,8))
sns.histplot(train_data_y.values,label='Original',kde=True,color='green',ax=ax1)
sns.histplot(log_reg_model.round(),label='Predicted',kde=True,color='red',ax=ax1)
ax1.set_xlim(0,10)
ax1.set_xlabel('Rating values')
ax1.set_title('Logistic Regression (1000 Iterations) Original vs Predicted for Train Data')
ax1.legend()
sns.histplot(test_data_y.values,label='Original',kde=True,color='green',ax=ax2)
sns.histplot(log_reg_test_model.round(),label='Predicted',kde=True,color='red',ax=ax2)
ax2.set_xlim(0,10)
ax2.set_xlabel('Rating values')
ax2.set_title('Logistic Regression (1000 Iterations) Original vs Predicted for Test Data')
ax2.legend()
```
##### Naive Bayes Model
In this project we use scikit's MultinomialNB to implement Naive Bayes. MultinomialNB implements the naive Bayes algorithm for multinomially distributed data, and is one of the two classic naive Bayes variants used in text classification (where the data are typically represented as word vector counts, although tf-idf vectors are also known to work well in practice). The distribution is parametrized by vectors $\theta_y = (\theta_{y1}, \ldots, \theta_{yn})$ for each class $y$, where $n$ is the number of features (in text classification, the size of the vocabulary) and $\theta_{yi}$ is the probability $P(x_i \mid y)$ of feature $i$ appearing in a sample belonging to class $y$.
```
naive_bayes=MultinomialNB()
naive_bayes.fit(train_vector_x,train_data_y)
naive_bayes_model=naive_bayes.predict(train_vector_x)
accuracy_naive_train=accuracy_score(naive_bayes_model.round(),train_data_y)
print('Naive Bayes Train Accuracy ',round(accuracy_naive_train*100,2))
naive_bayes_test_model=naive_bayes.predict(test_vector_x)
accuracy_naive_test=accuracy_score(naive_bayes_test_model.round(),test_data_y)
print('Naive Bayes Test Accuracy ',round(accuracy_naive_test*100,2))
with open('naive_bayes.pkl', 'wb') as f_naive_bayes:
pickle.dump(naive_bayes, f_naive_bayes)
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(15,8))
sns.histplot(train_data_y.values,label='Original',kde=True,color='green',ax=ax1)
sns.histplot(naive_bayes_model.round(),label='Predicted',kde=True,color='red',ax=ax1)
ax1.set_xlim(0,10)
ax1.set_xlabel('Rating values')
ax1.set_title('Naive Bayes Original vs Predicted for Train Data')
ax1.legend()
sns.histplot(test_data_y.values,label='Original',kde=True,color='green',ax=ax2)
sns.histplot(naive_bayes_test_model.round(),label='Predicted',kde=True,color='red',ax=ax2)
ax2.set_xlim(0,10)
ax2.set_xlabel('Rating values')
ax2.set_title('Naive Bayes Original vs Predicted for Test Data')
ax2.legend()
```
##### Random Forest Classifier:
A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is controlled with the max_samples parameter if bootstrap=True (default), otherwise the whole dataset is used to build each tree.
```
random_classifier=RandomForestClassifier(n_estimators=100,verbose=3,n_jobs=-1,max_depth=5)
random_classifier.fit(train_vector_x,train_data_y)
random_classifier_train_model=random_classifier.predict(train_vector_x)
accuracy_random_classifier_train=accuracy_score(random_classifier_train_model.round(),train_data_y)
random_classifier_test_model=random_classifier.predict(test_vector_x)
accuracy_random_classifier_test=accuracy_score(random_classifier_test_model.round(),test_data_y)
print('Random Classifier Train Accuracy ',round(accuracy_random_classifier_train*100,2))
print('Random Classifier Test Accuracy ',round(accuracy_random_classifier_test*100,2))
with open('random_classifier.pkl', 'wb') as f_random_classifier:
pickle.dump(random_classifier, f_random_classifier)
```
##### Random Forest Classifier with max depth 500:
In the earlier run I set max_depth to 5. This value restricts the depth of each tree; without it, a tree keeps splitting until each leaf contains fewer than min_samples_split samples. The default value of min_samples_split is 2, which means the trees would split fully, but that would take a huge amount of time.
So, in the earlier case a depth of 5 restricts the forest construction and it completes very quickly, but the results are not great, as can be seen in both the training and test accuracy. I therefore tested various depths, and a depth of 500 gave much better results with a decent run time of about 90 minutes.
Alternatively, I tried min_samples_split of 10, which gave very similar results to max_depth of 500 but took more time. So I settled on max_depth of 500.
```
random_classifier=RandomForestClassifier(n_estimators=100,verbose=3,n_jobs=-1,max_depth=500)
random_classifier.fit(train_vector_x,train_data_y)
random_classifier_train_model=random_classifier.predict(train_vector_x)
accuracy_random_classifier_train=accuracy_score(random_classifier_train_model.round(),train_data_y)
random_classifier_test_model=random_classifier.predict(test_vector_x)
accuracy_random_classifier_test=accuracy_score(random_classifier_test_model.round(),test_data_y)
print('Random Classifier Train Accuracy ',round(accuracy_random_classifier_train*100,2))
print('Random Classifier Test Accuracy ',round(accuracy_random_classifier_test*100,2))
with open('random_classifier.pkl', 'wb') as f_random_classifier:
pickle.dump(random_classifier, f_random_classifier)
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(15,8))
sns.histplot(train_data_y.values,label='Original',kde=True,color='green',ax=ax1)
sns.histplot(random_classifier_train_model.round(),label='Predicted',kde=True,color='red',ax=ax1)
ax1.set_xlim(0,10)
ax1.set_xlabel('Rating values')
ax1.set_title('Random Forest Classifier Original vs Predicted for Train Data')
ax1.legend()
sns.histplot(test_data_y.values,label='Original',kde=True,color='green',ax=ax2)
sns.histplot(random_classifier_test_model.round(),label='Predicted',kde=True,color='red',ax=ax2)
ax2.set_xlim(0,10)
ax2.set_xlabel('Rating values')
ax2.set_title('Random Forest Classifier Original vs Predicted for Test Data')
ax2.legend()
```
The result above shows excellent accuracy on the training data, but the model does not perform similarly on the test data. This is primarily because of overfitting. A further increase in depth might improve the results, but given the hardware constraints I limit the maximum depth to 500.
```
sample_review=['The movie is super','The movie is not worst','The movie is excellent','The movie is worst','I like this movie','i like this movie']
sample_review_vector_x=vector.transform(sample_review)
sample_predict_lin=lin_reg.predict(sample_review_vector_x)
sample_predict_log=log_reg.predict(sample_review_vector_x)
sample_predict_naive=naive_bayes.predict(sample_review_vector_x)
sample_predict_random_classifier=random_classifier.predict(sample_review_vector_x)
print('Sample Review results for Linear Regression',sample_predict_lin.round())
print('Sample Review results for Logistic Regression',sample_predict_log.round())
print('Sample Review results for Naive Bayes',sample_predict_naive.round())
print('Sample Review results for Random Classifier ',sample_predict_random_classifier.round())
```
##### Conclusion:
I started with linear regression and then tried logistic regression, which showed much improvement. I then tried logistic regression with 1000 iterations, which gave decent results. Next I tried Naive Bayes; from the graphs we can clearly see that its predictions are skewed towards one particular value (specifically 8), and this applies to both the train data and the test data. Finally I tried the random forest classifier, whose results are interesting: the train data showed much better accuracy than any other model, but the test results are not much better than logistic regression, although they beat the linear and Naive Bayes models. The random forest classifier clearly overfits the training data.
Another interesting observation is that Naive Bayes is the fastest model to build and the random forest classifier took a very long time. Logistic regression is also fast, although running it for 1000 iterations takes more time. Overall, based on the sample results and the model size, logistic regression is the best algorithm for my application.
##### Contribution:
I built a web application that predicts the rating from a review. For this purpose I used Flask on PythonAnywhere. The application takes a review through a form and calculates the rating using the model built above. Additionally, for the data cleanup I used Python's built-in iterator tools to flatten the nested lists of words and quickly identify the low-frequency and redundant words. This reduced the run-time complexity from O(n^2) to O(n), which matters a lot for a dataset of this size.
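As an illustration of how the deployed predictor could work, here is a minimal Flask sketch that loads the pickled vocabulary and logistic regression model saved earlier in this notebook; the route, form field, and inline template are assumptions for illustration, not the exact code of the live app. Note that only the vocabulary was pickled, so the tf-idf weights are recomputed per request; pickling the whole fitted vectorizer would preserve the training-time idf weights exactly.
```
import pickle

from flask import Flask, request, render_template_string
from sklearn.feature_extraction.text import TfidfVectorizer

app = Flask(__name__)

# Load the artifacts produced above: the tf-idf vocabulary and the trained model.
with open('vocabulary.pkl', 'rb') as f_vocab:
    vocabulary = pickle.load(f_vocab)
with open('log_reg.pkl', 'rb') as f_model:
    model = pickle.load(f_model)

PAGE = """<form method="post"><textarea name="review"></textarea>
<input type="submit" value="Predict rating"></form>
{% if rating %}<p>Predicted rating: {{ rating }}</p>{% endif %}"""

@app.route('/', methods=['GET', 'POST'])
def predict():
    rating = None
    if request.method == 'POST':
        review = request.form.get('review', '')
        # Vectorize the single review with the saved vocabulary, then predict.
        features = TfidfVectorizer(vocabulary=vocabulary).fit_transform([review])
        rating = int(model.predict(features)[0])
    return render_template_string(PAGE, rating=rating)
```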
##### Challenges:
Due to the sheer size of the data it took a good amount of time to clean it up. I reused some of the functions I created in my Naive Bayes classifier assignment for the cleanup. Also, when I first ran RandomForestClassifier it took several hours with no result. I then went through some documentation and understood the importance of the number of jobs, the maximum depth, and the verbose option, which helps in tracking progress. I started with a maximum depth of 5 and increased it slowly based on the results, but due to the time taken and the size of the model I stopped at a maximum depth of 500. At a maximum depth of 500 with 4 parallel jobs, training takes about 90 minutes. More than the time, the resulting model is 3.54 GB in size. So even if we increased the maximum depth further to improve the results, the model would grow even larger and could not be used in our Flask app, given the size limitations of our hosting site PythonAnywhere.
I think that, given enough server capacity in both storage and processing power, RandomForestClassifier could give better results.
My online rating predictor can be found at http://sravanisuravajhula.pythonanywhere.com/
##### References:
* Linear Regression : https://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares
* Logistic Regression : https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
* Naive Bayes : https://scikit-learn.org/stable/modules/naive_bayes.html#multinomial-naive-bayes
* Random Forest Classifier: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
* Board Game Geek Review : https://en.wikipedia.org/wiki/BoardGameGeek
* Word Cloud : https://www.datacamp.com/community/tutorials/wordcloud-python
* Flask Example: https://realpython.com/flask-by-example-part-1-project-setup/
* Flask Forms: https://python-adv-web-apps.readthedocs.io/en/latest/flask_forms.html
###### Additionally, I referred to the sites below for general queries:
* Stack Overflow : https://stackoverflow.com/
* Wikipedia : https://en.wikipedia.org/
* Sci-kit : https://scikit-learn.org/stable/user_guide.html
* SeaBorn : https://seaborn.pydata.org/api.html
<a href="https://colab.research.google.com/github/wayamhui/ISYS5002_portfolio/blob/main/Simple_Interest_STUDENTS_20486978.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Simple Interest Calculator
The process of problem solving we will use can be broken down into 5 key steps:
1. State the problem clearly
2. Describe the input and output information
3. Work the problem by hand
4. Develop an algorithm (and convert to python)
5. Test solution with a variety of data
### 1. State Problem Clearly
Simple interest in a bank is given by the formula:
Simple Interest = (Amount Borrowed × Interest Rate × Years) / 100
### 2. Describe Inputs/Outputs
| input | processing | output|
|-------|------------|-------|
| Amount Borrowed | | Simple Interest |
| Interest Rate | | |
| Loan Period | | |
### 3. Work the problem by hand
Borrowed $10,000
Interest Rate of 2%
Length of loan is 5 years
Simple interest
= (10,000 * 2 * 5) / 100
= 1,000
### 4. Develop an algorithm
Here is the pseudocode:
Get the amount borrowed
Get the interest rate
Get the loan period
let simple interest = (amount borrowed * interest rate * loan period) / 100
print The total interest on the loan is: simple interest
```
# Get the amount borrowed
principal = input("Please input the amount borrowed (unit: dollar): ")
# Get the interest rate
rate = input("Please input the annual interest rate (unit: %): ")
# Get the loan period
period = input("Please input the loan period (unit: years): ")
# let simple interest = (amount borrowed * interest rate * loan period) / 100
sim_interest = float(principal) * float(rate) / 100 * float(period)
# print The total interest on the loan is: simple interest
print("The total interest on the loan is: ",sim_interest)
```
### 5. Test with a variety of data.
Rather than typing in different values each time, perhaps it is easier to be prompted for the information.
```
# Test with input numbers with decimal point(s)
# Using "for" loop to test more scenarios in a single run
print("This program will calculate simple interest for 3 different loans per run.")
for times in [1,2,3]:
print ("Interest calculation #", times)
principal = input("Please input the amount borrowed (unit: dollar): ")
rate = input("Please input the annual interest rate (unit: %): ")
period = input("Please input the loan period (unit: years): ")
sim_interest = float(principal) * float(rate) / 100 * float(period)
print("The total interest on the loan is: ",sim_interest, "dollars")
print("")
print("Thanks")
```
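The inputs above are read as strings and converted with `float()`, so a non-numeric entry such as `ten` would crash the program. Below is a minimal sketch of a more defensive version; the helper name `read_positive_number` is an assumption for illustration, not part of the original notebook.
```
def read_positive_number(prompt):
    """Keep prompting until the user enters a valid non-negative number."""
    while True:
        try:
            value = float(input(prompt))
            if value >= 0:
                return value
            print("Please enter a non-negative number.")
        except ValueError:
            print("That was not a number, please try again.")

principal = read_positive_number("Please input the amount borrowed (unit: dollar): ")
rate = read_positive_number("Please input the annual interest rate (unit: %): ")
period = read_positive_number("Please input the loan period (unit: years): ")
print("The total interest on the loan is:", principal * rate / 100 * period, "dollars")
```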
```
%matplotlib inline
from pyvista import set_plot_theme
set_plot_theme('document')
```
Extending PyVista {#extending_pyvista_example}
=================
A `pyvista.DataSet`{.interpreted-text role="class"}, such as
`pyvista.PolyData`{.interpreted-text role="class"}, can be extended by
users. For example, if the user wants to keep track of the location of
the maximum point in the (1, 0, 1) direction on the mesh.
There are two methods by which users can handle subclassing. One is
directly managing the types of the objects. This may require checking
types during filter operations.
The second is automatic managing of types. Users can control whether
user defined classes are nearly always used for particular types of
DataSets.
::: {.note}
::: {.admonition-title}
Note
:::
This is for advanced usage only. Automatic managing of types will not
work in all situations, in particular when a builtin dataset is directly
instantiated. See examples below.
:::
```
import numpy as np
import vtk
import pyvista
pyvista.set_plot_theme("document")
```
A user defined subclass of `pyvista.PolyData`{.interpreted-text
role="class"}, `FooData` is defined. It includes a property to keep
track of the point on the mesh that is furthest along in the (1, 0, 1)
direction.
```
class FooData(pyvista.PolyData):
@property
def max_point(self):
"""Returns index of point that is furthest along (1, 0, 1) direction."""
return np.argmax(np.dot(self.points, (1.0, 0.0, 1.0)))
```
Directly Managing Types
=======================
Now a `foo_sphere` object is created of type `FooData`. The index of the
point and location of the point of interest can be obtained directly.
The sphere has a radius of 0.5, so the maximum extent in the direction
(1, 0, 1) is $0.5\sqrt{0.5}\approx0.354$
```
foo_sphere = FooData(pyvista.Sphere(theta_resolution=100, phi_resolution=100))
print("Original foo sphere:")
print(f"Type: {type(foo_sphere)}")
print(f"Maximum point index: {foo_sphere.max_point}")
print(f"Location of maximum point: {foo_sphere.points[foo_sphere.max_point, :]}")
```
Using an inplace operation like
`pyvista.DataSet.rotate_y`{.interpreted-text role="func"} does not
affect the type of the object.
```
foo_sphere.rotate_y(90, inplace=True)
print("\nRotated foo sphere:")
print(f"Type: {type(foo_sphere)}")
print(f"Maximum point index: {foo_sphere.max_point}")
print(f"Location of maximum point: {foo_sphere.points[foo_sphere.max_point, :]}")
```
However, filter operations can return different `DataSet` types
including ones that differ from the original type. In this case, the
`decimate <pyvista.PolyDataFilters.decimate>`{.interpreted-text
role="func"} method returns a `pyvista.PolyData`{.interpreted-text
role="class"} object.
```
print("\nDecimated foo sphere:")
decimated_foo_sphere = foo_sphere.decimate(0.5)
print(f"Type: {type(decimated_foo_sphere)}")
```
It is now required to explicitly wrap the object into `FooData`.
```
decimated_foo_sphere = FooData(foo_sphere.decimate(0.5))
print(f"Type: {type(decimated_foo_sphere)}")
print(f"Maximum point index: {decimated_foo_sphere.max_point}")
print(f"Location of maximum point: {foo_sphere.points[foo_sphere.max_point, :]}")
```
Automatically Managing Types
============================
The default `pyvista.DataSet`{.interpreted-text role="class"} type can
be set using `pyvista._wrappers`. In general, it is best to use this
method when it is expected to primarily use the user defined class.
In this example, all objects that would have been created as
`pyvista.PolyData`{.interpreted-text role="class"} would now be created
as a `FooData` object. Note, that the key is the underlying vtk object.
```
pyvista._wrappers['vtkPolyData'] = FooData
```
It is no longer necessary to specifically wrap
`pyvista.PolyData`{.interpreted-text role="class"} objects to obtain a
`FooData` object.
```
foo_sphere = pyvista.Sphere(theta_resolution=100, phi_resolution=100)
print("Original foo sphere:")
print(f"Type: {type(foo_sphere)}")
print(f"Maximum point index: {foo_sphere.max_point}")
print(f"Location of maximum point: {foo_sphere.points[foo_sphere.max_point, :]}")
```
Using an inplace operation like
`rotate_y <pyvista.DataSet.rotate_y>`{.interpreted-text role="func"}
does not affect the type of the object.
```
foo_sphere.rotate_y(90, inplace=True)
print("\nRotated foo sphere:")
print(f"Type: {type(foo_sphere)}")
print(f"Maximum point index: {foo_sphere.max_point}")
print(f"Location of maximum point: {foo_sphere.points[foo_sphere.max_point, :]}")
```
Filter operations that return `pyvista.PolyData`{.interpreted-text
role="class"} now return `FooData`
```
print("\nDecimated foo sphere:")
decimated_foo_sphere = foo_sphere.decimate(0.5)
print(f"Type: {type(decimated_foo_sphere)}")
print(f"Maximum point index: {decimated_foo_sphere.max_point}")
print(f"Location of maximum point: {foo_sphere.points[foo_sphere.max_point, :]}")
```
Users can still create a native `pyvista.PolyData`{.interpreted-text
role="class"} object, but using this method may incur unintended
consequences. In this case, it is recommended to use the directly
managing types method.
```
poly_object = pyvista.PolyData(vtk.vtkPolyData())
print(f"Type: {type(poly_object)}")
# catch error
try:
poly_object.rotate_y(90, inplace=True)
except TypeError:
print("This operation fails")
```
Usage of `pyvista._wrappers` may require resetting the default value to
avoid leaking the setting into cases where it is unused.
```
pyvista._wrappers['vtkPolyData'] = pyvista.PolyData
```
For instances where a localized usage is preferred, a tear-down method
is recommended. One example is a `try...finally` block.
```
try:
pyvista._wrappers['vtkPolyData'] = FooData
# some operation that sometimes raises an error
finally:
pyvista._wrappers['vtkPolyData'] = pyvista.PolyData
```
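If this localized pattern is needed in several places, the tear-down logic can be packaged once as a context manager. This is a minimal sketch using `contextlib`; the helper `wrap_polydata_as` is an illustration built on the `pyvista._wrappers` mechanism shown above, not part of the PyVista API.
```
import contextlib

import pyvista


@contextlib.contextmanager
def wrap_polydata_as(cls):
    """Temporarily register ``cls`` as the wrapper for vtkPolyData."""
    previous = pyvista._wrappers['vtkPolyData']
    pyvista._wrappers['vtkPolyData'] = cls
    try:
        yield
    finally:
        pyvista._wrappers['vtkPolyData'] = previous


# Objects created inside the block are wrapped as FooData; outside it,
# the default pyvista.PolyData wrapping is restored automatically.
with wrap_polydata_as(FooData):
    sphere = pyvista.Sphere()
    print(type(sphere))
```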
## Day 34 Lecture 2 Assignment
In this assignment, we will learn about gradient boosting. We will use a dataset describing TripAdvisor reviews for Las Vegas hotels loaded below and analyze the model generated for this dataset.
```
import numpy as np
import pandas as pd
import timeit

from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
from category_encoders import LeaveOneOutEncoder

import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
vegas = pd.read_csv('https://tf-assets-prod.s3.amazonaws.com/tf-curric/data-science/LasVegasTripAdvisorReviews-Dataset.csv', sep=';')
vegas = vegas.replace({'YES': 1, 'NO': 0})
vegas = vegas[vegas['Member years'] >= 0]
vegas.head()
```
Check for missing data and remove all rows with missing data
```
# answer below:
vegas.isna().mean()
vegas['Hotel name'].value_counts().unique()
```
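The cell above only reports the fraction of missing values; it does not itself drop anything. A minimal sketch of the row-removal step described in the prompt, if one wanted to apply it explicitly before the later filtering:
```
# Drop any rows that still contain missing values and reset the index.
vegas = vegas.dropna().reset_index(drop=True)
vegas.isnull().any()
```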
How many reviews do we have for each hotel in the dataset?
```
vegas.info()
```
We would like to predict the score variable. Examine the dataset and decide which columns should be turned into dummy variables and transform the data. Also, where we have two columns with redundant information, remove one of the two columns. Remove the hotel stars column.
```
# answer below:
vegas.corr()['Score'].sort_values(ascending=False).abs()
drop_cols = ['User country', 'Period of stay', 'Hotel stars']
bin_cols = ['Pool', 'Gym', 'Tennis court', 'Spa', 'Casino', 'Free internet']
cat_cols = ['User continent', 'Review month', 'Review weekday', 'Traveler type', 'Hotel name']
drop_cat = ['North America', 'January', 'Sunday', 'Solo', 'The Cromwell']
num_cols = ['Nr. reviews', 'Nr. hotel reviews', 'Helpful votes', 'Nr. rooms', 'Member years']
```
Split the data into train and test (20% in test)
```
# answer below:
X = vegas.drop(columns=drop_cols)
X = X.drop(columns=['Score'])
y = vegas['Score']
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42, stratify=y
)
start_time = timeit.default_timer()
preprocessing = ColumnTransformer([
# ('encode_cats', OneHotEncoder(drop=drop_cat), cat_cols),
('encode_cats', LeaveOneOutEncoder(), cat_cols),
('scale_nums', StandardScaler(), num_cols),
], remainder='passthrough')
pipeline = Pipeline([
('preprocessing', preprocessing),
# ('gbt', GradientBoostingRegressor()),
('gbt', GradientBoostingClassifier()),
])
n_trees = 1000
learning_rate = 2 / n_trees
grid = {
"gbt__subsample": [0.75, 1.0],
"gbt__max_features": [0.75, 1.0],
"gbt__max_depth": [2, 3],
"gbt__n_estimators": [n_trees],
"gbt__learning_rate": [learning_rate, 0.001, 0.2]
}
pipeline_cv = GridSearchCV(pipeline, grid, verbose=1, cv=2)
pipeline_cv.fit(X_train, y_train)
print(pipeline_cv.best_params_)
elapsed = timeit.default_timer() - start_time
print(elapsed)
train_score = pipeline_cv.score(X_train, y_train)
test_score = pipeline_cv.score(X_test, y_test)
print(f"train_score {train_score}")
print(f"test_score {test_score}")
```
Create a gradient boosted regression model for predicting the score. To produce the accuracy score for the test data, first round the prediction and then compare it to the observed score.
Try again with a learning rate of 0.8 and 0.3 and compare the results.
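The cells below tune a `GradientBoostingClassifier`. As a sketch of the regression-plus-rounding variant described above, the snippet here reuses the `preprocessing` transformer and the train/test split already defined; the hyperparameter values and the round-and-clip step are illustrative assumptions, not tuned choices.
```
from sklearn.metrics import accuracy_score

reg_pipeline = Pipeline([
    ('preprocessing', preprocessing),
    ('gbt', GradientBoostingRegressor(n_estimators=1000, learning_rate=0.002, max_depth=2)),
])
reg_pipeline.fit(X_train, y_train)

# Round the continuous predictions to the nearest score and clip to the observed range.
pred = np.clip(np.rint(reg_pipeline.predict(X_test)), y_train.min(), y_train.max()).astype(int)
print("Rounded-prediction accuracy:", accuracy_score(y_test, pred))
```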
```
preprocessing = ColumnTransformer([
# ('encode_cats', OneHotEncoder(drop=drop_cat), cat_cols),
('encode_cats', LeaveOneOutEncoder(), cat_cols),
('scale_nums', StandardScaler(), num_cols),
], remainder='passthrough')
pipeline = Pipeline([
('preprocessing', preprocessing),
# ('gbt', GradientBoostingRegressor()),
('gbt', GradientBoostingClassifier()),
])
n_trees = 1000
learning_rate = 2 / n_trees
grid = {
"gbt__subsample": [0.75, 1.0],
"gbt__max_features": [0.75, 1.0],
"gbt__max_depth": [2, 3],
"gbt__n_estimators": [n_trees],
"gbt__learning_rate": [learning_rate, 0.8, 0.3]
}
pipeline_cv = GridSearchCV(pipeline, grid, verbose=1, cv=2)
pipeline_cv.fit(X_train, y_train)
print(pipeline_cv.best_params_)
elapsed = timeit.default_timer() - start_time
print(elapsed)
train_score = pipeline_cv.score(X_train, y_train)
test_score = pipeline_cv.score(X_test, y_test)
print(f"train_score {train_score}")
print(f"test_score {test_score}")
n_trees = 1000
learning_rate = 2 / n_trees
grid = {
"gbt__subsample": [0.75, 1.0],
"gbt__max_features": [0.75, 1.0],
"gbt__max_depth": [2, 3],
"gbt__n_estimators": [n_trees],
"gbt__learning_rate": [learning_rate, 0.3]
}
pipeline_cv = GridSearchCV(pipeline, grid, verbose=1, cv=2)
pipeline_cv.fit(X_train, y_train)
print(pipeline_cv.best_params_)
elapsed = timeit.default_timer() - start_time
print(elapsed)
train_score = pipeline_cv.score(X_train, y_train)
test_score = pipeline_cv.score(X_test, y_test)
print(f"train_score {train_score}")
print(f"test_score {test_score}")
```
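The grids above use a classifier. As a hedged sketch of the regression variant described in the prompt (round the regressor's predictions, then compare them to the observed scores), reusing the `preprocessing` transformer and the train/test split defined earlier - the hyperparameter values here are arbitrary choices, not tuned results:
```
# Gradient boosted *regression*, scored by rounding predictions to the nearest integer score.
reg_pipeline = Pipeline([
    ('preprocessing', preprocessing),
    ('gbt', GradientBoostingRegressor(n_estimators=1000, learning_rate=0.002)),
])
reg_pipeline.fit(X_train, y_train)
rounded_preds = np.round(reg_pipeline.predict(X_test))
print("rounded-prediction accuracy:", np.mean(rounded_preds == y_test))
```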
# Magic Methods
Below you'll find the same code from the previous exercise except two more methods have been added: an `__add__` method and a `__repr__` method. Your task is to fill out the code and get all of the unit tests to pass. You'll find the code cell with the unit tests at the bottom of this Jupyter notebook.
As in previous exercises, there is an answer key that you can look at if you get stuck. Click on the "Jupyter" icon at the top of this notebook, and open the folder 4.OOP_code_magic_methods. You'll find the answer.py file inside the folder.
```
import math
import matplotlib.pyplot as plt
class Gaussian():
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu = 0, sigma = 1):
self.mean = mu
self.stdev = sigma
self.data = []
def calculate_mean(self):
"""Method to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
#TODO: Calculate the mean of the data set. Remember that the data set is stored in self.data
average = 1.0 * sum(self.data)/len(self.data)
self.mean = average
return self.mean
def calculate_stdev(self, sample=True):
"""Method to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
# TODO:
# Calculate the standard deviation of the data set
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.mean
sigma = 0
for d in self.data:
sigma += (d - mean)**2
stdev = math.sqrt(sigma/n)
self.stdev = stdev
return self.stdev
def read_data_file(self, file_name, sample=True):
"""Method to read in data from a txt file. The txt file should have
one number (float) per line. The numbers are stored in the data attribute.
After reading in the file, the mean and standard deviation are calculated
Args:
file_name (string): name of a file to read from
Returns:
None
"""
# This code opens a data file and appends the data to a list called data_list
with open(file_name) as file:
data_list = []
line = file.readline()
while line:
data_list.append(int(line))
line = file.readline()
file.close()
# TODO:
# Update the self.data attribute with the data_list
# Update self.mean with the mean of the data_list.
# You can use the calculate_mean() method with self.calculate_mean()
# Update self.stdev with the standard deviation of the data_list. Use the
        # calculate_stdev() method.
self.data = data_list
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev(sample)
def plot_histogram(self):
"""Method to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
# TODO: Plot a histogram of the data_list using the matplotlib package.
# Be sure to label the x and y axes and also give the chart a title
plt.hist(self.data)
plt.title('Histogram Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
# TODO: Calculate the probability density function of the Gaussian distribution
# at the value x. You'll need to use self.stdev and self.mean to do the calculation
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Method to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
#TODO: Nothing to do for this method. Try it out and see how it works.
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
        axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Magic method to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
# TODO: Calculate the results of summing two Gaussian distributions
result = Gaussian()
# TODO: calculate the mean and standard deviation of the sum of two Gaussians
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev**2 + other.stdev**2)
return result
def __repr__(self):
"""Magic method to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
# TODO: Return a string in the following format -
return "mean {}, standard deviation {}".format(self.mean, self.stdev)
# Unit tests to check your solution
import unittest
class TestGaussianClass(unittest.TestCase):
def setUp(self):
self.gaussian = Gaussian(25, 2)
def test_initialization(self):
self.assertEqual(self.gaussian.mean, 25, 'incorrect mean')
self.assertEqual(self.gaussian.stdev, 2, 'incorrect standard deviation')
def test_pdf(self):
self.assertEqual(round(self.gaussian.pdf(25), 5), 0.19947,\
'pdf function does not give expected result')
def test_meancalculation(self):
self.gaussian.read_data_file('numbers.txt', True)
self.assertEqual(self.gaussian.calculate_mean(),\
sum(self.gaussian.data) / float(len(self.gaussian.data)), 'calculated mean not as expected')
def test_stdevcalculation(self):
self.gaussian.read_data_file('numbers.txt', True)
self.assertEqual(round(self.gaussian.stdev, 2), 92.87, 'sample standard deviation incorrect')
self.gaussian.read_data_file('numbers.txt', False)
self.assertEqual(round(self.gaussian.stdev, 2), 88.55, 'population standard deviation incorrect')
def test_add(self):
gaussian_one = Gaussian(25, 3)
gaussian_two = Gaussian(30, 4)
gaussian_sum = gaussian_one + gaussian_two
self.assertEqual(gaussian_sum.mean, 55)
self.assertEqual(gaussian_sum.stdev, 5)
def test_repr(self):
gaussian_one = Gaussian(25, 3)
self.assertEqual(str(gaussian_one), "mean 25, standard deviation 3")
tests = TestGaussianClass()
tests_loaded = unittest.TestLoader().loadTestsFromModule(tests)
unittest.TextTestRunner().run(tests_loaded)
```
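As a quick, hedged usage sketch (not part of the exercise itself), the two new magic methods can be exercised directly once the cell above has been run:
```
# __add__ combines the two distributions; print falls back to __repr__ for the display.
g1 = Gaussian(25, 3)
g2 = Gaussian(30, 4)
print(g1 + g2)   # mean 55, standard deviation 5.0
```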
# Week 3 - Notes
## Classification
Examples:
* Email: Spam or not spam?
* Tumor: malignant or benign?
* Online transactions: Fraudulent or not?
Output y = {0, 1}, 0 = negative class (Malignant, etc), 1 = positive class (Benign, etc)
These are **Binary classification problems.** Later we will address **Multiclass classification problems**.
Applying linear regression to a classification problem is usually not a good idea. Instead, we use **logistic regression**, for which 0 <= h(x) <= 1.
## Logistic regression model
h<sub>θ</sub>(x) = g(θ<sup>T</sup>x), where g(z) = 1 / (1 + e<sup>-z</sup>)

g(z) is called the **sigmoid function**, or the **logistic function**. The sigmoid function is chosen because (1) as z approaches negative infinity, g(z) approaches 0, and (2) as z approaches infinity, g(z) approaches 1.
#### h<sub>θ</sub>(x) = 1 / (1 + e<sup>-θ<sup>T</sup>x</sup>) = _p(y = 1 | x ; θ)_ = _1 - p(y = 0 | x ; θ)_
The output h(x) = the probability that y = 1 for input x (input is in the positive class)
## Decision boundary
Suppose, when h(x) >= 0.5, we predict y = 1, and when h(x) < 0.5 we predict y = 0. Where is the "boundary" for this decision? I.e. when does h(x) = 0.5?
When you look at the graph of the sigmoid function, g(z) is >= 0.5 when z >= 0. Therefore, h(x) >= 0.5 (and therefore y = 1) when θ<sup>T</sup>x >= 0, and h(x) < 0.5 (and therefore y = 0) when θ<sup>T</sup>x < 0.
Expand θ<sup>T</sup>x to get θ<sub>0</sub> + θ<sub>1</sub>x<sub>1</sub> + ... + θ<sub>n</sub>x<sub>n</sub> and set it equal to 0, and you get the formula for the line that separates the positive and negative classes on a graph, or the **decision boundary**.
The training set is not used to find the decision boundary directly. The training set is used to fit the parameters; those parameters can then be used to find the decision boundary, without further reference to the training data.

Some decision boundaries are non-linear. To model a non-linear decision boundary, you can use a similar method to multivariate regression where you add higher order components to your hypothesis to fit a more complicated boundary.
**Example:** h<sub>θ</sub> = g(θ<sub>0</sub> + θ<sub>1</sub>x<sub>1</sub> + θ<sub>2</sub>x<sub>2</sub><sup>2</sup> + θ<sub>3</sub>x<sub>3</sub><sup>2</sup>).
**Summary:**
* Predict y = 1 when θ<sup>T</sup>x >= 0
* Predict y = 0 when θ<sup>T</sup>x < 0
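As a small illustrative sketch (my own addition, not from the course notes), the hypothesis and decision rule above translate directly into NumPy; the parameter values are made up for the example:
```
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x):
    # hypothesis: estimated probability that y = 1 for input x
    return sigmoid(theta @ x)

theta = np.array([-3.0, 1.0, 1.0])   # example parameters
x = np.array([1.0, 2.0, 2.0])        # x0 = 1 (intercept term), x1 = 2, x2 = 2
print(h(theta, x))                   # ~0.73, and theta^T x = 1 >= 0, so predict y = 1
```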
# Preparing the dataset
This notebook describes how to create a timeseries dataset for use with the CNTK minibatch source.
The dataset for this sample is a [free open-source dataset](https://www.cntk.ai/jup/dat/solar.csv) containing measurements of a set of solar panels during the day. The data is stored as a CSV file on disk, so we can use pandas to process it.
The output of this notebook is a CTF file containing sequences of varying length used to train a recurrent neural network to predict total solar power output for a set of solar panels. We'll produce two datasets: A training set and a validation set.
## Loading the data
The dataset is stored as a table rather than as a set of sequences. First we'll need to load the data and normalize it so we have proper input to generate sequences from. The dataset has a timestamp we can use as the index. This makes it easier to group the data per day so we can generate sequences for a specific day.
```
import pandas as pd
import numpy as np
df_solar = pd.read_csv('solar.csv', index_col='time', parse_dates=['time'])
df_solar['date'] = df_solar.index.date
print(df_solar['solar.total'].max())
# Normalize the data so all values are between 0 and 1.
# This is required, because we are using sigmoid and tanh activations in our model.
# These kind of activations don't work for values that are not within the 0 to 1 range.
df_solar['solar.current'] /= df_solar['solar.total'].max()
df_solar['solar.total'] /= df_solar['solar.total'].max()
```
The result of the code above is that we now have a dataset that has an index containing the timestamps for the measurements. The dataset contains normalized values for the current output and the total output for a day. We can now start to group up measurements per day and calculate the total power generated for each day.
```
df_grouped = df_solar.groupby(df_solar.index.date).max()
df_grouped.columns = ['solar.current.max', 'solar.total.max', 'date']
```
The grouped dataset contains the total power generated for a particular day `solar.total.max`. It also contains the maximum power generated in 30 minutes for that day which is stored in `solar.current.max`. We can now merge both datasets to get a dataset that contains the original sequences, but with the totals for each day added to each entry of the sequence.
```
df_merged = pd.merge(df_solar, df_grouped, right_index=True, on='date')
df_merged = df_merged[['solar.current', 'solar.total', 'solar.current.max','solar.total.max']]
df_per_day = df_merged.groupby(df_merged.index.date)
```
## Preprocessing the data
The data is stored as a table, but we need sequences for the CTF file, so we'll have to create them from the original dataset. Each day is its own sequence that we can use to predict the total power generated for that day.
There are a few things that we have to keep in mind to ensure that our model does sensible things:
* Each day that has less than 8 measurements is considered faulty and discarded.
* Each day that has more than 14 measurements is truncated to 14 measurements.
We'll create two lists of datapoints, one with the targets for each day and another one that contains the sequence of datapoints for that day.
```
targets = []
sequences = []
for _, group in df_per_day:
    # Less than 8 measurements on a day is considered invalid.
if len(group['solar.total'].values) < 8:
continue
day_total = group['solar.total.max'].values[0]
sequence = group[['solar.total']].values[0:14, :]
for j in range(2, len(sequence)):
derived_seq = sequence[:j]
sequences.append(derived_seq)
targets.append(day_total)
```
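As a quick, hedged sanity check (not in the original notebook), you can confirm how many derived sequences were produced and that none of them is longer than a full day:
```
print(len(sequences), len(targets))
print(max(len(seq) for seq in sequences))   # the slices stop one short of 14, so at most 13
```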
## Storing the data
Once we have the data preprocessed into sequences and targets, let's create the CTF file. The CTF file format allows a sequence to be stored over multiple lines, so to keep things simple each sample from the sequence goes on a separate line.
This looks like this:
```
0 |target 0.5392670157068062 |features 8.848167838850571e-05
0 |features 0.000594764392413394
1 |target 0.5392670157068062 |features 8.848167838850571e-05
1 |features 0.000594764392413394
1 |features 0.0035340314136125656
2 |target 0.5392670157068062 |features 8.848167838850571e-05
2 |features 0.000594764392413394
2 |features 0.0035340314136125656
2 |features 0.013115183246073298
```
The first line of a new sequence includes the target for that sequence.
To properly train the model we need to have two datasets, a training set and a validation set.
We're splitting the whole dataset in three chunks:
1. A training set containing 70% of all the data.
2. A validation set containing 20% of all the data.
3. A test set containing 10% of all the data.
We'll store the first two sets in a CTF file format for use with a minibatch source later on. We're going to store the test set as a pickle file with numpy arrays. This makes it easier to load the test samples in a ready-to-go format for making predictions with our model later on.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(sequences,targets, test_size=0.1)
train_x, val_x, train_y, val_y = train_test_split(X_train, y_train, test_size=0.2)
def store_dataset(filename, x, y):
with open(filename, 'w') as output_file:
for i in range(0, len(y)):
sequence = x[i]
target = y[i]
for j,element in enumerate(sequence):
output_file.write('{} '.format(i))
if j == 0:
output_file.write('|target {} '.format(target))
output_file.write('|features {}\n'.format(element[0]))
store_dataset('solar_train.ctf', train_x, train_y)
store_dataset('solar_val.ctf', val_x, val_y)
import pickle
test_items = []
for item in X_test:
test_items.append(np.array(item))
with open('test_samples.pkl', 'wb') as test_file:
pickle.dump(test_items, test_file)
```
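To confirm the test set is in a ready-to-go format, a minimal, hedged sketch of loading the pickle back could look like this:
```
import pickle

with open('test_samples.pkl', 'rb') as test_file:
    loaded_samples = pickle.load(test_file)

print(len(loaded_samples), loaded_samples[0].shape)   # number of test sequences and shape of the first one
```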
# Biofilm 1D solver class tutorial
**Maintainer: Brendan Harding**\
**Initial development: May 2020**\
**Last updated: August 2020**
This notebook acts as a brief tutorial on using the Biofilm 1D solver class contained in ```BiofilmOneDLubricationClass.py```.
The class implements solvers for the biofilm model described in:
- *A Thin-Film Lubrication Model for Biofilm Expansion Under Strong Adhesion*,\
A. Tam, B. Harding, J.E.F. Green, S. Balasuriya, and B.J. Binder,\
To be submitted soon, 2020.
which builds upon the model developed by Alex Tam in his PhD thesis:
- *Mathematical Modelling of Pattern Formation in Yeast Biofilms*,\
Alex Tam,\
The University of Adelaide, 2019.
First, the following cell will load a few standard Python libraries, set a couple of plotting parameters, and import the solver class. Note: if you don't have latex on your system you should change the ```usetex=True``` option to ```usetex=False``` (or just comment out this line with a # at the front).
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc('text',usetex=True)
plt.rc('font',size=14)
from BiofilmOneDLubricationClass import BiofilmOneDLubricationModel
from BiofilmOneDPlottingHelper import Plot1DFields
```
## Accessing documentation
The class itself contains some quite a bit of documentation (although is far from complete).
You can print the entire documentation using ```help(BiofilmOneDLubricationModel)```.
You'll see some documentation for the entire class, then a list of available methods and their arguments, along with a brief description for each.
The documentation for a specific class method can also be printed on its own using ```help(BiofilmOneDLubricationModel.solve)``` for example.
(The class also contains a large number of *private* methods, but these are not shown in the help as it is not expected that the typical user should call them directly. More advanced users can look directly at the class code to learn about these.)
```
help(BiofilmOneDLubricationModel)
```
## Getting started
Okay, now let's get started shall we.
The following cell initialises an instance of the class using a default setup (no arguments).
We then fetch and plot the initial conditions so you can see how to do this from the interface.
Initial conditions can be changed using the corresponding ```set_...``` method, e.g. ```set_g_s```.
For each of these you can either pass a function, which the class will then sample on an appropriate grid/array, or you can pass an array directly (although it must be the same length as the ```r``` variable within the class).
```
# Initialise the class using all internal defaults
BLM_1D = BiofilmOneDLubricationModel()
# Fetch the initial conditions
r = BLM_1D.get_r()
h = BLM_1D.get_h()
phi_n = BLM_1D.get_phi_n()
g_s = BLM_1D.get_g_s()
g_b = BLM_1D.get_g_b()
# Plot the initial conditions, include a legend
Plot1DFields(r,h,phi_n,g_s,g_b)
plt.show()
```
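For example, a hedged sketch of overriding one of the initial conditions with an array of the same length as ```r``` (the uniform profile here is purely illustrative, not a recommended setup) might look like:
```
# Replace g_s with a uniform profile; the array must match the length of r.
BLM_1D.set_g_s(np.ones_like(r))
g_s = BLM_1D.get_g_s()
```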
## Getting and setting parameters
You can get and set the parameters for the class using the ```get_parameters``` and ```set_parameters``` methods.
If ```get_parameters()``` is called with no arguments it returns all the parameters as a dictionary. Alternatively specific parameters can be fetched by passing their name as a string, e.g. ```get_parameters('Pe')```.
To use ```set_parameters``` you must pass a dictionary of the parameters you wish to set. E.g. to set $\mathrm{Pe}=10$ and $\Upsilon=5$ you would call ```set_parameters({'Pe':10,'Upsilon':5})```. You need only include those parameters you wish to change. Alternatively, the dictionary returned by ```get_parameters()``` can also be edited directly (it is a reference rather than a copy), although I advise against this approach.
Note: there are a couple of parameters which cannot be changed, ```R``` and ```dr``` in particular.
If, for some reason, you wanted to change these, the best thing to do is create a new instance of the class, specifying the desired ```R``` and ```dr``` during initialisation. You then need to manually reset other parameters and initial conditions as needed.
Here we will change the slip parameter $\lambda^{\ast}$ to a finite number, say $100$ (noting $\lambda^{\ast}=\infty$ by default).
```
print(BLM_1D.get_parameters())
BLM_1D.set_parameters({'lambda_ast':100.0})
print(BLM_1D.get_parameters())
```
## Solving
Okay, now let's run the (default) solver for a duration of $T=2$ units and plot the result.
Note that the call to solve returns the solutions at the end (in the order $h,\Phi_n,g_s,g_b$). Observe that $\Phi_n$ is returned as opposed to $\bar{\phi}_n=\phi_n$, since $\Phi_n$ is what the solver computes internally. It is straightforward to calculate $\bar{\phi}_n=\Phi_n/h$ from this though.
Calling ```solve``` again will continue to evolve the solution for the specified period of time from the current solution.
Beware: Currently if you evolve so long that the biofilm reaches the right hand wall then the solver will fail. (This will be fixed at some point in the future.)
```
# Solve for 2 units in time
solution = BLM_1D.solve(2.0)
h = solution[0]
phi_n = solution[1]/solution[0] # Phi_n/h
g_s = solution[2]
g_b = solution[3]
# Plot the solutions
Plot1DFields(r,h,phi_n,g_s,g_b)
plt.show()
```
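For instance, calling ```solve``` once more simply continues the evolution from the current state (a hedged one-liner; the duration is arbitrary):
```
# Continue evolving the same solution for a further 2 time units.
solution = BLM_1D.solve(2.0)
```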
## More complex use case
Okay, now let's re-initialise the class on a larger domain, solve over several time periods, plotting the solutions as we go.
Note: this may take several minutes to complete.
```
# Initialise the class
BLM_1D = BiofilmOneDLubricationModel(R=10.0,params={'lambda_ast':100.0})
r = BLM_1D.get_r()
h = BLM_1D.get_h()
Phi_n = BLM_1D.get_Phi_n()
phi_n = BLM_1D.get_phi_n()
g_s = BLM_1D.get_g_s()
g_b = BLM_1D.get_g_b()
initial = [h.copy(),Phi_n.copy(),g_s.copy(),g_b.copy()]
results = []
if True:
for _ in range(10):
solution = BLM_1D.solve(5.0)
results.append([sol.copy() for sol in solution]) # ensure copies are recorded
h = solution[0]
phi_n = solution[1]/solution[0] # Phi_n/h
g_s = solution[2]
g_b = solution[3]
Plot1DFields(r,h,phi_n,g_s,g_b)
plt.show()
```
## Alternative plot of results
The following cell takes all of the results computed, stored in the list ```results```, and plots them analogous to figure 6.4 in Alex Tam's thesis (albeit we have added some slip here).
Note: these can be saved by calling ```plt.savefig(...)``` with suitable arguments immediately before ```plt.show()```.
```
# Plot all of the h solutions together...
plt.plot(r,initial[0],'k--',lw=1)
for i in range(10):
plt.plot(r,results[i][0],'k-',lw=1)
plt.xlabel(r'$r$',labelpad=0)
plt.ylabel(r'$h$')
plt.show()
# Plot all of the phi_n solutions together...
plt.plot(r,initial[1]/initial[0],'k--',lw=1)
for i in range(10):
phi_n_i = np.maximum(results[i][1]/results[i][0],0*r)
plt.plot(r,phi_n_i,'k-',lw=1)
plt.xlabel(r'$r$',labelpad=0)
plt.ylabel(r'$\phi_n$')
plt.show()
# Plot all of the g_s solutions together...
plt.plot(r,initial[2],'k--',lw=1)
for i in range(10):
plt.plot(r,results[i][2],'k-',lw=1)
plt.xlabel(r'$r$',labelpad=0)
plt.ylabel(r'$g_s$')
plt.show()
# Plot all of the g_b solutions together...
plt.plot(r,initial[3],'k--',lw=1)
for i in range(10):
plt.plot(r,results[i][3],'k-',lw=1)
plt.xlabel(r'$r$',labelpad=0)
plt.ylabel(r'$g_b$')
plt.show()
```
## It's in your hands now, go nuts!
Feel free to contact me if you ever have any queries/questions.
### Custom Classes
We'll cover classes in a lot of detail in this course, but for now you should have at least some understanding of classes in Python and how to create them.
To create a custom class we use the `class` keyword, and we can initialize class attributes in the special method `__init__`.
```
class Rectangle:
def __init__(self, width, height):
self.width = width
self.height = height
```
We create **instances** of the `Rectangle` class by calling it with arguments that are passed to the `__init__` method as the second and third arguments. The first argument (`self`) is automatically filled in by Python and contains the object being created.
Note that using `self` is just a convention (although a good one, and you should use it to make your code more understandable by others); you could really call it whatever (valid) name you choose.
But just because you can does not mean you should!
```
r1 = Rectangle(10, 20)
r2 = Rectangle(3, 5)
r1.width
r2.height
```
`width` and `height` are attributes of the `Rectangle` class. But since they are just values (not callables), we call them **properties**.
Attributes that are callables are called **methods**.
You'll note that we were able to retrieve the `width` and `height` attributes (properties) using a dot notation, where we specify the object we are interested in, then a dot, then the attribute we are interested in.
We can add callable attributes to our class (methods), that will also be referenced using the dot notation.
Again, we will create instance methods, which means the method will require the first argument to be the object being used when the method is called.
```
class Rectangle:
def __init__(self, width, height):
self.width = width
self.height = height
def area(self):
return self.width * self.height
def perimeter(the_referenced_object):
return 2 * (the_referenced_object.width + the_referenced_object.height)
r1 = Rectangle(10, 20)
r1.area()
```
When we ran the above line of code, our object was `r1`, so when `area` was called, Python in fact called the method `area` in the Rectangle class automatically passing `r1` to the `self` parameter.
This is why we can use a name other than self, such as in the perimeter method:
```
r1.perimeter()
```
Again, I'm just illustrating a point, don't actually do that!
```
class Rectangle:
def __init__(self, width, height):
self.width = width
self.height = height
def area(self):
return self.width * self.height
def perimeter(self):
return 2 * (self.width + self.height)
r1 = Rectangle(10, 20)
```
Python defines a bunch of **special** methods that we can use to give our classes functionality that resembles functionality of built-in and standard library objects.
Many people refer to them as *magic* methods, but there's nothing magical about them - unlike magic, they are well documented and understood!!
These **special** methods provide us an easy way to overload operators in Python.
For example, we can obtain the string representation of an integer using the built-in `str` function:
```
str(10)
```
What happens if we try this with our Rectangle object?
```
str(r1)
```
Not exactly what we might have expected. On the other hand, how is Python supposed to know how to display our rectangle as a string?
We could write a method in the class such as:
```
class Rectangle:
def __init__(self, width, height):
self.width = width
self.height = height
def area(self):
return self.width * self.height
def perimeter(self):
return 2 * (self.width + self.height)
def to_str(self):
return 'Rectangle (width={0}, height={1})'.format(self.width, self.height)
```
So now we could get a string from our object as follows:
```
r1 = Rectangle(10, 20)
r1.to_str()
```
But of course, using the built-in `str` function still does not work:
```
str(r1)
```
Does this mean we are out of luck, and anyone who writes a class in Python will need to provide some method to do this, and probably come up with their own name for the method too, maybe `to_str`, `make_string`, `stringify`, and who knows what else?
Fortunately, this is where these special methods come in. When we call `str(r1)`, Python will first look to see if our class (`Rectangle`) has a special method called `__str__`.
If the `__str__` method is present, then Python will call it and return that value.
There's actually another one called `__repr__` which is related, but we'll just focus on `__str__` for now.
```
class Rectangle:
def __init__(self, width, height):
self.width = width
self.height = height
def area(self):
return self.width * self.height
def perimeter(self):
return 2 * (self.width + self.height)
def __str__(self):
return 'Rectangle (width={0}, height={1})'.format(self.width, self.height)
r1 = Rectangle(10, 20)
str(r1)
```
However, in Jupyter (and interactive console if you are using that), look what happens here:
```
r1
```
As you can see we still get that default. That's because here Python is not converting `r1` to a string, but instead looking for a string *representation* of the object. It is looking for the `__repr__` method (which we'll come back to later).
```
class Rectangle:
def __init__(self, width, height):
self.width = width
self.height = height
def area(self):
return self.width * self.height
def perimeter(self):
return 2 * (self.width + self.height)
def __str__(self):
return 'Rectangle (width={0}, height={1})'.format(self.width, self.height)
def __repr__(self):
return 'Rectangle({0}, {1})'.format(self.width, self.height)
r1 = Rectangle(10, 20)
print(r1) # uses __str__
r1 # uses __repr__
```
How about the comparison operators, such as `==` or `<`?
```
r1 = Rectangle(10, 20)
r2 = Rectangle(10, 20)
r1 == r2
```
As you can see, Python does not consider `r1` and `r2` as equal (using the `==` operator). Again, how is Python supposed to know that two Rectangle objects with the same height and width should be considered equal?
We just need to tell Python how to do it, using the special method `__eq__`.
```
class Rectangle:
def __init__(self, width, height):
self.width = width
self.height = height
def area(self):
return self.width * self.height
def perimeter(self):
return 2 * (self.width + self.height)
def __str__(self):
return 'Rectangle (width={0}, height={1})'.format(self.width, self.height)
def __repr__(self):
return 'Rectangle({0}, {1})'.format(self.width, self.height)
def __eq__(self, other):
print('self={0}, other={1}'.format(self, other))
if isinstance(other, Rectangle):
return (self.width, self.height) == (other.width, other.height)
else:
return False
r1 = Rectangle(10, 20)
r2 = Rectangle(10, 20)
r1 is r2
r1 == r2
r3 = Rectangle(2, 3)
r1 == r3
```
And if we try to compare our Rectangle to a different type:
```
r1 == 100
```
Let's remove that print statement - I only put that in so you could see what the arguments were, in practice you should avoid side effects.
```
class Rectangle:
def __init__(self, width, height):
self.width = width
self.height = height
def area(self):
return self.width * self.height
def perimeter(self):
return 2 * (self.width + self.height)
def __str__(self):
return 'Rectangle (width={0}, height={1})'.format(self.width, self.height)
def __repr__(self):
return 'Rectangle({0}, {1})'.format(self.width, self.height)
def __eq__(self, other):
if isinstance(other, Rectangle):
return (self.width, self.height) == (other.width, other.height)
else:
return False
```
What about `<`, `>`, `<=`, etc.?
Again, Python has special methods we can use to provide that functionality.
These are methods such as `__lt__`, `__gt__`, `__le__`, etc.
```
class Rectangle:
def __init__(self, width, height):
self.width = width
self.height = height
def area(self):
return self.width * self.height
def perimeter(self):
return 2 * (self.width + self.height)
def __str__(self):
return 'Rectangle (width={0}, height={1})'.format(self.width, self.height)
def __repr__(self):
return 'Rectangle({0}, {1})'.format(self.width, self.height)
def __eq__(self, other):
if isinstance(other, Rectangle):
return (self.width, self.height) == (other.width, other.height)
else:
return False
def __lt__(self, other):
if isinstance(other, Rectangle):
return self.area() < other.area()
else:
return NotImplemented
r1 = Rectangle(100, 200)
r2 = Rectangle(10, 20)
r1 < r2
r2 < r1
```
What about `>`?
```
r1 > r2
```
How did that work? We did not define a `__gt__` method.
Well, Python cleverly decided that since `r1 > r2` was not implemented, it would give
`r2 < r1`
a try. And since `__lt__` **is** defined, it worked!
Of course, `<=` is not going to magically work!
```
r1 <= r2
```
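A minimal sketch of one way to make `<=` work is shown below; attaching the method to the class after the fact (monkey-patching) is done here purely for illustration - normally you would just define `__le__` inside the class alongside `__lt__`:
```
def rect_le(self, other):
    if isinstance(other, Rectangle):
        return self.area() <= other.area()
    else:
        return NotImplemented

Rectangle.__le__ = rect_le   # monkey-patching for illustration only

r2 <= r1
```
With that in place, `r2 <= r1` evaluates to `True`, since the area of `r2` is no larger than that of `r1`.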
If you come from a Java background, you are probably thinking that using "bare" properties (direct access), such as `height` and `width`, is a terrible design idea.
It is for Java, but not for Python.
Although you can use bare properties in Java, if you ever need to intercept the getting or setting of a property, you will need to write a method (such as `getWidth` and `setWidth`. The problem is that if you used a bare `width` property for example, a lot of your code might be using `obj.width` (as we have been doing here). The instant you make the `width` private and instead implement getters and setters, you break your code.
Hence one of the reasons why in Java we just write getters and setters for properties from the beginning.
With Python this is not the case - we can change any bare property into getters and setters without breaking the code that uses that bare property.
I'll show you a quick example here, but we'll come back to this topic in much more detail later.
Let's take our Rectangle class once again. I'll use a simplified version to keep the code short.
```
class Rectangle:
def __init__(self, width, height):
self.width = width
self.height = height
def __repr__(self):
return 'Rectangle({0}, {1})'.format(self.width, self.height)
r1 = Rectangle(10, 20)
r1.width
r1.width = 100
r1
```
As you saw we can *get* and *set* the `width` property directly.
But let's say after this code has been released for a while and users of our class have been using it (and specifically setting and getting the `width` and `height` attribute a lot), but now we want to make sure users cannot set a non-positive value (i.e. <= 0) for width (or height, but we'll focus on width as an example).
In a language like Java, we would implement `getWidth` and `setWidth` and make `width` private - which would break any code directly accessing the `width` property.
In Python we can use some special **decorators** (more on those later) to encapsulate our property getters and setters:
```
class Rectangle:
def __init__(self, width, height):
self._width = width
self._height = height
def __repr__(self):
return 'Rectangle({0}, {1})'.format(self.width, self.height)
@property
def width(self):
return self._width
@width.setter
def width(self, width):
if width <= 0:
raise ValueError('Width must be positive.')
self._width = width
@property
def height(self):
return self._height
@height.setter
def height(self, height):
if height <= 0:
raise ValueError('Height must be positive.')
self._height = height
r1 = Rectangle(10, 20)
r1.width
r1.width = 100
r1
r1.width = -10
```
There are more things we should do to properly implement all this; in particular, we should also apply the same positive-value checks during the `__init__` phase. We do so by using the accessor methods for height and width:
```
class Rectangle:
def __init__(self, width, height):
self._width = None
self._height = None
# now we call our accessor methods to set the width and height
self.width = width
self.height = height
def __repr__(self):
return 'Rectangle({0}, {1})'.format(self.width, self.height)
@property
def width(self):
return self._width
@width.setter
def width(self, width):
if width <= 0:
raise ValueError('Width must be positive.')
self._width = width
@property
def height(self):
return self._height
@height.setter
def height(self, height):
if height <= 0:
raise ValueError('Height must be positive.')
self._height = height
r1 = Rectangle(0, 10)
```
There is more we should be doing, like checking that the width and height being passed in are numeric types, and so on - especially during the `__init__` phase. We would rather raise an exception when the object is being created than delay things and raise an exception when the user calls some method like `area` - that way the exception will point at the line that creates the object, which makes debugging much easier!
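As a hedged sketch of that extra validation (the use of `numbers.Real` and the exact error messages are just one reasonable choice), the setters could be extended roughly like this:
```
import numbers

class Rectangle:
    def __init__(self, width, height):
        self._width = None
        self._height = None
        # the accessor methods below perform the validation at creation time
        self.width = width
        self.height = height

    def __repr__(self):
        return 'Rectangle({0}, {1})'.format(self.width, self.height)

    @property
    def width(self):
        return self._width

    @width.setter
    def width(self, width):
        if not isinstance(width, numbers.Real):
            raise TypeError('Width must be a real number.')
        if width <= 0:
            raise ValueError('Width must be positive.')
        self._width = width

    @property
    def height(self):
        return self._height

    @height.setter
    def height(self, height):
        if not isinstance(height, numbers.Real):
            raise TypeError('Height must be a real number.')
        if height <= 0:
            raise ValueError('Height must be positive.')
        self._height = height
```
Now something like `Rectangle('10', 20)` fails immediately with a `TypeError` on the line that creates the object.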
There are many more of these special methods, and we'll look in detail at them later in this course.
```
# default_exp utils
```
# Utilities
> Helper functions used throughout the library that are not related to time series data.
```
#export
from tsai.imports import *
from fastcore.test import *
#hide
import tsai
a = !python -V
p = a[0].split(' ')
print(f'python : {p[1]}')
print('tsai :', tsai.__version__)
print('fastai :', fastai.__version__)
print('fastcore :', fastcore.__version__)
print('torch :', torch.__version__)
print('scipy :', sp.__version__)
print('numpy :', np.__version__)
print('pandas :', pd.__version__)
print('matplotlib :', matplotlib.__version__)
#export
import inspect
import sklearn
# ensure these folders exist for testing purposes
fns = ['data', 'export', 'models']
for fn in fns:
path = Path('.')/fn
if not os.path.exists(path): os.makedirs(path)
#export
def totensor(o):
if isinstance(o, torch.Tensor): return o
elif isinstance(o, np.ndarray): return torch.from_numpy(o)
else:
try: return torch.tensor(o)
except: warnings.warn(f"Can't convert {type(o)} to torch.Tensor", Warning)
def toarray(o):
if isinstance(o, np.ndarray): return o
elif isinstance(o, torch.Tensor): return o.cpu().numpy()
else:
try: return np.asarray(o)
except: warnings.warn(f"Can't convert {type(o)} to np.array", Warning)
def toL(o):
if isinstance(o, L): return o
elif isinstance(o, (np.ndarray, torch.Tensor)): return L(o.tolist())
else:
try: return L(o)
except: warnings.warn(f'passed object needs to be of type L, list, np.ndarray or torch.Tensor but is {type(o)}', Warning)
def to3dtensor(o):
o = totensor(o)
if o.ndim == 3: return o
elif o.ndim == 1: return o[None, None]
elif o.ndim == 2: return o[:, None]
assert False, f'Please, review input dimensions {o.ndim}'
def to2dtensor(o):
o = totensor(o)
if o.ndim == 2: return o
elif o.ndim == 1: return o[None]
elif o.ndim == 3: return o[0]
assert False, f'Please, review input dimensions {o.ndim}'
def to1dtensor(o):
o = totensor(o)
if o.ndim == 1: return o
elif o.ndim == 3: return o[0,0]
if o.ndim == 2: return o[0]
assert False, f'Please, review input dimensions {o.ndim}'
def to3darray(o):
o = toarray(o)
if o.ndim == 3: return o
elif o.ndim == 1: return o[None, None]
elif o.ndim == 2: return o[:, None]
assert False, f'Please, review input dimensions {o.ndim}'
def to2darray(o):
o = toarray(o)
if o.ndim == 2: return o
elif o.ndim == 1: return o[None]
elif o.ndim == 3: return o[0]
assert False, f'Please, review input dimensions {o.ndim}'
def to1darray(o):
    o = toarray(o)
    if o.ndim == 1: return o
    elif o.ndim == 3: return o[0,0]
    elif o.ndim == 2: return o[0]
    assert False, f'Please, review input dimensions {o.ndim}'
def to3d(o):
if o.ndim == 3: return o
if isinstance(o, np.ndarray): return to3darray(o)
if isinstance(o, torch.Tensor): return to3dtensor(o)
def to2d(o):
if o.ndim == 2: return o
if isinstance(o, np.ndarray): return to2darray(o)
if isinstance(o, torch.Tensor): return to2dtensor(o)
def to1d(o):
if o.ndim == 1: return o
if isinstance(o, np.ndarray): return to1darray(o)
if isinstance(o, torch.Tensor): return to1dtensor(o)
def to2dPlus(o):
if o.ndim >= 2: return o
if isinstance(o, np.ndarray): return to2darray(o)
elif isinstance(o, torch.Tensor): return to2dtensor(o)
def to3dPlus(o):
if o.ndim >= 3: return o
if isinstance(o, np.ndarray): return to3darray(o)
elif isinstance(o, torch.Tensor): return to3dtensor(o)
def to2dPlusTensor(o):
return to2dPlus(totensor(o))
def to2dPlusArray(o):
return to2dPlus(toarray(o))
def to3dPlusTensor(o):
return to3dPlus(totensor(o))
def to3dPlusArray(o):
return to3dPlus(toarray(o))
def todtype(dtype):
def _to_type(o, dtype=dtype):
if o.dtype == dtype: return o
elif isinstance(o, torch.Tensor): o = o.to(dtype=dtype)
elif isinstance(o, np.ndarray): o = o.astype(dtype)
return o
return _to_type
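# Quick sanity check for todtype, added here for illustration (not one of the original
# cells): build a float64 converter and apply it to a float32 tensor.
to_float64 = todtype(torch.float64)
test_eq(to_float64(torch.zeros(3, dtype=torch.float32)).dtype, torch.float64)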
a = np.random.rand(100).astype(np.float32)
b = torch.from_numpy(a).float()
test_eq(totensor(a), b)
test_eq(a, toarray(b))
test_eq(to3dtensor(a).ndim, 3)
test_eq(to2dtensor(a).ndim, 2)
test_eq(to1dtensor(a).ndim, 1)
test_eq(to3darray(b).ndim, 3)
test_eq(to2darray(b).ndim, 2)
test_eq(to1darray(b).ndim, 1)
#export
def bytes2size(size_bytes):
if size_bytes == 0: return "0B"
size_name = ("B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB")
i = int(math.floor(math.log(size_bytes, 1024)))
p = math.pow(1024, i)
s = round(size_bytes / p, 2)
return "%s %s" % (s, size_name[i])
def bytes2GB(byts):
return round(byts / math.pow(1024, 3), 2)
def get_size(o, return_str=False):
s = sys.getsizeof(o)
if return_str: return bytes2size(s)
else: return s
a = np.random.rand(10, 5, 3)
test_eq(get_size(a, True), '1.3 KB')
#export
def delete_all_in_dir(tgt_dir, exception=None):
if exception is not None and len(L(exception)) > 1: exception = tuple(exception)
for file in os.listdir(tgt_dir):
if exception is not None and file.endswith(exception): continue
file_path = os.path.join(tgt_dir, file)
if os.path.isfile(file_path) or os.path.islink(file_path): os.unlink(file_path)
elif os.path.isdir(file_path): shutil.rmtree(file_path)
#export
def reverse_dict(dictionary):
return {v: k for k, v in dictionary.items()}
#export
def is_tuple(o): return isinstance(o, tuple)
#export
def itemify(*o, tup_id=None):
o = [o_ for o_ in L(*o) if o_ is not None]
items = L(o).zip()
if tup_id is not None: return L([item[tup_id] for item in items])
else: return items
a = [1, 2, 3]
b = [4, 5, 6]
print(itemify(a, b))
test_eq(len(itemify(a, b)), len(a))
a = [1, 2, 3]
b = None
print(itemify(a, b))
test_eq(len(itemify(a, b)), len(a))
a = [1, 2, 3]
b = [4, 5, 6]
c = None
print(itemify(a, b, c))
test_eq(len(itemify(a, b, c)), len(a))
#export
def isnone(o):
return o is None
def exists(o): return o is not None
def ifelse(a, b, c):
"`b` if `a` is True else `c`"
return b if a else c
a = np.array(3)
test_eq(isnone(a), False)
test_eq(exists(a), True)
b = None
test_eq(isnone(b), True)
test_eq(exists(b), False)
#export
def is_not_close(a, b, eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b, '__array__'):
return (abs(a - b) > eps).all()
if isinstance(a, (Iterable, Generator)) or isinstance(b, (Iterable, Generator)):
return is_not_close(np.array(a), np.array(b), eps=eps)
return abs(a - b) > eps
def test_not_close(a, b, eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a, b, partial(is_not_close, eps=eps), 'not_close')
def test_type(a, b):
return test_eq(type(a), type(b))
def test_ok(f, *args, **kwargs):
try:
f(*args, **kwargs)
e = 0
except:
e = 1
pass
test_eq(e, 0)
def test_not_ok(f, *args, **kwargs):
try:
f(*args, **kwargs)
e = 0
except:
e = 1
pass
test_eq(e, 1)
def test_error(error, f, *args, **kwargs):
try: f(*args, **kwargs)
except Exception as e:
test_eq(str(e), error)
def test_eq_nan(a,b):
"`test` that `a==b` excluding nan values (valid for torch.Tensor and np.ndarray)"
mask_a = torch.isnan(a) if isinstance(a, torch.Tensor) else np.isnan(a)
mask_b = torch.isnan(b) if isinstance(b, torch.Tensor) else np.isnan(b)
test(a[~mask_a],b[~mask_b],equals, '==')
#export
def assert_fn(*args, **kwargs): assert False, 'assertion test'
test_error('assertion test', assert_fn, 35, a=3)
#export
def test_gt(a,b):
"`test` that `a>b`"
test(a,b,gt,'>')
def test_ge(a,b):
    "`test` that `a>=b`"
    test(a,b,ge,'>=')
def test_lt(a,b):
    "`test` that `a<b`"
    test(a,b,lt,'<')
def test_le(a,b):
    "`test` that `a<=b`"
    test(a,b,le,'<=')
test_ok(test_gt, 5, 4)
test_not_ok(test_gt, 4, 4)
test_ok(test_ge, 4, 4)
test_not_ok(test_ge, 3, 4)
test_ok(test_lt, 3, 4)
test_not_ok(test_lt, 4, 4)
test_ok(test_le, 4, 4)
test_not_ok(test_le, 5, 4)
t = torch.rand(100)
t[t<.5] = np.nan
test_ne(t, t)
test_eq_nan(t, t)
#export
def stack(o, axis=0, retain=True):
if hasattr(o, '__array__'): return o
if isinstance(o[0], torch.Tensor):
return retain_type(torch.stack(tuple(o), dim=axis), o[0]) if retain else torch.stack(tuple(o), dim=axis)
else:
return retain_type(np.stack(o, axis), o[0]) if retain else np.stack(o, axis)
def stack_pad(o, padding_value=np.nan):
    'Converts an iterable into a numpy array, padding shorter rows if necessary'
row_length = len(max(o, key=len))
result = np.full((len(o), row_length), padding_value)
for i,row in enumerate(o): result[i, :len(row)] = row
return result
a = [[0,1,2], [4,5,6,7]]
test_eq(stack_pad(a).shape, (2, 4))
test_eq(type(stack_pad(a)), np.ndarray)
test_eq(np.isnan(stack_pad(a)).sum(), 1)
a = np.random.rand(2, 3, 4)
t = torch.from_numpy(a)
test_eq_type(stack(itemify(a, tup_id=0)), a)
test_eq_type(stack(itemify(t, tup_id=0)), t)
#export
def match_seq_len(*arrays):
max_len = stack([x.shape[-1] for x in arrays]).max()
return [np.pad(x, pad_width=((0,0), (0,0), (max_len - x.shape[-1], 0)), mode='constant', constant_values=0) for x in arrays]
a = np.random.rand(10, 5, 8)
b = np.random.rand(3, 5, 10)
c, d = match_seq_len(a, b)
test_eq(c.shape[-1], d.shape[-1])
#export
def random_shuffle(o, random_state=None):
res = sklearn.utils.shuffle(o, random_state=random_state)
if isinstance(o, L): return L(list(res))
return res
a = np.arange(10)
test_eq_type(random_shuffle(a, 1), np.array([2, 9, 6, 4, 0, 3, 1, 7, 8, 5]))
t = torch.arange(10)
test_eq_type(random_shuffle(t, 1), tensor([2, 9, 6, 4, 0, 3, 1, 7, 8, 5]))
l = list(a)
test_eq(random_shuffle(l, 1), [2, 9, 6, 4, 0, 3, 1, 7, 8, 5])
l2 = L(l)
test_eq_type(random_shuffle(l2, 1), L([2, 9, 6, 4, 0, 3, 1, 7, 8, 5]))
#export
def cat2int(o):
cat = Categorize()
cat.setup(o)
return stack(TfmdLists(o, cat)[:])
a = np.array(['b', 'a', 'a', 'b', 'a', 'b', 'a'])
test_eq_type(cat2int(a), TensorCategory([1, 0, 0, 1, 0, 1, 0]))
TensorBase([1,2,3])
#export
def cycle_dl(dl):
for _ in dl: _
def cycle_dl_to_device(dl):
for bs in dl: [b.to(default_device()) for b in bs]
#export
def cache_data(o, slice_len=10_000, verbose=False):
start = 0
n_loops = (len(o) - 1) // slice_len + 1
pv(f'{n_loops} loops', verbose)
timer.start(False)
for i in range(n_loops):
o[slice(start,start + slice_len)]
if verbose and (i+1) % 10 == 0: print(f'{i+1:4} elapsed time: {timer.elapsed()}')
start += slice_len
pv(f'{i+1:4} total time : {timer.stop()}\n', verbose)
memmap2cache = cache_data
cache_memmap = cache_data
#export
def get_func_defaults(f):
fa = inspect.getfullargspec(f)
if fa.defaults is None: return dict(zip(fa.args, [''] * (len(fa.args))))
else: return dict(zip(fa.args, [''] * (len(fa.args) - len(fa.defaults)) + list(fa.defaults)))
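# Illustrative check (not an original test): get_func_defaults maps each argument name
# to its default value, using '' for arguments that have no default.
test_eq(get_func_defaults(get_size), {'o': '', 'return_str': False})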
#export
def get_idx_from_df_col_vals(df, col, val_list):
return [df[df[col] == val].index[0] for val in val_list]
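# Illustrative check (not an original test) of get_idx_from_df_col_vals on a tiny dataframe.
_df = pd.DataFrame({'col': ['a', 'b', 'c', 'd']})
test_eq(get_idx_from_df_col_vals(_df, 'col', ['c', 'a']), [2, 0])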
#export
def get_sublist_idxs(aList, bList):
"Get idxs that when applied to aList will return bList. aList must contain all values in bList"
sorted_aList = aList[np.argsort(aList)]
return np.argsort(aList)[np.searchsorted(sorted_aList, bList)]
x = np.array([3, 5, 7, 1, 9, 8, 6, 2])
y = np.array([6, 1, 5, 7])
idx = get_sublist_idxs(x, y)
test_eq(x[idx], y)
x = np.array([3, 5, 7, 1, 9, 8, 6, 6, 2])
y = np.array([6, 1, 5, 7, 5])
idx = get_sublist_idxs(x, y)
test_eq(x[idx], y)
#export
def flatten_list(l):
return [item for sublist in l for item in sublist]
#export
def display_pd_df(df, max_rows:Union[bool, int]=False, max_columns:Union[bool, int]=False):
if max_rows:
old_max_rows = pd.get_option('display.max_rows')
if max_rows is not True and isinstance(max_rows, Integral): pd.set_option('display.max_rows', max_rows)
else: pd.set_option('display.max_rows', df.shape[0])
if max_columns:
old_max_columns = pd.get_option('display.max_columns')
if max_columns is not True and isinstance(max_columns, Integral): pd.set_option('display.max_columns', max_columns)
else: pd.set_option('display.max_columns', df.shape[1])
display(df)
if max_rows: pd.set_option('display.max_rows', old_max_rows)
if max_columns: pd.set_option('display.max_columns', old_max_columns)
old_max_rows, old_max_columns = pd.get_option('display.max_rows'), pd.get_option('display.max_columns')
df = pd.DataFrame(np.random.rand(70, 25))
display_pd_df(df, max_rows=2, max_columns=3)
test_eq(old_max_rows, pd.get_option('display.max_rows'))
test_eq(old_max_columns, pd.get_option('display.max_columns'))
#export
def ttest(data1, data2, equal_var=False):
"Calculates t-statistic and p-value based on 2 sample distributions"
t_stat, p_value = scipy.stats.ttest_ind(data1, data2, equal_var=equal_var)
return t_stat, np.sign(t_stat) * p_value
def tscore(o):
if o.std() == 0: return 0
else: return np.sqrt(len(o)) * o.mean() / o.std()
a = np.random.normal(0.5, 1, 100)
b = np.random.normal(0.15, .5, 50)
plt.hist(a, 50)
plt.hist(b, 50)
plt.show()
ttest(a,b)
a = np.random.normal(0.5, 1, 100)
t = torch.normal(0.5, 1, (100, ))
tscore(a), tscore(t)
#export
def ttest_tensor(a, b):
"differentiable pytorch function equivalent to scipy.stats.ttest_ind with equal_var=False"
# calculate standard errors
se1, se2 = torch.std(a)/np.sqrt(len(a)), torch.std(b)/np.sqrt(len(b))
# standard error on the difference between the samples
sed = torch.sqrt(se1**2.0 + se2**2.0)
# calculate the t statistic
t_stat = (torch.mean(a) - torch.mean(b)) / sed
return t_stat
a = torch.rand(100).requires_grad_(True) + .1
b = torch.rand(100).requires_grad_(True)
ttest_tensor(a, b)
#export
from scipy.stats import pearsonr, spearmanr
def pcc(a, b):
return pearsonr(a, b)[0]
def scc(a, b):
return spearmanr(a, b)[0]
a = np.random.normal(0.5, 1, 100)
b = np.random.normal(0.15, .5, 100)
pcc(a, b), scc(a, b)
#export
def remove_fn(fn, verbose=False):
"Removes a file (fn) if exists"
try:
os.remove(fn)
pv(f'{fn} file removed', verbose)
except OSError:
pv(f'{fn} does not exist', verbose)
pass
#export
def npsave(array_fn, array, verbose=True):
remove_fn(array_fn, verbose)
pv(f'saving {array_fn}...', verbose)
np.save(array_fn, array)
pv(f'...{array_fn} saved', verbose)
np_save = npsave
fn = 'data/remove_fn_test.npy'
a = np.zeros(1)
npsave(fn, a)
del a
np.load(fn, mmap_mode='r+')
remove_fn(fn, True)
remove_fn(fn, True)
#export
def permute_2D(array, axis=None):
"Permute rows or columns in an array. This can be used, for example, in feature permutation"
if axis == 0: return array[np.random.randn(*array.shape).argsort(axis=0), np.arange(array.shape[-1])[None, :]]
elif axis == 1 or axis == -1: return array[np.arange(len(array))[:,None], np.random.randn(*array.shape).argsort(axis=1)]
return array[np.random.randn(*array.shape).argsort(axis=0), np.random.randn(*array.shape).argsort(axis=1)]
s = np.arange(100 * 50).reshape(100, 50)
test_eq(permute_2D(s, axis=0).mean(0), s.mean(0))
test_ne(permute_2D(s, axis=0), s)
test_eq(permute_2D(s, axis=1).mean(1), s.mean(1))
test_ne(permute_2D(s, axis=1), s)
test_ne(permute_2D(s), s)
#export
def random_normal():
"Returns a number between -1 and 1 with a normal distribution"
while True:
o = np.random.normal(loc=0., scale=1/3)
if abs(o) <= 1: break
return o
def random_half_normal():
"Returns a number between 0 and 1 with a half-normal distribution"
while True:
o = abs(np.random.normal(loc=0., scale=1/3))
if o <= 1: break
return o
def random_normal_tensor(shape=1, device=None):
"Returns a tensor of a predefined shape between -1 and 1 with a normal distribution"
return torch.empty(shape, device=device).normal_(mean=0, std=1/3).clamp_(-1, 1)
def random_half_normal_tensor(shape=1, device=None):
"Returns a tensor of a predefined shape between 0 and 1 with a half-normal distribution"
return abs(torch.empty(shape, device=device).normal_(mean=0, std=1/3)).clamp_(0, 1)
#export
from matplotlib.backends.backend_agg import FigureCanvasAgg
def default_dpi():
DPI = plt.gcf().get_dpi()
plt.close()
return int(DPI)
def get_plot_fig(size=None, dpi=default_dpi()):
fig = plt.figure(figsize=(size / dpi, size / dpi), dpi=dpi, frameon=False) if size else plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
config = plt.gcf()
plt.close('all')
return config
def fig2buf(fig):
canvas = FigureCanvasAgg(fig)
fig.canvas.draw()
return np.asarray(canvas.buffer_rgba())[..., :3]
default_dpi()
#export
def plot_scatter(x, y, deg=1):
linreg = sp.stats.linregress(x, y)
    plt.scatter(x, y, label=f'R2:{linreg.rvalue**2:.2f}', color='lime', edgecolor='black', alpha=.5)
plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, deg))(np.unique(x)), color='r')
plt.legend(loc='best')
plt.show()
a = np.random.rand(100)
b = np.random.rand(100)**2
plot_scatter(a, b)
#export
def get_idxs(o, aList): return array([o.tolist().index(v) for v in aList])
a = random_shuffle(np.arange(100, 200))
b = np.random.choice(a, 10, False)
idxs = get_idxs(a, b)
test_eq(a[idxs], b)
# export
def apply_cmap(o, cmap):
o = toarray(o)
out = plt.get_cmap(cmap)(o)[..., :3]
out = tensor(out).squeeze(1)
return out.permute(0, 3, 1, 2)
a = np.random.rand(16, 1, 40, 50)
s = L(a.shape)
s[1] = 3
test_eq(L(apply_cmap(a, 'viridis').shape), s)
s[0] = 1
a = np.random.rand(1, 40, 50)
test_eq(L(apply_cmap(a, 'viridis').shape), s)
#export
def torch_tile(a, n_tile, dim=0):
init_dim = a.size(dim)
repeat_idx = [1] * a.dim()
repeat_idx[dim] = n_tile
a = a.repeat(*(repeat_idx))
order_index = torch.cat([init_dim * torch.arange(n_tile) + i for i in range(init_dim)]).to(device=a.device)
return torch.index_select(a, dim, order_index)
test_eq(torch_tile(torch.arange(2), 3), tensor([0, 0, 0, 1, 1, 1]))
#export
def to_tsfresh_df(ts):
r"""Prepares a time series (Tensor/ np.ndarray) to be used as a tsfresh dataset to allow feature extraction"""
ts = to3d(ts)
if isinstance(ts, np.ndarray):
ids = np.repeat(np.arange(len(ts)), ts.shape[-1]).reshape(-1,1)
joint_ts = ts.transpose(0,2,1).reshape(-1, ts.shape[1])
cols = ['id'] + np.arange(ts.shape[1]).tolist()
df = pd.DataFrame(np.concatenate([ids, joint_ts], axis=1), columns=cols)
elif isinstance(ts, torch.Tensor):
ids = torch_tile(torch.arange(len(ts)), ts.shape[-1]).reshape(-1,1)
joint_ts = ts.transpose(1,2).reshape(-1, ts.shape[1])
cols = ['id']+np.arange(ts.shape[1]).tolist()
df = pd.DataFrame(torch.cat([ids, joint_ts], dim=1).numpy(), columns=cols)
df['id'] = df['id'].astype(int)
df.reset_index(drop=True, inplace=True)
return df
ts = torch.rand(16, 3, 20)
a = to_tsfresh_df(ts)
ts = ts.numpy()
b = to_tsfresh_df(ts)
#export
from scipy.stats import skew, kurtosis
def pcorr(a, b):
return scipy.stats.pearsonr(a, b)
def scorr(a, b):
corr = scipy.stats.spearmanr(a, b)
return corr[0], corr[1]
#export
def torch_diff(t, lag=1, pad=True):
import torch.nn.functional as F
diff = t[..., lag:] - t[..., :-lag]
if pad: return F.pad(diff, (lag,0))
else: return diff
t = torch.arange(24).reshape(2,3,4)
test_eq(torch_diff(t, 1)[..., 1:].float().mean(), 1.)
test_eq(torch_diff(t, 2)[..., 2:].float().mean(), 2.)
#export
def get_outliers_IQR(o, axis=None):
tt = False
if isinstance(o, torch.Tensor):
tt = True
device = o.device
tdtype = o.dtype
o = o.detach().cpu().numpy()
Q1 = np.nanpercentile(o, 25, axis=axis, keepdims=axis is not None)
Q3 = np.nanpercentile(o, 75, axis=axis, keepdims=axis is not None)
IQR = Q3 - Q1
if tt:
Q1 = torch.tensor(Q1, dtype=tdtype, device=device)
Q3 = torch.tensor(Q3, dtype=tdtype, device=device)
IQR = torch.tensor(IQR, dtype=tdtype, device=device)
return Q1 - 1.5 * IQR, Q3 + 1.5 * IQR
def clip_outliers(o, axis=None):
min_outliers, max_outliers = get_outliers_IQR(o, axis=axis)
if isinstance(o, (np.ndarray, pd.core.series.Series)):
return np.clip(o, min_outliers, max_outliers)
elif isinstance(o, torch.Tensor):
return torch.clamp(o, min_outliers, max_outliers)
def get_percentile(o, percentile, axis=None):
if isinstance(o, torch.Tensor): o = o.detach().cpu().numpy()
return np.nanpercentile(o, percentile, axis=axis, keepdims=axis is not None)
def torch_clamp(o, min=None, max=None):
r"""Clamp torch.Tensor using 1 or multiple dimensions"""
if min is not None: o = torch.max(o, min)
if max is not None: o = torch.min(o, max)
return o
t = torch.randn(2,3,100)
test_eq(type(get_outliers_IQR(t, -1)[0]), torch.Tensor)
a = np.random.randn(2,3,100)
test_eq(type(get_outliers_IQR(a, -1)[0]), np.ndarray)
#export
def torch_slice_by_dim(t, index, dim=-1, **kwargs):
if not isinstance(index, torch.Tensor): index = torch.Tensor(index)
assert t.ndim == index.ndim, "t and index must have the same ndim"
index = index.long()
return torch.gather(t, dim, index, **kwargs)
t = torch.rand(5, 3)
index = torch.randint(0, 3, (5, 1))
# index = [[0, 2], [0, 1], [1, 2], [0, 2], [0, 1]]
torch_slice_by_dim(t, index)
#export
def torch_nanmean(o, dim=None, keepdim=False):
"""There's currently no torch.nanmean function"""
mask = torch.isnan(o)
if mask.any():
output = torch.from_numpy(np.asarray(np.nanmean(o.cpu().numpy(), axis=dim, keepdims=keepdim))).to(o.device)
if output.shape == mask.shape:
output[mask] = 0
return output
else:
return torch.mean(o, dim=dim, keepdim=keepdim) if dim is not None else torch.mean(o)
def torch_nanstd(o, dim=None, keepdim=False):
"""There's currently no torch.nanstd function"""
mask = torch.isnan(o)
if mask.any():
output = torch.from_numpy(np.asarray(np.nanstd(o.cpu().numpy(), axis=dim, keepdims=keepdim))).to(o.device)
if output.shape == mask.shape:
output[mask] = 1
return output
else:
return torch.std(o, dim=dim, keepdim=keepdim) if dim is not None else torch.std(o)
t = torch.rand(1000)
t[:100] = float('nan')
assert torch_nanmean(t).item() > 0
#export
def concat(*ls, dim=0):
"Concatenate tensors, arrays, lists, or tuples by a dimension"
if not len(ls): return []
it = ls[0]
if isinstance(it, torch.Tensor): return torch.cat(ls, dim=dim)
elif isinstance(it, np.ndarray): return np.concatenate(ls, axis=dim)
else:
res = np.concatenate(ls, axis=dim).tolist()
return retain_type(res, typ=type(it))
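# A few quick checks for concat (illustrative, not original cells): it should handle
# tensors, arrays and plain lists, returning the same container type it was given.
test_eq(concat(np.array([1, 2]), np.array([3])), np.array([1, 2, 3]))
test_eq(concat(torch.tensor([1, 2]), torch.tensor([3])), torch.tensor([1, 2, 3]))
test_eq(concat([1, 2], [3]), [1, 2, 3])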
#export
def reduce_memory_usage(df):
start_memory = df.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
for col in df.columns:
col_type = df[col].dtype
if col_type != 'object':
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
pass
else:
df[col] = df[col].astype('category')
end_memory = df.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return df
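# Illustrative usage of reduce_memory_usage (not an original cell): downcast a small
# throwaway dataframe and inspect the resulting dtypes.
_df = pd.DataFrame({'ints': np.random.randint(0, 100, 1000), 'floats': np.random.rand(1000)})
_df = reduce_memory_usage(_df)
_df.dtypes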
# export
def cls_name(o): return o.__class__.__name__
test_eq(cls_name(timer), 'Timer')
#export
def roll2d(o, roll1: Union[None, list, int] = None, roll2: Union[None, list, int] = None):
"""Rolls a 2D object on the indicated axis
This solution is based on https://stackoverflow.com/questions/20360675/roll-rows-of-a-matrix-independently
"""
assert o.ndim == 2, "roll2D can only be applied to 2d objects"
axis1, axis2 = np.ogrid[:o.shape[0], :o.shape[1]]
if roll1 is not None:
if isinstance(roll1, int): axis1 = axis1 - np.array(roll1).reshape(1,1)
else: axis1 = np.array(roll1).reshape(o.shape[0],1)
if roll2:
if isinstance(roll2, int): axis2 = axis2 - np.array(roll2).reshape(1,1)
else: axis2 = np.array(roll2).reshape(1,o.shape[1])
return o[axis1, axis2]
def roll3d(o, roll1: Union[None, list, int] = None, roll2: Union[None, list, int] = None, roll3: Union[None, list, int] = None):
"""Rolls a 3D object on the indicated axis
This solution is based on https://stackoverflow.com/questions/20360675/roll-rows-of-a-matrix-independently
"""
assert o.ndim == 3, "roll3D can only be applied to 3d objects"
axis1, axis2, axis3 = np.ogrid[:o.shape[0], :o.shape[1], :o.shape[2]]
if roll1 is not None:
if isinstance(roll1, int): axis1 = axis1 - np.array(roll1).reshape(1,1,1)
else: axis1 = np.array(roll1).reshape(o.shape[0],1,1)
if roll2:
if isinstance(roll2, int): axis2 = axis2 - np.array(roll2).reshape(1,1,1)
else: axis2 = np.array(roll2).reshape(1,o.shape[1],1)
if roll3:
if isinstance(roll3, int): axis3 = axis3 - np.array(roll3).reshape(1,1,1)
else: axis3 = np.array(roll3).reshape(1,1,o.shape[2])
return o[axis1, axis2, axis3]
def random_roll2d(o, axis=(), replace=False):
    """Randomly rolls a 2D object along the indicated axes
    This solution is based on https://stackoverflow.com/questions/20360675/roll-rows-of-a-matrix-independently
    """
    assert o.ndim == 2, "random_roll2d can only be applied to 2d objects"
    axis1, axis2 = np.ogrid[:o.shape[0], :o.shape[1]]
    if 0 in axis:
        axis1 = np.random.choice(np.arange(o.shape[0]), o.shape[0], replace).reshape(-1, 1)
    if 1 in axis:
        axis2 = np.random.choice(np.arange(o.shape[1]), o.shape[1], replace).reshape(1, -1)
    return o[axis1, axis2]
def random_roll3d(o, axis=(), replace=False):
"""Randomly rolls a 3D object along the indicated axes
This solution is based on https://stackoverflow.com/questions/20360675/roll-rows-of-a-matrix-independently
"""
assert o.ndim == 3, "random_roll3d can only be applied to 3d objects"
axis1, axis2, axis3 = np.ogrid[:o.shape[0], :o.shape[1], :o.shape[2]]
if 0 in axis:
axis1 = np.random.choice(np.arange(o.shape[0]), o.shape[0], replace).reshape(-1, 1, 1)
if 1 in axis:
axis2 = np.random.choice(np.arange(o.shape[1]), o.shape[1], replace).reshape(1, -1, 1)
if 2 in axis:
axis3 = np.random.choice(np.arange(o.shape[2]), o.shape[2], replace).reshape(1, 1, -1)
return o[axis1, axis2, axis3]
def rotate_axis0(o, steps=1):
return o[np.arange(o.shape[0]) - steps]
def rotate_axis1(o, steps=1):
return o[:, np.arange(o.shape[1]) - steps]
def rotate_axis2(o, steps=1):
return o[:, :, np.arange(o.shape[2]) - steps]
a = np.tile(np.arange(10), 3).reshape(3, 10) * np.array([1, 10, 100]).reshape(-1, 1)
a
roll2d(a, roll1=[2, 1, 0])
roll2d(a, roll2=3)
o = torch.arange(24).reshape(2,3,4)
test_eq(rotate_axis0(o)[1], o[0])
test_eq(rotate_axis1(o)[:,1], o[:,0])
test_eq(rotate_axis2(o)[...,1], o[...,0])
#export
def create_empty_array(shape, fname=None, path='./data', on_disk=True, dtype='float32', mode='r+', **kwargs):
"""
mode:
‘r’: Open existing file for reading only.
‘r+’: Open existing file for reading and writing.
‘w+’: Create or overwrite existing file for reading and writing.
‘c’: Copy-on-write: assignments affect data in memory, but changes are not saved to disk. The file on disk is read-only.
"""
if on_disk:
assert fname is not None, 'you must provide a fname (filename)'
path = Path(path)
if not fname.endswith('npy'): fname = f'{fname}.npy'
filename = path/fname
filename.parent.mkdir(parents=True, exist_ok=True)
# Save a small empty array
_temp_fn = path/'temp_X.npy'
np.save(_temp_fn, np.empty(0))
# Create & save file
arr = np.memmap(_temp_fn, dtype=dtype, mode='w+', shape=shape, **kwargs)
np.save(filename, arr)
del arr
os.remove(_temp_fn)
# Open file in selected mode
arr = np.load(filename, mmap_mode=mode)
else:
arr = np.empty(shape, dtype=dtype, **kwargs)
return arr
fname = 'X_on_disk'
shape = (100, 10, 10)
X = create_empty_array(shape, fname, on_disk=True, mode='r+')
chunksize = 10
pbar = progress_bar(range(math.ceil(len(X) / chunksize)), leave=False)
start = 0
for i in pbar:
end = min(start + chunksize, len(X))
partial_data = np.random.rand(end - start, X.shape[1] , X.shape[2])
X[start:end] = partial_data
start = end
del partial_data
gc.collect()
filename = X.filename
del X
X = np.load(filename, mmap_mode='r+')
test_eq((X == 0).sum(), 0)
test_eq(X.shape, shape)
os.remove(X.filename)
# export
import gzip
def np_save_compressed(arr, fname=None, path='./data', verbose=False, **kwargs):
assert fname is not None, 'you must provide a fname (filename)'
if fname.endswith('npy'): fname = f'{fname}.gz'
elif not fname.endswith('npy.gz'): fname = f'{fname}.npy.gz'
filename = Path(path)/fname
filename.parent.mkdir(parents=True, exist_ok=True)
f = gzip.GzipFile(filename, 'w', **kwargs)
np.save(file=f, arr=arr)
f.close()
pv(f'array saved to {filename}', verbose)
def np_load_compressed(fname=None, path='./data', **kwargs):
assert fname is not None, 'you must provide a fname (filename)'
if fname.endswith('npy'): fname = f'{fname}.gz'
elif not fname.endswith('npy.gz'): fname = f'{fname}.npy.gz'
filename = Path(path)/fname
f = gzip.GzipFile(filename, 'r', **kwargs)
arr = np.load(f)
f.close()
return arr
X1 = np.random.rand(10)
np_save_compressed(X1, 'X_comp', path='./data')
X2 = np_load_compressed('X_comp')
test_eq(X1, X2)
# export
def np2memmap(arr, fname=None, path='./data', dtype='float32', mode='c', **kwargs):
""" Function that turns an ndarray into a memmap ndarray
mode:
‘r’: Open existing file for reading only.
‘r+’: Open existing file for reading and writing.
‘w+’: Create or overwrite existing file for reading and writing.
‘c’: Copy-on-write: assignments affect data in memory, but changes are not saved to disk. The file on disk is read-only.
"""
assert fname is not None, 'you must provide a fname (filename)'
if not fname.endswith('npy'): fname = f'{fname}.npy'
filename = Path(path)/fname
filename.parent.mkdir(parents=True, exist_ok=True)
# Save file
np.save(filename, arr)
# Open file in selected mode
arr = np.load(filename, mmap_mode=mode)
return arr
X1 = np.random.rand(10)
X2 = np2memmap(X1, 'X1_test')
test_eq(X1, X2)
test_ne(type(X1), type(X2))
# export
def torch_mean_groupby(o, idxs):
"""Computes torch mean along axis 0 grouped by the idxs.
Need to ensure that idxs have the same order as o"""
if is_listy(idxs[0]): idxs = flatten_list(idxs)
flattened_idxs = torch.tensor(idxs)
idxs, vals = torch.unique(flattened_idxs, return_counts=True)
vs = torch.split_with_sizes(o, tuple(vals))
return torch.cat([v.mean(0).unsqueeze(0) for k,v in zip(idxs, vs)])
o = torch.arange(6*2*3).reshape(6, 2, 3).float()
idxs = np.array([[0,1,2,3], [2,3]], dtype=object)
output = torch_mean_groupby(o, idxs)
test_eq(o[:2], output[:2])
test_eq(o[2:4].mean(0), output[2])
test_eq(o[4:6].mean(0), output[3])
# export
def torch_flip(t, dims=-1):
if dims == -1: return t[..., np.arange(t.shape[dims])[::-1].copy()]
elif dims == 0: return t[np.arange(t.shape[dims])[::-1].copy()]
elif dims == 1: return t[:, np.arange(t.shape[dims])[::-1].copy()]
elif dims == 2: return t[:, :, np.arange(t.shape[dims])[::-1].copy()]
t = torch.randn(2, 3, 4)
test_eq(torch.flip(t, (2,)), torch_flip(t, dims=-1))
# export
def torch_nan_to_num(o, num=0, inplace=False):
mask = torch.isnan(o)
return torch_masked_to_num(o, mask, num=num, inplace=inplace)
def torch_masked_to_num(o, mask, num=0, inplace=False):
if inplace:
o[:] = o.masked_fill(mask, num)
else:
return o.masked_fill(mask, num)
x = torch.rand(2, 4, 6)
x[:, :3][x[:, :3] < .5] = np.nan
nan_values = torch.isnan(x).sum()
y = torch_nan_to_num(x[:, :3], inplace=False)
test_eq(torch.isnan(y).sum(), 0)
test_eq(torch.isnan(x).sum(), nan_values)
torch_nan_to_num(x[:, :3], inplace=True)
test_eq(torch.isnan(x).sum(), 0)
x = torch.rand(2, 4, 6)
mask = x[:, :3] > .5
x[:, :3] = torch_masked_to_num(x[:, :3], mask, num=0, inplace=False)
test_eq(x[:, :3][mask].sum(), 0)
x = torch.rand(2, 4, 6)
mask = x[:, :3] > .5
torch_masked_to_num(x[:, :3], mask, num=0, inplace=True)
test_eq(x[:, :3][mask].sum(), 0)
# export
def mpl_trend(x, y, deg=1):
return np.poly1d(np.polyfit(x, y, deg))(x)
x = np.sort(np.random.randint(0, 100, 100)/10)
y = np.random.rand(100) + np.linspace(0, 10, 100)
trend = mpl_trend(x, y)
plt.scatter(x, y)
plt.plot(x, trend, 'r')
plt.show()
# export
def int2digits(o, n_digits=None, normalize=True):
if n_digits is not None:
iterable = '0' * (n_digits - len(str(abs(o)))) + str(abs(o))
else:
iterable = str(abs(o))
sign = np.sign(o)
digits = np.array([sign * int(d) for d in iterable])
if normalize:
digits = digits / 10
return digits
def array2digits(o, n_digits=None, normalize=True):
    # avoid double-normalizing: int2digits is applied without normalization here
    output = np.array(list(map(partial(int2digits, n_digits=n_digits, normalize=False), o)))
    if normalize:
        output = output / 10
    return output
o = -9645
test_eq(int2digits(o, 6), np.array([ 0, 0, -.9, -.6, -.4, -.5]))
a = np.random.randint(-1000, 1000, 10)
test_eq(array2digits(a,5).shape, (10,5))
# export
def sincos_encoding(seq_len, device=None, to_np=False):
if to_np:
sin = np.sin(np.arange(seq_len) / seq_len * 2 * np.pi)
cos = np.cos(np.arange(seq_len) / seq_len * 2 * np.pi)
else:
        if device is None: device = default_device()
sin = torch.sin(torch.arange(seq_len, device=device) / seq_len * 2 * np.pi)
cos = torch.cos(torch.arange(seq_len, device=device) / seq_len * 2 * np.pi)
return sin, cos
sin, cos = sincos_encoding(100)
plt.plot(sin.cpu().numpy())
plt.plot(cos.cpu().numpy())
plt.show()
# export
def linear_encoding(seq_len, device=None, to_np=False, lin_range=(-1,1)):
if to_np:
enc = np.linspace(lin_range[0], lin_range[1], seq_len)
else:
        if device is None: device = default_device()
enc = torch.linspace(lin_range[0], lin_range[1], seq_len, device=device)
return enc
lin = linear_encoding(100)
plt.plot(lin.cpu().numpy())
plt.show()
# export
def encode_positions(pos_arr, min_val=None, max_val=None, linear=False, lin_range=(-1,1)):
""" Encodes an array with positions using a linear or sincos methods
"""
if min_val is None:
min_val = np.nanmin(pos_arr)
if max_val is None:
max_val = np.nanmax(pos_arr)
if linear:
return (((pos_arr - min_val)/(max_val - min_val)) * (lin_range[1] - lin_range[0]) + lin_range[0])
else:
sin = np.sin((pos_arr - min_val)/(max_val - min_val) * 2 * np.pi)
cos = np.cos((pos_arr - min_val)/(max_val - min_val) * 2 * np.pi)
return sin, cos
n_samples = 10
length = 500
_a = []
for i in range(n_samples):
a = np.arange(-4000, 4000, 10)
mask = np.random.rand(len(a)) > .5
a = a[mask]
a = np.concatenate([a, np.array([np.nan] * (length - len(a)))])
_a.append(a.reshape(-1,1))
a = np.concatenate(_a, -1).transpose(1,0)
sin, cos = encode_positions(a, linear=False)
test_eq(a.shape, (n_samples, length))
test_eq(sin.shape, (n_samples, length))
test_eq(cos.shape, (n_samples, length))
plt.plot(sin.T)
plt.plot(cos.T)
plt.xlim(0, 500)
plt.show()
n_samples = 10
length = 500
_a = []
for i in range(n_samples):
a = np.arange(-4000, 4000, 10)
mask = np.random.rand(len(a)) > .5
a = a[mask]
a = np.concatenate([a, np.array([np.nan] * (length - len(a)))])
_a.append(a.reshape(-1,1))
a = np.concatenate(_a, -1).transpose(1,0)
lin = encode_positions(a, linear=True)
test_eq(a.shape, (n_samples, length))
test_eq(lin.shape, (n_samples, length))
plt.plot(lin.T)
plt.xlim(0, 500)
plt.show()
# export
def sort_generator(generator, bs):
g = list(generator)
for i in range(len(g)//bs + 1): g[bs*i:bs*(i+1)] = np.sort(g[bs*i:bs*(i+1)])
return (i for i in g)
generator = (i for i in np.random.permutation(np.arange(1000000)).tolist())
l = list(sort_generator(generator, 512))
test_eq(l[:512], sorted(l[:512]))
#hide
out = create_scripts(); beep(out)
```
|
github_jupyter
|
# default_exp utils
#export
from tsai.imports import *
from fastcore.test import *
#hide
import tsai
a = !python -V
p = a[0].split(' ')
print(f'python : {p[1]}')
print('tsai :', tsai.__version__)
print('fastai :', fastai.__version__)
print('fastcore :', fastcore.__version__)
print('torch :', torch.__version__)
print('scipy :', sp.__version__)
print('numpy :', np.__version__)
print('pandas :', pd.__version__)
print('matplotlib :', matplotlib.__version__)
#export
import inspect
import sklearn
# ensure these folders exist for testing purposes
fns = ['data', 'export', 'models']
for fn in fns:
path = Path('.')/fn
if not os.path.exists(path): os.makedirs(path)
#export
def totensor(o):
if isinstance(o, torch.Tensor): return o
elif isinstance(o, np.ndarray): return torch.from_numpy(o)
else:
try: return torch.tensor(o)
except: warnings.warn(f"Can't convert {type(o)} to torch.Tensor", Warning)
def toarray(o):
if isinstance(o, np.ndarray): return o
elif isinstance(o, torch.Tensor): return o.cpu().numpy()
else:
try: return np.asarray(o)
except: warnings.warn(f"Can't convert {type(o)} to np.array", Warning)
def toL(o):
if isinstance(o, L): return o
elif isinstance(o, (np.ndarray, torch.Tensor)): return L(o.tolist())
else:
try: return L(o)
except: warnings.warn(f'passed object needs to be of type L, list, np.ndarray or torch.Tensor but is {type(o)}', Warning)
def to3dtensor(o):
o = totensor(o)
if o.ndim == 3: return o
elif o.ndim == 1: return o[None, None]
elif o.ndim == 2: return o[:, None]
assert False, f'Please, review input dimensions {o.ndim}'
def to2dtensor(o):
o = totensor(o)
if o.ndim == 2: return o
elif o.ndim == 1: return o[None]
elif o.ndim == 3: return o[0]
assert False, f'Please, review input dimensions {o.ndim}'
def to1dtensor(o):
o = totensor(o)
if o.ndim == 1: return o
elif o.ndim == 3: return o[0,0]
if o.ndim == 2: return o[0]
assert False, f'Please, review input dimensions {o.ndim}'
def to3darray(o):
o = toarray(o)
if o.ndim == 3: return o
elif o.ndim == 1: return o[None, None]
elif o.ndim == 2: return o[:, None]
assert False, f'Please, review input dimensions {o.ndim}'
def to2darray(o):
o = toarray(o)
if o.ndim == 2: return o
elif o.ndim == 1: return o[None]
elif o.ndim == 3: return o[0]
assert False, f'Please, review input dimensions {o.ndim}'
def to1darray(o):
o = toarray(o)
if o.ndim == 1: return o
elif o.ndim == 3: o = o[0,0]
elif o.ndim == 2: o = o[0]
assert False, f'Please, review input dimensions {o.ndim}'
def to3d(o):
if o.ndim == 3: return o
if isinstance(o, np.ndarray): return to3darray(o)
if isinstance(o, torch.Tensor): return to3dtensor(o)
def to2d(o):
if o.ndim == 2: return o
if isinstance(o, np.ndarray): return to2darray(o)
if isinstance(o, torch.Tensor): return to2dtensor(o)
def to1d(o):
if o.ndim == 1: return o
if isinstance(o, np.ndarray): return to1darray(o)
if isinstance(o, torch.Tensor): return to1dtensor(o)
def to2dPlus(o):
if o.ndim >= 2: return o
if isinstance(o, np.ndarray): return to2darray(o)
elif isinstance(o, torch.Tensor): return to2dtensor(o)
def to3dPlus(o):
if o.ndim >= 3: return o
if isinstance(o, np.ndarray): return to3darray(o)
elif isinstance(o, torch.Tensor): return to3dtensor(o)
def to2dPlusTensor(o):
return to2dPlus(totensor(o))
def to2dPlusArray(o):
return to2dPlus(toarray(o))
def to3dPlusTensor(o):
return to3dPlus(totensor(o))
def to3dPlusArray(o):
return to3dPlus(toarray(o))
def todtype(dtype):
def _to_type(o, dtype=dtype):
if o.dtype == dtype: return o
elif isinstance(o, torch.Tensor): o = o.to(dtype=dtype)
elif isinstance(o, np.ndarray): o = o.astype(dtype)
return o
return _to_type
a = np.random.rand(100).astype(np.float32)
b = torch.from_numpy(a).float()
test_eq(totensor(a), b)
test_eq(a, toarray(b))
test_eq(to3dtensor(a).ndim, 3)
test_eq(to2dtensor(a).ndim, 2)
test_eq(to1dtensor(a).ndim, 1)
test_eq(to3darray(b).ndim, 3)
test_eq(to2darray(b).ndim, 2)
test_eq(to1darray(b).ndim, 1)
#export
def bytes2size(size_bytes):
if size_bytes == 0: return "0B"
size_name = ("B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB")
i = int(math.floor(math.log(size_bytes, 1024)))
p = math.pow(1024, i)
s = round(size_bytes / p, 2)
return "%s %s" % (s, size_name[i])
def bytes2GB(byts):
return round(byts / math.pow(1024, 3), 2)
def get_size(o, return_str=False):
s = sys.getsizeof(o)
if return_str: return bytes2size(s)
else: return s
a = np.random.rand(10, 5, 3)
test_eq(get_size(a, True), '1.3 KB')
#export
def delete_all_in_dir(tgt_dir, exception=None):
if exception is not None and len(L(exception)) > 1: exception = tuple(exception)
for file in os.listdir(tgt_dir):
if exception is not None and file.endswith(exception): continue
file_path = os.path.join(tgt_dir, file)
if os.path.isfile(file_path) or os.path.islink(file_path): os.unlink(file_path)
elif os.path.isdir(file_path): shutil.rmtree(file_path)
#export
def reverse_dict(dictionary):
return {v: k for k, v in dictionary.items()}
#export
def is_tuple(o): return isinstance(o, tuple)
#export
def itemify(*o, tup_id=None):
o = [o_ for o_ in L(*o) if o_ is not None]
items = L(o).zip()
if tup_id is not None: return L([item[tup_id] for item in items])
else: return items
a = [1, 2, 3]
b = [4, 5, 6]
print(itemify(a, b))
test_eq(len(itemify(a, b)), len(a))
a = [1, 2, 3]
b = None
print(itemify(a, b))
test_eq(len(itemify(a, b)), len(a))
a = [1, 2, 3]
b = [4, 5, 6]
c = None
print(itemify(a, b, c))
test_eq(len(itemify(a, b, c)), len(a))
#export
def isnone(o):
return o is None
def exists(o): return o is not None
def ifelse(a, b, c):
"`b` if `a` is True else `c`"
return b if a else c
a = np.array(3)
test_eq(isnone(a), False)
test_eq(exists(a), True)
b = None
test_eq(isnone(b), True)
test_eq(exists(b), False)
#export
def is_not_close(a, b, eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b, '__array__'):
return (abs(a - b) > eps).all()
if isinstance(a, (Iterable, Generator)) or isinstance(b, (Iterable, Generator)):
return is_not_close(np.array(a), np.array(b), eps=eps)
return abs(a - b) > eps
def test_not_close(a, b, eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a, b, partial(is_not_close, eps=eps), 'not_close')
def test_type(a, b):
return test_eq(type(a), type(b))
def test_ok(f, *args, **kwargs):
try:
f(*args, **kwargs)
e = 0
except:
e = 1
pass
test_eq(e, 0)
def test_not_ok(f, *args, **kwargs):
try:
f(*args, **kwargs)
e = 0
except:
e = 1
pass
test_eq(e, 1)
def test_error(error, f, *args, **kwargs):
try: f(*args, **kwargs)
except Exception as e:
test_eq(str(e), error)
def test_eq_nan(a,b):
"`test` that `a==b` excluding nan values (valid for torch.Tensor and np.ndarray)"
mask_a = torch.isnan(a) if isinstance(a, torch.Tensor) else np.isnan(a)
mask_b = torch.isnan(b) if isinstance(b, torch.Tensor) else np.isnan(b)
test(a[~mask_a],b[~mask_b],equals, '==')
#export
def assert_fn(*args, **kwargs): assert False, 'assertion test'
test_error('assertion test', assert_fn, 35, a=3)
#export
def test_gt(a,b):
"`test` that `a>b`"
test(a,b,gt,'>')
def test_ge(a,b):
"`test` that `a>=b`"
test(a,b,ge,'>')
def test_lt(a,b):
"`test` that `a>b`"
test(a,b,lt,'<')
def test_le(a,b):
"`test` that `a>b`"
test(a,b,le,'<=')
test_ok(test_gt, 5, 4)
test_not_ok(test_gt, 4, 4)
test_ok(test_ge, 4, 4)
test_not_ok(test_ge, 3, 4)
test_ok(test_lt, 3, 4)
test_not_ok(test_lt, 4, 4)
test_ok(test_le, 4, 4)
test_not_ok(test_le, 5, 4)
t = torch.rand(100)
t[t<.5] = np.nan
test_ne(t, t)
test_eq_nan(t, t)
#export
def stack(o, axis=0, retain=True):
if hasattr(o, '__array__'): return o
if isinstance(o[0], torch.Tensor):
return retain_type(torch.stack(tuple(o), dim=axis), o[0]) if retain else torch.stack(tuple(o), dim=axis)
else:
return retain_type(np.stack(o, axis), o[0]) if retain else np.stack(o, axis)
def stack_pad(o, padding_value=np.nan):
'Converts a an iterable into a numpy array using padding if necessary'
row_length = len(max(o, key=len))
result = np.full((len(o), row_length), padding_value)
for i,row in enumerate(o): result[i, :len(row)] = row
return result
a = [[0,1,2], [4,5,6,7]]
test_eq(stack_pad(a).shape, (2, 4))
test_eq(type(stack_pad(a)), np.ndarray)
test_eq(np.isnan(stack_pad(a)).sum(), 1)
a = np.random.rand(2, 3, 4)
t = torch.from_numpy(a)
test_eq_type(stack(itemify(a, tup_id=0)), a)
test_eq_type(stack(itemify(t, tup_id=0)), t)
#export
def match_seq_len(*arrays):
max_len = stack([x.shape[-1] for x in arrays]).max()
return [np.pad(x, pad_width=((0,0), (0,0), (max_len - x.shape[-1], 0)), mode='constant', constant_values=0) for x in arrays]
a = np.random.rand(10, 5, 8)
b = np.random.rand(3, 5, 10)
c, d = match_seq_len(a, b)
test_eq(c.shape[-1], d.shape[-1])
#export
def random_shuffle(o, random_state=None):
res = sklearn.utils.shuffle(o, random_state=random_state)
if isinstance(o, L): return L(list(res))
return res
a = np.arange(10)
test_eq_type(random_shuffle(a, 1), np.array([2, 9, 6, 4, 0, 3, 1, 7, 8, 5]))
t = torch.arange(10)
test_eq_type(random_shuffle(t, 1), tensor([2, 9, 6, 4, 0, 3, 1, 7, 8, 5]))
l = list(a)
test_eq(random_shuffle(l, 1), [2, 9, 6, 4, 0, 3, 1, 7, 8, 5])
l2 = L(l)
test_eq_type(random_shuffle(l2, 1), L([2, 9, 6, 4, 0, 3, 1, 7, 8, 5]))
#export
def cat2int(o):
cat = Categorize()
cat.setup(o)
return stack(TfmdLists(o, cat)[:])
a = np.array(['b', 'a', 'a', 'b', 'a', 'b', 'a'])
test_eq_type(cat2int(a), TensorCategory([1, 0, 0, 1, 0, 1, 0]))
TensorBase([1,2,3])
#export
def cycle_dl(dl):
for _ in dl: _
def cycle_dl_to_device(dl):
for bs in dl: [b.to(default_device()) for b in bs]
#export
def cache_data(o, slice_len=10_000, verbose=False):
start = 0
n_loops = (len(o) - 1) // slice_len + 1
pv(f'{n_loops} loops', verbose)
timer.start(False)
for i in range(n_loops):
o[slice(start,start + slice_len)]
if verbose and (i+1) % 10 == 0: print(f'{i+1:4} elapsed time: {timer.elapsed()}')
start += slice_len
pv(f'{i+1:4} total time : {timer.stop()}\n', verbose)
memmap2cache = cache_data
cache_memmap = cache_data
#export
def get_func_defaults(f):
fa = inspect.getfullargspec(f)
if fa.defaults is None: return dict(zip(fa.args, [''] * (len(fa.args))))
else: return dict(zip(fa.args, [''] * (len(fa.args) - len(fa.defaults)) + list(fa.defaults)))
#export
def get_idx_from_df_col_vals(df, col, val_list):
return [df[df[col] == val].index[0] for val in val_list]
#export
def get_sublist_idxs(aList, bList):
"Get idxs that when applied to aList will return bList. aList must contain all values in bList"
sorted_aList = aList[np.argsort(aList)]
return np.argsort(aList)[np.searchsorted(sorted_aList, bList)]
x = np.array([3, 5, 7, 1, 9, 8, 6, 2])
y = np.array([6, 1, 5, 7])
idx = get_sublist_idxs(x, y)
test_eq(x[idx], y)
x = np.array([3, 5, 7, 1, 9, 8, 6, 6, 2])
y = np.array([6, 1, 5, 7, 5])
idx = get_sublist_idxs(x, y)
test_eq(x[idx], y)
#export
def flatten_list(l):
return [item for sublist in l for item in sublist]
#export
def display_pd_df(df, max_rows:Union[bool, int]=False, max_columns:Union[bool, int]=False):
if max_rows:
old_max_rows = pd.get_option('display.max_rows')
if max_rows is not True and isinstance(max_rows, Integral): pd.set_option('display.max_rows', max_rows)
else: pd.set_option('display.max_rows', df.shape[0])
if max_columns:
old_max_columns = pd.get_option('display.max_columns')
if max_columns is not True and isinstance(max_columns, Integral): pd.set_option('display.max_columns', max_columns)
else: pd.set_option('display.max_columns', df.shape[1])
display(df)
if max_rows: pd.set_option('display.max_rows', old_max_rows)
if max_columns: pd.set_option('display.max_columns', old_max_columns)
old_max_rows, old_max_columns = pd.get_option('display.max_rows'), pd.get_option('display.max_columns')
df = pd.DataFrame(np.random.rand(70, 25))
display_pd_df(df, max_rows=2, max_columns=3)
test_eq(old_max_rows, pd.get_option('display.max_rows'))
test_eq(old_max_columns, pd.get_option('display.max_columns'))
#export
def ttest(data1, data2, equal_var=False):
"Calculates t-statistic and p-value based on 2 sample distributions"
t_stat, p_value = scipy.stats.ttest_ind(data1, data2, equal_var=equal_var)
return t_stat, np.sign(t_stat) * p_value
def tscore(o):
if o.std() == 0: return 0
else: return np.sqrt(len(o)) * o.mean() / o.std()
a = np.random.normal(0.5, 1, 100)
b = np.random.normal(0.15, .5, 50)
plt.hist(a, 50)
plt.hist(b, 50)
plt.show()
ttest(a,b)
a = np.random.normal(0.5, 1, 100)
t = torch.normal(0.5, 1, (100, ))
tscore(a), tscore(t)
#export
def ttest_tensor(a, b):
"differentiable pytorch function equivalent to scipy.stats.ttest_ind with equal_var=False"
# calculate standard errors
se1, se2 = torch.std(a)/np.sqrt(len(a)), torch.std(b)/np.sqrt(len(b))
# standard error on the difference between the samples
sed = torch.sqrt(se1**2.0 + se2**2.0)
# calculate the t statistic
t_stat = (torch.mean(a) - torch.mean(b)) / sed
return t_stat
a = torch.rand(100).requires_grad_(True) + .1
b = torch.rand(100).requires_grad_(True)
ttest_tensor(a, b)
#export
from scipy.stats import pearsonr, spearmanr
def pcc(a, b):
return pearsonr(a, b)[0]
def scc(a, b):
return spearmanr(a, b)[0]
a = np.random.normal(0.5, 1, 100)
b = np.random.normal(0.15, .5, 100)
pcc(a, b), scc(a, b)
#export
def remove_fn(fn, verbose=False):
"Removes a file (fn) if exists"
try:
os.remove(fn)
pv(f'{fn} file removed', verbose)
except OSError:
pv(f'{fn} does not exist', verbose)
pass
#export
def npsave(array_fn, array, verbose=True):
remove_fn(array_fn, verbose)
pv(f'saving {array_fn}...', verbose)
np.save(array_fn, array)
pv(f'...{array_fn} saved', verbose)
np_save = npsave
fn = 'data/remove_fn_test.npy'
a = np.zeros(1)
npsave(fn, a)
del a
np.load(fn, mmap_mode='r+')
remove_fn(fn, True)
remove_fn(fn, True)
#export
def permute_2D(array, axis=None):
"Permute rows or columns in an array. This can be used, for example, in feature permutation"
if axis == 0: return array[np.random.randn(*array.shape).argsort(axis=0), np.arange(array.shape[-1])[None, :]]
elif axis == 1 or axis == -1: return array[np.arange(len(array))[:,None], np.random.randn(*array.shape).argsort(axis=1)]
return array[np.random.randn(*array.shape).argsort(axis=0), np.random.randn(*array.shape).argsort(axis=1)]
s = np.arange(100 * 50).reshape(100, 50)
test_eq(permute_2D(s, axis=0).mean(0), s.mean(0))
test_ne(permute_2D(s, axis=0), s)
test_eq(permute_2D(s, axis=1).mean(1), s.mean(1))
test_ne(permute_2D(s, axis=1), s)
test_ne(permute_2D(s), s)
#export
def random_normal():
"Returns a number between -1 and 1 with a normal distribution"
while True:
o = np.random.normal(loc=0., scale=1/3)
if abs(o) <= 1: break
return o
def random_half_normal():
"Returns a number between 0 and 1 with a half-normal distribution"
while True:
o = abs(np.random.normal(loc=0., scale=1/3))
if o <= 1: break
return o
def random_normal_tensor(shape=1, device=None):
"Returns a tensor of a predefined shape between -1 and 1 with a normal distribution"
return torch.empty(shape, device=device).normal_(mean=0, std=1/3).clamp_(-1, 1)
def random_half_normal_tensor(shape=1, device=None):
"Returns a tensor of a predefined shape between 0 and 1 with a half-normal distribution"
return abs(torch.empty(shape, device=device).normal_(mean=0, std=1/3)).clamp_(0, 1)
#export
from matplotlib.backends.backend_agg import FigureCanvasAgg
def default_dpi():
DPI = plt.gcf().get_dpi()
plt.close()
return int(DPI)
def get_plot_fig(size=None, dpi=default_dpi()):
fig = plt.figure(figsize=(size / dpi, size / dpi), dpi=dpi, frameon=False) if size else plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
config = plt.gcf()
plt.close('all')
return config
def fig2buf(fig):
canvas = FigureCanvasAgg(fig)
fig.canvas.draw()
return np.asarray(canvas.buffer_rgba())[..., :3]
default_dpi()
#export
def plot_scatter(x, y, deg=1):
linreg = sp.stats.linregress(x, y)
plt.scatter(x, y, label=f'R2:{linreg.rvalue:.2f}', color='lime', edgecolor='black', alpha=.5)
plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, deg))(np.unique(x)), color='r')
plt.legend(loc='best')
plt.show()
a = np.random.rand(100)
b = np.random.rand(100)**2
plot_scatter(a, b)
#export
def get_idxs(o, aList): return array([o.tolist().index(v) for v in aList])
a = random_shuffle(np.arange(100, 200))
b = np.random.choice(a, 10, False)
idxs = get_idxs(a, b)
test_eq(a[idxs], b)
# export
def apply_cmap(o, cmap):
o = toarray(o)
out = plt.get_cmap(cmap)(o)[..., :3]
out = tensor(out).squeeze(1)
return out.permute(0, 3, 1, 2)
a = np.random.rand(16, 1, 40, 50)
s = L(a.shape)
s[1] = 3
test_eq(L(apply_cmap(a, 'viridis').shape), s)
s[0] = 1
a = np.random.rand(1, 40, 50)
test_eq(L(apply_cmap(a, 'viridis').shape), s)
#export
def torch_tile(a, n_tile, dim=0):
init_dim = a.size(dim)
repeat_idx = [1] * a.dim()
repeat_idx[dim] = n_tile
a = a.repeat(*(repeat_idx))
order_index = torch.cat([init_dim * torch.arange(n_tile) + i for i in range(init_dim)]).to(device=a.device)
return torch.index_select(a, dim, order_index)
test_eq(torch_tile(torch.arange(2), 3), tensor([0, 0, 0, 1, 1, 1]))
#export
def to_tsfresh_df(ts):
r"""Prepares a time series (Tensor/ np.ndarray) to be used as a tsfresh dataset to allow feature extraction"""
ts = to3d(ts)
if isinstance(ts, np.ndarray):
ids = np.repeat(np.arange(len(ts)), ts.shape[-1]).reshape(-1,1)
joint_ts = ts.transpose(0,2,1).reshape(-1, ts.shape[1])
cols = ['id'] + np.arange(ts.shape[1]).tolist()
df = pd.DataFrame(np.concatenate([ids, joint_ts], axis=1), columns=cols)
elif isinstance(ts, torch.Tensor):
ids = torch_tile(torch.arange(len(ts)), ts.shape[-1]).reshape(-1,1)
joint_ts = ts.transpose(1,2).reshape(-1, ts.shape[1])
cols = ['id']+np.arange(ts.shape[1]).tolist()
df = pd.DataFrame(torch.cat([ids, joint_ts], dim=1).numpy(), columns=cols)
df['id'] = df['id'].astype(int)
df.reset_index(drop=True, inplace=True)
return df
ts = torch.rand(16, 3, 20)
a = to_tsfresh_df(ts)
ts = ts.numpy()
b = to_tsfresh_df(ts)
#export
from scipy.stats import skew, kurtosis
def pcorr(a, b):
return scipy.stats.pearsonr(a, b)
def scorr(a, b):
corr = scipy.stats.spearmanr(a, b)
return corr[0], corr[1]
#export
def torch_diff(t, lag=1, pad=True):
import torch.nn.functional as F
diff = t[..., lag:] - t[..., :-lag]
if pad: return F.pad(diff, (lag,0))
else: return diff
t = torch.arange(24).reshape(2,3,4)
test_eq(torch_diff(t, 1)[..., 1:].float().mean(), 1.)
test_eq(torch_diff(t, 2)[..., 2:].float().mean(), 2.)
#export
def get_outliers_IQR(o, axis=None):
tt = False
if isinstance(o, torch.Tensor):
tt = True
device = o.device
tdtype = o.dtype
o = o.detach().cpu().numpy()
Q1 = np.nanpercentile(o, 25, axis=axis, keepdims=axis is not None)
Q3 = np.nanpercentile(o, 75, axis=axis, keepdims=axis is not None)
IQR = Q3 - Q1
if tt:
Q1 = torch.tensor(Q1, dtype=tdtype, device=device)
Q3 = torch.tensor(Q3, dtype=tdtype, device=device)
IQR = torch.tensor(IQR, dtype=tdtype, device=device)
return Q1 - 1.5 * IQR, Q3 + 1.5 * IQR
def clip_outliers(o, axis=None):
min_outliers, max_outliers = get_outliers_IQR(o, axis=axis)
if isinstance(o, (np.ndarray, pd.core.series.Series)):
return np.clip(o, min_outliers, max_outliers)
elif isinstance(o, torch.Tensor):
return torch.clamp(o, min_outliers, max_outliers)
def get_percentile(o, percentile, axis=None):
if isinstance(o, torch.Tensor): o = o.detach().cpu().numpy()
return np.nanpercentile(o, percentile, axis=axis, keepdims=axis is not None)
def torch_clamp(o, min=None, max=None):
r"""Clamp torch.Tensor using 1 or multiple dimensions"""
if min is not None: o = torch.max(o, min)
if max is not None: o = torch.min(o, max)
return o
t = torch.randn(2,3,100)
test_eq(type(get_outliers_IQR(t, -1)[0]), torch.Tensor)
a = np.random.randn(2,3,100)
test_eq(type(get_outliers_IQR(a, -1)[0]), np.ndarray)
#export
def torch_slice_by_dim(t, index, dim=-1, **kwargs):
if not isinstance(index, torch.Tensor): index = torch.Tensor(index)
assert t.ndim == index.ndim, "t and index must have the same ndim"
index = index.long()
return torch.gather(t, dim, index, **kwargs)
t = torch.rand(5, 3)
index = torch.randint(0, 3, (5, 1))
# index = [[0, 2], [0, 1], [1, 2], [0, 2], [0, 1]]
torch_slice_by_dim(t, index)
#export
def torch_nanmean(o, dim=None, keepdim=False):
"""There's currently no torch.nanmean function"""
mask = torch.isnan(o)
if mask.any():
output = torch.from_numpy(np.asarray(np.nanmean(o.cpu().numpy(), axis=dim, keepdims=keepdim))).to(o.device)
if output.shape == mask.shape:
output[mask] = 0
return output
else:
return torch.mean(o, dim=dim, keepdim=keepdim) if dim is not None else torch.mean(o)
def torch_nanstd(o, dim=None, keepdim=False):
"""There's currently no torch.nanstd function"""
mask = torch.isnan(o)
if mask.any():
output = torch.from_numpy(np.asarray(np.nanstd(o.cpu().numpy(), axis=dim, keepdims=keepdim))).to(o.device)
if output.shape == mask.shape:
output[mask] = 1
return output
else:
return torch.std(o, dim=dim, keepdim=keepdim) if dim is not None else torch.std(o)
t = torch.rand(1000)
t[:100] = float('nan')
assert torch_nanmean(t).item() > 0
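# Illustrative check (not in the original): torch_nanstd skips the NaNs as well
assert torch_nanstd(t).item() > 0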
#export
def concat(*ls, dim=0):
"Concatenate tensors, arrays, lists, or tuples by a dimension"
if not len(ls): return []
it = ls[0]
if isinstance(it, torch.Tensor): return torch.cat(ls, dim=dim)
elif isinstance(it, np.ndarray): return np.concatenate(ls, axis=dim)
else:
res = np.concatenate(ls, axis=dim).tolist()
return retain_type(res, typ=type(it))
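# Quick illustrative checks (not in the original): concat dispatches on the
# type of the first argument
test_eq(concat(torch.zeros(2, 3), torch.ones(2, 3)).shape, (4, 3))
test_eq(concat(np.zeros((2, 3)), np.ones((2, 3))).shape, (4, 3))
test_eq(concat([0, 1], [2, 3]), [0, 1, 2, 3])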
#export
def reduce_memory_usage(df):
start_memory = df.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
for col in df.columns:
col_type = df[col].dtype
if col_type != 'object':
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
pass
else:
df[col] = df[col].astype('category')
end_memory = df.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return df
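# Illustrative usage (not in the original): integer and float columns are
# downcast to the smallest dtype that can hold their range
df = pd.DataFrame({'a': np.random.randint(0, 100, 1000), 'b': np.random.rand(1000)})
df = reduce_memory_usage(df)
test_eq(df['a'].dtype, np.int8)
test_eq(df['b'].dtype, np.float16)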
# export
def cls_name(o): return o.__class__.__name__
test_eq(cls_name(timer), 'Timer')
#export
def roll2d(o, roll1: Union[None, list, int] = None, roll2: Union[None, list, int] = None):
"""Rolls a 2D object on the indicated axis
This solution is based on https://stackoverflow.com/questions/20360675/roll-rows-of-a-matrix-independently
"""
assert o.ndim == 2, "roll2D can only be applied to 2d objects"
axis1, axis2 = np.ogrid[:o.shape[0], :o.shape[1]]
if roll1 is not None:
if isinstance(roll1, int): axis1 = axis1 - np.array(roll1).reshape(1,1)
else: axis1 = np.array(roll1).reshape(o.shape[0],1)
if roll2:
if isinstance(roll2, int): axis2 = axis2 - np.array(roll2).reshape(1,1)
else: axis2 = np.array(roll2).reshape(1,o.shape[1])
return o[axis1, axis2]
def roll3d(o, roll1: Union[None, list, int] = None, roll2: Union[None, list, int] = None, roll3: Union[None, list, int] = None):
"""Rolls a 3D object on the indicated axis
This solution is based on https://stackoverflow.com/questions/20360675/roll-rows-of-a-matrix-independently
"""
assert o.ndim == 3, "roll3D can only be applied to 3d objects"
axis1, axis2, axis3 = np.ogrid[:o.shape[0], :o.shape[1], :o.shape[2]]
if roll1 is not None:
if isinstance(roll1, int): axis1 = axis1 - np.array(roll1).reshape(1,1,1)
else: axis1 = np.array(roll1).reshape(o.shape[0],1,1)
if roll2:
if isinstance(roll2, int): axis2 = axis2 - np.array(roll2).reshape(1,1,1)
else: axis2 = np.array(roll2).reshape(1,o.shape[1],1)
if roll3:
if isinstance(roll3, int): axis3 = axis3 - np.array(roll3).reshape(1,1,1)
else: axis3 = np.array(roll3).reshape(1,1,o.shape[2])
return o[axis1, axis2, axis3]
def random_roll2d(o, axis=(), replace=False):
    """Randomly rolls a 2D object along the indicated axes
    This solution is based on https://stackoverflow.com/questions/20360675/roll-rows-of-a-matrix-independently
    """
    assert o.ndim == 2, "random_roll2d can only be applied to 2d objects"
    axis1, axis2 = np.ogrid[:o.shape[0], :o.shape[1]]
    if 0 in axis:
        axis1 = np.random.choice(np.arange(o.shape[0]), o.shape[0], replace).reshape(-1, 1)
    if 1 in axis:
        axis2 = np.random.choice(np.arange(o.shape[1]), o.shape[1], replace).reshape(1, -1)
    return o[axis1, axis2]
def random_roll3d(o, axis=(), replace=False):
"""Randomly rolls a 3D object along the indicated axes
This solution is based on https://stackoverflow.com/questions/20360675/roll-rows-of-a-matrix-independently
"""
assert o.ndim == 3, "random_roll3d can only be applied to 3d objects"
axis1, axis2, axis3 = np.ogrid[:o.shape[0], :o.shape[1], :o.shape[2]]
if 0 in axis:
axis1 = np.random.choice(np.arange(o.shape[0]), o.shape[0], replace).reshape(-1, 1, 1)
if 1 in axis:
axis2 = np.random.choice(np.arange(o.shape[1]), o.shape[1], replace).reshape(1, -1, 1)
if 2 in axis:
axis3 = np.random.choice(np.arange(o.shape[2]), o.shape[2], replace).reshape(1, 1, -1)
return o[axis1, axis2, axis3]
def rotate_axis0(o, steps=1):
return o[np.arange(o.shape[0]) - steps]
def rotate_axis1(o, steps=1):
return o[:, np.arange(o.shape[1]) - steps]
def rotate_axis2(o, steps=1):
return o[:, :, np.arange(o.shape[2]) - steps]
a = np.tile(np.arange(10), 3).reshape(3, 10) * np.array([1, 10, 100]).reshape(-1, 1)
a
roll2d(a, roll1=[2, 1, 0])
roll2d(a, roll2=3)
o = torch.arange(24).reshape(2,3,4)
test_eq(rotate_axis0(o)[1], o[0])
test_eq(rotate_axis1(o)[:,1], o[:,0])
test_eq(rotate_axis2(o)[...,1], o[...,0])
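# Illustrative check (not in the original): random_roll3d reorders positions
# along the chosen axes while preserving the overall shape
t3 = np.arange(24).reshape(2, 3, 4)
test_eq(random_roll3d(t3, axis=(2,)).shape, t3.shape)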
#export
def create_empty_array(shape, fname=None, path='./data', on_disk=True, dtype='float32', mode='r+', **kwargs):
"""
mode:
‘r’: Open existing file for reading only.
‘r+’: Open existing file for reading and writing.
‘w+’: Create or overwrite existing file for reading and writing.
‘c’: Copy-on-write: assignments affect data in memory, but changes are not saved to disk. The file on disk is read-only.
"""
if on_disk:
assert fname is not None, 'you must provide a fname (filename)'
path = Path(path)
if not fname.endswith('npy'): fname = f'{fname}.npy'
filename = path/fname
filename.parent.mkdir(parents=True, exist_ok=True)
# Save a small empty array
_temp_fn = path/'temp_X.npy'
np.save(_temp_fn, np.empty(0))
# Create & save file
arr = np.memmap(_temp_fn, dtype=dtype, mode='w+', shape=shape, **kwargs)
np.save(filename, arr)
del arr
os.remove(_temp_fn)
# Open file in selected mode
arr = np.load(filename, mmap_mode=mode)
else:
arr = np.empty(shape, dtype=dtype, **kwargs)
return arr
fname = 'X_on_disk'
shape = (100, 10, 10)
X = create_empty_array(shape, fname, on_disk=True, mode='r+')
chunksize = 10
pbar = progress_bar(range(math.ceil(len(X) / chunksize)), leave=False)
start = 0
for i in pbar:
end = min(start + chunksize, len(X))
partial_data = np.random.rand(end - start, X.shape[1] , X.shape[2])
X[start:end] = partial_data
start = end
del partial_data
gc.collect()
filename = X.filename
del X
X = np.load(filename, mmap_mode='r+')
test_eq((X == 0).sum(), 0)
test_eq(X.shape, shape)
os.remove(X.filename)
# export
import gzip
def np_save_compressed(arr, fname=None, path='./data', verbose=False, **kwargs):
assert fname is not None, 'you must provide a fname (filename)'
if fname.endswith('npy'): fname = f'{fname}.gz'
elif not fname.endswith('npy.gz'): fname = f'{fname}.npy.gz'
filename = Path(path)/fname
filename.parent.mkdir(parents=True, exist_ok=True)
f = gzip.GzipFile(filename, 'w', **kwargs)
np.save(file=f, arr=arr)
f.close()
pv(f'array saved to {filename}', verbose)
def np_load_compressed(fname=None, path='./data', **kwargs):
assert fname is not None, 'you must provide a fname (filename)'
if fname.endswith('npy'): fname = f'{fname}.gz'
elif not fname.endswith('npy.gz'): fname = f'{fname}.npy.gz'
filename = Path(path)/fname
f = gzip.GzipFile(filename, 'r', **kwargs)
arr = np.load(f)
f.close()
return arr
X1 = np.random.rand(10)
np_save_compressed(X1, 'X_comp', path='./data')
X2 = np_load_compressed('X_comp')
test_eq(X1, X2)
# export
def np2memmap(arr, fname=None, path='./data', dtype='float32', mode='c', **kwargs):
""" Function that turns an ndarray into a memmap ndarray
mode:
‘r’: Open existing file for reading only.
‘r+’: Open existing file for reading and writing.
‘w+’: Create or overwrite existing file for reading and writing.
‘c’: Copy-on-write: assignments affect data in memory, but changes are not saved to disk. The file on disk is read-only.
"""
assert fname is not None, 'you must provide a fname (filename)'
if not fname.endswith('npy'): fname = f'{fname}.npy'
filename = Path(path)/fname
filename.parent.mkdir(parents=True, exist_ok=True)
# Save file
np.save(filename, arr)
# Open file in selected mode
arr = np.load(filename, mmap_mode=mode)
return arr
X1 = np.random.rand(10)
X2 = np2memmap(X1, 'X1_test')
test_eq(X1, X2)
test_ne(type(X1), type(X2))
# export
def torch_mean_groupby(o, idxs):
"""Computes torch mean along axis 0 grouped by the idxs.
Need to ensure that idxs have the same order as o"""
if is_listy(idxs[0]): idxs = flatten_list(idxs)
flattened_idxs = torch.tensor(idxs)
idxs, vals = torch.unique(flattened_idxs, return_counts=True)
vs = torch.split_with_sizes(o, tuple(vals))
return torch.cat([v.mean(0).unsqueeze(0) for k,v in zip(idxs, vs)])
o = torch.arange(6*2*3).reshape(6, 2, 3).float()
idxs = np.array([[0,1,2,3], [2,3]], dtype=object)
output = torch_mean_groupby(o, idxs)
test_eq(o[:2], output[:2])
test_eq(o[2:4].mean(0), output[2])
test_eq(o[4:6].mean(0), output[3])
# export
def torch_flip(t, dims=-1):
if dims == -1: return t[..., np.arange(t.shape[dims])[::-1].copy()]
elif dims == 0: return t[np.arange(t.shape[dims])[::-1].copy()]
elif dims == 1: return t[:, np.arange(t.shape[dims])[::-1].copy()]
elif dims == 2: return t[:, :, np.arange(t.shape[dims])[::-1].copy()]
t = torch.randn(2, 3, 4)
test_eq(torch.flip(t, (2,)), torch_flip(t, dims=-1))
# export
def torch_nan_to_num(o, num=0, inplace=False):
mask = torch.isnan(o)
return torch_masked_to_num(o, mask, num=num, inplace=inplace)
def torch_masked_to_num(o, mask, num=0, inplace=False):
if inplace:
o[:] = o.masked_fill(mask, num)
else:
return o.masked_fill(mask, num)
x = torch.rand(2, 4, 6)
x[:, :3][x[:, :3] < .5] = np.nan
nan_values = torch.isnan(x).sum()
y = torch_nan_to_num(x[:, :3], inplace=False)
test_eq(torch.isnan(y).sum(), 0)
test_eq(torch.isnan(x).sum(), nan_values)
torch_nan_to_num(x[:, :3], inplace=True)
test_eq(torch.isnan(x).sum(), 0)
x = torch.rand(2, 4, 6)
mask = x[:, :3] > .5
x[:, :3] = torch_masked_to_num(x[:, :3], mask, num=0, inplace=False)
test_eq(x[:, :3][mask].sum(), 0)
x = torch.rand(2, 4, 6)
mask = x[:, :3] > .5
torch_masked_to_num(x[:, :3], mask, num=0, inplace=True)
test_eq(x[:, :3][mask].sum(), 0)
# export
def mpl_trend(x, y, deg=1):
return np.poly1d(np.polyfit(x, y, deg))(x)
x = np.sort(np.random.randint(0, 100, 100)/10)
y = np.random.rand(100) + np.linspace(0, 10, 100)
trend = mpl_trend(x, y)
plt.scatter(x, y)
plt.plot(x, trend, 'r')
plt.show()
# export
def int2digits(o, n_digits=None, normalize=True):
if n_digits is not None:
iterable = '0' * (n_digits - len(str(abs(o)))) + str(abs(o))
else:
iterable = str(abs(o))
sign = np.sign(o)
digits = np.array([sign * int(d) for d in iterable])
if normalize:
digits = digits / 10
return digits
def array2digits(o, n_digits=None, normalize=True):
    # int2digits is called without normalization so the digits are only scaled once here
    output = np.array(list(map(partial(int2digits, n_digits=n_digits, normalize=False), o)))
    if normalize:
        output = output / 10
    return output
o = -9645
test_eq(int2digits(o, 6), np.array([ 0, 0, -.9, -.6, -.4, -.5]))
a = np.random.randint(-1000, 1000, 10)
test_eq(array2digits(a,5).shape, (10,5))
# export
def sincos_encoding(seq_len, device=None, to_np=False):
    if to_np:
        sin = np.sin(np.arange(seq_len) / seq_len * 2 * np.pi)
        cos = np.cos(np.arange(seq_len) / seq_len * 2 * np.pi)
    else:
        # only fall back to the default device when none is passed
        if device is None: device = default_device()
        sin = torch.sin(torch.arange(seq_len, device=device) / seq_len * 2 * np.pi)
        cos = torch.cos(torch.arange(seq_len, device=device) / seq_len * 2 * np.pi)
    return sin, cos
sin, cos = sincos_encoding(100)
plt.plot(sin.cpu().numpy())
plt.plot(cos.cpu().numpy())
plt.show()
# export
def linear_encoding(seq_len, device=None, to_np=False, lin_range=(-1,1)):
    if to_np:
        enc = np.linspace(lin_range[0], lin_range[1], seq_len)
    else:
        # only fall back to the default device when none is passed
        if device is None: device = default_device()
        enc = torch.linspace(lin_range[0], lin_range[1], seq_len, device=device)
    return enc
lin = linear_encoding(100)
plt.plot(lin.cpu().numpy())
plt.show()
# export
def encode_positions(pos_arr, min_val=None, max_val=None, linear=False, lin_range=(-1,1)):
""" Encodes an array with positions using a linear or sincos methods
"""
if min_val is None:
min_val = np.nanmin(pos_arr)
if max_val is None:
max_val = np.nanmax(pos_arr)
if linear:
return (((pos_arr - min_val)/(max_val - min_val)) * (lin_range[1] - lin_range[0]) + lin_range[0])
else:
sin = np.sin((pos_arr - min_val)/(max_val - min_val) * 2 * np.pi)
cos = np.cos((pos_arr - min_val)/(max_val - min_val) * 2 * np.pi)
return sin, cos
n_samples = 10
length = 500
_a = []
for i in range(n_samples):
a = np.arange(-4000, 4000, 10)
mask = np.random.rand(len(a)) > .5
a = a[mask]
a = np.concatenate([a, np.array([np.nan] * (length - len(a)))])
_a.append(a.reshape(-1,1))
a = np.concatenate(_a, -1).transpose(1,0)
sin, cos = encode_positions(a, linear=False)
test_eq(a.shape, (n_samples, length))
test_eq(sin.shape, (n_samples, length))
test_eq(cos.shape, (n_samples, length))
plt.plot(sin.T)
plt.plot(cos.T)
plt.xlim(0, 500)
plt.show()
n_samples = 10
length = 500
_a = []
for i in range(n_samples):
a = np.arange(-4000, 4000, 10)
mask = np.random.rand(len(a)) > .5
a = a[mask]
a = np.concatenate([a, np.array([np.nan] * (length - len(a)))])
_a.append(a.reshape(-1,1))
a = np.concatenate(_a, -1).transpose(1,0)
lin = encode_positions(a, linear=True)
test_eq(a.shape, (n_samples, length))
test_eq(lin.shape, (n_samples, length))
plt.plot(lin.T)
plt.xlim(0, 500)
plt.show()
# export
def sort_generator(generator, bs):
g = list(generator)
for i in range(len(g)//bs + 1): g[bs*i:bs*(i+1)] = np.sort(g[bs*i:bs*(i+1)])
return (i for i in g)
generator = (i for i in np.random.permutation(np.arange(1000000)).tolist())
l = list(sort_generator(generator, 512))
test_eq(l[:512], sorted(l[:512]))
#hide
out = create_scripts(); beep(out)
```
import pandas as pd
import os
import numpy as np
from sklearn.metrics import r2_score
meta = pd.read_csv("../input/meta_open.csv", index_col='uid', parse_dates=["datastart","dataend"], dayfirst=True)
temporal = pd.read_csv("../input/temp_open_utc_complete.csv", index_col='timestamp', parse_dates=True).tz_localize('utc')
buildingnames = temporal.columns[temporal.columns.str.contains("Office")]
buildingnames
MAPE_data = {}
RSQUARED_data = {}
NMBE_data = {}
CVRSME_data = {}
for singlebuilding in buildingnames[:2]:
print("Modelling: "+singlebuilding)
# try:
# Get Data
single_timezone = meta.T[singlebuilding].timezone
single_start = meta.T[singlebuilding].datastart
single_end = meta.T[singlebuilding].dataend
single_building_data = pd.DataFrame(temporal[singlebuilding].tz_convert(single_timezone).truncate(before=single_start,after=single_end))
# Split into Training and Testing
trainingdata = single_building_data[single_building_data.index.month.isin(["1","2","3","5","6","7","9","10","11"])]
testdata = single_building_data[single_building_data.index.month.isin(["4","8","12"])]
# Get weather file
weatherfilename = meta.T[singlebuilding].newweatherfilename
print("Weatherfile: "+weatherfilename)
weather = pd.read_csv(os.path.join("../input/",weatherfilename),index_col='timestamp', parse_dates=True, na_values='-9999')
weather = weather.tz_localize(single_timezone, ambiguous = 'infer')
outdoor_temp = pd.DataFrame(weather[[col for col in weather.columns if 'Temperature' in col]]).resample("H").mean()
outdoor_temp = outdoor_temp.reindex(pd.DatetimeIndex(start=outdoor_temp.index[0], periods=len(single_building_data), freq="H")).fillna(method='ffill').fillna(method='bfill')
# Create training data array
train_features = np.array(pd.concat([pd.get_dummies(trainingdata.index.hour),
pd.get_dummies(trainingdata.index.dayofweek),
pd.Series(outdoor_temp[outdoor_temp.index.month.isin(["1","2","3","5","6","7","9","10","11"])].TemperatureC.values)], axis=1))
train_labels = np.array(trainingdata[singlebuilding].values)
# Create test data array
test_features = np.array(pd.concat([pd.get_dummies(testdata.index.hour),
pd.get_dummies(testdata.index.dayofweek),
pd.Series(outdoor_temp[outdoor_temp.index.month.isin(["4","8","12"])].TemperatureC.values)], axis=1))
test_labels = np.array(testdata[singlebuilding].values)
# Import the model we are using
from sklearn.ensemble import AdaBoostRegressor
# Make model
model = AdaBoostRegressor(n_estimators = 1000, random_state = 42)
# Train the model on training data
model.fit(train_features, train_labels);
# Use the forest's predict method on the test data
predictions = model.predict(test_features)
# Calculate the absolute errors
errors = abs(predictions - test_labels)
# Calculate mean absolute percentage error (MAPE) and add to list
MAPE = 100 * np.mean((errors / test_labels))
NMBE = 100 * (sum(test_labels - predictions) / (pd.Series(test_labels).count() * np.mean(test_labels)))
CVRSME = 100 * ((sum((test_labels - predictions)**2) / (pd.Series(test_labels).count()-1))**(0.5)) / np.mean(test_labels)
RSQUARED = r2_score(test_labels, predictions)
print("MAPE: "+str(MAPE))
print("NMBE: "+str(NMBE))
print("CVRSME: "+str(CVRSME))
print("R SQUARED: "+str(RSQUARED))
MAPE_data[singlebuilding] = MAPE
NMBE_data[singlebuilding] = NMBE
CVRSME_data[singlebuilding] = CVRSME
RSQUARED_data[singlebuilding] = RSQUARED
# except:
# print("There was a problem")
metrics = pd.DataFrame([MAPE_data, NMBE_data, CVRSME_data, RSQUARED_data]).T
metrics.columns = ["MAPE", "NMBE", "CVRSME", "RSQUARED"]
metrics
metrics.to_csv("AdaBoostRegressor_metrics.csv")
```
# Data Aggregation and Group Operations
```
import numpy as np
import pandas as pd
PREVIOUS_MAX_ROWS = pd.options.display.max_rows
pd.options.display.max_rows = 20
np.random.seed(12345)
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
np.set_printoptions(precision=4, suppress=True)
```
## GroupBy Mechanics
```
df = pd.DataFrame({'key1' : ['a', 'a', 'b', 'b', 'a'],
'key2' : ['one', 'two', 'one', 'two', 'one'],
'data1' : np.random.randn(5),
'data2' : np.random.randn(5)})
df
grouped = df['data1'].groupby(df['key1'])
grouped
grouped.mean()
means = df['data1'].groupby([df['key1'], df['key2']]).mean()
means
means.unstack()
states = np.array(['Ohio', 'California', 'California', 'Ohio', 'Ohio'])
years = np.array([2005, 2005, 2006, 2005, 2006])
df['data1'].groupby([states, years]).mean()
df.groupby('key1').mean()
df.groupby(['key1', 'key2']).mean()
df.groupby(['key1', 'key2']).size()
```
### Iterating Over Groups
```
for name, group in df.groupby('key1'):
print(name)
print(group)
for (k1, k2), group in df.groupby(['key1', 'key2']):
print((k1, k2))
print(group)
pieces = dict(list(df.groupby('key1')))
pieces['b']
pieces
df.dtypes
grouped = df.groupby(df.dtypes, axis=1)
for dtype, group in grouped:
print(dtype)
print(group)
```
### Selecting a Column or Subset of Columns
```
df.groupby('key1')['data1']
df.groupby('key1')[['data2']]
df['data1'].groupby(df['key1'])
df[['data2']].groupby(df['key1'])
```
```
df.groupby(['key1', 'key2'])[['data2']].mean()
s_grouped = df.groupby(['key1', 'key2'])['data2']
s_grouped
s_grouped.mean()
```
### Grouping with Dicts and Series
```
people = pd.DataFrame(np.random.randn(5, 5),
columns=['a', 'b', 'c', 'd', 'e'],
index=['Joe', 'Steve', 'Wes', 'Jim', 'Travis'])
people.iloc[2:3, [1, 2]] = np.nan # Add a few NA values
people
mapping = {'a': 'red', 'b': 'red', 'c': 'blue',
'd': 'blue', 'e': 'red', 'f' : 'orange'}
by_column = people.groupby(mapping, axis=1)
by_column.sum()
map_series = pd.Series(mapping)
map_series
people.groupby(map_series, axis=1).count()
```
### Grouping with Functions
```
people.groupby(len).sum()
key_list = ['one', 'one', 'one', 'two', 'two']
people.groupby([len, key_list]).min()
```
### Grouping by Index Levels
```
columns = pd.MultiIndex.from_arrays([['US', 'US', 'US', 'JP', 'JP'],
[1, 3, 5, 1, 3]],
names=['cty', 'tenor'])
hier_df = pd.DataFrame(np.random.randn(4, 5), columns=columns)
hier_df
hier_df.groupby(level='cty', axis=1).count()
```
## Data Aggregation
```
df
grouped = df.groupby('key1')
grouped['data1'].quantile(0.9)
def peak_to_peak(arr):
return arr.max() - arr.min()
grouped.agg(peak_to_peak)
grouped.describe()
```
### Column-Wise and Multiple Function Application
```
tips = pd.read_csv('examples/tips.csv')
# Add tip percentage of total bill
tips['tip_pct'] = tips['tip'] / tips['total_bill']
tips[:6]
grouped = tips.groupby(['day', 'smoker'])
grouped_pct = grouped['tip_pct']
grouped_pct.agg('mean')
grouped_pct.agg(['mean', 'std', peak_to_peak])
grouped_pct.agg([('foo', 'mean'), ('bar', np.std)])
functions = ['count', 'mean', 'max']
result = grouped[['tip_pct', 'total_bill']].agg(functions)
result
result['tip_pct']
ftuples = [('Durchschnitt', 'mean'), ('Abweichung', np.var)]
grouped[['tip_pct', 'total_bill']].agg(ftuples)
grouped.agg({'tip' : np.max, 'size' : 'sum'})
grouped.agg({'tip_pct' : ['min', 'max', 'mean', 'std'],
'size' : 'sum'})
```
### Returning Aggregated Data Without Row Indexes
```
tips.groupby(['day', 'smoker'], as_index=False).mean()
```
## Apply: General split-apply-combine
```
def top(df, n=5, column='tip_pct'):
return df.sort_values(by=column)[-n:]
top(tips, n=6)
tips.groupby('smoker').apply(top)
tips.groupby(['smoker', 'day']).apply(top, n=1, column='total_bill')
result = tips.groupby('smoker')['tip_pct'].describe()
result
result.unstack('smoker')
```
```
f = lambda x: x.describe()
grouped.apply(f)
```
### Suppressing the Group Keys
```
tips.groupby('smoker', group_keys=False).apply(top)
```
### Quantile and Bucket Analysis
```
frame = pd.DataFrame({'data1': np.random.randn(1000),
'data2': np.random.randn(1000)})
frame.head()
quartiles = pd.cut(frame.data1, 4)
quartiles
quartiles[:10]
def get_stats(group):
return {'min': group.min(), 'max': group.max(),
'count': group.count(), 'mean': group.mean()}
grouped = frame.data2.groupby(quartiles)
grouped.apply(get_stats).unstack()
# Return quantile numbers
grouping = pd.qcut(frame.data1, 10, labels=False)
grouped = frame.data2.groupby(grouping)
grouped.apply(get_stats).unstack()
```
### Example: Filling Missing Values with Group-Specific Values
```
s = pd.Series(np.random.randn(6))
s[::2] = np.nan
s
s.fillna(s.mean())
states = ['Ohio', 'New York', 'Vermont', 'Florida',
'Oregon', 'Nevada', 'California', 'Idaho']
group_key = ['East'] * 4 + ['West'] * 4
data = pd.Series(np.random.randn(8), index=states)
data
data[['Vermont', 'Nevada', 'Idaho']] = np.nan
data
data.groupby(group_key).mean()
fill_mean = lambda g: g.fillna(g.mean())
data.groupby(group_key).apply(fill_mean)
fill_values = {'East': 0.5, 'West': -1}
fill_func = lambda g: g.fillna(fill_values[g.name])
data.groupby(group_key).apply(fill_func)
```
### Example: Random Sampling and Permutation
```
# Hearts, Spades, Clubs, Diamonds
suits = ['H', 'S', 'C', 'D']
card_val = (list(range(1, 11)) + [10] * 3) * 4
base_names = ['A'] + list(range(2, 11)) + ['J', 'K', 'Q']
cards = []
for suit in ['H', 'S', 'C', 'D']:
cards.extend(str(num) + suit for num in base_names)
deck = pd.Series(card_val, index=cards)
card_val[:13]
base_names
deck[:13]
def draw(deck, n=5):
return deck.sample(n)
draw(deck)
get_suit = lambda card: card[-1] # last letter is suit
deck.groupby(get_suit).apply(draw, n=2)
deck.groupby(get_suit, group_keys=False).apply(draw, n=2)
```
### Example: Group Weighted Average and Correlation
```
df = pd.DataFrame({'category': ['a', 'a', 'a', 'a',
'b', 'b', 'b', 'b'],
'data': np.random.randn(8),
'weights': np.random.rand(8)})
df
grouped = df.groupby('category')
get_wavg = lambda g: np.average(g['data'], weights=g['weights'])
grouped.apply(get_wavg)
close_px = pd.read_csv('examples/stock_px_2.csv', parse_dates=True,
index_col=0)
close_px.info()
close_px[-4:]
spx_corr = lambda x: x.corrwith(x['SPX'])
rets = close_px.pct_change().dropna()
get_year = lambda x: x.year
by_year = rets.groupby(get_year)
by_year.apply(spx_corr)
by_year.apply(lambda g: g['AAPL'].corr(g['MSFT']))
```
### Example: Group-Wise Linear Regression
```
import statsmodels.api as sm
def regress(data, yvar, xvars):
Y = data[yvar]
X = data[xvars]
X['intercept'] = 1.
result = sm.OLS(Y, X).fit()
return result.params
by_year.apply(regress, 'AAPL', ['SPX'])
```
## Pivot Tables and Cross-Tabulation
```
tips.pivot_table(index=['day', 'smoker'])
tips.pivot_table(['tip_pct', 'size'], index=['time', 'day'],
columns='smoker')
tips.pivot_table(['tip_pct', 'size'], index=['time', 'day'],
columns='smoker', margins=True)
tips.pivot_table('tip_pct', index=['time', 'smoker'], columns='day',
aggfunc=len, margins=True)
tips.pivot_table('tip_pct', index=['time', 'size', 'smoker'],
columns='day', aggfunc='mean', fill_value=0)
```
### Cross-Tabulations: Crosstab
```
from io import StringIO
data = """\
Sample Nationality Handedness
1 USA Right-handed
2 Japan Left-handed
3 USA Right-handed
4 Japan Right-handed
5 Japan Left-handed
6 Japan Right-handed
7 USA Right-handed
8 USA Left-handed
9 Japan Right-handed
10 USA Right-handed"""
data = pd.read_table(StringIO(data), sep='\s+')
data
pd.crosstab(data.Nationality, data.Handedness, margins=True)
pd.crosstab([tips.time, tips.day], tips.smoker, margins=True)
pd.options.display.max_rows = PREVIOUS_MAX_ROWS
```
## Conclusion
# JIT with Numba
```
conda install numba
conda install cudatoolkit
```
```
#!/usr/bin/env python
from __future__ import print_function, division, absolute_import
import math
import threading
from timeit import repeat
import numpy as np
from numba import jit
nthreads = 4
size = 10**6
def func_np(a, b):
"""
Control function using Numpy.
"""
return np.exp(2.1 * a + 3.2 * b)
@jit('void(double[:], double[:], double[:])', nopython=True, nogil=True)
def inner_func_nb(result, a, b):
"""
Function under test.
"""
for i in range(len(result)):
result[i] = math.exp(2.1 * a[i] + 3.2 * b[i])
def timefunc(correct, s, func, *args, **kwargs):
"""
Benchmark *func* and print out its runtime.
"""
print(s.ljust(20), end=" ")
# Make sure the function is compiled before we start the benchmark
res = func(*args, **kwargs)
if correct is not None:
assert np.allclose(res, correct), (res, correct)
# time it
print('{:>5.0f} ms'.format(min(repeat(lambda: func(*args, **kwargs),
number=5, repeat=2)) * 1000))
return res
def make_singlethread(inner_func):
"""
Run the given function inside a single thread.
"""
def func(*args):
length = len(args[0])
result = np.empty(length, dtype=np.float64)
inner_func(result, *args)
return result
return func
def make_multithread(inner_func, numthreads):
"""
Run the given function inside *numthreads* threads, splitting its
arguments into equal-sized chunks.
"""
def func_mt(*args):
length = len(args[0])
result = np.empty(length, dtype=np.float64)
args = (result,) + args
chunklen = (length + numthreads - 1) // numthreads
# Create argument tuples for each input chunk
chunks = [[arg[i * chunklen:(i + 1) * chunklen] for arg in args]
for i in range(numthreads)]
# Spawn one thread per chunk
threads = [threading.Thread(target=inner_func, args=chunk)
for chunk in chunks]
for thread in threads:
thread.start()
for thread in threads:
thread.join()
return result
return func_mt
func_nb = make_singlethread(inner_func_nb)
func_nb_mt = make_multithread(inner_func_nb, nthreads)
a = np.random.rand(size)
b = np.random.rand(size)
correct = timefunc(None, "numpy (1 thread)", func_np, a, b)
timefunc(correct, "numba (1 thread)", func_nb, a, b)
timefunc(correct, "numba (%d threads)" % nthreads, func_nb_mt, a, b)
```
# JIT with JAX
https://github.com/google/jax
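The notebook stops at this pointer, so here is a minimal, illustrative sketch (not part of the original) of JIT compilation with JAX, applying the same arithmetic as `func_np` above; the function name `func_jax` and the array sizes are arbitrary choices for the example.
```
import jax
import jax.numpy as jnp

@jax.jit
def func_jax(a, b):
    # same expression as func_np / inner_func_nb above, compiled with XLA
    return jnp.exp(2.1 * a + 3.2 * b)

a = jnp.linspace(0.0, 1.0, 10**6)
b = jnp.linspace(1.0, 2.0, 10**6)
func_jax(a, b).block_until_ready()  # first call compiles; later calls reuse the compiled kernel
```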
```
import pandas as pd
import numpy as np
suicide_info = pd.read_csv("master.csv")
suicide_info.head()
# rank suicide no after 2005
suicide_rank_country = suicide_info.loc[:,['country','year', 'suicides_no']].copy()
# suicide_rank_country = suicide_rank_country[suicide_rank_country.year > 2005]
suicide_rank_country = suicide_rank_country.groupby('country')[['suicides_no']].sum().reset_index().sort_values(['suicides_no'], ascending=False)
suicide_rank_country.head()
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(rc={'figure.figsize':(15,10)})
sns.barplot(x="suicides_no", y="country", data=suicide_rank_country.head(15)).set_title("Rank of Suicide No After 2005")
plt.show()
# Suicide number group by gender
suicide_gender_year = suicide_info.loc[:,['sex', 'suicides_no', 'year']].copy().groupby(['year', 'sex'])[['suicides_no']].sum().reset_index()
suicide_male_year = suicide_gender_year[suicide_gender_year.sex == "male"]
suicide_female_year = suicide_gender_year[suicide_gender_year.sex == "female"]
ax = sns.catplot(x = "year", y = "suicides_no",
hue = "sex",
data = suicide_gender_year, kind = "bar", palette="Set2",
height=5, aspect=3)
!pip install folium
import folium
m = folium.Map(location=[20,0], tiles="Mapbox Bright", zoom_start=2)
coutry_geo = 'country.json'
suicide_rate_by_contry = pd.read_csv("suicide-rates-by-country.csv")
suicide_rate_by_contry_2005 = suicide_rate_by_contry.loc[suicide_rate_by_contry.Year == 2005].copy()
suicide_rate_by_contry_2005['rate'] = suicide_rate_by_contry_2005['suicide rate (age-adjusted suicides per 100,000 people)']
suicide_rate_by_contry_2005.head()
folium.Choropleth(
geo_data=coutry_geo,
name='Suicide No',
data=suicide_rate_by_contry_2005,
columns=['Entity', 'rate'],
key_on='feature.properties.name',
fill_color='YlGn',
legend_name='Suicide Rate (%)'
).add_to(m)
folium.LayerControl().add_to(m)
m
suicide_info[' gdp_for_year ($) '] = suicide_info[' gdp_for_year ($) '].str.replace(',', '')
suicide_info[' gdp_for_year ($) '] = suicide_info[' gdp_for_year ($) '].apply(pd.to_numeric)
sns.pairplot(suicide_info.drop(columns=['year', 'HDI for year']))
plt.show()
suicide_info.head()
suicide_rate_after_2008 = suicide_info[suicide_info.year >= 2008]
suicide_rate_after_2008.head()
# North America
suicide_with_ec = suicide_rate_after_2008.loc[:,['country', 'year', 'suicides/100k pop']].copy()
suicide_with_ec = suicide_with_ec[(suicide_with_ec.country == "Canada") | (suicide_with_ec.country == "United States")].groupby('year')[['suicides/100k pop']].sum().reset_index()
economic_risk_year = 2008
suicide_with_ec
ax = sns.lineplot(x='year', y = 'suicides/100k pop', data = suicide_with_ec)
ax.set(ylabel = "Suicide Rate * 100")
plt.title("Suicide Rate After Finacial Crisis in 2008")
plt.show()
suicide_info.drop(columns=['HDI for year'], inplace=True)
suicide_info.age = suicide_info.age.astype('category')
x_labels = ['5-14 years', '15-24 years','25-34 years','35-54 years','55-74 years','75+ years']
suicide_info.age.cat.set_categories(x_labels, inplace = True)
suicide_info.sort_values(['age'])
ax = sns.catplot(x="age", y="suicides/100k pop", data=suicide_info, height = 5, aspect=1.25)
plt.show()
suicide_info.columns
population_growth = suicide_info.loc[:, ['country', 'year', 'population', 'suicides/100k pop']].copy()
population_growth = population_growth.groupby(['country','year'])[['population', 'suicides/100k pop']].sum().reset_index()
population_growth.head()
population_growth['population_growth'] = population_growth['population'].diff().fillna(0)
population_growth['population_growth'] = population_growth['population_growth'] / population_growth['population'] * 100
population_growth.head()
sns.pairplot(population_growth.loc[:, ['suicides/100k pop', 'population_growth']])
plt.show()
unemployment_rate = pd.read_csv("UNdata_Export_20190107_231138133.csv")
unemployment_rate.head()
unemployment_rate['Rate'] = unemployment_rate['Rate'].str.rstrip('%').astype('float')
world_up_rate = unemployment_rate.groupby(['Year'])[['Rate']].mean().reset_index()
world_up_rate.head()
suicide_rate = suicide_info.loc[:, ['year', 'suicides/100k pop']].copy()
suicide_rate = suicide_rate.groupby(['year'])[['suicides/100k pop']].sum().reset_index()
suicide_rate['rate'] = suicide_rate['suicides/100k pop']
suicide_rate.drop(columns=['suicides/100k pop'], inplace=True)
suicide_rate['rate'] = suicide_rate['rate'] / 1000
suicide_rate.head()
unem_suicide_rate = suicide_rate.set_index('year').join(world_up_rate.set_index('Year')).reset_index()
unem_suicide_rate.head()
fig, ax = plt.subplots()
sns.lineplot(x = 'year', y='Rate', data = unem_suicide_rate, ax=ax)
ax2 = ax.twinx()
sns.lineplot(x = 'year', y='rate', data = unem_suicide_rate, ax = ax2, color='r')
plt.show()
```
# Adaptive Voter Model Simulation
A short tutorial on using the adaptive voter simulation package.
We start by importing the ```AdaptiveVoter``` class from the package.
This is the only class we need for the simulation and almost all the code for running the simulation is contained within this one class.
```
%pylab inline
from adaptive import AdaptiveVoter
import networkx as nx
import numpy as np
```
### Creating a Simulation From Scratch
To build a new simulation, we need to specify the parameters of the model.
These are:
* kap: The preferred degree of the nodes.
* r: The probability of an node update (otherwise an edge update occurs).
* h: Homophily parameter. +1 for complete homophily, -1 for heterophily.
* T: Temperature. Controls how likely a node will change state randomly.
* f: Fraction of neighbourhood required for node state change.
```
sim = AdaptiveVoter(kap=10,
r=0,
h=0,
T=0,
f=0)
```
We then need to set the initial graph structure.
We can do this by building a graph using NetworkX and then adding this to the simulation using ```.set_initial_network()```.
```
G = nx.erdos_renyi_graph(40, 0.05)
sim.set_initial_network(G)
```
We also need to set the initial condition of the node opinions (which are either -1 or +1).
We can specify these states using a vector of opinions, or omit any parameter for a random initial state.
```
# Prescribed initial condition
sim.set_inital_condition(np.ones(len(G)))
# Random initial condition
sim.set_inital_condition()
```
To run a single iteration of the simulation we call ```.run_iteration()```.
Currently this does not guarantee that any event (state change) will occur.
```
sim.run_iteration()
```
More likely, we will want to run many iterations at once.
We can do this simply by calling ```.run(iterations)```.
```
sim.run(iterations=1000)
```
### Querying the Simulation
To query the simulation at any given point we can access the simulation variables.
The opinion vector is stored as ```S```.
```
sim.S
```
The node degrees are stored as ```K```, and the adjacency matrix is stored as ```A```.
```
sim.K
```
The excess degree is stored as ```X```.
```
sim.X
```
### Queries at different points in time.
There may be instances where we want to query what the system state was at a particular time.
We can move the simulation to a particular point in time by using the ```.build``` function.
```
sim.build(500)
```
Since our simulation is 1001 iterations in, and we have moved the simulation back to $t=500$, we cannot run further iterations, as shown below.
```
sim.run_iteration()
```
We first need to build the simulation back to the latest timepoint so that we can progress the simulation further.
This can be achieved by specifying the final time, or by using 'max'.
```
sim.build('max')
sim.run_iteration()
```
### Plotting
We can return the graph of the current state using ```.to_graph()```.
This returns a NetworkX graph where the nodes have an 'opinion' attribute which can be used to change the node colour.
```
G = sim.to_graph()
# Plotting will be incorporated into the package in the future.
nodes = G.nodes()
cmap = {-1:'r', 1:'g'}
node_colors = [cmap[val] for key,val in nx.get_node_attributes(G,'opinion').items()]
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, node_color=node_colors, pos=pos)
nx.draw_networkx_edges(G, pos=pos)
plt.axis('off');
```
### Saving and Loading
Saving a simulation is simple.
Simulations are saved as JSON files, and can be compressed to save space.
```
sim.save_to_file('./test.json', compressed=True)
```
Loading a simulation is done through the ```from_saved_simulation``` classmethod.
In this case, all that needs to be specified is the path to a valid AdaptiveVoter simulation file.
**Note**: To continue running the simulation it first needs to be built.
```
sim = AdaptiveVoter.from_saved_simulation(filepath='./test.json.gz')
```
### Simulation Statistics
What we ultimately want from the simulation is to track the evolution of certain attributes of the system.
We can do this using the ```process_timeseries``` function.
There are also some basic functions included in the package (they can mostly fit on one line) to illustrate the possibilities.
```
from adaptive import process_timeseries, mean_opinion, mean_degree, degree, active_links
```
The ```process_timeseries``` function takes a simulation object, a time range for which we want to query, and a list of functions that we want to call at each point in time.
In this example, we have taken the mean opinion and degree, as well as the number of active links and the degree vector (should we want to examine the *distribution* of degree over time).
```
result = process_timeseries(sim, np.linspace(0,1000,40), [mean_opinion, mean_degree, degree, active_links])
```
The result is a dictionary with a timeseries (also saved as a dictionary) for each function we call.
We can plot the results below (alternatively one might convert to a Pandas Series or DataFrame for convenience).
```
mdegree = result['mean_degree']
plt.plot(list(mdegree.keys()), list(mdegree.values()));
```
We can create our own statistic to measure over time by creating a custom function.
The only requirements for the function are that it takes a single parameter (the simulation) and makes valid calls to the simulation object.
Below we create a clustering coefficient function and track its value over time.
```
def clustering(simulation):
"""Calculates the clustering coefficient of a simulation."""
A = simulation.A
A2 = A.dot(A)
A3 = A2.dot(A)
num = np.trace(A3)
den = (np.sum(A2)-np.trace(A2))
if num == 0:
return 0
return num / den
result = process_timeseries(sim, np.linspace(0,1000,40), [clustering])
clust = result['clustering']
plt.plot(list(clust.keys()), list(clust.values()));
```
## 1. The most Nobel of Prizes
<p><img style="float: right;margin:5px 20px 5px 1px; max-width:250px" src="https://assets.datacamp.com/production/project_441/img/Nobel_Prize.png"></p>
<p>The Nobel Prize is perhaps the world's most well-known scientific award. Besides the honor, prestige, and substantial prize money, the recipient also gets a gold medal showing Alfred Nobel (1833 - 1896), who established the prize. Every year it's given to scientists and scholars in the categories of chemistry, literature, physics, physiology or medicine, economics, and peace. The first Nobel Prize was handed out in 1901, and at that time the Prize was very Eurocentric and male-focused, but nowadays it's not biased in any way whatsoever. Surely. Right?</p>
<p>Well, we're going to find out! The Nobel Foundation has made a dataset available of all prize winners from the start of the prize, in 1901, to 2016. Let's load it in and take a look.</p>
```
# Loading in required libraries
import pandas as pd
import seaborn as sns
import numpy as np
# Reading in the Nobel Prize data
nobel = pd.read_csv('datasets/nobel.csv')
# Taking a look at the first several winners
nobel.head(6)
```
## 2. So, who gets the Nobel Prize?
<p>Just looking at the first couple of prize winners, or Nobel laureates as they are also called, we already see a celebrity: Wilhelm Conrad Röntgen, the guy who discovered X-rays. And actually, we see that all of the winners in 1901 were guys that came from Europe. But that was back in 1901. Looking at all winners in the dataset, from 1901 to 2016, which sex and which country are the most commonly represented? </p>
<p>(For <em>country</em>, we will use the <code>birth_country</code> of the winner, as the <code>organization_country</code> is <code>NaN</code> for all shared Nobel Prizes.)</p>
```
# Display the number of (possibly shared) Nobel Prizes handed
# out between 1901 and 2016
display(len(nobel['prize']))
# Display the number of prizes won by male and female recipients.
display(nobel['sex'].value_counts())
# Display the number of prizes won by the top 10 nationalities.
nobel['birth_country'].value_counts().head(10)
```
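<p>(A quick, hedged check of the claim above: if <code>organization_country</code> really is missing for the shared prizes, the column should show a sizeable share of missing values.)</p>
```
# Share of rows where organization_country is missing
nobel['organization_country'].isnull().mean()
```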
## 3. USA dominance
<p>Not so surprising perhaps: the most common Nobel laureate between 1901 and 2016 was a man born in the United States of America. But in 1901 all the winners were European. When did the USA start to dominate the Nobel Prize charts?</p>
```
# Calculating the proportion of USA born winners per decade
nobel['usa_born_winner'] = nobel['birth_country']=='United States of America'
nobel['decade'] = (np.floor(nobel['year']/10)*10).astype(int)
prop_usa_winners = nobel.groupby('decade',as_index=False)['usa_born_winner'].mean()
# Display the proportions of USA born winners per decade
prop_usa_winners
```
## 4. USA dominance, visualized
<p>A table is OK, but to <em>see</em> when the USA started to dominate the Nobel charts we need a plot!</p>
```
# Setting the plotting theme
sns.set()
# and setting the size of all plots.
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [11, 7]
# Plotting USA born winners
ax = sns.lineplot(x='decade',y='usa_born_winner',data=prop_usa_winners)
# Adding %-formatting to the y-axis
from matplotlib.ticker import PercentFormatter
ax.yaxis.set_major_formatter(PercentFormatter(1.0))
```
## 5. What is the gender of a typical Nobel Prize winner?
<p>So the USA first became the dominant winner of the Nobel Prize in the 1930s and has kept the leading position ever since. But one group that was in the lead from the start, and never seems to let go, is <em>men</em>. Maybe it shouldn't come as a shock that there is some imbalance between how many male and female prize winners there are, but how significant is this imbalance? And is it better or worse within specific prize categories like physics, medicine, literature, etc.?</p>
```
# Calculating the proportion of female laureates per decade and category
nobel['female_winner'] = nobel['sex']=='Female'
prop_female_winners = nobel.groupby(['decade','category'],as_index=False)['female_winner'].mean()
# Plotting the proportion of female laureates, with % winners on the y-axis
ax = sns.lineplot(x='decade',y='female_winner',hue='category',data=prop_female_winners)
ax.yaxis.set_major_formatter(PercentFormatter(1.0))
```
## 6. The first woman to win the Nobel Prize
<p>The plot above is a bit messy as the lines are overplotting. But it does show some interesting trends and patterns. Overall the imbalance is pretty large with physics, economics, and chemistry having the largest imbalance. Medicine has a somewhat positive trend, and since the 1990s the literature prize is also now more balanced. The big outlier is the peace prize during the 2010s, but keep in mind that this just covers the years 2010 to 2016.</p>
<p>Given this imbalance, who was the first woman to receive a Nobel Prize? And in what category?</p>
```
# Picking out the first woman to win a Nobel Prize
female_nobel=nobel[nobel['sex']=='Female']
firstwomen=female_nobel.nsmallest(1, 'year', keep='first')
firstwomen
```
## 7. Repeat laureates
<p>For most scientists/writers/activists a Nobel Prize would be the crowning achievement of a long career. But for some people, one is just not enough, and few have gotten it more than once. Who are these lucky few? (Having won no Nobel Prize myself, I'll assume it's just about luck.)</p>
```
# Selecting the laureates that have received 2 or more prizes.
nobel.groupby('full_name').filter(lambda x:len(x)>=2)
```
## 8. How old are you when you get the prize?
<p>The list of repeat winners contains some illustrious names! We again meet Marie Curie, who got the prize in physics for discovering radiation and in chemistry for isolating radium and polonium. John Bardeen got it twice in physics for transistors and superconductivity, Frederick Sanger got it twice in chemistry, and Linus Carl Pauling got it first in chemistry and later in peace for his work in promoting nuclear disarmament. We also learn that organizations can win the prize, as both the Red Cross and the UNHCR have gotten it twice.</p>
<p>But how old are you generally when you get the prize?</p>
```
# Converting birth_date from String to datetime
nobel['birth_date'] = pd.to_datetime(nobel['birth_date'])
# Calculating the age of Nobel Prize winners
nobel['age'] = nobel['year']-nobel['birth_date'].dt.year
# Plotting the age of Nobel Prize winners
sns.lmplot(x='year',y='age',data=nobel,lowess=True,aspect=2,line_kws={'color':'black'})
```
## 9. Age differences between prize categories
<p>The plot above shows us a lot! We see that people used to be around 55 when they received the prize, but nowadays the average is closer to 65. But there is a large spread in the laureates' ages, and while most are 50+, some are very young.</p>
<p>We also see that the density of points is much higher nowadays than in the early 1900s -- many more of the prizes are now shared, and so there are many more winners. We also see that there was a disruption in awarded prizes around the Second World War (1939 - 1945). </p>
<p>Let's look at age trends within different prize categories.</p>
```
# Same plot as above, but separate plots for each type of Nobel Prize
sns.lmplot(x='year',y='age',row='category',data=nobel,lowess=True,aspect=2,line_kws={'color':'black'})
```
## 10. Oldest and youngest winners
<p>More plots with lots of exciting stuff going on! We see that winners of the chemistry, medicine, and physics prizes have gotten older over time. The trend is strongest for physics: the average age used to be below 50, and now it's almost 70. Literature and economics are more stable. We also see that economics is a newer category. But peace shows an opposite trend where winners are getting younger! </p>
<p>In the peace category we also see a winner around 2010 who seems exceptionally young. This begs the question: who are the oldest and youngest people ever to have won a Nobel Prize?</p>
```
# The oldest winner of a Nobel Prize as of 2016
display(nobel.nlargest(1,'age'))
# The youngest winner of a Nobel Prize as of 2016
nobel.nsmallest(1,'age')
```
## 11. You get a prize!
<p><img style="float: right;margin:20px 20px 20px 20px; max-width:200px" src="https://assets.datacamp.com/production/project_441/img/paint_nobel_prize.png"></p>
<p>Hey! You get a prize for making it to the very end of this notebook! It might not be a Nobel Prize, but I made it myself in paint so it should count for something. But don't despair, Leonid Hurwicz was 90 years old when he got his prize, so it might not be too late for you. Who knows.</p>
<p>Before you leave, what was again the name of the youngest winner ever who in 2014 got the prize for "[her] struggle against the suppression of children and young people and for the right of all children to education"?</p>
```
# The name of the youngest winner of the Nobel Prize as of 2016
youngest_winner = 'Malala Yousafzai'
```
---
```
!wget https://resources.lendingclub.com/LoanStats_2019Q1.csv.zip
!wget https://resources.lendingclub.com/LoanStats_2019Q2.csv.zip
!wget https://resources.lendingclub.com/LoanStats_2019Q3.csv.zip
!wget https://resources.lendingclub.com/LoanStats_2019Q4.csv.zip
!wget https://resources.lendingclub.com/LoanStats_2020Q1.csv.zip
import numpy as np
import pandas as pd
from pathlib import Path
from collections import Counter
from sklearn.model_selection import train_test_split
columns = [
"loan_amnt", "int_rate", "installment", "home_ownership", "annual_inc",
"verification_status", "pymnt_plan", "dti", "delinq_2yrs",
"inq_last_6mths", "open_acc", "pub_rec", "revol_bal", "total_acc",
"initial_list_status", "out_prncp", "out_prncp_inv", "total_pymnt",
"total_pymnt_inv", "total_rec_prncp", "total_rec_int",
"total_rec_late_fee", "recoveries", "collection_recovery_fee",
"last_pymnt_amnt", "collections_12_mths_ex_med", "policy_code",
"application_type", "acc_now_delinq", "tot_coll_amt", "tot_cur_bal",
"open_acc_6m", "open_act_il", "open_il_12m", "open_il_24m",
"mths_since_rcnt_il", "total_bal_il", "il_util", "open_rv_12m",
"open_rv_24m", "max_bal_bc", "all_util", "total_rev_hi_lim", "inq_fi",
"total_cu_tl", "inq_last_12m", "acc_open_past_24mths", "avg_cur_bal",
"bc_open_to_buy", "bc_util", "chargeoff_within_12_mths", "delinq_amnt",
"mo_sin_old_il_acct", "mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op",
"mo_sin_rcnt_tl", "mort_acc", "mths_since_recent_bc",
"mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl",
"num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl",
"num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0", "num_sats",
"num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m",
"num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75",
"pub_rec_bankruptcies", "tax_liens", "tot_hi_cred_lim",
"total_bal_ex_mort", "total_bc_limit", "total_il_high_credit_limit",
"hardship_flag", "debt_settlement_flag",
"loan_status"
]
target = "loan_status"
# Load the data
df1 = pd.read_csv(Path('../Resources/LoanStats_2019Q1.csv.zip'), skiprows=1)[:-2]
df2 = pd.read_csv(Path('../Resources/LoanStats_2019Q2.csv.zip'), skiprows=1)[:-2]
df3 = pd.read_csv(Path('../Resources/LoanStats_2019Q3.csv.zip'), skiprows=1)[:-2]
df4 = pd.read_csv(Path('../Resources/LoanStats_2019Q4.csv.zip'), skiprows=1)[:-2]
df = pd.concat([df1, df2, df3, df4]).loc[:, columns].copy()
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
# Remove the `Issued` loan status
issued_mask = df['loan_status'] != 'Issued'
df = df.loc[issued_mask]
# convert interest rate to numerical
df['int_rate'] = df['int_rate'].str.replace('%', '')
df['int_rate'] = df['int_rate'].astype('float') / 100
# Convert the target column values to low_risk and high_risk based on their values
x = {'Current': 'low_risk'}
df = df.replace(x)
x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period'], 'high_risk')
df = df.replace(x)
low_risk_rows = df[df[target] == 'low_risk']
high_risk_rows = df[df[target] == 'high_risk']
#df = pd.concat([low_risk_rows, high_risk_rows.sample(n=len(low_risk_rows), replace=True)])
df = pd.concat([low_risk_rows.sample(n=len(high_risk_rows), random_state=42), high_risk_rows])
df = df.reset_index(drop=True)
df = df.rename({target:'target'}, axis="columns")
df
df.to_csv('2019loans.csv', index=False)
# Load the data
validate_df = pd.read_csv(Path('../Resources/LoanStats_2020Q1.csv.zip'), skiprows=1)[:-2]
validate_df = validate_df.loc[:, columns].copy()
# Drop the null columns where all values are null
validate_df = validate_df.dropna(axis='columns', how='all')
# Drop the null rows
validate_df = validate_df.dropna()
# Remove the `Issued` loan status
issued_mask = validate_df[target] != 'Issued'
validate_df = validate_df.loc[issued_mask]
# convert interest rate to numerical
validate_df['int_rate'] = validate_df['int_rate'].str.replace('%', '')
validate_df['int_rate'] = validate_df['int_rate'].astype('float') / 100
# Convert the target column values to low_risk and high_risk based on their values
x = dict.fromkeys(['Current', 'Fully Paid'], 'low_risk')
validate_df = validate_df.replace(x)
x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period', 'Charged Off'], 'high_risk')
validate_df = validate_df.replace(x)
low_risk_rows = validate_df[validate_df[target] == 'low_risk']
high_risk_rows = validate_df[validate_df[target] == 'high_risk']
validate_df = pd.concat([low_risk_rows.sample(n=len(high_risk_rows), random_state=37), high_risk_rows])
validate_df = validate_df.reset_index(drop=True)
validate_df = validate_df.rename({target:'target'}, axis="columns")
validate_df
validate_df.to_csv('2020Q1loans.csv', index=False)
```
---
```
import sys
import itertools
import numpy as np
import pandas as pd
from tqdm import tqdm_notebook as tqdm
from matplotlib import pyplot as plt
from sklearn.metrics import mutual_info_score
import networkx as nx
# https://networkx.github.io/documentation/stable/tutorial.html
import visJS2jupyter
import visJS2jupyter.visJS_module as visJS_module
# http://compbio.ucsd.edu/bringing-interactivity-network-visualization-jupyter-notebooks-visjs2jupyter/
sys.path.append("..") # Adds higher directory to python modules path for importing from src dir
from src.datasets import NyseStocksDataset, NyseSecuritiesDataset
from src.nlp_utils import *
%matplotlib inline
%load_ext autotime
%load_ext autoreload
%autoreload 2
ds = NyseStocksDataset('OCMvOC-3C', file_path='../data/nyse/prices-split-adjusted.csv', features=['open', 'close', 'movement', 'vix_open', 'vix_close'])
securities = NyseSecuritiesDataset(file_path='../data/nyse/securities.csv')
ds.load()
securities.load()
# features = pd.read_csv('cointegration.csv', index_col=0)
coints = pd.read_csv('../reports/cointegration-10-to-12.csv', index_col=0).stack()
coocs = pd.read_csv('../data/preprocessed/occurrences/cooccurrences.csv', index_col=0).stack().astype(float)
features = pd.merge(coocs.reset_index(), coints.reset_index(), on=['level_0', 'level_1'], how='outer').set_index(['level_0', 'level_1']).fillna(0)
features.columns = ['cooccurrence', 'cointegration']
def generate_threshold_counts(features):
# Remove duplicate entries
features = features[list((compA < compB) for ((compA, compB), _) in features.iterrows())]
# Select threshold to have in the end roughly the `n` largest edges left
amount_counts = features.groupby('cooccurrence').count()
amount_counts.columns = ['count']
threshold_counts = amount_counts[::-1].cumsum()[::-1]
return threshold_counts
threshold_counts = generate_threshold_counts(features)
def top_edges(features, n=100):
threshold = threshold_counts[(threshold_counts['count'] > n) & (threshold_counts['count'].shift(-1) <= n)].index[0]
return features[features['cooccurrence'] > threshold]
# https://github.com/ucsd-ccbb/visJS2jupyter/blob/master/visJS2jupyter/visJS_module.py
# http://compbio.ucsd.edu/bringing-interactivity-network-visualization-jupyter-notebooks-visjs2jupyter/
def display_interactive_graph(G, output_file=None):
# Prepare graph data
V = list(G.nodes())
E = list(G.edges())
pos = nx.spring_layout(G)
V_enriched = [(x, securities.get_company_name(x), securities.get_industry(x)) for x in V]
colors = plot.get_colors(np.unique([x[2] for x in V_enriched]))
nodes_dict = [{"id":n,
"title": f'{comp} ({industry})',
"color": colors[industry],
"border_width": 0.3,
"x":pos[n][0]*1000,
"y":pos[n][1]*1000} for (n, comp, industry) in V_enriched]
node_map = dict(zip(V, range(len(V))))
edges_dict = [{"id": f'{coocs[E[i]]:n} articles', "source": node_map[E[i][0]], "target": node_map[E[i][1]],
"width": 5 * coocs[E[i]] / features.cooccurrence.max()} for i in range(len(E))]
return visJS_module.visjs_network(nodes_dict, edges_dict, time_stamp=1000000, node_size_multiplier=7,
edge_width_field='width', edge_label_field='none',
graph_height=500, graph_width=900, export_network=bool(output_file), export_file=output_file)
def generate_graph(edges):
edges = [(idx[0], idx[1], { 'cooc': max(val.cooccurrence / features.cooccurrence.max(), 0.2) })
for idx, val in edges.iterrows()]
G = nx.Graph(title='number_of_shared_articles')
G.add_weighted_edges_from([(x[0], x[1], x[2]['cooc']) for x in edges])
return G
# 1occ -> 17147, 2cooc -> 9155, 5cooc -> 3969, 10cooc -> 2131, 25cooc -> 975, 185cooc -> 97, 272cooc -> 50
edges = top_edges(features, 50)
G = generate_graph(edges)
# display_interactive_graph(G, output_file=f'article_amounts_top{len(edges)}.json')
display_interactive_graph(G)
def ApEn(U, m, r):
    """Approximate entropy of the series U for embedding dimension m and tolerance r."""
    def _maxdist(x_i, x_j):
        # Chebyshev distance between two embedded vectors
        return max([abs(ua - va) for ua, va in zip(x_i, x_j)])
    def _phi(m):
        # All length-m subsequences and, for each, the share of others within tolerance r
        x = [[U[j] for j in range(i, i + m - 1 + 1)] for i in range(N - m + 1)]
        C = [len([1 for x_j in x if _maxdist(x_i, x_j) <= r]) / (N - m + 1.0) for x_i in x]
        return (N - m + 1.0)**(-1) * sum(np.log(C))
    N = len(U)
    return abs(_phi(m+1) - _phi(m))
# Usage example
U = np.array([85, 80, 89] * 17)
print(ApEn(U, 2, 3))
# 1.0996541105257052e-05
randU = np.random.choice([85, 80, 89], size=17*3)
print(ApEn(randU, 2, 3))
x = np.sin(np.arange(100)).round(1)
ApEn(x, 2, 3)
import scipy
x = np.array([1, 0, -1, 0, -1, -1, -1])
y = np.array([0, 1, 1, 0, 0, 1, 1])
scipy.stats.pearsonr(x, y)
a=[1,4,6]
b=[1,2,3]
ApEn(randU, 2, 3)
```
---
# Self-Driving Car Engineer Nanodegree
## Deep Learning
## Project: Build a Traffic Sign Recognition Classifier
---
## Importing required Libraries
```
import pickle
import tensorflow as tf
import tensorflow as tff
from tensorflow.contrib.layers import flatten
import numpy as np
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
```
---
## Step 0: Load The Data
```
training_file = "../data/train.p"
validation_file = "../data/valid.p"
testing_file = "../data/test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
```
---
## Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.
- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**
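A quick way to confirm these keys and the array shapes is to inspect the loaded dictionaries directly (a small sketch; only `'features'` and `'labels'` are used in this notebook):
```
# Peek at the keys and array shapes in the training pickle
print(train.keys())
print(train['features'].shape, train['labels'].shape)
```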
Completing the basic data summary below.
```
# Number of training examples
n_train = len(X_train)
# Number of validation examples
n_validation = len(X_valid)
# Number of testing examples.
n_test = len(X_test)
# What's the shape of a traffic sign image?
image_shape = X_train[0].shape
# How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
```
### Visualizing the dataset
```
### Data exploration visualization code goes here.
#Visualising 10 random images from training set
#These are some images out of thousands which will train the brain of our model
#to recognise the traffic signs correctly
for i in np.random.randint(low=0, high=n_train-1, size=5):
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
f.tight_layout()
ax1.imshow(X_train[i])
ax1.set_title(y_train[i], fontsize=30)
ax2.imshow(X_train[i+100])
ax2.set_title(y_train[i+100], fontsize=30)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
plt.show()
%matplotlib inline
#A histogram to understand the distribution of the classes in our training set
#will help us visualize the number of inputs available for each class
plt.hist(y_train, bins=n_classes)
plt.xlabel("Class label")
plt.ylabel("Frequency")
plt.show()
distribution, classes = np.histogram(y_train, bins=np.arange(n_classes), density=True)
```
----
## Step 2: Design and Test a Model Architecture
Implement a deep learning model (LeNet-5) that learns to recognize traffic signs. I will train and test the model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
### Pre-process the Data Set (normalization, grayscale, etc.)
```
#Converting the images to grayscale
X_train_gry = np.sum(X_train/3, axis=3, keepdims=True)
X_valid_gry = np.sum(X_valid/3, axis=3, keepdims=True)
X_test_gry = np.sum(X_test/3, axis=3, keepdims=True)
print('RGB shape:', X_train.shape)
print('Grayscale shape:', X_train_gry.shape)
X_train = X_train_gry
X_valid = X_valid_gry
X_test = X_test_gry
#Normalising the datasets
X_train_normalized = (X_train - 128)/128
X_valid_normalized = (X_valid - 128)/128
X_test_normalized = (X_test - 128)/128
print(np.mean(X_train))
print(np.mean(X_valid))
print(np.mean(X_test))
print(np.mean(X_train_normalized))
print(np.mean(X_valid_normalized))
print(np.mean(X_test_normalized))
X_train = X_train_normalized
X_valid = X_valid_normalized
X_test = X_test_normalized
```
### Model Architecture
```
EPOCHS = 50
BATCH_SIZE = 50
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
#Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# Activation.
conv1 = tf.nn.relu(conv1)
# Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# Activation.
conv2 = tf.nn.relu(conv2)
# Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# Activation.
fc1 = tf.nn.relu(fc1)
# Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# Activation.
fc2 = tf.nn.relu(fc2)
# Layer 5: Fully Connected. Input = 84. Output = 43.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(43))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
```
## Features and Labels
Train LeNet (originally demonstrated on [MNIST](http://yann.lecun.com/exdb/mnist/) data) to classify the traffic sign data.
`x` is a placeholder for a batch of input images.
`y` is a placeholder for a batch of output labels.
```
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, None)
one_hot_y = tf.one_hot(y, 43)
```
## Training Pipeline
Create a training pipeline that uses the model to classify the traffic sign data.
```
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
```
## Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
```
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
```
## Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
```
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenetttt')
print("Model saved")
```
## Testing the Model
A validation set can be used to assess how well the model is performing. A low accuracy on both the training and validation
sets implies underfitting. A high accuracy on the training set but a low accuracy on the validation set implies overfitting.
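To check for this, the training accuracy can be compared with the validation accuracy reported during training. A short sketch reusing the `evaluate` helper defined above:
```
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    # A much higher training accuracy than validation accuracy would indicate overfitting
    train_accuracy = evaluate(X_train, y_train)
    print("Train Accuracy = {:.3f}".format(train_accuracy))
```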
```
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
```
---
## Step 3: Test the Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.
### Load and Output the Images
```
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import os
import glob
import matplotlib.image as mpimg
import cv2 as cv2
import scipy.ndimage as ndimage
from scipy.misc import imread
new_imageset=['my_images/bumpy.jpg','my_images/caution.jpg','my_images/no-vehicles.jpg','my_images/stop.jpg','my_images/work.jpg']
my_images = np.zeros([len(new_imageset),32,32,3],dtype=np.uint8)
for i in range(len(new_imageset)):
my_images[i] = ndimage.imread(new_imageset[i])
my_labels = np.array([22,18,15,14,25],dtype=np.uint8)
f, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(1, 5, figsize=(24, 9))
f.tight_layout()
ax1.imshow(my_images[0])
ax2.imshow(my_images[1])
ax3.imshow(my_images[2])
ax4.imshow(my_images[3])
ax5.imshow(my_images[4])
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
plt.show()
my_images = np.asarray(my_images)
my_images_gry = np.sum(my_images/3, axis=3, keepdims=True)
my_images_normalized = (my_images_gry - 128)/128
print(my_images.shape)
print(my_images_normalized.shape)
print(my_labels)
f, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(1, 5, figsize=(24, 9))
f.tight_layout()
ax1.imshow(my_images_normalized[0].squeeze(), cmap='gray')
ax2.imshow(my_images_normalized[1].squeeze(), cmap='gray')
ax3.imshow(my_images_normalized[2].squeeze(), cmap='gray')
ax4.imshow(my_images_normalized[3].squeeze(), cmap='gray')
ax5.imshow(my_images_normalized[4].squeeze(), cmap='gray')
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
plt.show()
```
### Predicting the Sign Type for Each Image and Analyzing Performance
```
# Running the predictions here and using the model to output the prediction for each image.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
my_accuracy = evaluate(my_images_normalized, my_labels)
print("Test Set Accuracy = {:.3f}".format(my_accuracy))
```
### Output Top 5 Softmax Probabilities For Each Image Found on the Web
```
softmax_logits = tf.nn.softmax(logits)
top_k = tf.nn.top_k(softmax_logits, k=5)
csv_data = np.genfromtxt('signnames.csv', delimiter=',', names=True, dtype=None)
sign_names = [t[1].decode('utf-8') for t in csv_data]
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
my_softmax_logits = sess.run(softmax_logits, feed_dict={x: my_images_normalized, y:my_labels})
my_top_k = sess.run(top_k, feed_dict={x: my_images_normalized, y:my_labels})
for i in range(len(new_imageset)):
    img = ndimage.imread(new_imageset[i])
    # Build a title listing the top 5 softmax probabilities with their label ids and sign names
    title = '\n'.join('Probability - ' + str(my_top_k[0][i][k]) + ' : ' + 'Label - ' + str(my_top_k[1][i][k])
                      + ' - ' + sign_names[my_top_k[1][i][k]] for k in range(5))
    plt.title(title)
    plt.imshow(img)
    plt.show()
```
---
# Table of Contents
* [PRODUCT_ID](#PRODUCT_ID)
* [SOURCE_PRODUCT_ID](#SOURCE_PRODUCT_ID)
* [HiRISE_URL](#HiRISE_URL)
* [others](#others)
```
# setup
from pyrise import products as prod
obsid = prod.OBSERVATION_ID('PSP_003072_0985')
# test orbit number
assert obsid.orbit == '003072'
# test setting orbit property
obsid.orbit = 4080
assert obsid.orbit == '004080'
# test repr
assert obsid.__repr__() == 'PSP_004080_0985'
# test targetcode
assert obsid.targetcode == '0985'
# test setting targetcode property
obsid.targetcode = '0980'
assert obsid.targetcode == '0980'
assert obsid.__repr__() == 'PSP_004080_0980'
# test phase
assert obsid.phase == 'PSP'
# test upper orbit folder
assert obsid.get_upper_orbit_folder() == 'ORB_004000_004099'
# test storage path stem
assert obsid.storage_path_stem == 'PSP/ORB_004000_004099/PSP_004080_0980'
```
# PRODUCT_ID
```
pid = prod.PRODUCT_ID('PSP_003072_0985')
pid
pid.kind = 'RED'
pid
pid.s
pid.storage_stem
pid.label_fname
pid.label_path
pid.jp2_fname
pid.jp2_path
for item in dir(pid):
if not item.startswith('__'):
print(item,':')
print(getattr(pid, item))
print()
```
# SOURCE_PRODUCT_ID
```
spid = prod.SOURCE_PRODUCT_ID('PSP_003092_0985_RED4_0')
spid
spid.channel = 1
spid
spid.ccd
for i in dir(spid):
if not i.startswith('__'):
print(i,':')
print(getattr(spid, i))
print()
```
http://hirise-pds.lpl.arizona.edu/PDS/EDR/PSP/ORB_003000_003099/PSP_003092_0985/PSP_003092_0985_RED4_0.IMG
```
spid.pid.storage_stem
spid.pid.edr_storage_stem
spid.fpath
```
# HiRISE_URL
```
hiurl = prod.HiRISE_URL(spid.fpath)
hiurl.url
hiurl.path
```
# others
```
pid.label_path
pid.obsid
pid
prod.RED_PRODUCT_ID(pid.obsid.s, 4, 1).furl
prod.RED_PRODUCT_ID(pid.obsid.s, 4,1)
from pyrise import downloads
obsid = 'PSP_003092_0985'
downloads.download_RED_product(obsid, 4, 0)
red_pid = prod.RED_PRODUCT_ID(pid.obsid.s, 4,1)
red_pid.fname
pid
name = obsid + '_RED'
channels = [4, 5]
ccds = [0, 1]
for channel in channels:
for ccd in ccds:
print(f'{name}{channel}_0.cub')
sid = prod.RED_PRODUCT_ID(obsid, 4,0)
sid.pid.label_url
```
---
# Blocked (Stratified) Cluster Randomized Assignment (BCRA)
## BCRA3_2f
| Assignment | Clustering Level | Treatment Assignment | Treatment Level| Cluster Effect |
|:----------:|:----------------:|:--------------------:|:--------------:|:--------------:|
| blocked | 3 | cluster | 2 | fixed |
```
from pypowerup import effect_size, sample_size, power
# effect size, i.e., minimum detectable effect sizes (MDES)
effect_size(design='bcra3_2f', rho2=0.10, r21=0.50, r22=0.50, g=1, n=20, J=44, K=5)
# sample_size, i.e., minimum required sample sizes (MRSS) for level 3 units
sample_size(design='bcra3_2f', es=0.10219354337360606, rho2=0.10, r21=0.50, r22=0.50, g=1, n=20, J=44)
# power
power(design='bcra3_2f', es=0.10219354337360606, rho2=0.10, r21=0.50, r22=0.50, g=1, n=20, J=44, K=5)
```
## BCRA3_2r
| Assignment | Clustering Level | Treatment Assignment | Treatment Level| Cluster Effect |
|:----------:|:----------------:|:--------------------:|:--------------:|:--------------:|
| blocked | 3 | cluster | 2 | random |
```
# effect size, i.e., minimum detectable effect sizes (MDES)
effect_size(design='bcra3_2r', rho3=0.38, rho2=0.10, omega3=0.50, r21=0.37,
r22=0.53, r2t3=0, g=0, n=20, J=2, K=64)
# sample_size, i.e., minimum required sample sizes (MRSS) for level 3 units
sample_size(design='bcra3_2r', es=0.20020041517111645, rho3=0.38, rho2=0.10, omega3=0.50, r21=0.37,
r22=0.53, r2t3=0, g=0, n=20, J=2)
# power
power(design='bcra3_2r', es=0.20020041517111645, rho3=0.38, rho2=0.10, omega3=0.50, r21=0.37,
r22=0.53, r2t3=0, g=0, n=20, J=2, K=64)
```
## BCRA4_2r
| Assignment | Clustering Level | Treatment Assignment | Treatment Level| Cluster Effect |
|:----------:|:----------------:|:--------------------:|:--------------:|:--------------:|
| blocked | 4 | cluster | 2 | random |
```
# effect size, i.e., minimum detectable effect sizes (MDES)
effect_size(design='bcra4_2r', rho4=0.05, rho3=0.15, rho2=0.15, omega4=0.5, omega3=0.5, r21=0.5, r22=0.5,
r2t3=0.5, r2t4=0.5, g=0, n=10, J=4, K=4, L=20)
# sample_size, i.e., minimum required sample sizes (MRSS) for level 4 units
sample_size(design='bcra4_2r', es=0.14584081061169768, rho4=0.05, rho3=0.15, rho2=0.15, omega4=0.5, omega3=0.5, r21=0.5, r22=0.5,
r2t3=0.5, r2t4=0.5, g=0, n=10, J=4, K=4)
# power
power(design='bcra4_2r', es=0.14584081061169768, rho4=0.05, rho3=0.15, rho2=0.15, omega4=0.5, omega3=0.5, r21=0.5, r22=0.5,
r2t3=0.5, r2t4=0.5, g=0, n=10, J=4, K=4, L=20)
```
## BCRA4_3f
| Assignment | Clustering Level | Treatment Assignment | Treatment Level| Cluster Effect |
|:----------:|:----------------:|:--------------------:|:--------------:|:--------------:|
| blocked | 4 | cluster | 3 | fixed |
```
# effect size, i.e., minimum detectable effect sizes (MDES)
effect_size(design='bcra4_3f', rho3=0.15, rho2=0.15, r21=0.5, r22=0.5,
r23=0.5, g=2, n=10, J=4, K=4, L=15)
# sample_size, i.e., minimum required sample sizes (MRSS) for level 4 units
sample_size(design='bcra4_3f', es=0.2399780453218905, rho3=0.15, rho2=0.15, r21=0.5, r22=0.5,
r23=0.5, g=2, n=10, J=4, K=4)
# power
power(design='bcra4_3f', es=0.2399780453218905, rho3=0.15, rho2=0.15, r21=0.5, r22=0.5,
r23=0.5, g=2, n=10, J=4, K=4, L=15)
```
## BCRA4_3r
| Assignment | Clustering Level | Treatment Assignment | Treatment Level| Cluster Effect |
|:----------:|:----------------:|:--------------------:|:--------------:|:--------------:|
| blocked | 4 | cluster | 3 | random |
```
# effect size, i.e., minimum detectable effect sizes (MDES)
effect_size(design='bcra4_3r', rho4=0.05, rho3=0.15, rho2=0.15, omega4=0.5,
r21=0.5, r22=0.5, r23=0.5, r2t4=0.5, g=3, n=10, J=4, K=20, L=20)
# sample_size, i.e., minimum required sample sizes (MRSS) for level 4 units
sample_size(design='bcra4_3r', es=0.12100407246925271, rho4=0.05, rho3=0.15, rho2=0.15, omega4=0.5,
r21=0.5, r22=0.5, r23=0.5, r2t4=0.5, g=3, n=10, J=4, K=20)
# power
power(design='bcra4_3r', es=0.12100407246925271, rho4=0.05, rho3=0.15, rho2=0.15, omega4=0.5,
r21=0.5, r22=0.5, r23=0.5, r2t4=0.5, g=3, n=10, J=4, K=20, L=20)
```
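In every block above, the `es` value fed to `sample_size` and `power` is simply the MDES returned by `effect_size` with the same inputs, so the three functions can be chained as a consistency check. The sketch below does this for the BCRA3_2f design; the expectation that power lands near the conventional 0.80 target at the MDES is an assumption based on typical PowerUp! defaults.
```
# Hedged sketch: round-trip check for the bcra3_2f design with the same inputs as above.
from pypowerup import effect_size, sample_size, power

mdes = effect_size(design='bcra3_2f', rho2=0.10, r21=0.50, r22=0.50, g=1, n=20, J=44, K=5)
k_needed = sample_size(design='bcra3_2f', es=mdes, rho2=0.10, r21=0.50, r22=0.50, g=1, n=20, J=44)
pwr = power(design='bcra3_2f', es=mdes, rho2=0.10, r21=0.50, r22=0.50, g=1, n=20, J=44, K=5)
print(mdes, k_needed, pwr)  # power at the MDES should sit near the default target (typically 0.80)
```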
# Python web frameworks
## ISO/OSI model

<https://www.jetbrains.com/lp/devecosystem-2020/python/>
## Overview
### Why Python for Web Development?
- Easy to Learn. Easy to maintain.
- Rich Ecosystem for Backend tasks
- Rich production-grade community solutions
- Easy for prototyping
- Platform-independent
## Overview
### Why Python isn't always the best choice for Web Development
- High memory consumption
- No true in-process parallelism for threads (the GIL)
- Not popular for mobile backends
- Many errors only surface at runtime (dynamic typing)
## How does it work?
- The Web Server accepts incoming connection Requests
- The Web Server applies routing according to Request Headers and Data
- The Web Server invokes a Web App listener/worker via a socket or port
- The Web App listener starts processing the Request
- The Web App applies business logic to the Request Data and produces a Response
- The Web Server passes the Response back to the Web Browser

## Web Application Basic Components
- WSGI Server
- URL Router
- Middlewares
- Business Logic (Controllers, Services, etc)
- Data Representation (Views)
- Data Model
- Background Tasks
## WSGI Server
- An interface protocol (PEP 3333) between web servers and Python web applications (a minimal example follows this list)
- Acts like the glue between a proxying Web Server (like nginx) and the Web Application logic
- Normally listens on a specific port or file socket for incoming requests
- Keeps the request-response session alive
- Spawns multiple workers to serve many requests at the same time
- Can easily be scaled up with the Load Balancer architectural pattern
- Popular Implementations:
- Gunicorn
- uWSGI
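To make the interface concrete, here is a minimal WSGI application: a callable that receives `environ` and `start_response` and returns an iterable of bytes. It is served with the stdlib `wsgiref` server purely for illustration; in production Gunicorn or uWSGI would point at the same `application` callable.
```python
# Minimal WSGI application sketch (stdlib only)
def application(environ, start_response):
    body = b"Hello from a WSGI app"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    # production equivalent: gunicorn module_name:application
    make_server("127.0.0.1", 8000, application).serve_forever()
```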
## ASGI
- The asynchronous counterpart of WSGI, designed for async Python (a minimal example follows below)
- Popular implementations:
- Uvicorn (uses uvloop, a wrapper around libuv)
- Daphne (uses Twisted)
- Hypercorn

## URL Router
- Matches the Request path to specific application logic elements, mainly Views
- Can support regex rules for describing URL patterns (classic way) or a special syntax (modern way)
- Can help constrain or fully describe URL parameters
- Can be described in dedicated URL config files (the Django way) or inlined using decorators (the Flask way)
```python
from django.urls import path
from . import views
urlpatterns = [
    path("articles/2003/", views.special_case_2003),
    path("articles/<int:year>/", views.year_archive),
    path("articles/<int:year>/<int:month>/", views.month_archive),
    path("articles/<int:year>/<int:month>/<str:slug>/", views.article),
]
```
## Middlewares
- Intercept and process raw Request and Response data (a sketch follows the diagrams below).
- Could handle various tasks:
- Security checks (Auth, Rate limiting, CSRF, etc)
- Work with Headers (cache, x-frame, app-specific headers)
- Respond cached data
- Process user session (recognize user and add user information to request/response)
- Enrich Request or Response data
## Middlewares diagram

## Middlewares example

## Controllers, Services, etc
- Not a mandatory part of the web application structure in Python frameworks
- Useful for synchronous utility tasks: on-the-fly data transformation, reading files from the filesystem, reading from cache, etc.
- Should be replaced with background tasks for heavy I/O or time-bound operations
## Data Representation (Views)
- Largely plays the Controller role of the MVC pattern.
- Prepares Data Model objects to be presented in a specific format: render a Template, prepare JSON, serve a file from the filesystem, open a stream, etc.
- Works with the full Request payload: Headers, Body, URL Params. Can also rely on User Session data, Auth, and other business logic that can be matched with a particular Request.
- Typically one View per REST verb (GET, POST, DELETE, etc.)
- Most popular web frameworks already include many Response helpers: JSON responses, template rendering, WebSocket logic, static files
## Data Representation example
```python
from django.http import Http404
from django.shortcuts import render
from polls.models import Poll
def detail(request, poll_id):
try:
        p = Poll.objects.get(pk=poll_id)
except Poll.DoesNotExist:
raise Http404("Poll does not exist")
return render(request, "polls/detail.html", {"poll": p})
```
## Data Model: ORM
- Most of the time the word `model` refers to ORM objects.
- ORM stands for Object-Relational Mapping
- Binds data in storage (DB) to object-oriented entities such as classes and objects
- Manipulates both the schema and the data itself
- Typically provides an expressive, rich syntax for basic data manipulation: CRUD operations, aggregation, grouping, etc.
- Sanitizes input and prevents common data-related attacks such as SQL injection
- Helps keep the DB schema history using migrations
- Using an ORM can also be tricky:
- Developers should be aware of problems like the N+1 query and always be ready to debug the raw SQL output of ORM methods
- Migration conflicts, such as making nullable fields non-null, can lead to serious problems with application data
## Data Model: ORM example
```python
from django.db import models
class Musician(models.Model):
first_name = models.CharField(max_length=50)
last_name = models.CharField(max_length=50)
instrument = models.CharField(max_length=100)
class Album(models.Model):
artist = models.ForeignKey(Musician, on_delete=models.CASCADE)
name = models.CharField(max_length=100)
release_date = models.DateField()
    num_stars = models.IntegerField()
```
## Background Tasks
- Useful for handling slow, heavy and asynchronous duties like:
- Updating cache or DB tables
- Recalculating statistics
- Preparing heavy data to response to user request
- Do not interrupt or corrupt current requests
- Can be scaled out to other machines or cloud services
- Helps implement the Web Application as part of [CQRS](https://ru.wikipedia.org/wiki/CQRS) or any other asynchronous messaging paradigm
- Celery is the most popular task queue/scheduler for Python (a minimal task sketch follows below)
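A minimal Celery sketch of the pattern described above: the view only enqueues the work and returns immediately, while a worker process does the heavy lifting. The broker URL and the task body are assumptions.
```python
from celery import Celery

# Hypothetical broker URL; Redis and RabbitMQ are the common choices
app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def recalculate_statistics(user_id):
    # heavy, slow work runs in a worker process, outside the request cycle
    ...

# inside a view: enqueue the task and return immediately
# recalculate_statistics.delay(user_id=42)
```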
## Advanced Web Development Topics
- Caching
- Auth
- Template Engines
- Static Files serving
- Configuration and Deployment
## Caching
- Caching solves availability and performance problems for Web Applications:
- Caching content pages
- Caching API Responses
- Caching complex and repetitive DB Queries results (using Task Scheduler)
- The cache always needs to be kept fresh
- LRU eviction combined with a read-through cache is one of the most popular caching strategies (a sketch follows below)
- Redis is the most popular key-value store used for cached data
## Caching

## Authentication and Authorization
- **Authentication** - confirms that users are who they say they are.
- **Authorization** - gives those users permission to access a resource.
- Popular web frameworks provide ready-made authentication solutions, which are the best choice for most authentication duties.
- Many popular web frameworks also have community plugins for third-party authentication (Okta and other OAuth providers, LDAP, SAML SSO providers)
## Authentication Types
- HTTP Basic Auth (username and password sent with each request in the Authorization header)
- API Key (issue a key within the Web App profile and pass it with requests)
- OAuth (user credentials are not passed to the application; a security token is used instead, and each token is bound to a specific role/scope, combining Authentication with Authorization features)
## OAuth

<https://api.slack.com/legacy/oauth>
## Template Engines
- Template engines render dynamic HTML content using the web application context (variables, functions, etc.)
- Template engines are widely used when the web application does not need a standalone frontend
- The most popular template engines are Jinja2 and Django Templates (a small Jinja2 sketch follows below)
- Templates are rendered at request time, unless you choose to pre-render some of them
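Rendering with Jinja2 boils down to passing application context into a template, as in this minimal sketch:
```python
from jinja2 import Template

template = Template("Hello, {{ user }}! You have {{ count }} new messages.")
print(template.render(user="Maria", count=3))
# In Flask/Django the framework loads templates from files and injects the context for you.
```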
## Static Files serving
- Web frameworks also have the ability to serve static files within the application: HTML, CSS and JS files, images and media, downloadable artifacts, etc.
- Django has a built-in staticfiles framework which helps discover, build, tag, and serve static content
- In practice, though, you usually should not serve static content from your backend; rely on Nginx, a cloud CDN, or the frontend application instead.

## Configuration and Deployment
- Modern web frameworks allow you to follow the [Twelve Factor App](https://12factor.net/) principles for application Configuration and Deployment
- Web application configs should be environment-dependent, and each environment (test, prod, dev, etc.) should use only its own suitable configuration
- Secret values must be kept out of the configuration files but still be available for import into the application runtime (e.g. via environment variables)
- Django supports a more verbose Debug mode for easier development
- A modern Python web application can be deployed in a Docker container or deployed as-is to cloud infrastructure.
## Modern Python Web Frameworks
- Full-Stack frameworks with built-in template rendering
- Django
- Pyramid
- Extendable and Backend-Only Frameworks:
- Flask
- Starlette
- FastAPI
- Falcon
- Tornado
- Sanic
## Django Essentials
- Easy to start
- Rich built-ins:
- Static Files Framework
- Authorization and Authentication, User Sessions
- Admin panel
- ORM
- First class Django Forms
- Own Template Engine
- Caching framework
- Advanced Security Features
- First class support for many popular backing services — DBs, Key-Value storages
- I18n, RSS, sitemaps
- Backward compatibility
- Truly follow [DRY](https://ru.wikipedia.org/wiki/Don%E2%80%99t_repeat_yourself) principles
## Flask Essentials
- Microframework with highly extensible modular design
- You can use any library you like for ORM, Auth, Templates, Forms, etc
- Very simple and ready-to-go API
- Rich Documentation
- Best for tiny applications or prototyping
## RESTful API
- RESTful API: Design
- RESTful API Tools: OpenAPI
- RESTful API Tools: FastAPI
## RESTful API

<https://tutorialedge.net/software-eng/what-is-a-rest-api/>
## RESTful API: Design
- Consider making your APIs RESTful and following the principles of good REST API design:
- Wrap HTTP methods around resources and their URIs, and make them idempotent where the verb requires it (GET, PUT, DELETE)
- Consider using Authentication and Authorization
- Use representative status codes in responses: 201 for Created, 4xx for client-side errors, 5xx for server errors.
- Do not return errors with a 2xx code
- Keep connection alive
- Use compression for network performance
- Use caching headers
## RESTful API Tools: OpenAPI
- OpenAPI is a specification for designing and documenting REST API
- It describes:
- All possible resources
- All possible parameters
- All possible responses
- Stores the API schema in JSON or YAML
- Solves the problem of poor API documentation
- Good for API auto-testing
- Easy to track API changes
## RESTful API Tools: FastAPI
- FastAPI is a Python web framework designed to build REST APIs in a fast and effective way
- Full support for asynchronous code
- Minimalistic and simple API
- Excellent data validation with Python type hints and the Pydantic library
- Great Documentation
- Built-in support for OpenAPI
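A minimal FastAPI sketch showing type-hint-driven validation and the automatically generated OpenAPI docs; the `Item` model and routes are illustrative only.
```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.get("/items/{item_id}")
async def read_item(item_id: int):      # item_id is validated as an int automatically
    return {"item_id": item_id}

@app.post("/items/", status_code=201)
async def create_item(item: Item):      # request body is validated against the Item model
    return item

# run with: uvicorn module_name:app --reload ; interactive OpenAPI docs at /docs
```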
## RESTful Frameworks and Tools
- FastAPI
- Django REST Framework
- Flask-restful
## More to Read
- Tutorials:
- [Two Scoops of Django](https://www.feldroy.com/products/two-scoops-of-django-3-x)
- [Django Girls Tutorial 🇷🇺](https://tutorial.djangogirls.org/ru/)
- [Мега-Учебник Flask (Хабр) 🇷🇺](https://habr.com/ru/post/346306/)
- [FastAPI Tutorial](https://fastapi.tiangolo.com/tutorial/)
- Advanced Web Development:
- [Web Developer Roadmap 2020](https://github.com/kamranahmedse/developer-roadmap)
- [Donne Martin System Design Primer](https://github.com/donnemartin/system-design-primer)
- [12 Factor App Manifest 🇷🇺](https://12factor.net/ru/)
- Best Practices:
- [WeMake.services Django Template](https://github.com/wemake-services/wemake-django-template)
- Django Batteries:
- [Awesome-django (Github)](https://github.com/wsvincent/awesome-django)
- RESTful Design:
- [Best Practices (Microsoft)](https://docs.microsoft.com/ru-ru/azure/architecture/best-practices/api-design)
- [Django REST Framework Tutorial](https://www.django-rest-framework.org/tutorial/quickstart/)
## Special thanks to Evgenii Uvarov for the original slides!
# THE END
# Basic file structure for pytest
pytest can automatically walk a folder, find the .py files that contain test cases, and run the test case code inside them. How are files picked up automatically? Name the .py files with a `test_` prefix (or a `_test` suffix) and start the test function names with `test_`.
For example, suppose our test files are organized like this:
```
|demo
test_basic.py
test_resource.py
```
If you open them you will find many functions whose names start with `test_`; all of these can be discovered automatically.
(In a Jupyter Notebook, `!` runs a terminal command. The command below is equivalent to opening a terminal in this notebook's folder and running `pytest demo`.)
```
! pytest demo
```
In the example above, pytest found the 2 .py files under the demo folder that contain test cases, located the test case code inside them, and executed it. (Here, all of our test cases pass.)
# Basic way to write test cases
The basic idea of a test case is to run the function under test and then check whether its behavior (producing a particular result, correctly raising an Exception) matches the design.
For example, imagine a function `func` that must satisfy two requirements:
1. It accepts a string `s` and an integer `n`, and returns `s` repeated `n` times and concatenated together.
2. If the type of `s` is not `str`, it raises a `TypeError`.
For the first requirement, we can construct a concrete combination of arguments, run the function under test, and compare the returned result with what we designed:
```
def test_value():
assert func('ab',3) == 'ababab'
```
The `assert` statement checks whether the condition expression after it holds. If it does, nothing happens; if it does not, an Exception is raised and caught by pytest.
The second requirement cannot be checked directly with an `assert` statement, but pytest provides a context manager to express "this should raise an Exception of type X". The syntax is as follows:
```
def test_error():
with pytest.raises(TypeError) as error_info:
func(1,3)
```
In our initial code both tests pass; try modifying the code and observing pytest's output.
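For reference, a minimal implementation of `func` that satisfies both tests could look like the sketch below (the demo project may implement it differently):
```
def func(s, n):
    if not isinstance(s, str):
        raise TypeError('s must be a str')
    return s * n
```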
# Creating and tearing down resources
In some situations we need to create resources before a test case runs and destroy them after it finishes. For example: create a table in a database, import data, test a piece of SQL logic, and then drop the table. In these situations you can build a resource manager with pytest's `@pytest.fixture` decorator and the `yield` syntax.
```
@pytest.fixture # decorator provided by pytest
def function_level_resource():
    # code that creates the resource
    print('---------------------')
    print('setup function level resource')
    # If needed, yield the created resource (e.g. a connection conn to a particular database); if nothing needs to be returned (e.g. you only created a table in the database), a bare yield statement is enough
    yield 'some resource' # replace with a real resource, such as a connection
    # code that tears down the resource
print('teardown function level resource')
print('---------------------')
```
If you are not familiar with Python's decorator and yield syntax, the code above may be confusing. If you don't have time to study decorators and yield in detail, it is enough to know that:
1. Although this code uses function-definition syntax, the result is not a plain function but an object, so don't try to read it from a function's point of view.
2. Just remember where the code for creating the resource, returning the resource, and tearing down the resource goes.
To use the resource in test case code, simply pass this "function" name as a parameter of the test case:
```
def test_1(function_level_resource):
print('running test case ',1)
    print('Get '+function_level_resource) # the value yielded by the fixture is accessed in the test case via the fixture's name
assert True
```
This way, before this test case runs, the resource-creation code defined in function_level_resource is executed, and the resource returned by yield is exposed to the test case code through the function_level_resource variable. After the test case finishes, the teardown code is executed.
If the whole .py file should share a single resource, created once before all of that file's test cases run and destroyed once after they all finish, you can define a module-level resource manager, like this:
```
@pytest.fixture(scope="module")
def moudule_level_resource():
# setup resource and return by yield
print('==========================')
print('setup module level resource')
    yield 'some module level resource' # replace with a real resource, such as a connection
# teardown resource
print('teardown module level resource')
print('==========================')
```
In test_resource.py, every test case uses both the module-level resource and the function-level resource, as sketched below.
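The test cases in test_resource.py are not shown here; they presumably look something like this sketch, requesting both fixtures by name (an illustration, not the file's exact contents):
```
def test_2(moudule_level_resource, function_level_resource):
    # both fixture names match the definitions above
    print('running test case', 2)
    print('Get ' + moudule_level_resource)
    print('Get ' + function_level_resource)
    assert True
```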
Let's verify the result below. You can see that the module-level resource is created and destroyed only once for the whole test_resource.py run, while the function-level resource is created and destroyed at the start and end of every test function.
(Note that pytest does not show print output by default; add the -s flag if you want to see it.)
```
! pytest demo -s
```
# Feature Analysis Using TensorFlow Data Validation and Facets
## Learning Objectives
1. Use TFRecords to load record-oriented binary format data
2. Use TFDV to generate statistics and Facets to visualize the data
3. Use the TFDV widget to answer questions
4. Analyze label distribution for subset groups
## Introduction
Bias can manifest in any part of a typical machine learning pipeline, from an unrepresentative dataset, to learned model representations, to the way in which the results are presented to the user. Errors that result from this bias can disproportionately impact some users more than others.
[TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) (TFDV) is one tool you can use to analyze your data to find potential problems in your data, such as missing values and data imbalances - that can lead to Fairness disparities. The TFDV tool analyzes training and serving data to compute descriptive statistics, infer a schema, and detect data anomalies. [Facets Overview](https://pair-code.github.io/facets/) provides a succinct visualization of these statistics for easy browsing. Both the TFDV and Facets are tools that are part of the [Fairness Indicators](https://www.tensorflow.org/tfx/fairness_indicators).
In this notebook, we use TFDV to compute descriptive statistics that provide a quick overview of the data in terms of the features that are present and the shapes of their value distributions. We use Facets Overview to visualize these statistics using the Civil Comments dataset.
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/adv_tfdv_facets.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
## Set up environment variables and load necessary libraries
We will start by importing the necessary dependencies for the libraries we'll be using in this exercise. First, run the cell below to install Fairness Indicators.
**NOTE:** You can ignore the "pip" being invoked by an old script wrapper, as it will not affect the lab's functionality.
```
# Fairness Indicators is designed to support in evaluating, improving, and comparing models for fairness concerns.
!pip3 install fairness-indicators==0.1.2 --user
```
<strong>Restart the kernel</strong> after you do a pip3 install (click on the <strong>Restart the kernel</strong> button above).
Kindly ignore the deprecation warnings and incompatibility errors.
Next, import all the dependencies we'll use in this exercise, which include Fairness Indicators, TensorFlow Data Validation (tfdv), and the What-If tool (WIT) Facets Overview.
```
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the results of that search
# to a name in the local scope.
# %tensorflow_version 2.x
import sys, os
import warnings
warnings.filterwarnings('ignore')
#os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # Ignore deprecation warnings
import tempfile
import apache_beam as beam
import numpy as np
import pandas as pd
from datetime import datetime
import tensorflow_hub as hub
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_data_validation as tfdv
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
from tensorflow_model_analysis.addons.fairness.view import widget_view
from fairness_indicators.examples import util
import warnings
warnings.filterwarnings("ignore")
from witwidget.notebook.visualization import WitConfigBuilder
from witwidget.notebook.visualization import WitWidget
print(tf.version.VERSION)
print(tf) # This statement shows us the imported TensorFlow module and where it is loaded from.
```
### About the Civil Comments dataset
Click below to learn more about the Civil Comments dataset, and how we've preprocessed it for this exercise.
The Civil Comments dataset comprises approximately 2 million public comments that were submitted to the Civil Comments platform. [Jigsaw](https://jigsaw.google.com/) sponsored the effort to compile and annotate these comments for ongoing [research](https://arxiv.org/abs/1903.04561); they've also hosted competitions on [Kaggle](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) to help classify toxic comments as well as minimize unintended model bias.
#### Features
Within the Civil Comments data, a subset of comments are tagged with a variety of identity attributes pertaining to gender, sexual orientation, religion, race, and ethnicity. Each identity annotation column contains a value that represents the percentage of annotators who categorized a comment as containing references to that identity. Multiple identities may be present in a comment.
**NOTE:** These identity attributes are intended *for evaluation purposes only*, to assess how well a classifier trained solely on the comment text performs on different tag sets.
To collect these identity labels, each comment was reviewed by up to 10 annotators, who were asked to indicate all identities that were mentioned in the comment. For example, annotators were posed the question: "What genders are mentioned in the comment?", and asked to choose all of the following categories that were applicable.
* Male
* Female
* Transgender
* Other gender
* No gender mentioned
**NOTE:** *We recognize the limitations of the categories used in the original dataset, and acknowledge that these terms do not encompass the full range of vocabulary used in describing gender.*
Jigsaw used these ratings to generate an aggregate score for each identity attribute representing the percentage of raters who said the identity was mentioned in the comment. For example, if 10 annotators reviewed a comment, and 6 said that the comment mentioned the identity "female" and 0 said that the comment mentioned the identity "male," the comment would receive a `female` score of `0.6` and a `male` score of `0.0`.
**NOTE:** For the purposes of annotation, a comment was considered to "mention" gender if it contained a comment about gender issues (e.g., a discussion about feminism, wage gap between men and women, transgender rights, etc.), gendered language, or gendered insults. Use of "he," "she," or gendered names (e.g., Donald, Margaret) did not require a gender label.
#### Label
Each comment was rated by up to 10 annotators for toxicity, who each classified it with one of the following ratings.
* Very Toxic
* Toxic
* Hard to Say
* Not Toxic
Again, Jigsaw used these ratings to generate an aggregate toxicity "score" for each comment (ranging from `0.0` to `1.0`) to serve as the [label](https://developers.google.com/machine-learning/glossary?utm_source=Colab&utm_medium=fi-colab&utm_campaign=fi-practicum&utm_content=glossary&utm_term=label#label), representing the fraction of annotators who labeled the comment either "Very Toxic" or "Toxic." For example, if 10 annotators rated a comment, and 3 of them labeled it "Very Toxic" and 5 of them labeled it "Toxic", the comment would receive a toxicity score of `0.8`.
**NOTE:** For more information on the Civil Comments labeling schema, see the [Data](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data) section of the Jigsaw Untended Bias in Toxicity Classification Kaggle competition.
### Preprocessing the data
For the purposes of this exercise, we converted toxicity and identity columns to booleans in order to work with our neural net and metrics calculations. In the preprocessed dataset, we considered any value ≥ 0.5 as True (i.e., a comment is considered toxic if 50% or more crowd raters labeled it as toxic).
For identity labels, the threshold 0.5 was chosen and the identities were grouped together by their categories. For example, if one comment has `{ male: 0.3, female: 1.0, transgender: 0.0, heterosexual: 0.8, homosexual_gay_or_lesbian: 1.0 }`, after processing, the data will be `{ gender: [female], sexual_orientation: [heterosexual, homosexual_gay_or_lesbian] }`.
**NOTE:** Missing identity fields were converted to False.
### Use TFRecords to load record-oriented binary format data
-------------------------------------------------------------------------------------------------------
The [TFRecord format](https://www.tensorflow.org/tutorials/load_data/tfrecord) is a simple [Protobuf](https://developers.google.com/protocol-buffers)-based format for storing a sequence of binary records. It lets you and your machine learning models handle arbitrarily large datasets over the network because it:
1. Splits up large files into 100-200MB chunks
2. Stores the results as serialized binary messages for faster ingestion
If you already have a dataset in TFRecord format, you can use the tf.keras.utils functions for accessing the data (as you will below!). If you want to practice creating your own TFRecord datasets you can do so outside of this lab by [viewing the documentation here](https://www.tensorflow.org/tutorials/load_data/tfrecord).
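For intuition, writing a TFRecord file is just serializing `tf.train.Example` protos one after another. A minimal sketch follows; the feature names mirror this dataset, but the values are made up for illustration.
```
# Sketch: serialize one Example into a TFRecord file (illustrative values only)
import tensorflow as tf

example = tf.train.Example(features=tf.train.Features(feature={
    "comment_text": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"a sample comment"])),
    "toxicity": tf.train.Feature(float_list=tf.train.FloatList(value=[0.0])),
}))
with tf.io.TFRecordWriter("sample.tfrecord") as writer:
    writer.write(example.SerializeToString())
```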
#### TODO 1: Use the utility functions tf.keras to download and import our datasets
Run the following cell to download and import the training and validation preprocessed datasets.
```
download_original_data = False #@param {type:"boolean"}
# TODO 1
# Downloads a file from a URL if it is not already in the cache using the `tf.keras.utils.get_file()` function.
if download_original_data:
train_tf_file = tf.keras.utils.get_file('train_tf.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/train_tf.tfrecord')
validate_tf_file = tf.keras.utils.get_file('validate_tf.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/validate_tf.tfrecord')
# The identity terms list will be grouped together by their categories
  # (see 'IDENTITY_COLUMNS') at threshold 0.5. Only the identity term column,
# text column and label column will be kept after processing.
train_tf_file = util.convert_comments_data(train_tf_file)
validate_tf_file = util.convert_comments_data(validate_tf_file)
# TODO 1a
else:
train_tf_file = tf.keras.utils.get_file('train_tf_processed.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/train_tf_processed.tfrecord')
validate_tf_file = tf.keras.utils.get_file('validate_tf_processed.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/validate_tf_processed.tfrecord')
```
### Use TFDV to generate statistics and Facets to visualize the data
TensorFlow Data Validation supports data stored in a TFRecord file, a CSV input format, with extensibility for other common formats. You can find the available data decoders [here](https://github.com/tensorflow/data-validation/tree/master/tensorflow_data_validation/coders). In addition, TFDV provides the [tfdv.generate_statistics_from_dataframe](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_dataframe) utility function for users with in-memory data represented as a pandas DataFrame.
In addition to computing a default set of data statistics, TFDV can also compute statistics for semantic domains (e.g., images, text). To enable computation of semantic domain statistics, pass a tfdv.StatsOptions object with enable_semantic_domain_stats set to True to tfdv.generate_statistics_from_tfrecord. Before we train the model, let's do a quick audit of our training data using [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started), so we can better understand our data distribution.
#### TODO 2: Use TFDV to get quick statistics on your dataset
The following cell may take 2–3 minutes to run. Please ignore the deprecation warnings.
**NOTE:** Please re-run the below cell if you are not getting the TensorFlow Data Validation widget in the output.
```
# TODO 2
# The computation of statistics using TFDV. The returned value is a DatasetFeatureStatisticsList protocol buffer.
stats = tfdv.generate_statistics_from_tfrecord(data_location=train_tf_file)
# TODO 2a
# A visualization of the statistics using Facets Overview.
tfdv.visualize_statistics(stats)
```
### TODO 3: Use the TensorFlow Data Validation widget above to answer the following questions.
#### **1. How many total examples are in the training dataset?**
#### Solution
See below solution.
**There are 1.08 million total examples in the training dataset.**
The count column tells us how many examples there are for a given feature. Each feature (`sexual_orientation`, `comment_text`, `gender`, etc.) has 1.08 million examples. The missing column tells us what percentage of examples are missing that feature.

Each feature is missing from 0% of examples, so we know that the per-feature example count of 1.08 million is also the total number of examples in the dataset.
#### **2. How many unique values are there for the `gender` feature? What are they, and what are the frequencies of each of these values?**
**NOTE #1:** `gender` and the other identity features (`sexual_orientation`, `religion`, `disability`, and `race`) are included in this dataset for evaluation purposes only, so we can assess model performance on different identity slices. The only feature we will use for model training is `comment_text`.
**NOTE #2:** *We recognize the limitations of the categories used in the original dataset, and acknowledge that these terms do not encompass the full range of vocabulary used in describing gender.*
#### Solution
See below solution.
The **unique** column of the **Categorical Features** table tells us that there are 4 unique values for the `gender` feature.
To view the 4 values and their frequencies, we can click on the **SHOW RAW DATA** button:

The raw data table shows that there are 32,208 examples with a gender value of `female`, 26,758 examples with a value of `male`, 1,551 examples with a value of `transgender`, and 4 examples with a value of `other gender`.
**NOTE:** As described [earlier](#scrollTo=J3R2QWkru1WN), a `gender` feature can contain zero or more of these 4 values, depending on the content of the comment. For example, a comment containing the text "I am a transgender man" will have both `transgender` and `male` as `gender` values, whereas a comment that does not reference gender at all will have an empty/false `gender` value.
#### **3. What percentage of total examples are labeled toxic? Overall, is this a class-balanced dataset (relatively even split of examples between positive and negative classes) or a class-imbalanced dataset (majority of examples are in one class)?**
**NOTE:** In this dataset, a `toxicity` value of `0` signifies "not toxic," and a `toxicity` value of `1` signifies "toxic."
#### Solution
See below solution.
**7.98 percent of examples are toxic.**
Under **Numeric Features**, we can see the distribution of values for the `toxicity` feature. 92.02% of examples have a value of 0 (which signifies "non-toxic"), so 7.98% of examples are toxic.

This is a [**class-imbalanced dataset**](https://developers.google.com/machine-learning/glossary?utm_source=Colab&utm_medium=fi-colab&utm_campaign=fi-practicum&utm_content=glossary&utm_term=class-imbalanced-dataset#class-imbalanced-dataset), as the overwhelming majority of examples (over 90%) are classified as nontoxic.
Notice that there is one numeric feature (count of toxic comments) and six categorical features.
### TODO 4: Analyze label distribution for subset groups
Run the following code to analyze label distribution for the subset of examples that contain a `gender` value
**NOTE:** *The cell should run for just a few minutes*
```
#@title Calculate label distribution for gender-related examples
raw_dataset = tf.data.TFRecordDataset(train_tf_file)
toxic_gender_examples = 0
nontoxic_gender_examples = 0
# TODO 4
# There are 1,082,924 examples in the dataset
# The `take()` method returns the specified number of elements starting from the first element.
for raw_record in raw_dataset.take(1082924):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
if str(example.features.feature["gender"].bytes_list.value) != "[]":
if str(example.features.feature["toxicity"].float_list.value) == "[1.0]":
toxic_gender_examples += 1
else:
nontoxic_gender_examples += 1
# TODO 4a
print("Toxic Gender Examples: %s" % toxic_gender_examples)
print("Nontoxic Gender Examples: %s" % nontoxic_gender_examples)
```
#### **What percentage of `gender` examples are labeled toxic? Compare this percentage to the percentage of total examples that are labeled toxic from #3 above. What, if any, fairness concerns can you identify based on this comparison?**
There are 7,189 gender-related examples that are labeled toxic, which represent 14.7% of all gender-related examples.
The percentage of gender-related examples that are toxic (14.7%) is nearly double the percentage of toxic examples overall (7.98%). In other words, in our dataset, gender-related comments are almost two times more likely than comments overall to be labeled as toxic.
This skew suggests that a model trained on this dataset might learn a correlation between gender-related content and toxicity. This raises fairness considerations, as the model might be more likely to classify nontoxic comments as toxic if they contain gender terminology, which could lead to [disparate impact](https://developers.google.com/machine-learning/glossary?utm_source=Colab&utm_medium=fi-colab&utm_campaign=fi-practicum&utm_content=glossary&utm_term=disparate-impact#disparate-impact) for gender subgroups.
Copyright 2021 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Facial Keypoint Detection
**Author:** [ssz95](https://github.com/zzs95) <br>
**Date:** 2022.1 <br>
**Abstract:** This tutorial demonstrates how to implement facial keypoint detection with PaddlePaddle.
## 1. Introduction
In image processing, a keypoint is essentially a feature: an abstract description of a fixed region or of a spatial/physical relationship, capturing a combination or contextual relationship within a certain neighborhood. It is not merely a point or a position; it also represents how the context and the surrounding neighborhood combine. The goal of keypoint detection is to have the computer find the coordinates of these points in an image. As a fundamental task in computer vision, keypoint detection is of crucial importance to higher-level tasks such as recognition and classification.
Keypoint detection methods fall broadly into two types: one solves the problem by regressing coordinates directly; the other models keypoints as heatmaps and obtains keypoint locations by regressing the heatmap distribution through a pixel-classification task. Both are simply different means to the same end: finding where a point lies in the image and how it relates to its surroundings.
Facial keypoint detection is a successful application of keypoint detection. This example briefly shows how to implement facial keypoint detection with the open-source PaddlePaddle framework, using the first approach: coordinate regression. It uses Paddle 2.1 APIs and the integrated training interface, which make training and prediction very convenient.
## 2. Environment setup
This tutorial is written for Paddle 2.2. If your environment has a different version, please follow the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) first.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
import paddle
from paddle.io import Dataset
from paddle.vision.transforms import transforms
from paddle.vision.models import resnet18
from paddle.nn import functional as F
print(paddle.__version__)
```
## 3. Dataset
### 3.1 Downloading the dataset
This example uses the dataset from the facial keypoint detection challenge officially hosted on Kaggle: [https://www.kaggle.com/c/facial-keypoints-detection](https://www.kaggle.com/c/facial-keypoints-detection)
The official dataset packs the face images and annotation data into csv files, which are read with pandas. The files in the dataset are:<br>
training.csv: contains the facial keypoint coordinates and images used for training.<br>
test.csv: contains the face images used for testing, without annotated keypoint coordinates.<br>
IdLookupTable.csv: the names corresponding to the keypoint positions in the test set.<br>
Each image is 96 pixels wide and 96 pixels high, and there are 15 keypoints to detect in total.
```
!unzip -o ./test.zip -d data/data60
!unzip -o ./training.zip -d data/data60
```
### 3.2 Defining the dataset
PaddlePaddle's unified data loading scheme is Dataset (dataset definition) + DataLoader (multi-process data loading).
First we define the dataset. This mainly means implementing a new Dataset class that inherits from the parent class paddle.io.Dataset and implements its two abstract methods, __getitem__ and __len__:
```
Train_Dir = './data/data60/training.csv'
Test_Dir = './data/data60/test.csv'
lookid_dir = './data/data60/IdLookupTable.csv'
class ImgTransforms(object):
"""
    Image preprocessing tool: expands the image from (96, 96) to (96, 96, 3)
    and converts its dimension order from HWC to CHW.
"""
def __init__(self, fmt):
self.format = fmt
def __call__(self, img):
if len(img.shape) == 2:
img = np.expand_dims(img, axis=2)
img = img.transpose(self.format)
if img.shape[0] == 1:
img = np.repeat(img, 3, axis=0)
return img
class FaceDataset(Dataset):
def __init__(self, data_path, mode='train', val_split=0.2):
self.mode = mode
assert self.mode in ['train', 'val', 'test'], \
"mode should be 'train' or 'test', but got {}".format(self.mode)
self.data_source = pd.read_csv(data_path)
        # Clean the data. Many samples in the dataset only have some of the keypoints annotated; there are two strategies:
        # Strategy 1: fill unannotated positions with the corresponding keypoints of the previous sample
        # self.data_source.fillna(method = 'ffill',inplace = True)
        # Strategy 2: remove samples with unannotated keypoints from the dataset
self.data_source.dropna(how="any", inplace=True)
self.data_label_all = self.data_source.drop('Image', axis = 1)
        # Split into a training set and a validation set
if self.mode in ['train', 'val']:
np.random.seed(43)
data_len = len(self.data_source)
            # random split
            shuffled_indices = np.random.permutation(data_len)
            # sequential split
# shuffled_indices = np.arange(data_len)
self.shuffled_indices = shuffled_indices
val_set_size = int(data_len*val_split)
if self.mode == 'val':
val_indices = shuffled_indices[:val_set_size]
self.data_img = self.data_source.reindex().iloc[val_indices]
self.data_label = self.data_label_all.reindex().iloc[val_indices]
elif self.mode == 'train':
train_indices = shuffled_indices[val_set_size:]
self.data_img = self.data_source.reindex().iloc[train_indices]
self.data_label = self.data_label_all.reindex().iloc[train_indices]
elif self.mode == 'test':
self.data_img = self.data_source
self.data_label = self.data_label_all
self.transforms = transforms.Compose([
ImgTransforms((2, 0, 1))
])
    # Return one data sample and its label per iteration
def __getitem__(self, idx):
img = self.data_img['Image'].iloc[idx].split(' ')
img = ['0' if x == '' else x for x in img]
img = np.array(img, dtype = 'float32').reshape(96, 96)
img = self.transforms(img)
label = np.array(self.data_label.iloc[idx,:],dtype = 'float32')/96
return img, label
    # Return the total number of samples in the dataset
def __len__(self):
return len(self.data_img)
# training and validation datasets
train_dataset = FaceDataset(Train_Dir, mode='train')
val_dataset = FaceDataset(Train_Dir, mode='val')
# test dataset
test_dataset = FaceDataset(Test_Dir, mode='test')
```
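The description above mentions Dataset + DataLoader. The datasets defined here can be wrapped in a `paddle.io.DataLoader` for batched, shuffled, multi-process loading, as in this small sketch (the batch size and worker count are arbitrary choices; `paddle.Model.fit` used later also accepts a DataLoader in place of a raw Dataset):
```
# Sketch: wrap the Dataset in a DataLoader for batched, shuffled loading
train_loader = paddle.io.DataLoader(train_dataset, batch_size=256, shuffle=True, num_workers=2)
for batch_id, (imgs, labels) in enumerate(train_loader):
    print(batch_id, imgs.shape, labels.shape)
    break  # just peek at the first batch
```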
### 3.3 Visualizing dataset samples
With the Dataset implemented, let's check that it behaves as expected. Since a Dataset is an iterable class, we read data from it in a for loop and display it with matplotlib. The keypoint coordinates are normalized in the dataset, so here we multiply them by the image size to recover the original scale and draw the points on the output image with the scatter function.
```
def plot_sample(x, y, axis):
img = x.reshape(96, 96)
axis.imshow(img, cmap='gray')
axis.scatter(y[0::2], y[1::2], marker='x', s=10, color='b')
fig = plt.figure(figsize=(10, 7))
fig.subplots_adjust(
left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# randomly pick 16 samples to display
for i in range(16):
axis = fig.add_subplot(4, 4, i+1, xticks=[], yticks=[])
idx = np.random.randint(train_dataset.__len__())
# print(idx)
img, label = train_dataset[idx]
label = label*96
plot_sample(img[0], label, axis)
plt.show()
```
## 4. Defining the model
Here we use the ``resnet18`` network defined in ``paddle.vision.models``. It was designed for the ImageNet classification task, where images are divided into 1000 classes, so we append fully connected layers after the model that map the 1000-dimensional output vector to 30 dimensions, corresponding to the x and y coordinates of the 15 keypoints.
```
class FaceNet(paddle.nn.Layer):
def __init__(self, num_keypoints, pretrained=False):
super(FaceNet, self).__init__()
self.backbone = resnet18(pretrained)
self.outLayer1 = paddle.nn.Sequential(
paddle.nn.Linear(1000, 512),
paddle.nn.ReLU(),
paddle.nn.Dropout(0.1))
self.outLayer2 = paddle.nn.Linear(512, num_keypoints*2)
def forward(self, inputs):
out = self.backbone(inputs)
out = self.outLayer1(out)
out = self.outLayer2(out)
return out
```
### 4.1 Model visualization
Call the summary interface provided by Paddle to visualize the assembled model, which makes it convenient to inspect and confirm the model structure and parameter information.
```
from paddle.static import InputSpec
num_keypoints = 15
model = paddle.Model(FaceNet(num_keypoints))
model.summary((1,3, 96, 96))
```
## 5. Training the model
This task regresses coordinates, so we use the mean squared error loss function `paddle.nn.MSELoss()`. In Paddle 2.1, loss functions under `nn` are wrapped as callable classes. Here we train directly with the paddle.Model high-level APIs; we only need to define the dataset, the network model, and the loss function.
Create a Model instance from the model code, then use the prepare interface to define the optimizer, the loss function, evaluation metrics, and other information for later training. Once this initial configuration is done, call the fit interface to start the training process; when calling fit, just pass in the training dataset, the validation dataset, the number of training epochs, and the batch size configured earlier.
```
model = paddle.Model(FaceNet(num_keypoints=15))
optim = paddle.optimizer.Adam(learning_rate=1e-3,
parameters=model.parameters())
model.prepare(optim, paddle.nn.MSELoss())
model.fit(train_dataset, val_dataset, epochs=60, batch_size=256)
```
## 6. Model prediction
To better inspect the prediction results, we visualize both the validation set predictions compared against the annotated points, and the predictions on the unannotated test set.
### 6.1 Visualizing validation set results
The red keypoints are the network's predictions; the green keypoints are the annotated ground truth.
```
result = model.predict(val_dataset, batch_size=1)
def plot_sample(x, y, axis, gt=[]):
img = x.reshape(96, 96)
axis.imshow(img, cmap='gray')
axis.scatter(y[0::2], y[1::2], marker='x', s=10, color='r')
if gt!=[]:
axis.scatter(gt[0::2], gt[1::2], marker='x', s=10, color='lime')
fig = plt.figure(figsize=(10, 7))
fig.subplots_adjust(
left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(16):
axis = fig.add_subplot(4, 4, i+1, xticks=[], yticks=[])
idx = np.random.randint(val_dataset.__len__())
img, gt_label = val_dataset[idx]
gt_label = gt_label*96
label_pred = result[0][idx].reshape(-1)
label_pred = label_pred*96
plot_sample(img[0], label_pred, axis, gt_label)
plt.show()
```
### 6.2 Visualizing test set results
```
result = model.predict(test_dataset, batch_size=1)
fig = plt.figure(figsize=(10, 7))
fig.subplots_adjust(
left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(16):
axis = fig.add_subplot(4, 4, i+1, xticks=[], yticks=[])
idx = np.random.randint(test_dataset.__len__())
img, _ = test_dataset[idx]
label_pred = result[0][idx].reshape(-1)
label_pred = label_pred*96
plot_sample(img[0], label_pred, axis)
plt.show()
```
|
github_jupyter
|
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
import paddle
from paddle.io import Dataset
from paddle.vision.transforms import transforms
from paddle.vision.models import resnet18
from paddle.nn import functional as F
print(paddle.__version__)
!unzip -o ./test.zip -d data/data60
!unzip -o ./training.zip -d data/data60
Train_Dir = './data/data60/training.csv'
Test_Dir = './data/data60/test.csv'
lookid_dir = './data/data60/IdLookupTable.csv'
class ImgTransforms(object):
"""
图像预处理工具,用于将图像进行升维(96, 96) => (96, 96, 3),
并对图像的维度进行转换从HWC变为CHW
"""
def __init__(self, fmt):
self.format = fmt
def __call__(self, img):
if len(img.shape) == 2:
img = np.expand_dims(img, axis=2)
img = img.transpose(self.format)
if img.shape[0] == 1:
img = np.repeat(img, 3, axis=0)
return img
class FaceDataset(Dataset):
def __init__(self, data_path, mode='train', val_split=0.2):
self.mode = mode
assert self.mode in ['train', 'val', 'test'], \
"mode should be 'train' or 'test', but got {}".format(self.mode)
self.data_source = pd.read_csv(data_path)
# 清洗数据, 数据集中有很多样本只标注了部分关键点, 这里有两种策略
# 第一种, 将未标注的位置从上一个样本对应的关键点复制过来
# self.data_source.fillna(method = 'ffill',inplace = True)
# 第二种, 将包含有未标注的样本从数据集中移除
self.data_source.dropna(how="any", inplace=True)
self.data_label_all = self.data_source.drop('Image', axis = 1)
# 划分训练集和验证集合
if self.mode in ['train', 'val']:
np.random.seed(43)
data_len = len(self.data_source)
# 随机划分
shuffled_indices = np.random.permutation(data_len)
# 顺序划分
# shuffled_indices = np.arange(data_len)
self.shuffled_indices = shuffled_indices
val_set_size = int(data_len*val_split)
if self.mode == 'val':
val_indices = shuffled_indices[:val_set_size]
self.data_img = self.data_source.reindex().iloc[val_indices]
self.data_label = self.data_label_all.reindex().iloc[val_indices]
elif self.mode == 'train':
train_indices = shuffled_indices[val_set_size:]
self.data_img = self.data_source.reindex().iloc[train_indices]
self.data_label = self.data_label_all.reindex().iloc[train_indices]
elif self.mode == 'test':
self.data_img = self.data_source
self.data_label = self.data_label_all
self.transforms = transforms.Compose([
ImgTransforms((2, 0, 1))
])
# 每次迭代时返回数据和对应的标签
def __getitem__(self, idx):
img = self.data_img['Image'].iloc[idx].split(' ')
img = ['0' if x == '' else x for x in img]
img = np.array(img, dtype = 'float32').reshape(96, 96)
img = self.transforms(img)
label = np.array(self.data_label.iloc[idx,:],dtype = 'float32')/96
return img, label
# 返回整个数据集的总数
def __len__(self):
return len(self.data_img)
# 训练数据集和验证数据集
train_dataset = FaceDataset(Train_Dir, mode='train')
val_dataset = FaceDataset(Train_Dir, mode='val')
# 测试数据集
test_dataset = FaceDataset(Test_Dir, mode='test')
def plot_sample(x, y, axis):
img = x.reshape(96, 96)
axis.imshow(img, cmap='gray')
axis.scatter(y[0::2], y[1::2], marker='x', s=10, color='b')
fig = plt.figure(figsize=(10, 7))
fig.subplots_adjust(
left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# 随机取16个样本展示
for i in range(16):
axis = fig.add_subplot(4, 4, i+1, xticks=[], yticks=[])
idx = np.random.randint(train_dataset.__len__())
# print(idx)
img, label = train_dataset[idx]
label = label*96
plot_sample(img[0], label, axis)
plt.show()
class FaceNet(paddle.nn.Layer):
def __init__(self, num_keypoints, pretrained=False):
super(FaceNet, self).__init__()
self.backbone = resnet18(pretrained)
self.outLayer1 = paddle.nn.Sequential(
paddle.nn.Linear(1000, 512),
paddle.nn.ReLU(),
paddle.nn.Dropout(0.1))
self.outLayer2 = paddle.nn.Linear(512, num_keypoints*2)
def forward(self, inputs):
out = self.backbone(inputs)
out = self.outLayer1(out)
out = self.outLayer2(out)
return out
from paddle.static import InputSpec
num_keypoints = 15
model = paddle.Model(FaceNet(num_keypoints))
model.summary((1,3, 96, 96))
model = paddle.Model(FaceNet(num_keypoints=15))
optim = paddle.optimizer.Adam(learning_rate=1e-3,
parameters=model.parameters())
model.prepare(optim, paddle.nn.MSELoss())
model.fit(train_dataset, val_dataset, epochs=60, batch_size=256)
result = model.predict(val_dataset, batch_size=1)
def plot_sample(x, y, axis, gt=[]):
img = x.reshape(96, 96)
axis.imshow(img, cmap='gray')
axis.scatter(y[0::2], y[1::2], marker='x', s=10, color='r')
if gt!=[]:
axis.scatter(gt[0::2], gt[1::2], marker='x', s=10, color='lime')
fig = plt.figure(figsize=(10, 7))
fig.subplots_adjust(
left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(16):
axis = fig.add_subplot(4, 4, i+1, xticks=[], yticks=[])
idx = np.random.randint(val_dataset.__len__())
img, gt_label = val_dataset[idx]
gt_label = gt_label*96
label_pred = result[0][idx].reshape(-1)
label_pred = label_pred*96
plot_sample(img[0], label_pred, axis, gt_label)
plt.show()
result = model.predict(test_dataset, batch_size=1)
fig = plt.figure(figsize=(10, 7))
fig.subplots_adjust(
left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(16):
axis = fig.add_subplot(4, 4, i+1, xticks=[], yticks=[])
idx = np.random.randint(test_dataset.__len__())
img, _ = test_dataset[idx]
label_pred = result[0][idx].reshape(-1)
label_pred = label_pred*96
plot_sample(img[0], label_pred, axis)
plt.show()
# Setting up Thermochemistry Calculations with Cantera and Spitfire
_This demo is part of Spitfire, with [licensing and copyright info here.](https://github.com/sandialabs/Spitfire/blob/master/license.md)_
_Highlights_
- Importing thermochemistry data in Cantera format
- A hydrogen-air ignition calculation
## Introduction
Building reactor models of chemical processes starts with setting up thermochemistry data of the different chemical species involved and the set of reactions they undergo. To manage mechanism data, Spitfire uses the Python interface of [Cantera](https://cantera.org/). It is highly recommended that advanced users become familiar with Cantera's Python interface, not only for using Spitfire but also for the wealth of useful capabilities provided directly by Cantera.
Mechanism data can be passed to Spitfire in any way that it can be provided to Cantera.
Three particularly useful routes are:
1. Provide a Cantera XML file (often by converting a Cantera CTI file or Chemkin file)
2. Build a Cantera CTI file manually
3. Build a mechanism programmatically with Cantera's Python interface (a minimal sketch is given below)
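As a rough illustration of the third route, the sketch below builds a Cantera `Solution` directly in Python instead of reading an XML file. This is only a sketch: it assumes a Cantera 2.x installation (where `Species.listFromFile` and `Reaction.listFromFile` are available), and the file name `h2-burke.cti` is just a placeholder.
```
import cantera as ct

# Illustrative only: 'h2-burke.cti' is a placeholder for any CTI mechanism file
species = ct.Species.listFromFile('h2-burke.cti')
reactions = ct.Reaction.listFromFile('h2-burke.cti')
gas = ct.Solution(thermo='IdealGas', kinetics='GasKinetics',
                  species=species, reactions=reactions)
```
The resulting `Solution` carries the same thermochemistry data as the XML route used in the remainder of this notebook.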
## Griffon
One of the key research topics involved in Spitfire's earliest developments was the design of numerical methods for complex chemistry problems. For this reason all reaction rates and sensitivities (for Jacobian matrices) are evaluated in the Griffon code, an internal C++ library of Spitfire. Griffon is the "engine" for thermochemistry with Spitfire that is wrapped by Cython code and precompiled into the `spitfire.griffon` package. Griffon computes reaction rates, right-hand sides, and analytical Jacobian matrices for reactors and flamelets, and also provides some optimized solvers (e.g., a linear solver for flamelet models).
## `HomogeneousReactor`
While Griffon's functionality is largely made available to users, Spitfire provides Python classes to simplify the solution of canonical reactors (`HomogeneousReactor` in `spitfire.chemistry.reactors`). Python classes are also provided for non-premixed flamelets (`Flamelet` in `spitfire.chemistry.flamelet`), and three-stream flamelets are currently under development. Flamelet models will be discussed in detail later on.
A number of reactor models are available, namely all possible combinations of the following characteristics:
- **configuration**: constant-pressure (isobaric) or constant-volume (isochoric)
- **heat transfer**: adiabatic, isothermal, or diathermal (radiative and convective heat transfer) walls
- **mass transfer**: closed or open reactors with an imposed feed stream and residence time
Parameters such as the residence time, feed stream, or heat transfer parameters (e.g., external convection temperature) may be specified as arbitrary functions of time.
## `ChemicalMechanismSpec`
In order to use Griffon functions or the reactor and flamelet classes, Spitfire provides the `ChemicalMechanismSpec` class to interface with Cantera data. In the cell below we import this class and build an instance with the `h2-burke.xml` Cantera file, which contains data for hydrogen combustion. The "group_name" argument tells Cantera which phase to use in its input file (some contain several options with different groups of species or different transport properties).
```
from spitfire import ChemicalMechanismSpec
mech = ChemicalMechanismSpec(cantera_xml='h2-burke.xml', group_name='h2-burke')
```
## Streams and Mixing
The next step is to make a mixture of hydrogen and air and "spark" it to a high temperature to ignite.
To make streams of reactants the mechanism provides the `stream` method. This produces an instance of a Cantera `Quantity`, which you can create without the `ChemicalMechanismSpec` if you know Cantera well.
Below we make a stream of pure hydrogen at 300 K and one atmosphere, and a stream of air at standard temperature and pressure. Note that the `'TPX'` string is a Cantera detail: see [Cantera's documentation](https://www.cantera.org/docs/sphinx/html/cython/importing.html#cantera.Quantity) for more details regarding stream initialization and all the various options.
```
h2 = mech.stream('TPX', (300, 101325, 'H2:1'))
air = mech.stream(stp_air=True)
```
Now we take these streams and mix them so that the resultant stream is a stoichiometric mixture which has an [equivalence ratio](https://en.wikipedia.org/wiki/Air%E2%80%93fuel_ratio#Fuel%E2%80%93air_equivalence_ratio_(%CF%95)) of one. We then set the temperature and pressure.
```
mix = mech.mix_for_equivalence_ratio(1.0, h2, air)
mix.TP = 950., 101325.
```
## Building The Reactor
To build a `HomogeneousReactor` we now simply provide the `ChemicalMechanismSpec` object (which contains things like a Cantera `Solution` object and a Griffon object) and the `mix` stream we made above, which sets the initial state of the reactor. We also provide the configuration, heat transfer, and mass transfer settings. For adiabatic, closed reactors these settings are pretty limited, but more complicated reactors will need more arguments.
## Integrating in Time
We can integrate the reactor in time towards a steady state with the `integrate_to_steady` method below, which can take all kinds of arguments to control details of the time-stepping. Without any arguments it simply uses the defaults to integrate a reactor until a steady state is obtained.
```
from spitfire import HomogeneousReactor
r = HomogeneousReactor(mech, mix,
configuration='isochoric',
heat_transfer='adiabatic',
mass_transfer='closed')
output = r.integrate_to_steady()
```
## Plotting variables over time
The `output` variable above that was returned by the reactor integration is a Spitfire `Library` object that will be discussed in greater detail in later notebooks (it is critical when solving flamelets to build tabulated chemistry models). To plot the temperature of the reactor over time, for instance, you can simply use the following code.
The output from the reactor `integrate*` call contains temperature, all of the species mass fractions, and for an isochoric reactor the density (for isobaric the pressure will be included so the thermodynamic state can be reconstructed from the output alone). This means we can plot several mass fractions as follows.
```
import matplotlib.pyplot as plt
%matplotlib notebook
plt.plot(output.time_values * 1e3, output['temperature'])
plt.xlabel('t (ms)')
plt.ylabel('T (K)')
plt.grid()
plt.show()
%matplotlib notebook
for s in ['H2', 'O2', 'H2O', 'OH', 'H']:
plt.semilogy(output.time_values * 1e3, output['mass fraction ' + s], label=s)
plt.xlabel('t (ms)')
plt.ylabel('mass fraction')
plt.ylim([1e-8, 1])
plt.grid()
plt.legend(loc='best')
plt.show()
```
## Post-processing quantities
To compute quantities like reaction rates, species production rates, enthalpy, pressure, etc. on the solution trajectory returned by `integrate_to_steady` we can use the `spitfire.chemistry.analysis` package. To facilitate the use of Cantera's Python interface, use the `get_ct_solution_array` method to return a Cantera `SolutionArray` that can compute quantities across a range of states just like a Cantera `Quantity` or `Solution` object. Note the `shape` output from the function can be used to reshape and add newly computed properties to the output library (this is much more important later on for tabulated chemistry models).
```
from spitfire import get_ct_solution_array
ctsol, shape = get_ct_solution_array(mech, output)
```
Now we'll plot the rate of the important chain-branching reaction, `H + O2 <-> O + OH`, which happens to be the 0-th reaction in this mechanism, alongside the temperature on a twin axis.
```
qcb = ctsol.net_rates_of_progress[:, 0]
%matplotlib notebook
fig, ax1 = plt.subplots()
ax1.semilogy(output.time_values * 1e3, qcb, label='rate')
ax1.set_xlabel('t (ms)')
ax1.set_ylabel(f'net rate of {mech.gas.reaction_equation(0)}')
ax1.legend(loc='center left')
ax2 = ax1.twinx()
ax2.plot(output.time_values * 1e3, output['temperature'], 'g--', label='temperature')
ax2.set_ylabel('T (K)')
ax2.legend(loc='lower right')
ax1.grid()
fig.tight_layout()
plt.show()
```
## Conclusions
This notebook has introduced the use of Spitfire to solve a simple reactor model and the use of Cantera to load mechanism data and post-process thermochemical quantities on computed solutions. More detailed options for reactor simulations will be presented in the next notebooks.
```
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import json
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'svg'
use_cuda = True
device = torch.device('cuda:3' if use_cuda else 'cpu')
```
# 1. load data
```
dataMNIST_train = dsets.MNIST(
root = 'data',
train = True,
download = True,
transform = transforms.Compose([
transforms.Resize((32,32)),
transforms.ToTensor(),
])
)
dataMNIST_test = dsets.MNIST(
root = 'data',
train = False,
download = True,
transform = transforms.Compose([
transforms.Resize((32,32)),
transforms.ToTensor(),
])
)
dataLoaderMNIST_train = torch.utils.data.DataLoader(
dataset = dataMNIST_train,
batch_size = 128,
shuffle = True,
)
dataLoaderMNIST_test = torch.utils.data.DataLoader(
dataset = dataMNIST_test,
batch_size = 128,
shuffle = True,
)
x, y = next(iter(dataLoaderMNIST_train))
plt.imshow(x[0][0],interpolation = 'bilinear')
```
# 2. VAE model
```
class VAE(nn.Module):
def __init__(self, inputChannel = 1, featureSize = 32):
super(VAE, self).__init__()
self.encoder = nn.Sequential(
#(32,32)
nn.Conv2d(1,16,3,stride = 2,padding = 1),#(16,16)
nn.ReLU(),
nn.Conv2d(16,32,3,stride = 2,padding = 1),#(8,8)
nn.ReLU(),
nn.Conv2d(32,64,3,stride = 2,padding = 1),#(4,4)
nn.ReLU(),
nn.Conv2d(64,128,3,stride = 2,padding = 1),#(2,2)
nn.ReLU(),
nn.Conv2d(128,256,3,stride = 2,padding = 1),#(1,1)
nn.ReLU(),
)
self.fc_mean = nn.Conv2d(256,32,1)
self.fc_logvar = nn.Conv2d(256,32,1)
self.fc_decoder = nn.Conv2d(32,256,1)
self.decoder = nn.Sequential(
#(1,1)
nn.ConvTranspose2d(256,128,4,stride = 2,padding = 1),#(2,2)
nn.ReLU(),
nn.ConvTranspose2d(128,64,4,stride = 2,padding = 1),#(4,4)
nn.ReLU(),
nn.ConvTranspose2d(64,32,4,stride = 2,padding = 1),#(8,8)
nn.ReLU(),
nn.ConvTranspose2d(32,16,4,stride = 2,padding = 1),#(16,16)
nn.ReLU(),
nn.ConvTranspose2d(16,1,4,stride = 2,padding = 1),#(32,32)
nn.Sigmoid(),
)
def reparameterize(self, mean, logvar):
std = torch.exp(0.5*logvar)
        eps = torch.randn_like(std)  # sample noise with the same shape and device as std
z = mean + eps*std
return z
def encode(self, x):
h = self.encoder(x)
mean, logvar = self.fc_mean(h), self.fc_logvar(h)
z = self.reparameterize(mean,logvar)
return z, mean, logvar
def decode(self, z):
x = self.fc_decoder(z)
y = self.decoder(x)
return y
def forward(self, x):
z, mean, logvar = self.encode(x)
y = self.decode(z)
return y, z, mean, logvar
vae = VAE()
vae.to(device)
x, y = next(iter(dataLoaderMNIST_train))
y_, z, mean, logvar = vae(x[0:1].cuda(device))
y.shape
optimizer = optim.Adam(vae.parameters(),lr = 0.01)
vae.train()
for i in range(5):
for step,(x,y) in enumerate(dataLoaderMNIST_train):
x = x.cuda(device)
y_, z, mean, logvar = vae(x)
KLD_loss = -0.5 * torch.sum(1 + logvar - mean**2 - torch.exp(logvar)) # KL divergence
Bernoulli_MLP_loss = F.binary_cross_entropy(y_, x, reduction='sum') # BCE
# Gaussian_MLP_loss =
loss = KLD_loss + Bernoulli_MLP_loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
print('\r epoch:{epoch:3}--step:{step:5}--loss:{loss:.7f}'.format(epoch=i,step=step,loss=loss),end='')
print()
plt.subplot(221).imshow(y_[10][0].cpu().detach().numpy(),interpolation = 'bilinear')
plt.subplot(222).imshow(x[10][0].cpu().detach().numpy(),interpolation = 'bilinear')
```
# Variational Quantum Eigensolver for 2-qubit system
This is an attempt at solving task 4 of the screening tasks, i.e. finding the lowest eigenvalue of the given matrix using VQE-like circuits.
## Introduction
Variational Quantum Eigensolver is an algorithm that helps us find an upper bound on the lowest eigenvalue of a given Hamiltonian. This notebook will not go into much depth on the theoretical aspects of the algorithm, such as the variational principle and what a Hamiltonian is. This article<sup>[[1]]</sup> by Michał Stęchły and the original paper on VQE<sup>[[2]]</sup> do a great job of explaining them. Here, the focus will be on the implementation of the algorithm.
<br>
<div align="center">
<img alt="Variational Quantum Algorithm" src="./VQE_algorithm.png"><br>
Variational Quantum Algorithm<sup>[2]</sup>
</div>
<br><br>
The steps involved in the algorithm can be summarized as follows:
1. Design a parameterized circuit (ansatz) for trial state preparation. Design the quantum modules based on the tensor products of Pauli matrices obtained by decomposing the Hamiltonian.
2. Initialize the parameters of the circuit and prepare a trial state.
3. Pass the trial state to the quantum modules to perform measurements.
4. Calculate the expectation value $\left\langle H_{1} \right\rangle$ to $\left\langle H_{n} \right\rangle$ based on the counts of measurements obtained from quantum modules.
5. Add all the expectation values using classical adder to obtain expectation value $\left\langle H \right\rangle$.
6. Obtain new input parameters based on the expectation value using a classical optimizer.
7. Repeat steps 2 to 6 until the expectation value is optimised. The optimised expectation value is an upper bound on the lowest eigenvalue of the given Hamiltonian.
[1]: https://www.mustythoughts.com/variational-quantum-eigensolver-explained
[2]: https://arxiv.org/abs/1304.3061
Before we proceed with the code, let's make sure that all the required dependencies are installed on your system. It is recommended that these dependencies be installed in a virtual environment so that the global ones are not messed up.
```
!pip install -U -r requirements.txt
from IPython.display import clear_output
clear_output()
''' Importing the required modules and defining global constants '''
import numpy as np
pi = np.pi
from qiskit import *
from scipy import optimize
import pandas as pd
from matplotlib.pyplot import savefig
from qiskit.providers.ibmq import *
```
## Decomposing given Hamiltonian into Pauli products
The first step in the algorithm requires us to decompose the given Hamiltonian matrix into a linear combination of Pauli matrices and tensor products of Pauli matrices (henceforth called "Pauli product"). This is done as a quantum computer can efficiently evaluate the expectation value of Pauli products<sup>[[2]]</sup>.
After digging around the internet, these resources<sup>[[3]][[4]][[5]][[8]]</sup> were found that leverage the properties of Pauli matrices, Hermitian matrices and the Hilbert-Schmidt inner product to calculate the coefficients of the decomposition. If we represent the decomposed 4 x 4 Hamiltonian matrix H as
$$H = \ \sum_{i,j = I,X,Y,Z}^{}{a_{\text{ij}}\left( \sigma_{i} \otimes \sigma_{j} \right)}$$
then we can use the relation
$$a_{\text{ij}} = \frac{1}{4}tr(\left( \sigma_{i} \otimes \sigma_{j} \right)H)\ where\ i,\ j = I,\ X,\ Y,\ Z$$
to calculate the coefficient of each Pauli product.
[2]: https://arxiv.org/abs/1304.3061
[3]: https://quantumcomputing.stackexchange.com/questions/11899/example-of-hamiltonian-decomposition-into-pauli-matrices
[4]: https://quantumcomputing.stackexchange.com/questions/8725/can-arbitrary-matrices-be-decomposed-using-the-pauli-basis
[5]: https://quantumcomputing.stackexchange.com/questions/6882/decomposition-of-a-matrix-in-the-pauli-basis
[8]: https://michaelgoerz.net/notes/decomposing-two-qubit-hamiltonians-into-pauli-matrices.html
```
def decompose_matrix(matrix):
"""
    This function uses the formula described above to calculate the coefficient of each Pauli product.
It essentially decomposes the 4 x 4 hamiltonian matrix into a linear combination of Pauli products.
Parameters:
matrix (np.array): A 4 x 4 hamiltonian matrix
Returns:
dict: Dictionary of coefficients of each Pauli product
"""
pauli_I = np.array([[1, 0],
[0, 1]], dtype=complex)
pauli_X = np.array([[0, 1],
[1, 0]], dtype=complex)
pauli_Y = np.array([[0, -1j],
[1j, 0]], dtype=complex)
pauli_Z = np.array([[1, 0],
[0, -1]], dtype=complex)
pauli_matrices = [["I", pauli_I], ["X", pauli_X], ["Y", pauli_Y], ["Z", pauli_Z]]
coefficient_dict = {}
for pauli_matrix_1 in pauli_matrices:
for pauli_matrix_2 in pauli_matrices:
tensor_product = np.kron(pauli_matrix_1[1], pauli_matrix_2[1])
            coefficient_dict[f"{pauli_matrix_1[0]}{pauli_matrix_2[0]}"] = 0.25 * np.trace(np.matmul(tensor_product, matrix))
return coefficient_dict
given_matrix = np.array([[1, 0, 0, 0],
[0, 0, -1, 0],
[0, -1, 0, 0],
[0, 0, 0, 1]], dtype=np.float64)
print("Coefficient of each tensor product in the decomposition of given matrix: \n\n", decompose_matrix(given_matrix))
```
It is seen from the above output that the given matrix is decomposed into a linear combination of $\sigma_{I} \otimes \sigma_{I}$, $\sigma_{X} \otimes \sigma_{X}$, $\sigma_{Y} \otimes \sigma_{Y}$ and $\sigma_{Z} \otimes \sigma_{Z}$ i.e.
$$H = \frac{1}{2}\left( \sigma_{I} \otimes \sigma_{I} \right) - \frac{1}{2}\ \left( \sigma_{X} \otimes \sigma_{X} \right) - \frac{1}{2}{(\sigma}_{Y} \otimes \sigma_{Y}) + \frac{1}{2}\left( \sigma_{Z} \otimes \sigma_{Z} \right)$$
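As a quick sanity check (not part of the original workflow), this decomposition can be verified by rebuilding the matrix from the four Pauli products with plain NumPy:
```
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

# Rebuild H from the decomposition 1/2 * (II - XX - YY + ZZ)
H_rebuilt = 0.5 * (np.kron(I2, I2) - np.kron(X, X) - np.kron(Y, Y) + np.kron(Z, Z))
print(np.real(H_rebuilt))  # reproduces the given 4 x 4 matrix
```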
## Creating Trial States using Parametrized Ansatz
The next step is to design the circuit for trial state preparation. The goal is to create a state that is exactly the eigen state or very close to it. We could do this by iterating over all possible states in the Hilbert Space until we find the correct one but this would be computationally intensive. We need a better way to do so.
An ansatz, in general sense, is an educated guess or an additional assumption made to help solve a problem, and which is later verified to be part of the solution by its results<sup>[[6]]</sup>. In this case, an ansatz is a set of parametrized gates that gives us access to a portion of the Hilbert space. Since it is parameterized, the parameters can be varied iteratively to access the set of states represented by it. Choosing a good ansatz is key as it should represent a sufficient part of the Hilbert space and be shallow (to be less computationally intensive) and should not have too many parameters (optimizing it would become difficult)<sup>[[1]]</sup>.
After experimenting with different ansatze, it was found that the ansatz in the hint i.e. ${RX(\theta)}_{1}{(CX)H}_{1}$ was the best ansatz for the given hamiltonian as it contains the eigenvector, corresponding to the lowest eigenvalue, as a state in its subspace.
[6]: https://en.wikipedia.org/wiki/Ansatz
[1]: https://www.mustythoughts.com/variational-quantum-eigensolver-explained
```
def create_trial_state_circuit(parameters):
"""
Creates a parameterized circuit (ansatz) that prepares the trial state based
on the parameters received as input.
Parameters:
parameters (np.array): List of angles that act as parameters for the circuit
Returns:
QuantumCircuit(2, 2): Quantum circuit that prepares the trial state
"""
trial_state_circuit = QuantumCircuit(2, 2)
trial_state_circuit.h(0)
trial_state_circuit.cx(0, 1)
trial_state_circuit.rx(parameters[0], 0)
trial_state_circuit.barrier()
# trial_state_circuit.draw(output='mpl').savefig('./circuits/AnsatzHint.png') # This statement was used during the experimentation phase to store the circuits
return trial_state_circuit
```
## Creating quantum modules to perform measurements
It is impossible to know the exact state of a qubit, as any external interaction collapses the qubit into one of the basis states. So, to get an approximate idea of what the trial state would have been, the same circuit is repeatedly prepared and measured to obtain the counts of each output state. These counts can be used to calculate the probability of each output state, which in turn can be used to calculate the expectation value.
The problem that arises here is that separate circuits (i.e. quantum modules) are needed for each Pauli product. The reason is that to calculate the expectation value for $\sigma_{X}$ and $\sigma_{Y}$, measurements have to be performed and counts have to be obtained in the X and Y basis respectively. Since we cannot directly measure in an arbitrary basis, transformations have to be performed on the trial state to convert it into the Z basis (i.e. the $\left| 0 \right\rangle\ and\ |1\rangle$ basis). For $\sigma_{X}$, a Hadamard (or $H$) gate is applied, and for $\sigma_{Y}$, an $S^{\dagger}$ followed by an $H$ gate are applied for the transformation.
$$S^{\dagger} = \ \begin{bmatrix} 1 & 0 \\ 0 & - i \\ \end{bmatrix}\ \ \ \ \ \ \ \ \ \ H = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & - 1 \\ \end{bmatrix}$$
For $\sigma_{Z}$, there is no need for a transformation as its expectation value requires counts in the Z basis. For $\sigma_{I}$, there is no need to create a circuit as its expectation value is always 1.
For a 2-qubit system, the expectation value for tensor products has to be calculated. This can be done by doing these transformations on individual qubits according to the Pauli matrix and then doing a measurement. E.g. For calculating the expectation value of the $\sigma_{X} \otimes \sigma_{Y}$, a $H$ gate is applied on the first qubit and a $S^{\dagger}$ followed by $H$ gate are applied on the second qubit and measurements are performed.
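These basis changes can be checked independently of Qiskit with a few lines of NumPy: measuring $Z$ after applying $H$ is equivalent to measuring $HZH = X$ on the original state, and measuring $Z$ after applying $S^{\dagger}$ followed by $H$ is equivalent to measuring $SHZHS^{\dagger} = Y$.
```
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]])
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

assert np.allclose(H @ Z @ H, X)           # H rotates the X basis onto the Z basis
U = H @ Sdg                                # apply S-dagger first, then H
assert np.allclose(U.conj().T @ Z @ U, Y)  # S-dagger then H rotates the Y basis onto the Z basis
```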
```
def quantum_module_simulator(trial_state_circuit, pauli_product, number_of_shots):
"""
This is a generalized function that adds transformations and performs measurements on the trial
state based on the pauli matrices. The measurements are performed repeatedly to obtain counts of
each output state. The measurements are performed on an ideal simulator.
Parameters:
trial_state_circuit (QuantumCircuit): Circuit that prepares the trial state
pauli_product (str): String representation of tensor product of pauli matrices
number_of_shots (int): Number of times measurements should be performed on the trial state
Returns:
counts (dict): Dictionary of counts of each output state
"""
measurement_circuit = trial_state_circuit.copy()
qubit = 0
cbit = 0
for pauli_matrix in pauli_product:
if pauli_matrix == 'X':
measurement_circuit.h(qubit)
elif pauli_matrix == 'Y':
measurement_circuit.sdg(qubit)
measurement_circuit.h(qubit)
elif pauli_matrix != 'I' and pauli_matrix != 'Z':
raise ValueError("Pauli product should consist only of I, X, Y or Z matrices")
measurement_circuit.measure(qubit, cbit)
qubit += 1
cbit += 1
backend = Aer.get_backend('qasm_simulator')
job = execute(measurement_circuit, backend, shots = number_of_shots)
result = job.result()
counts = result.get_counts()
return counts
```
## Calculating expectation values on the basis of counts
The quantum modules give us a set of counts for each basis state i.e.$\ \left| 00 \right\rangle,\ \left| 01 \right\rangle,$$\ \left| 10 \right\rangle\ and\ |11\rangle$. The probability of each state is calculated by dividing the count of each state by the total number of counts.
The expectation value is the sum of the products of the probability of each state and its associated eigenvalue. As we know, the eigenvalue of the states $\left| 0 \right\rangle,\left| + \right\rangle$, and $\left| i \right\rangle$ is $+ 1$, and that of the states $\left| 1 \right\rangle,\left| - \right\rangle$, and $\left| - i \right\rangle$ is $- 1$. The eigenvalue of a tensor product of these states is then the product of their individual eigenvalues. Since every state has been transformed into the Z basis, the eigenvalues of only 4 states need to be considered. It comes out to be $+ 1$ for $|00\rangle$ and $|11\rangle$ and $- 1$ for $|01\rangle$ and $|10\rangle$.
The formula used for the calculation depends on the Pauli product used but can be generalised to three cases:
1. When the Pauli product is $\sigma_{I} \otimes \sigma_{X}$, $\sigma_{I} \otimes \sigma_{Y}$ or $\sigma_{I} \otimes \sigma_{Z}$:
The expectation value in these cases depends only on the second qubit, as the first Pauli matrix is $I$. There is no need to create an entirely new circuit in these cases; the probabilities obtained from the quantum modules can be reused. The eigenvalue of a state is taken as the eigenvalue of the state of qubit 2, i.e. states having 1 on the second qubit have eigenvalue $- 1$ and states having 0 on the second qubit have eigenvalue $+ 1$. Hence, the expectation value for this case is:
$$\left\langle \sigma_{I} \otimes \sigma_{i} \right\rangle\ = P_{00}\left( + 1 \right) + P_{01}\left( - 1 \right) + P_{10}\left( + 1 \right) + P_{11}( - 1)\ where\ i = X,\ Y\ and\ Z\ $$
2. When the Pauli product is $\sigma_{X} \otimes \sigma_{I}$, $\sigma_{Y} \otimes \sigma_{I}$ or $\sigma_{Z} \otimes \sigma_{I}$:
Similar to the above case, the expectation value depends only on the first qubit as the second Pauli matrix is $I$. Here, the eigenvalue for states having 1 on the first qubit is taken as $- 1$ and for states having 0 on the first qubit as $+ 1$. Therefore, the expectation value for this case is:
$$\left\langle \sigma_{i} \otimes \sigma_{I} \right\rangle\ = P_{00}\left( + 1 \right) + P_{01}\left( + 1 \right) + P_{10}\left( - 1 \right) + P_{11}( - 1)\ where\ i = X,\ Y\ and\ Z\ $$
3. When the Pauli product is of the form $\sigma_{i} \otimes \sigma_{j}$ where $i,\ j = X,\ Y\ and\ Z$:
In this case, the eigen value of the entire state is considered. The eigen value of the 4 states was defined initially and is used here. Therefore, the expectation value for this case is:
$$\left\langle \sigma_{i} \otimes \sigma_{j} \right\rangle\ = P_{00}\left( + 1 \right) + P_{01}\left( - 1 \right) + P_{10}\left( - 1 \right) + P_{11}\left( + 1 \right)\ where\ i,j = X,\ Y,\ Z$$
This tutorial<sup>[[7]](https://github.com/DavitKhach/quantum-algorithms-tutorials/blob/master/variational_quantum_eigensolver.ipynb)</sup> by David Khachatryan goes into much depth on the significance of adding extra gates before measurement and calculating expectation values and can be referred for further details.
```
def calculate_expectation_value(counts, pauli_product):
"""
Calculates the expectation value of the Pauli product based on the counts of each state and the formula defined above
Parameters:
counts (dict): Dictionary of counts of each output state
pauli_product (str): String representation of tensor product of pauli matrices
Returns:
        expectation_value (float): Expectation value of the Pauli product based on the counts
"""
if pauli_product == 'II':
return 1
if '00' not in counts:
counts['00'] = 0
if '01' not in counts:
counts['01'] = 0
if '10' not in counts:
counts['10'] = 0
if '11' not in counts:
counts['11'] = 0
total_count = counts['00'] + counts['01'] + counts['10'] + counts['11']
# The formulae vary slightly from above as Qiskit has reverse ordering of qubits (little-endian)
# i.e. qubit 1 is qubit 2 and qubit 2 is qubit 1
if pauli_product == 'IX' or pauli_product == 'IY' or pauli_product == 'IZ':
expectation_value = (counts['00'] + counts['01'] - counts['10'] - counts['11']) / total_count
elif pauli_product == 'XI' or pauli_product == 'YI' or pauli_product == 'ZI':
expectation_value = (counts['00'] - counts['01'] + counts['10'] - counts['11']) / total_count
else:
expectation_value = (counts['00'] - counts['01'] - counts['10'] + counts['11']) / total_count
return expectation_value
```
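As a small worked example (with made-up counts rather than the output of an actual run), the function above simply forms the weighted sum of the $\pm 1$ eigenvalues described in case 3:
```
# Hypothetical counts for a ZZ measurement with 8192 shots
example_counts = {'00': 4000, '01': 100, '10': 96, '11': 3996}
print(calculate_expectation_value(example_counts, 'ZZ'))
# (4000 - 100 - 96 + 3996) / 8192 ≈ 0.952
```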
## Combining everything to calculate the expectation value
All the parts needed to calculate the expectation value of the given Hamiltonian are defined above. We use the coefficients of pauli products (obtained by decomposing the matrix) and set of parameters for the ansatz to call the respective functions and calculate expectation values for each Pauli product. These expectation values are multiplied with their respective coefficients and added to give the expectation value of the given Hamiltonian with respect to the current trial state.
```
def calculate_expectation_value_of_hamiltonian(parameters, coefficient_dict):
"""
Calculates the expectation value of the hamiltonian using the parameters for trial state circuit
and coefficients of pauli products
Parameters:
parameters (np.array): List of angles that act as parameters for the trial state circuit
coefficient_dict (dict): Coeffiecients of pauli products obtained by decomposing the hamiltonian
Returns:
expectation_value_of_hamiltonian (float): Expectation value of the hamiltonian
"""
trial_state_circuit = create_trial_state_circuit(parameters)
expectation_value = 0
for pauli_product in coefficient_dict:
if abs(coefficient_dict[pauli_product]) > 0:
counts = quantum_module_simulator(trial_state_circuit, pauli_product, 8192)
expectation_value += np.real(coefficient_dict[pauli_product]) * calculate_expectation_value(counts, pauli_product)
return expectation_value
```
## Optimizing the expectation value of the Hamiltonian
Using the above methods, the expectation value with respect to the current trial state is obtained. It needs to be optimised, as it may not be the tightest upper bound on the eigenvalue. Classical optimisation methods such as gradient descent or the Nelder-Mead simplex method can be used to minimise the above function and obtain the lowest possible value. In this notebook, Powell's method is used for optimisation. I have also shown the lowest eigenvalue calculated using classical methods for comparison.
```
given_matrix = np.array([[1, 0, 0, 0],
[0, 0, -1, 0],
[0, -1, 0, 0],
[0, 0, 0, 1]], dtype=np.float64)
coefficient_dict = decompose_matrix(given_matrix)
theta1 = 0
parameters = [theta1]
tolerance = 1e-5
''' Running classical algorithm on the given matrix to find exact value '''
eigenvalues = np.linalg.eigvals(given_matrix)
lowest_eigenvalue = np.min(eigenvalues)
print("Classically calculated eigenvalues of given hamiltonian are: ", eigenvalues)
print("Lowest eigenvalue calculated using classical methods: ", lowest_eigenvalue)
result = optimize.minimize(fun=calculate_expectation_value_of_hamiltonian, x0=parameters,
args=(coefficient_dict), method='Powell', tol=tolerance)
print("Upper bound on lowest eigenvalue calculated using VQE: ", result.fun)
```
As it is seen above, the VQE algorithm is able to return the exact lowest eigenvalue for the given Hamiltonian for the selected ansatz (the classical method may give a small error which can be neglected).
## Observations and Results
### Selection of Ansatz
As I mentioned earlier, I tried several different ansatze and kept track of the expectation value of the Hamiltonian obtained using each of them. My initial approach was to try several ansatze randomly and observe the effect they have on the expectation value. Some of my ansatze were based on the ones shown in lectures 25 to 27 of the course Introduction to Quantum Computing and Quantum Hardware<sup>[[9]]</sup> by IBM. This paper<sup>[[10]]</sup> by Sukin Sim et al. helped me visualize the expressibility of different circuits and guided me in choosing a good ansatz.
I obtained the expectation value for each ansatz thrice to show the variation in values due to the random seed of the noiseless simulator and the convergence of the optimiser. The data for each ansatz is stored in a file named experimental_data.csv and the corresponding circuit diagram (with the parameter values from the last iteration) for each ansatz is stored in the circuits directory. Due to lack of space, I do not display the description and plot of each ansatz below. In all cases, the number of shots (i.e. total count) for each circuit is fixed at 8192.
[9]: https://qiskit.org/learn/intro-qc-qh/
[10]: https://arxiv.org/abs/1905.10876
```
experimental_data = pd.read_csv("experimental_data.csv")
experimental_data_without_description = experimental_data.loc[:, 'ID':]
experimental_data_without_description.head(9)
```
The following observations and results are made based on the above experimental data:
1. In case of Ansatz1, Ansatz2, Ansatz3 and Ansatz4, the expectation values that are obtained are 10<sup>4</sup> to 10<sup>2</sup> orders of magnitude away from the lowest eigenvalue. This may be due to 2 reasons: firstly, the ansatze do not have many gates, and secondly, they do not have entangling capability and hence are limited in the number of states they represent. So, there is a good possibility that the subspace they represent does not have any vector near the required eigenvector.
Note 1: Entanglement was introduced in all circuits from Ansatz5, which increases the expressibility of the ansatz.
2. In case of Ansatz 5, the number of gates have been increased and entanglement has been introduced. So, the circuit is more expressible. But, the subspace represented by the ansatz still does not contain the exact state represented by the required eigen vector. Hence, we are still an order of magnitude away from the solution.
3. In case of Ansatz 6, the number of gates has been further increased but this worsens the expectation value. This might be due to the fact that we have only 2 parameters and adding more gates makes the task of the optimiser difficult without giving a better solution.
Note 2: Adding more gates and parameters increases the time taken to calculate the eigenvalue. This delay in execution is visible while obtaining values for Ansatze 5 through 8.
4. In case of Ansatz 7, we add more parameters (4) to the circuit while keeping the number of gates constant (9). This increases the expressibility further and we get a much better upper bound.
5. In case of Ansatz 8, we increase the number of gates again but this time we have 4 parameters instead of 2. This is the best circuit I made, with the closest upper bound. Expectation value 2 for this ansatz is an anomaly when compared to Expectation value 1 and Expectation value 3. This is the problem that plagues larger circuits: as the subspace represented by them increases, local minima may occur which cause the optimizer to converge at such values. Repeatedly performing the experiments at different initial points is one of the solutions to this problem.
6. I had a look at the ansatz given in the hint and applied it here. It had only 3 gates and 1 parameter so I was suspicious. But surprisingly, it gave the exact eigenvalue of -1 each time I executed it, without taking long to execute. Later, I deduced that the subspace represented by AnsatzHint contains the exact eigenvector that we need, so varying the parameter would always lead us to the exact solution. Since the circuit is small, it is not very expressible, but it is one of the best ansatze for this case.
After analysing all these observations, it is seen that we can get a good (or exact) upper bound on the lowest eigen value if we know the subspace in which the corresponding eigenvector lies. We can design very shallow circuits which sweep that subspace to get the eigenvalue. This approach cannot be generalised for large and random hamiltonians as we would not know the subspace in which the eigenvector lies. In such cases, we need expressible circuits and generalised circuits like Ansatz8 which would take more time but give us a good upper bound for most cases.
### Introducing noise into the circuit
I ran the circuit using AnsatzHint, which gave an exact eigenvalue of -1 on the noiseless simulator, on a noisy IBM quantum computer (ibmq_16_melbourne). Since there is a queue for executing on real devices, I executed the circuit once with a parameter of pi (the parameter that gives the exact eigenvalue).
```
provider = IBMQ.load_account() # Load credentials for IBMQ
def quantum_module_hardware(trial_state_circuit, pauli_product, number_of_shots):
"""
This is a generalized function that adds transformations and performs measurements on the trial
state based on the pauli matrices. The measurements are performed repeatedly to obtain counts of
each output state. The measurements are performed on an real hardware.
Parameters:
trial_state_circuit (QuantumCircuit): Circuit that prepares the trial state
pauli_product (str): String representation of tensor product of pauli matrices
number_of_shots (int): Number of times measurements should be performed on the trial state
Returns:
counts (dict): Dictionary of counts of each output state
"""
measurement_circuit = trial_state_circuit.copy()
qubit = 0
cbit = 0
for pauli_matrix in pauli_product:
if pauli_matrix == 'X':
measurement_circuit.h(qubit)
elif pauli_matrix == 'Y':
measurement_circuit.sdg(qubit)
measurement_circuit.h(qubit)
elif pauli_matrix != 'I' and pauli_matrix != 'Z':
raise ValueError("Pauli product should consist only of I, X, Y or Z matrices")
measurement_circuit.measure(qubit, cbit)
qubit += 1
cbit += 1
backend = provider.backends.ibmq_16_melbourne
job = execute(measurement_circuit, backend, shots = number_of_shots)
result = job.result()
counts = result.get_counts()
return counts
def calculate_expectation_value_of_hamiltonian_hardware(parameters, coefficient_dict):
"""
Calculates the expectation value of the hamiltonian using the parameters for trial state circuit
and coefficients of pauli products using real hardware
Parameters:
parameters (np.array): List of angles that act as parameters for the trial state circuit
coefficient_dict (dict): Coeffiecients of pauli products obtained by decomposing the hamiltonian
Returns:
expectation_value_of_hamiltonian (float): Expectation value of the hamiltonian
"""
trial_state_circuit = create_trial_state_circuit(parameters)
expectation_value = 0
for pauli_product in coefficient_dict:
if abs(coefficient_dict[pauli_product]) > 0:
counts = quantum_module_hardware(trial_state_circuit, pauli_product, 8192)
expectation_value += np.real(coefficient_dict[pauli_product]) * calculate_expectation_value(counts, pauli_product)
return expectation_value
given_matrix = np.array([[1, 0, 0, 0],
[0, 0, -1, 0],
[0, -1, 0, 0],
[0, 0, 0, 1]], dtype=np.float64)
coefficient_dict = decompose_matrix(given_matrix)
theta1 = pi
parameters = [theta1]
tolerance = 1e-5
''' Running classical algorithm on the given matrix to find exact value '''
eigenvalues = np.linalg.eigvals(given_matrix)
lowest_eigenvalue = np.min(eigenvalues)
print("Classically calculated eigenvalues of given hamiltonian are: ", eigenvalues)
print("Lowest eigenvalue calculated using classical methods: ", lowest_eigenvalue)
expectation_value = calculate_expectation_value_of_hamiltonian_hardware(parameters, coefficient_dict)
print("Upper bound on lowest eigenvalue calculated using VQE: ", expectation_value)
```
I wasn't able to execute the circuit in time, but previously I had achieved an expectation value of -0.9023 with theta1 as 0. Noise played a big role in the deviation and led to errors. This can be fixed only when we have better devices that are resistant to noise and errors. There is constant research being done on this and we have a bright future ahead of us.
## Conclusion
I want to conclude by saying that it was a great experience working on a NISQ algorithm and learning it in depth. The main outcomes for me were that choosing an ansatz for such circuits is an art and it would take a lot of time to develop an intuition for it. I also learnt that there are several factors that can affect the outcome of a variational circuit, such as the cost of the circuit, its expressibility, the number of parameters, the noise of the device, etc., and striking the right balance is important to get a good result.
## References
1. https://www.mustythoughts.com/variational-quantum-eigensolver-explained
2. https://arxiv.org/abs/1304.3061
3. https://quantumcomputing.stackexchange.com/questions/11899/example-of-hamiltonian-decomposition-into-pauli-matrices
4. https://quantumcomputing.stackexchange.com/questions/8725/can-arbitrary-matrices-be-decomposed-using-the-pauli-basis
5. https://quantumcomputing.stackexchange.com/questions/6882/decomposition-of-a-matrix-in-the-pauli-basis
6. https://en.wikipedia.org/wiki/Ansatz
7. https://github.com/DavitKhach/quantum-algorithms-tutorials/blob/master/variational_quantum_eigensolver.ipynb
8. https://michaelgoerz.net/notes/decomposing-two-qubit-hamiltonians-into-pauli-matrices.html
9. [Introduction to Quantum Computing and Quantum Hardware - Lecture 25 to 27](https://qiskit.org/learn/intro-qc-qh/)
10. Sim, Sukin, Peter D. Johnson, and Alán Aspuru‐Guzik. “Expressibility and Entangling Capability of Parameterized Quantum Circuits for Hybrid Quantum‐Classical Algorithms.” Advanced Quantum Technologies 2.12 (2019): 1900070. Crossref. Web.
## Additional Resources used for studying Variational Quantum Eigensolver
1. [Framework agnostic VQE tutorial by Alexander Soare](https://github.com/alexander-soare/framework-agnostic-vqe-tutorial)
# Summary statistics
`ScmRun` objects have methods specific to calculating summary statistics. In this notebook we demonstrate them.
At present, the following methods are available:
- `process_over`
- `quantiles_over`
- `groupby`
- `groupby_all_except`
```
# NBVAL_IGNORE_OUTPUT
import traceback
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scmdata.run import ScmRun, run_append
plt.rcParams["figure.figsize"] = (12, 8)
pd.set_option("display.width", 120)
pd.set_option("display.max_columns", 15)
pd.set_option("display.max_colwidth", 80)
pd.set_option("display.min_rows", 20)
```
## Helper bits and pieces
```
def new_timeseries(
n=101,
count=1,
model="example",
scenario="ssp119",
variable="Surface Temperature",
unit="K",
region="World",
cls=ScmRun,
**kwargs,
):
data = np.random.rand(n, count) * np.arange(n)[:, np.newaxis]
index = 2000 + np.arange(n)
return cls(
data,
columns={
"model": model,
"scenario": scenario,
"variable": variable,
"region": region,
"unit": unit,
**kwargs,
},
index=index,
)
```
Let's create an `ScmRun` which contains a few variables and a number of runs. Such a dataframe would be used to store the results from an ensemble of simple climate model runs.
```
# NBVAL_IGNORE_OUTPUT
runs = run_append(
[
new_timeseries(
count=3,
variable=[
"Surface Temperature",
"Atmospheric Concentrations|CO2",
"Radiative Forcing",
],
unit=["K", "ppm", "W/m^2"],
run_id=run_id,
)
for run_id in range(10)
]
)
runs.metadata["source"] = "fake data"
runs
```
## `process_over`
The `process_over` method allows us to calculate a specific set of statistics on groups of timeseries. A number of pandas functions can be called including "sum", "mean" and "describe".
```
print(runs.process_over.__doc__)
```
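For example, the "sum" operation mentioned above can be used in exactly the same way as the reductions shown next (a minimal sketch; the exact numbers depend on the random data generated above):
```
# NBVAL_IGNORE_OUTPUT
runs.process_over(cols="run_id", operation="sum")
```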
### Mean
```
# NBVAL_IGNORE_OUTPUT
mean = runs.process_over(cols="run_id", operation="mean")
mean
```
### Median
```
# NBVAL_IGNORE_OUTPUT
median = runs.process_over(cols="run_id", operation="median")
median
```
### Arbitrary functions
You are also able to run arbitrary functions for each group.
```
# NBVAL_IGNORE_OUTPUT
def mean_and_invert(df, axis=0):
# Take a mean across the group and then invert the result
return -df.mean(axis=axis)
runs.process_over("run_id", operation=mean_and_invert)
# NBVAL_IGNORE_OUTPUT
runs.process_over("run_id", operation=mean_and_invert, axis=1)
```
### Other quantiles
```
# NBVAL_IGNORE_OUTPUT
lower_likely_quantile = runs.process_over(
cols="run_id", operation="quantile", q=0.17
)
lower_likely_quantile
```
## `quantiles_over`
If you want to calculate more than one summary statistic, `quantiles_over` will calculate and label multiple summary statistics before returning them.
```
print(runs.quantiles_over.__doc__)
# NBVAL_IGNORE_OUTPUT
summary_stats = runs.quantiles_over(
cols="run_id", quantiles=[0.05, 0.17, 0.5, 0.83, 0.95, "mean", "median"]
)
summary_stats
```
### Plotting
#### Calculate quantiles within plotting function
We can use `plumeplot` directly to plot quantiles. This will calculate the quantiles as part of making the plot, so if you're doing this a lot it might be faster to pre-calculate the quantiles and then make the plot (see below).
Note that in this case the default settings in `plumeplot` don't produce anything that helpful; we show how to modify them in the cell below.
```
# NBVAL_IGNORE_OUTPUT
runs.plumeplot(quantile_over="run_id")
# NBVAL_IGNORE_OUTPUT
runs.plumeplot(
quantile_over="run_id",
quantiles_plumes=[
((0.05, 0.95), 0.2),
((0.17, 0.83), 0.5),
(("median",), 1.0),
],
hue_var="variable",
hue_label="Variable",
style_var="scenario",
style_label="Scenario",
)
```
#### Pre-calculated quantiles
Alternatively, we can cast the output of `quantiles_over` to an `ScmRun` object for ease of filtering and plotting.
```
# NBVAL_IGNORE_OUTPUT
summary_stats_scmrun = ScmRun(summary_stats)
summary_stats_scmrun
```
As discussed above, casting the output of `quantiles_over` to an `ScmRun` object helps avoid repeatedly calculating the quantiles.
```
# NBVAL_IGNORE_OUTPUT
summary_stats_scmrun.plumeplot(
quantiles_plumes=[
((0.05, 0.95), 0.2),
((0.17, 0.83), 0.5),
(("median",), 1.0),
],
hue_var="variable",
hue_label="Variable",
style_var="scenario",
style_label="Scenario",
pre_calculated=True,
)
```
If we don't want a plume plot, we can always use our standard `lineplot` method.
```
# NBVAL_IGNORE_OUTPUT
summary_stats_scmrun.filter(variable="Radiative Forcing").lineplot(
hue="quantile"
)
```
## `groupby`
The `groupby` method allows us to group the data by columns in `scmrun.meta` and then perform operations. An example is given below.
```
# NBVAL_IGNORE_OUTPUT
variable_means = []
for vdf in runs.groupby("variable"):
vdf_mean = vdf.timeseries().mean(axis=0)
vdf_mean.name = vdf.get_unique_meta("variable", True)
variable_means.append(vdf_mean)
pd.DataFrame(variable_means)
```
## `groupby_all_except`
The `groupby_all_except` method allows us to group the data by all columns in `scmrun.meta` except for a certain set. Like with `groupby`, we can then use the groups to perform operations. An example is given below. Note that, in most cases, using `process_over` is likely to be more useful.
```
# NBVAL_IGNORE_OUTPUT
ensemble_means = []
for edf in runs.groupby_all_except("run_id"):
edf_mean = edf.timeseries().mean(axis=0)
edf_mean.name = edf.get_unique_meta("variable", True)
ensemble_means.append(edf_mean)
pd.DataFrame(ensemble_means)
```
As we said, in most cases using `process_over` is likely to be more useful. For example, the above can be done with `process_over` in one line (and more metadata is retained).
```
# NBVAL_IGNORE_OUTPUT
runs.process_over("run_id", "mean")
```
|
github_jupyter
|
# NBVAL_IGNORE_OUTPUT
import traceback
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scmdata.run import ScmRun, run_append
plt.rcParams["figure.figsize"] = (12, 8)
pd.set_option("display.width", 120)
pd.set_option("display.max_columns", 15)
pd.set_option("display.max_colwidth", 80)
pd.set_option("display.min_rows", 20)
def new_timeseries(
n=101,
count=1,
model="example",
scenario="ssp119",
variable="Surface Temperature",
unit="K",
region="World",
cls=ScmRun,
**kwargs,
):
data = np.random.rand(n, count) * np.arange(n)[:, np.newaxis]
index = 2000 + np.arange(n)
return cls(
data,
columns={
"model": model,
"scenario": scenario,
"variable": variable,
"region": region,
"unit": unit,
**kwargs,
},
index=index,
)
# NBVAL_IGNORE_OUTPUT
runs = run_append(
[
new_timeseries(
count=3,
variable=[
"Surface Temperature",
"Atmospheric Concentrations|CO2",
"Radiative Forcing",
],
unit=["K", "ppm", "W/m^2"],
run_id=run_id,
)
for run_id in range(10)
]
)
runs.metadata["source"] = "fake data"
runs
print(runs.process_over.__doc__)
# NBVAL_IGNORE_OUTPUT
mean = runs.process_over(cols="run_id", operation="mean")
mean
# NBVAL_IGNORE_OUTPUT
median = runs.process_over(cols="run_id", operation="median")
median
# NBVAL_IGNORE_OUTPUT
def mean_and_invert(df, axis=0):
# Take a mean across the group and then invert the result
return -df.mean(axis=axis)
runs.process_over("run_id", operation=mean_and_invert)
# NBVAL_IGNORE_OUTPUT
runs.process_over("run_id", operation=mean_and_invert, axis=1)
# NBVAL_IGNORE_OUTPUT
lower_likely_quantile = runs.process_over(
cols="run_id", operation="quantile", q=0.17
)
lower_likely_quantile
print(runs.quantiles_over.__doc__)
# NBVAL_IGNORE_OUTPUT
summary_stats = runs.quantiles_over(
cols="run_id", quantiles=[0.05, 0.17, 0.5, 0.83, 0.95, "mean", "median"]
)
summary_stats
# NBVAL_IGNORE_OUTPUT
runs.plumeplot(quantile_over="run_id")
# NBVAL_IGNORE_OUTPUT
runs.plumeplot(
quantile_over="run_id",
quantiles_plumes=[
((0.05, 0.95), 0.2),
((0.17, 0.83), 0.5),
(("median",), 1.0),
],
hue_var="variable",
hue_label="Variable",
style_var="scenario",
style_label="Scenario",
)
# NBVAL_IGNORE_OUTPUT
summary_stats_scmrun = ScmRun(summary_stats)
summary_stats_scmrun
# NBVAL_IGNORE_OUTPUT
summary_stats_scmrun.plumeplot(
quantiles_plumes=[
((0.05, 0.95), 0.2),
((0.17, 0.83), 0.5),
(("median",), 1.0),
],
hue_var="variable",
hue_label="Variable",
style_var="scenario",
style_label="Scenario",
pre_calculated=True,
)
# NBVAL_IGNORE_OUTPUT
summary_stats_scmrun.filter(variable="Radiative Forcing").lineplot(
hue="quantile"
)
# NBVAL_IGNORE_OUTPUT
variable_means = []
for vdf in runs.groupby("variable"):
vdf_mean = vdf.timeseries().mean(axis=0)
vdf_mean.name = vdf.get_unique_meta("variable", True)
variable_means.append(vdf_mean)
pd.DataFrame(variable_means)
# NBVAL_IGNORE_OUTPUT
ensemble_means = []
for edf in runs.groupby_all_except("run_id"):
edf_mean = edf.timeseries().mean(axis=0)
edf_mean.name = edf.get_unique_meta("variable", True)
ensemble_means.append(edf_mean)
pd.DataFrame(ensemble_means)
# NBVAL_IGNORE_OUTPUT
runs.process_over("run_id", "mean")
| 0.459561 | 0.953794 |
<img src="images/dask_horizontal.svg"
width="45%"
alt="Dask logo\">
# Parallel Computing in Python with Dask
This notebook provides a high-level overview of Dask. We discuss why you might want to use Dask, high-level and low-level APIs for generating computational graphs, and Dask's schedulers which enable the parallel execution of these graphs.
# Overview
[Dask](https://docs.dask.org) is a flexible, [open source](https://github.com/dask/dask) library for parallel and distributed computing in Python. Dask is designed to scale the existing Python ecosystem.
You might want to use Dask because it:
- Enables parallel and larger-than-memory computations
- Uses familiar APIs you're used to from projects like NumPy, pandas, and scikit-learn
- Allows you to scale existing workflows with minimal code changes
- Works on your laptop, but also scales out to large clusters
- Offers great built-in diagnostic tools
### Components of Dask
From a high level, Dask is comprised of two main components:
1. **Dask collections** which extend common interfaces like NumPy, pandas, and Python iterators to larger-than-memory or distributed environments by creating *task graphs*
2. **Schedulers** which compute task graphs produced by Dask collections in parallel
<img src="images/dask-overview.png"
width="85%"
alt="Dask components\">
### Task Graphs
```
def inc(i):
return i + 1
def add(a, b):
return a + b
a, b = 1, 12
c = inc(a)
d = inc(b)
output = add(c, d)
print(f'output = {output}')
```
This computation can be encoded in the following task graph:

- Graph of inter-related tasks with dependencies between them
- Circular nodes in the graph are Python function calls
- Square nodes are Python objects that are created by one task as output and can be used as inputs in another task
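Dask's low-level graph format is a plain Python dictionary mapping keys to either literal values or `(function, *argument_keys)` tuples. The sketch below (redefining `inc` and `add` so the cell is self-contained) is illustrative only; in practice the collections and `delayed` build these graphs for you.
```
import dask

def inc(i):
    return i + 1

def add(a, b):
    return a + b

# Keys name results; tuple values are (function, *argument_keys).
dsk = {
    "a": 1,
    "b": 12,
    "c": (inc, "a"),
    "d": (inc, "b"),
    "output": (add, "c", "d"),
}

# Execute the graph with the synchronous scheduler and ask for the "output" key.
print(dask.get(dsk, "output"))  # 15
```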
# Dask Collections
Let's look at two Dask user interfaces: Dask Array and Dask Delayed.
## Dask Arrays
- Dask arrays are chunked, n-dimensional arrays
- Can think of a Dask array as a collection of NumPy `ndarray` arrays
- Dask arrays implement a large subset of the NumPy API using blocked algorithms
- For many purposes Dask arrays can serve as drop-in replacements for NumPy arrays
<img src="images/dask-array.png" width="50%">
```
import numpy as np
import dask.array as da
x_np = np.random.random(size=(1_000, 1_000))
x_np
```
We can create a Dask array in a similar manner, but need to specify a `chunks` argument to tell Dask how to break up the underlying array into chunks.
```
x = da.random.random(size=(1_000, 1_000), chunks=(250, 500))
x # Dask arrays have nice HTML output in Jupyter notebooks
```
Dask arrays look and feel like NumPy arrays. For example, they have `dtype` and `shape` attributes
```
print(x.dtype)
print(x.shape)
```
Dask arrays are _lazily_ evaluated. The result from a computation isn't computed until you ask for it. Instead, a Dask task graph for the computation is produced. You can visualize the task graph using the `visualize()` method.
```
x.visualize()
```
To compute a task graph call the `compute()` method
```
result = x.compute() # We'll go into more detail about .compute() later on
result
```
The result of this computation is a familiar NumPy `ndarray`
```
type(result)
```
Dask arrays support a large portion of the NumPy interface:
- Arithmetic and scalar mathematics: `+`, `*`, `exp`, `log`, ...
- Reductions along axes: `sum()`, `mean()`, `std()`, `sum(axis=0)`, ...
- Tensor contractions / dot products / matrix multiply: `tensordot`
- Axis reordering / transpose: `transpose`
- Slicing: `x[:100, 500:100:-2]`
- Fancy indexing along single axes with lists or numpy arrays: `x[:, [10, 1, 5]]`
- Array protocols like `__array__` and `__array_ufunc__`
- Some linear algebra: `svd`, `qr`, `solve`, `solve_triangular`, `lstsq`, ...
- ...
See the [Dask array API docs](http://docs.dask.org/en/latest/array-api.html) for full details about what portion of the NumPy API is implemented for Dask arrays.
We can build more complex computations using the familiar NumPy operations we're used to.
```
result = (x + x.T).sum(axis=0).mean()
result.visualize()
result.compute()
```
**Note**: Dask can be used to scale other array-like libraries that support the NumPy `ndarray` interface. For example, [pydata/sparse](https://sparse.pydata.org/en/latest/) for sparse arrays or [CuPy](https://cupy.chainer.org/) for GPU-accelerated arrays.
## Dask Delayed
Sometimes problems don’t fit nicely into one of the high-level collections like Dask arrays or Dask DataFrames. In these cases, you can parallelize custom algorithms using the lower-level Dask `delayed` interface. This allows one to manually create task graphs with a light annotation of normal Python code.
```
import time
import random
def inc(x):
time.sleep(random.random())
return x + 1
def double(x):
time.sleep(random.random())
return 2 * x
def add(x, y):
time.sleep(random.random())
return x + y
%%time
data = [1, 2, 3, 4]
output = []
for i in data:
a = inc(i)
b = double(i)
c = add(a, b)
output.append(c)
total = sum(output)
```
Dask `delayed` wraps function calls and delays their execution. `delayed` functions record what we want to compute (a function and input parameters) as a task in a graph that we’ll run later on parallel hardware by calling `compute`.
```
from dask import delayed
@delayed
def lazy_inc(x):
time.sleep(random.random())
return x + 1
lazy_inc
inc_output = lazy_inc(3) # lazily evaluate inc(3)
inc_output
inc_output.compute()
```
Using `delayed` functions, we can build up a task graph for the particular computation we want to perform
```
double_inc_output = lazy_inc(inc_output)
double_inc_output
double_inc_output.visualize()
double_inc_output.compute()
```
We can use `delayed` to make our previous example computation lazy by wrapping all the function calls with delayed
```
import time
import random
@delayed
def inc(x):
time.sleep(random.random())
return x + 1
@delayed
def double(x):
time.sleep(random.random())
return 2 * x
@delayed
def add(x, y):
time.sleep(random.random())
return x + y
%%time
data = [1, 2, 3, 4]
output = []
for i in data:
a = inc(i)
b = double(i)
c = add(a, b)
output.append(c)
total = delayed(sum)(output)
total
total.visualize()
%%time
total.compute()
```
We highly recommend checking out the [Dask delayed best practices](http://docs.dask.org/en/latest/delayed-best-practices.html) page to avoid some common pitfalls when using `delayed`.
# Schedulers
High-level collections like Dask arrays and Dask DataFrames, as well as the low-level `dask.delayed` interface build up task graphs for a computation. After these graphs are generated, they need to be executed (potentially in parallel). This is the job of a task scheduler. Different task schedulers exist within Dask. Each will consume a task graph and compute the same result, but with different performance characteristics.

Dask has two different classes of schedulers: single-machine schedulers and a distributed scheduler.
## Single Machine Schedulers
Single machine schedulers provide basic features on a local process or thread pool and require no setup (only use the Python standard library). The different single machine schedulers Dask provides are:
- `'threads'`: The threaded scheduler executes computations with a local `concurrent.futures.ThreadPoolExecutor`. The threaded scheduler is the default choice for Dask arrays, Dask DataFrames, and Dask delayed.
- `'processes'`: The multiprocessing scheduler executes computations with a local `concurrent.futures.ProcessPoolExecutor`.
- `'single-threaded'`: The single-threaded synchronous scheduler executes all computations in the local thread, with no parallelism at all. This is particularly valuable for debugging and profiling, which are more difficult when using threads or processes.
You can configure which scheduler is used in a few different ways. You can set the scheduler globally by using the `dask.config.set(scheduler=)` command
```
import dask
dask.config.set(scheduler='threads')
x.compute(); # Will use the multi-threading scheduler
```
or use it as a context manager to set the scheduler for a block of code
```
with dask.config.set(scheduler='processes'):
x.compute() # Will use the multi-processing scheduler
```
or even within a single compute call
```
x.compute(scheduler='threads'); # Will use the multi-threading scheduler
```
The `num_workers` argument is used to specify the number of threads or processes to use
```
x.compute(scheduler='threads', num_workers=4);
```
## Distributed Scheduler
Despite having "distributed" in its name, the distributed scheduler works well on both single and multiple machines. Think of it as the "advanced scheduler".
A Dask distributed cluster is composed of a single centralized scheduler and one or more worker processes. A `Client` object is used as the user-facing entry point to interact with the cluster. We will talk about the components of Dask clusters in more detail later on in [4-distributed-scheduler.py](4-distributed-scheduler.py).
<img src="images/dask-cluster.png"
width="85%"
alt="Dask components\">
The distributed scheduler has many features:
- [Real-time, `concurrent.futures`-like interface](https://docs.dask.org/en/latest/futures.html)
- [Sophisticated memory management](https://distributed.dask.org/en/latest/memory.html)
- [Data locality](https://distributed.dask.org/en/latest/locality.html)
- [Adaptive deployments](https://distributed.dask.org/en/latest/adaptive.html)
- [Cluster resilience](https://distributed.dask.org/en/latest/resilience.html)
- ...
See the [Dask distributed documentation](https://distributed.dask.org) for full details about all the distributed scheduler features.
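If a cluster is already running elsewhere, creating a `Client` only requires the scheduler's address (a minimal sketch; the address below is a placeholder, not a real endpoint):
```
from dask.distributed import Client

# Placeholder scheduler address -- replace with your cluster's actual endpoint
remote_client = Client("tcp://192.0.2.10:8786")
```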
```
from dask.distributed import Client
# Creates a local Dask cluster
client = Client()
client
x = da.ones((20_000, 20_000), chunks=(400, 400))
result = (x + x.T).sum(axis=0).mean()
result.compute()
client.close()
```
# Next steps
Next, let's learn more about performing custom operations on Dask collections in the [2-custom-operations.ipynb](2-custom-operations.ipynb) notebook.
|
github_jupyter
|
def inc(i):
return i + 1
def add(a, b):
return a + b
a, b = 1, 12
c = inc(a)
d = inc(b)
output = add(c, d)
print(f'output = {output}')
import numpy as np
import dask.array as da
x_np = np.random.random(size=(1_000, 1_000))
x_np
x = da.random.random(size=(1_000, 1_000), chunks=(250, 500))
x # Dask arrays have nice HTML output in Jupyter notebooks
print(x.dtype)
print(x.shape)
x.visualize()
result = x.compute() # We'll go into more detail about .compute() later on
result
type(result)
result = (x + x.T).sum(axis=0).mean()
result.visualize()
result.compute()
import time
import random
def inc(x):
time.sleep(random.random())
return x + 1
def double(x):
time.sleep(random.random())
return 2 * x
def add(x, y):
time.sleep(random.random())
return x + y
%%time
data = [1, 2, 3, 4]
output = []
for i in data:
a = inc(i)
b = double(i)
c = add(a, b)
output.append(c)
total = sum(output)
from dask import delayed
@delayed
def lazy_inc(x):
time.sleep(random.random())
return x + 1
lazy_inc
inc_output = lazy_inc(3) # lazily evaluate inc(3)
inc_output
inc_output.compute()
double_inc_output = lazy_inc(inc_output)
double_inc_output
double_inc_output.visualize()
double_inc_output.compute()
import time
import random
@delayed
def inc(x):
time.sleep(random.random())
return x + 1
@delayed
def double(x):
time.sleep(random.random())
return 2 * x
@delayed
def add(x, y):
time.sleep(random.random())
return x + y
%%time
data = [1, 2, 3, 4]
output = []
for i in data:
a = inc(i)
b = double(i)
c = add(a, b)
output.append(c)
total = delayed(sum)(output)
total
total.visualize()
%%time
total.compute()
import dask
dask.config.set(scheduler='threads')
x.compute(); # Will use the multi-threading scheduler
with dask.config.set(scheduler='processes'):
x.compute() # Will use the multi-processing scheduler
x.compute(scheduler='threads'); # Will use the multi-threading scheduler
x.compute(scheduler='threads', num_workers=4);
from dask.distributed import Client
# Creates a local Dask cluster
client = Client()
client
x = da.ones((20_000, 20_000), chunks=(400, 400))
result = (x + x.T).sum(axis=0).mean()
result.compute()
client.close()
| 0.303525 | 0.987794 |
# EDA
## Imports
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import chi2_contingency
```
## Read Data
```
classification = pd.read_csv("../data/classification_data.csv")
regression = pd.read_csv("../data/regression_data.csv")
prosecution = pd.read_csv("../data/prosecution_all_years.csv")
demo_classification = pd.read_csv("../data/demographics-data/classification_data_demographics.csv")
demo_regression = pd.read_csv("../data/demographics-data/regression_data_demographics.csv")
demo_prosecution = pd.read_csv("../data/demographics-data/prosecution_all_years_demographics.csv")
#function to move column to the end
def move_col(dataframe, column_name):
return dataframe[[col for col in dataframe if col != column_name]+ [column_name]]
#move label/prosecution-rate to end
demo_classification = move_col(demo_classification, 'labels')
demo_regression = move_col(demo_regression, 'prosecution-rate')
demo_classification.drop(columns='pct_Asian', inplace=True)
demo_regression.drop(columns='pct_Asian', inplace=True)
```
## Prosecution EDA
```
#drop unwanted columns
df_pros = demo_prosecution.drop(demo_prosecution.iloc[:,5:12], axis=1)
df_pros = df_pros.drop(df_pros.iloc[:, 2:4], axis=1)
#create columns relative to population
df_pros['referred_hc_per_1000_ppl'] = df_pros['Total Hate Crime Cases Referred'] / (df_pros['population']//1000)
df_pros['dispositions_per_1000_ppl'] = df_pros['Total Dispositions'] / (df_pros['population']//1000)
#Add prosecution-rate column
df_pros["prosecution-rate"] = df_pros[
"Total Dispositions"
].astype(int) / df_pros["Total Hate Crime Cases Referred"].astype(int)
df_pros['prosecution-rate'].fillna(0, inplace=True)
plt.figure(figsize=(10,10))
sns.heatmap(df_pros.corr(),annot=True, cmap='coolwarm')
```
### Hate Crimes by Racial Population
```
fig, (ax1, ax2) = plt.subplots(1,2, sharey=True, figsize=(18, 5))
sns.regplot(data=df_pros, x='pct_Black', y='referred_hc_per_1000_ppl',ax=ax1)
ax1.set_title("Hate Crimes by Black Population", fontsize=25)
ax1.set_ylabel("Hate Crimes per 1,000 People", fontsize=16)
ax1.set_xlabel("Percent Black Population", fontsize=16)
sns.regplot(data=df_pros, x='pct_Hispanic', y='referred_hc_per_1000_ppl', ax=ax2, color='darkorange')
ax2.set_title("Hate Crimes by Hispanic Population", fontsize=25)
ax2.set_ylabel("Hate Crimes per 1,000 People", fontsize=16)
ax2.set_xlabel("Percent Hispanic Population", fontsize=16)
plt.savefig("../plots/HC_population_descending.png")
fig, (ax1, ax2) = plt.subplots(1,2, sharey=True, figsize=(18, 5))
sns.regplot(data=df_pros, x='pct_Black', y='prosecution-rate',ax=ax1)
ax1.set_title("Prosecution Rate by Black Population", fontsize=25)
ax1.set_ylabel("Prosecution Rate", fontsize=16)
ax1.set_xlabel("Percent Black Population", fontsize=16)
sns.regplot(data=df_pros, x='pct_Hispanic', y='prosecution-rate', ax=ax2, color='darkorange')
ax2.set_title("Prosecution Rate Hispanic Population", fontsize=25)
ax2.set_ylabel("Prosecution Rate", fontsize=16)
ax2.set_xlabel("Percent Hispanic Population", fontsize=16)
plt.savefig("./plots/PR_population_descending.png")
fig, (ax1, ax2) = plt.subplots(1,2, sharey=True, figsize=(18, 5))
sns.regplot(data=df_pros, x='pct_White', y='referred_hc_per_1000_ppl',ax=ax1)
ax1.set_title("Hate Crimes by White Population", fontsize=24)
ax1.set_ylabel("Hate Crimes per 1,000 People", fontsize=16)
ax1.set_xlabel("Percent White Population", fontsize=16)
sns.regplot(data=df_pros, x='pct_Multi-Racial/Ethnic', y='referred_hc_per_1000_ppl', ax=ax2, color='darkorange')
ax2.set_title("Hate Crimes by Multi-Racial/Ethnic Population", fontsize=23)
ax2.set_ylabel("Hate Crimes per 1,000 People", fontsize=16)
ax2.set_xlabel("Percent Multi-Racial/Ethnic Population", fontsize=16)
plt.savefig("../plots/HC_population_ascending.png")
fig, (ax1, ax2) = plt.subplots(1,2, sharey=True, figsize=(18, 5))
sns.regplot(data=df_pros, x='pct_White', y='prosecution-rate',ax=ax1)
ax1.set_title("Prosecution Rates by White Population", fontsize=22)
ax1.set_ylabel("Prosecution Rates per 1,000 People", fontsize=15)
ax1.set_xlabel("Percent White Population", fontsize=16)
sns.regplot(data=df_pros, x='pct_Multi-Racial/Ethnic', y='prosecution-rate', ax=ax2, color='darkorange')
ax2.set_title("Prosecution Rates by Multi-Racial/Ethnic Population", fontsize=23)
ax2.set_ylabel("Prosecution Rates per 1,000 People", fontsize=15)
ax2.set_xlabel("Percent Multi-Racial/Ethnic Population", fontsize=16)
plt.savefig("../plots/PR_population_ascending.png")
```
### Hate Crimes by Suspects/Biases
```
suspect_race = demo_classification.groupby(['SuspectsRaceAsAGroup']).size().sort_values(ascending=False).head(6)
suspect_race.head()
victim_bias = demo_classification.groupby(['MostSeriousBias']).size().sort_values(ascending=False).head(5)
victim_bias
plt.figure(figsize=(15,7))
sns.barplot(y=suspect_race.index[1:], x=suspect_race[1:], orient='h')
plt.title('Top 5 Suspect Races', fontsize=25)
plt.xlabel('Number of Incidents', fontsize=16)
plt.ylabel("")
plt.yticks([0,1,2,3,4], labels=['White', 'Black or \n African American', 'Hispanic', 'Multiple Races', 'Asian\nPacific Islander'], fontsize=12)
plt.savefig("../plots/Suspect_Race.png")
plt.figure(figsize=(15,7))
sns.barplot(y=victim_bias.index, x=victim_bias, orient='h', palette=sns.color_palette('Set2', 5))
plt.title('Top 5 Victim Biases', fontsize=25)
plt.xlabel('Number of Incidents', fontsize=16)
plt.ylabel("")
plt.yticks([0,1,2,3,4], labels=['Anti-Black or \n African American', 'Anti-Gay (Male)', 'Anti-Jewish', 'Anti-Hispanic\nor Latino', 'Anti-Other Race'], fontsize=12)
plt.savefig("../plots/Victim_Bias.png")
suspect_bias = demo_classification.groupby(['SuspectsRaceAsAGroup', 'MostSeriousBias']).size().sort_values(ascending=False).head(16)
top_suspect_bias = suspect_bias.drop('Unknown', level=0, axis=0)
plt.figure(figsize=(15,10))
ax = top_suspect_bias.unstack().plot(kind='barh', figsize=(15,10), width=.7)
plt.yticks([0,1,2], labels=['White', 'Hispanic', 'Black or \n African American'], fontsize=12)
plt.ylabel("", fontsize=16)
plt.xlabel("Incidents", fontsize=16)
plt.title("Top 10 Most Serious Bias by Suspect Race", fontsize=25)
ax.invert_yaxis()
plt.savefig("../plots/Suspect_Bias.png")
```
## Correlations
### Heatmaps
```
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(20, 7))
sns.heatmap(demo_classification.corr(), annot=True, cmap='coolwarm', ax=ax1)
ax1.set_title("Demographic Classification Heatmap", fontsize=20)
sns.heatmap(demo_regression.corr(), annot=True, cmap='coolwarm', ax=ax2)
ax2.set_title("Demographic Regression Heatmap", fontsize=20)
```
### Turn Correlations into bar charts
```
#get label and prosecution-rate correlations
demo_labels_corr = demo_classification.corr()['labels'].sort_values(ascending=False)[1:]
demo_pros_rate_corr = demo_regression.corr()['prosecution-rate'].sort_values(ascending=False)[1:]
#function to make index names human readable
def rename_index(corr):
new_index = {}
for i in range(len(corr.index)):
index = corr.index[i]
index = index.replace("pct_", "% ")
index = index.replace("_", " ")
new_index[corr.index[i]] = index
return corr.rename(new_index)
#find the 10 highest correlations
top_labels_corr = demo_labels_corr[np.abs(demo_labels_corr).sort_values(ascending=False)[:10].index].sort_values(ascending=False)
top_pros_rate_corr = demo_pros_rate_corr[np.abs(demo_pros_rate_corr).sort_values(ascending=False)[:10].index].sort_values(ascending=False)
#rename columns
top_labels_corr = rename_index(top_labels_corr)
top_pros_rate_corr = rename_index(top_pros_rate_corr)
#create palette
top_corr_cols = top_labels_corr.index.append(top_pros_rate_corr.index).drop_duplicates()
tcc_dict = {top_corr_cols[i] : sns.color_palette( n_colors=top_corr_cols.shape[0])[i] for i in range(len(top_corr_cols))}
fig, (ax1, ax2) = plt.subplots(1,2, sharey=True, figsize=(20, 7))
plt.suptitle("Numeric Correlations", fontsize=25)
sns.barplot(x=top_labels_corr.index, y=top_labels_corr, ax=ax1, palette=tcc_dict)
ax1.set_xticklabels(labels=top_labels_corr.index,rotation=30, ha='right', rotation_mode='anchor')
ax1.set_title("Labels Top 10 Correlations", fontsize=20)
ax1.set_ylabel("Correlation", fontsize=15)
ax1.set_ylim((-.6, .6))
ax1.bar_label(ax1.containers[0])
sns.barplot(x=top_pros_rate_corr.index, y=top_pros_rate_corr, ax=ax2, palette=tcc_dict)
ax2.set_xticklabels(labels=top_pros_rate_corr.index,rotation=30, ha='right', rotation_mode='anchor')
ax2.set_title("Prosecution-Rate Top 10 Correlations", fontsize=20)
ax2.set_ylabel("Correlation", fontsize=15)
;
#just the classification chart
plt.figure(figsize=(20,10))
ax1 = plt.subplot(111)
sns.barplot(x=top_labels_corr.index, y=top_labels_corr, palette=tcc_dict)
ax1.set_xticklabels(labels=top_labels_corr.index,rotation=30, ha='right', rotation_mode='anchor')
ax1.set_title("Top 10 Prosecution Rate Correlations", fontsize=20)
ax1.set_ylabel("Correlation", fontsize=16)
ax1.set_ylim((-.6, .6))
ax1.bar_label(ax1.containers[0])
plt.savefig("../plots/correlations.png");
```
## Look at Chi2 Correlation
### Classification Chi2
```
#Source:
#Find P-Values for Categorical Features
class_cat = demo_classification.iloc[:,np.array(demo_classification.dtypes == 'O')].copy()
class_cat['labels'] = demo_classification['labels']
chi2 = []
for col in class_cat.columns:
crosstab_res = pd.crosstab(index=class_cat[col], columns=class_cat['labels'])
chi_res = chi2_contingency(crosstab_res)
chi2.append([col, chi_res[1]])
class_cat_corr = pd.DataFrame(chi2).sort_values(by=1).drop(9)
class_cat_corr.rename({0: 'Feature', 1 : "P-Value"}, axis=1)
```
### Greatest Correlation EDA
```
gby_labels = demo_classification.groupby('labels').agg([np.mean, np.median])
gby_labels
top_labels_corr
plt.figure(figsize=(15,5))
n=np.arange(0,3)
w=.4
ax1 = plt.subplot(1,2,2)
ax1.bar(n, gby_labels['pct_AAPI']['mean'], width=w, color=sns.color_palette('muted')[0], label='Mean')
ax1.bar(n+w, gby_labels['pct_AAPI']['median'], width=w,color=sns.color_palette('muted')[1], label='Median')
ax1.set_xticks([0.2,1.2,2.2])
ax1.set_xticklabels(['Low', 'Medium', 'High'])
ax1.set_xlabel("Prosecution Rate", fontsize=16)
ax1.set_title("Prosecution Rate by AAPI Population", fontsize=20)
ax1.set_ylabel("% AAPI", fontsize=16)
ax1.legend()
ax2 = plt.subplot(1,2,1, sharey=ax1, sharex=ax1)
ax2.bar(n, gby_labels['pct_Hispanic']['mean'], width=w, color=sns.color_palette('Set2')[0],label='Mean')
ax2.bar(n+w, gby_labels['pct_Hispanic']['median'], width=w, color=sns.color_palette('Set2')[1],label='Median')
ax2.set_xlabel("Prosecution Rate", fontsize=16)
ax2.set_title("Prosecution Rate by Hispanic Population", fontsize=20)
ax2.set_ylabel("% Hispanic", fontsize=16)
ax2.legend()
plt.savefig("../plots/top_corr_pop.png")
plt.figure(figsize=(15,5))
n=np.arange(0,3)
w=.4
ax1 = plt.subplot(1,2,2)
ax1.bar(n, gby_labels['median_hh_income_2017']['mean'], width=w, color=sns.color_palette('muted')[0], label='Mean')
ax1.bar(n+w, gby_labels['median_hh_income_2017']['median'], width=w,color=sns.color_palette('muted')[1], label='Median')
ax1.set_xticks([0.2,1.2,2.2])
ax1.set_xticklabels(['Low', 'Medium', 'High'])
ax1.set_xlabel("Prosecution Rate", fontsize=16)
ax1.set_title("Prosecution Rate by House Hold Income", fontsize=20)
ax1.set_ylabel("Median Household Income", fontsize=16)
ax1.legend()
ax2 = plt.subplot(1,2,1, sharex=ax1)
ax2.bar(n, gby_labels['pct_unemployed_2018']['mean'], width=w, color=sns.color_palette('Set2')[0],label='Mean')
ax2.bar(n+w, gby_labels['pct_unemployed_2018']['median'], width=w, color=sns.color_palette('Set2')[1],label='Median')
ax2.set_xlabel("Prosecution Rate", fontsize=16)
ax2.set_title("Prosecution Rate by Unemployment Rate", fontsize=20)
ax2.set_ylabel("% Unemployed (2018)", fontsize=16)
ax2.legend()
plt.savefig("../plots/top_corr_2.png")
```
#### Bias Motivation Distribution Across Labels
```
bias = demo_classification.groupby(['labels','MostSeriousBias'])['MonthOccurrence'].count()
#map names for reading ease
bias_mapping = {
'Anti-Other Race/Ethnicity/Ancestry' : "Anti-Other Race",
"Anti-Lesbian/Gay/Bisexual or Transgender (Mixed Group)" : "Anti-LGBTQ",
}
bias_0 = bias[0].sort_values(ascending=False).head(6).rename(index=bias_mapping)
bias_1 = bias[1].sort_values(ascending=False).head(6).rename(index=bias_mapping)
bias_2 = bias[2].sort_values(ascending=False).head(6).rename(index=bias_mapping)
#create palette
biases = bias_0.index.append(bias_1.index).append(bias_2.index)
biases.drop_duplicates()
bias_colors = {biases[i] : sns.color_palette('colorblind', n_colors=biases.shape[0])[i] for i in range(len(biases))}
plt.figure(figsize=(20,10))
#plt.suptitle("Bias Motivation Counts by Label", fontsize=30)
ax = plt.subplot(3,1,1)
sns.barplot(y=bias_0.index, x=bias_0, orient='h', ax=ax, palette=bias_colors)
ax.set_xlabel("")
ax.set_ylabel("")
ax.set_title("Low Prosecution Rate", fontsize=23)
ax1 = plt.subplot(3,1,2)
sns.barplot(y=bias_1.index, x=bias_1, orient='h', ax=ax1, palette=bias_colors)
ax1.set_xlabel("")
ax1.set_ylabel("")
ax1.set_title("Medium Prosecution Rate", fontsize=23)
ax2 = plt.subplot(3,1,3)
sns.barplot(y=bias_2.index, x=bias_2, orient='h', ax=ax2, palette=bias_colors)
ax2.set_xlabel("Incident Count", fontsize=17)
ax2.set_ylabel("")
ax2.set_title("High Prosecution Rate", fontsize=23)
plt.tight_layout()
#plt.savefig("./plots/biases_by_label.png")
;
```
|
github_jupyter
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import chi2_contingency
classification = pd.read_csv("../data/classification_data.csv")
regression = pd.read_csv("../data/regression_data.csv")
prosecution = pd.read_csv("../data/prosecution_all_years.csv")
demo_classification = pd.read_csv("../data/demographics-data/classification_data_demographics.csv")
demo_regression = pd.read_csv("../data/demographics-data/regression_data_demographics.csv")
demo_prosecution = pd.read_csv("../data/demographics-data/prosecution_all_years_demographics.csv")
#function to move column to the end
def move_col(dataframe, column_name):
return dataframe[[col for col in dataframe if col != column_name]+ [column_name]]
#move label/prosecution-rate to end
demo_classification = move_col(demo_classification, 'labels')
demo_regression = move_col(demo_regression, 'prosecution-rate')
demo_classification.drop(columns='pct_Asian', inplace=True)
demo_regression.drop(columns='pct_Asian', inplace=True)
#drop unwanted columns
df_pros = demo_prosecution.drop(demo_prosecution.iloc[:,5:12], axis=1)
df_pros = df_pros.drop(df_pros.iloc[:, 2:4], axis=1)
#create columns relative to population
df_pros['referred_hc_per_1000_ppl'] = df_pros['Total Hate Crime Cases Referred'] / (df_pros['population']//1000)
df_pros['dispositions_per_1000_ppl'] = df_pros['Total Dispositions'] / (df_pros['population']//1000)
#Add prosecution-rate column
df_pros["prosecution-rate"] = df_pros[
"Total Dispositions"
].astype(int) / df_pros["Total Hate Crime Cases Referred"].astype(int)
df_pros['prosecution-rate'].fillna(0, inplace=True)
plt.figure(figsize=(10,10))
sns.heatmap(df_pros.corr(),annot=True, cmap='coolwarm')
fig, (ax1, ax2) = plt.subplots(1,2, sharey=True, figsize=(18, 5))
sns.regplot(data=df_pros, x='pct_Black', y='referred_hc_per_1000_ppl',ax=ax1)
ax1.set_title("Hate Crimes by Black Population", fontsize=25)
ax1.set_ylabel("Hate Crimes per 1,000 People", fontsize=16)
ax1.set_xlabel("Percent Black Population", fontsize=16)
sns.regplot(data=df_pros, x='pct_Hispanic', y='referred_hc_per_1000_ppl', ax=ax2, color='darkorange')
ax2.set_title("Hate Crimes by Hispanic Population", fontsize=25)
ax2.set_ylabel("Hate Crimes per 1,000 People", fontsize=16)
ax2.set_xlabel("Percent Hispanic Population", fontsize=16)
plt.savefig("../plots/HC_population_descending.png")
fig, (ax1, ax2) = plt.subplots(1,2, sharey=True, figsize=(18, 5))
sns.regplot(data=df_pros, x='pct_Black', y='prosecution-rate',ax=ax1)
ax1.set_title("Prosecution Rate by Black Population", fontsize=25)
ax1.set_ylabel("Prosecution Rate", fontsize=16)
ax1.set_xlabel("Percent Black Population", fontsize=16)
sns.regplot(data=df_pros, x='pct_Hispanic', y='prosecution-rate', ax=ax2, color='darkorange')
ax2.set_title("Prosecution Rate Hispanic Population", fontsize=25)
ax2.set_ylabel("Prosecution Rate", fontsize=16)
ax2.set_xlabel("Percent Hispanic Population", fontsize=16)
plt.savefig("./plots/PR_population_descending.png")
fig, (ax1, ax2) = plt.subplots(1,2, sharey=True, figsize=(18, 5))
sns.regplot(data=df_pros, x='pct_White', y='referred_hc_per_1000_ppl',ax=ax1)
ax1.set_title("Hate Crimes by White Population", fontsize=24)
ax1.set_ylabel("Hate Crimes per 1,000 People", fontsize=16)
ax1.set_xlabel("Percent White Population", fontsize=16)
sns.regplot(data=df_pros, x='pct_Multi-Racial/Ethnic', y='referred_hc_per_1000_ppl', ax=ax2, color='darkorange')
ax2.set_title("Hate Crimes by Multi-Racial/Ethnic Population", fontsize=23)
ax2.set_ylabel("Hate Crimes per 1,000 People", fontsize=16)
ax2.set_xlabel("Percent Multi-Racial/Ethnic Population", fontsize=16)
plt.savefig("../plots/HC_population_ascending.png")
fig, (ax1, ax2) = plt.subplots(1,2, sharey=True, figsize=(18, 5))
sns.regplot(data=df_pros, x='pct_White', y='prosecution-rate',ax=ax1)
ax1.set_title("Prosecution Rates by White Population", fontsize=22)
ax1.set_ylabel("Prosecution Rates per 1,000 People", fontsize=15)
ax1.set_xlabel("Percent White Population", fontsize=16)
sns.regplot(data=df_pros, x='pct_Multi-Racial/Ethnic', y='prosecution-rate', ax=ax2, color='darkorange')
ax2.set_title("Prosecution Rates by Multi-Racial/Ethnic Population", fontsize=23)
ax2.set_ylabel("Prosecution Rates per 1,000 People", fontsize=15)
ax2.set_xlabel("Percent Multi-Racial/Ethnic Population", fontsize=16)
plt.savefig("../plots/PR_population_ascending.png")
suspect_race = demo_classification.groupby(['SuspectsRaceAsAGroup']).size().sort_values(ascending=False).head(6)
suspect_race.head()
victim_bias = demo_classification.groupby(['MostSeriousBias']).size().sort_values(ascending=False).head(5)
victim_bias
plt.figure(figsize=(15,7))
sns.barplot(y=suspect_race.index[1:], x=suspect_race[1:], orient='h')
plt.title('Top 5 Suspect Races', fontsize=25)
plt.xlabel('Number of Incidents', fontsize=16)
plt.ylabel("")
plt.yticks([0,1,2,3,4], labels=['White', 'Black or \n African American', 'Hispanic', 'Multiple Races', 'Asian\nPacific Islander'], fontsize=12)
plt.savefig("../plots/Suspect_Race.png")
plt.figure(figsize=(15,7))
sns.barplot(y=victim_bias.index, x=victim_bias, orient='h', palette=sns.color_palette('Set2', 5))
plt.title('Top 5 Victim Biases', fontsize=25)
plt.xlabel('Number of Incidents', fontsize=16)
plt.ylabel("")
plt.yticks([0,1,2,3,4], labels=['Anti-Black or \n African American', 'Anti-Gay (Male)', 'Anti-Jewish', 'Anti-Hispanic\nor Latino', 'Anti-Other Race'], fontsize=12)
plt.savefig("../plots/Victim_Bias.png")
suspect_bias = demo_classification.groupby(['SuspectsRaceAsAGroup', 'MostSeriousBias']).size().sort_values(ascending=False).head(16)
top_suspect_bias = suspect_bias.drop('Unknown', level=0, axis=0)
plt.figure(figsize=(15,10))
ax = top_suspect_bias.unstack().plot(kind='barh', figsize=(15,10), width=.7)
plt.yticks([0,1,2], labels=['White', 'Hispanic', 'Black or \n African American'], fontsize=12)
plt.ylabel("", fontsize=16)
plt.xlabel("Incidents", fontsize=16)
plt.title("Top 10 Most Serious Bias by Suspect Race", fontsize=25)
ax.invert_yaxis()
plt.savefig("../plots/Suspect_Bias.png")
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(20, 7))
sns.heatmap(demo_classification.corr(), annot=True, cmap='coolwarm', ax=ax1)
ax1.set_title("Demographic Classification Heatmap", fontsize=20)
sns.heatmap(demo_regression.corr(), annot=True, cmap='coolwarm', ax=ax2)
ax2.set_title("Demographic Regression Heatmap", fontsize=20)
#get label and prosecution-rate correlations
demo_labels_corr = demo_classification.corr()['labels'].sort_values(ascending=False)[1:]
demo_pros_rate_corr = demo_regression.corr()['prosecution-rate'].sort_values(ascending=False)[1:]
#function to make index names human readable
def rename_index(corr):
new_index = {}
for i in range(len(corr.index)):
index = corr.index[i]
index = index.replace("pct_", "% ")
index = index.replace("_", " ")
new_index[corr.index[i]] = index
return corr.rename(new_index)
#find the 10 highest correlations
top_labels_corr = demo_labels_corr[np.abs(demo_labels_corr).sort_values(ascending=False)[:10].index].sort_values(ascending=False)
top_pros_rate_corr = demo_pros_rate_corr[np.abs(demo_pros_rate_corr).sort_values(ascending=False)[:10].index].sort_values(ascending=False)
#rename columns
top_labels_corr = rename_index(top_labels_corr)
top_pros_rate_corr = rename_index(top_pros_rate_corr)
#create palette
top_corr_cols = top_labels_corr.index.append(top_pros_rate_corr.index).drop_duplicates()
tcc_dict = {top_corr_cols[i] : sns.color_palette( n_colors=top_corr_cols.shape[0])[i] for i in range(len(top_corr_cols))}
fig, (ax1, ax2) = plt.subplots(1,2, sharey=True, figsize=(20, 7))
plt.suptitle("Numeric Correlations", fontsize=25)
sns.barplot(x=top_labels_corr.index, y=top_labels_corr, ax=ax1, palette=tcc_dict)
ax1.set_xticklabels(labels=top_labels_corr.index,rotation=30, ha='right', rotation_mode='anchor')
ax1.set_title("Labels Top 10 Correlations", fontsize=20)
ax1.set_ylabel("Correlation", fontsize=15)
ax1.set_ylim((-.6, .6))
ax1.bar_label(ax1.containers[0])
sns.barplot(x=top_pros_rate_corr.index, y=top_pros_rate_corr, ax=ax2, palette=tcc_dict)
ax2.set_xticklabels(labels=top_pros_rate_corr.index,rotation=30, ha='right', rotation_mode='anchor')
ax2.set_title("Prosecution-Rate Top 10 Correlations", fontsize=20)
ax2.set_ylabel("Correlation", fontsize=15)
;
#just the classification chart
plt.figure(figsize=(20,10))
ax1 = plt.subplot(111)
sns.barplot(x=top_labels_corr.index, y=top_labels_corr, palette=tcc_dict)
ax1.set_xticklabels(labels=top_labels_corr.index,rotation=30, ha='right', rotation_mode='anchor')
ax1.set_title("Top 10 Prosecution Rate Correlations", fontsize=20)
ax1.set_ylabel("Correlation", fontsize=16)
ax1.set_ylim((-.6, .6))
ax1.bar_label(ax1.containers[0])
plt.savefig("../plots/correlations.png");
#Source:
#Find P-Values for Categorical Features
class_cat = demo_classification.iloc[:,np.array(demo_classification.dtypes == 'O')].copy()
class_cat['labels'] = demo_classification['labels']
chi2 = []
for col in class_cat.columns:
crosstab_res = pd.crosstab(index=class_cat[col], columns=class_cat['labels'])
chi_res = chi2_contingency(crosstab_res)
chi2.append([col, chi_res[1]])
class_cat_corr = pd.DataFrame(chi2).sort_values(by=1).drop(9)
class_cat_corr.rename({0: 'Feature', 1 : "P-Value"}, axis=1)
gby_labels = demo_classification.groupby('labels').agg([np.mean, np.median])
gby_labels
top_labels_corr
plt.figure(figsize=(15,5))
n=np.arange(0,3)
w=.4
ax1 = plt.subplot(1,2,2)
ax1.bar(n, gby_labels['pct_AAPI']['mean'], width=w, color=sns.color_palette('muted')[0], label='Mean')
ax1.bar(n+w, gby_labels['pct_AAPI']['median'], width=w,color=sns.color_palette('muted')[1], label='Median')
ax1.set_xticks([0.2,1.2,2.2])
ax1.set_xticklabels(['Low', 'Medium', 'High'])
ax1.set_xlabel("Prosecution Rate", fontsize=16)
ax1.set_title("Prosecution Rate by AAPI Population", fontsize=20)
ax1.set_ylabel("% AAPI", fontsize=16)
ax1.legend()
ax2 = plt.subplot(1,2,1, sharey=ax1, sharex=ax1)
ax2.bar(n, gby_labels['pct_Hispanic']['mean'], width=w, color=sns.color_palette('Set2')[0],label='Mean')
ax2.bar(n+w, gby_labels['pct_Hispanic']['median'], width=w, color=sns.color_palette('Set2')[1],label='Median')
ax2.set_xlabel("Prosecution Rate", fontsize=16)
ax2.set_title("Prosecution Rate by Hispanic Population", fontsize=20)
ax2.set_ylabel("% Hispanic", fontsize=16)
ax2.legend()
plt.savefig("../plots/top_corr_pop.png")
plt.figure(figsize=(15,5))
n=np.arange(0,3)
w=.4
ax1 = plt.subplot(1,2,2)
ax1.bar(n, gby_labels['median_hh_income_2017']['mean'], width=w, color=sns.color_palette('muted')[0], label='Mean')
ax1.bar(n+w, gby_labels['median_hh_income_2017']['median'], width=w,color=sns.color_palette('muted')[1], label='Median')
ax1.set_xticks([0.2,1.2,2.2])
ax1.set_xticklabels(['Low', 'Medium', 'High'])
ax1.set_xlabel("Prosecution Rate", fontsize=16)
ax1.set_title("Prosecution Rate by House Hold Income", fontsize=20)
ax1.set_ylabel("Median Household Income", fontsize=16)
ax1.legend()
ax2 = plt.subplot(1,2,1, sharex=ax1)
ax2.bar(n, gby_labels['pct_unemployed_2018']['mean'], width=w, color=sns.color_palette('Set2')[0],label='Mean')
ax2.bar(n+w, gby_labels['pct_unemployed_2018']['median'], width=w, color=sns.color_palette('Set2')[1],label='Median')
ax2.set_xlabel("Prosecution Rate", fontsize=16)
ax2.set_title("Prosecution Rate by Unemployment Rate", fontsize=20)
ax2.set_ylabel("% Unemployed (2018)", fontsize=16)
ax2.legend()
plt.savefig("../plots/top_corr_2.png")
bias = demo_classification.groupby(['labels','MostSeriousBias'])['MonthOccurrence'].count()
#map names for reading ease
bias_mapping = {
'Anti-Other Race/Ethnicity/Ancestry' : "Anti-Other Race",
"Anti-Lesbian/Gay/Bisexual or Transgender (Mixed Group)" : "Anti-LGBTQ",
}
bias_0 = bias[0].sort_values(ascending=False).head(6).rename(index=bias_mapping)
bias_1 = bias[1].sort_values(ascending=False).head(6).rename(index=bias_mapping)
bias_2 = bias[2].sort_values(ascending=False).head(6).rename(index=bias_mapping)
#create palette
biases = bias_0.index.append(bias_1.index).append(bias_2.index)
biases.drop_duplicates()
bias_colors = {biases[i] : sns.color_palette('colorblind', n_colors=biases.shape[0])[i] for i in range(len(biases))}
plt.figure(figsize=(20,10))
#plt.suptitle("Bias Motivation Counts by Label", fontsize=30)
ax = plt.subplot(3,1,1)
sns.barplot(y=bias_0.index, x=bias_0, orient='h', ax=ax, palette=bias_colors)
ax.set_xlabel("")
ax.set_ylabel("")
ax.set_title("Low Prosecution Rate", fontsize=23)
ax1 = plt.subplot(3,1,2)
sns.barplot(y=bias_1.index, x=bias_1, orient='h', ax=ax1, palette=bias_colors)
ax1.set_xlabel("")
ax1.set_ylabel("")
ax1.set_title("Medium Prosecution Rate", fontsize=23)
ax2 = plt.subplot(3,1,3)
sns.barplot(y=bias_2.index, x=bias_2, orient='h', ax=ax2, palette=bias_colors)
ax2.set_xlabel("Incident Count", fontsize=17)
ax2.set_ylabel("")
ax2.set_title("High Prosecution Rate", fontsize=23)
plt.tight_layout()
#plt.savefig("./plots/biases_by_label.png")
;
| 0.415966 | 0.759091 |
```
import pyshark
import pandas as pd
# Read packet capture and set display filter to only retrieve TDS traffic
cap = pyshark.FileCapture("ms-sql-tds-rpc-requests.pcap",display_filter="tds")
tds_types = {
"1": "SQL Batch",
"2": "Pre-TDS7 Login",
"3": "RPC",
"4": "Tabular result",
"5": "unused",
"6": "Attention signal",
"7": "Bulk load data",
"8": "Federated authentication token",
"9": "unused",
"10": "unused",
"11": "unused",
"12": "unused",
"13": "unused",
"14": "Transaction manager request",
"15": "unused",
"16": "TDS7 Login",
"17": "SSPI",
"18": "Pre-login"
}
procedure_codes = {
"1": "Sp_Cursor",
"2": "Sp_CursorOpen",
"3": "Sp_CursorPrepare",
"4": "Sp_CursorExecute",
"5": "Sp_CursorPrepExec",
"6": "Sp_CursorUnprepare",
"7": "Sp_CursorFetch",
"8": "Sp_CursorOption",
"9": "Sp_CursorClose",
"10": "Sp_ExecuteSql",
"11": "Sp_Prepare",
"12": "Sp_Execute",
"13": "Sp_PrepExec",
"14": "Sp_PrepExecRpc",
"15": "Sp_Unprepare"
}
cap_data = []
for pkt in cap:
pkt_data = {}
try:
pkt_data["ts"] = pkt.sniff_timestamp
pkt_data["src_ip"] = str(pkt.ip.src)
pkt_data["dst_ip"] = str(pkt.ip.dst)
pkt_data["src_port"] = int(pkt.tcp.srcport)
pkt_data["dst_port"] = int(pkt.tcp.dstport)
pkt_data["tds_type"] = int(pkt.tds.type)
pkt_data["tds_type_str"] = str(tds_types.get(pkt.tds.type))
if pkt_data["tds_type"] == 1:
# This is a SQL batch type
pkt_data["query"] = str(pkt.tds.query)
elif pkt_data["tds_type"] == 3:
# This is a remote procedure call
# Look for a procedure ID
try:
if procedure_codes.get(pkt.tds.rpc_proc_id) is not None:
pkt_data["rpc_proc_id"] = procedure_codes.get(pkt.tds.rpc_proc_id)
else:
pkt_data["rpc_proc_id"] = int(pkt.tds.rpc_proc_id)
except:
pass
# Look for a procedure name
try:
pkt_data["rpc_proc_name"] = str(pkt.tds.rpc_name)
except:
pass
else:
# Not an RPC or query
pass
cap_data.append(pkt_data)
except Exception as e:
print("Exception: " + str(e))
# Convert the data into a pandas dataframe
df = pd.DataFrame(cap_data).fillna(value="")
#df
df[['ts','src_ip','src_port','dst_ip','dst_port','tds_type','tds_type_str','query','rpc_proc_id','rpc_proc_name']]
# Output and count all queries in the dataset
df['query'].value_counts()
# Look at named remote procedure call functions called
df['rpc_proc_name'].value_counts()
# Look at remote procedure call function values
df['rpc_proc_id'].value_counts()
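# Filter for calls to one specific stored procedure and show when and from where it was called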
df.loc[df['rpc_proc_name'] == "p_GetBogusData"][['ts','src_ip','dst_ip','rpc_proc_name']]
df2 = df.loc[(df['rpc_proc_name'] == "p_SetBogusSample") | (df['rpc_proc_name'] == "sp_executesql")]
df2[['ts','src_ip','dst_ip','rpc_proc_name']]
```
|
github_jupyter
|
import pyshark
import pandas as pd
# Read packet capture and set display filter to only retrieve TDS traffic
cap = pyshark.FileCapture("ms-sql-tds-rpc-requests.pcap",display_filter="tds")
tds_types = {
"1": "SQL Batch",
"2": "Pre-TDS7 Login",
"3": "RPC",
"4": "Tabular result",
"5": "unused",
"6": "Attention signal",
"7": "Bulk load data",
"8": "Federated authentication token",
"9": "unused",
"10": "unused",
"11": "unused",
"12": "unused",
"13": "unused",
"14": "Transaction manager request",
"15": "unused",
"16": "TDS7 Login",
"17": "SSPI",
"18": "Pre-login"
}
procedure_codes = {
"1": "Sp_Cursor",
"2": "Sp_CursorOpen",
"3": "Sp_CursorPrepare",
"4": "Sp_CursorExecute",
"5": "Sp_CursorPrepExec",
"6": "Sp_CursorUnprepare",
"7": "Sp_CursorFetch",
"8": "Sp_CursorOption",
"9": "Sp_CursorClose",
"10": "Sp_ExecuteSql",
"11": "Sp_Prepare",
"12": "Sp_Execute",
"13": "Sp_PrepExec",
"14": "Sp_PrepExecRpc",
"15": "Sp_Unprepare"
}
cap_data = []
for pkt in cap:
pkt_data = {}
try:
pkt_data["ts"] = pkt.sniff_timestamp
pkt_data["src_ip"] = str(pkt.ip.src)
pkt_data["dst_ip"] = str(pkt.ip.dst)
pkt_data["src_port"] = int(pkt.tcp.srcport)
pkt_data["dst_port"] = int(pkt.tcp.dstport)
pkt_data["tds_type"] = int(pkt.tds.type)
pkt_data["tds_type_str"] = str(tds_types.get(pkt.tds.type))
if pkt_data["tds_type"] == 1:
# This is a SQL batch type
pkt_data["query"] = str(pkt.tds.query)
elif pkt_data["tds_type"] == 3:
# This is a remote procedure call
# Look for a procedure ID
try:
if procedure_codes.get(pkt.tds.rpc_proc_id) is not None:
pkt_data["rpc_proc_id"] = procedure_codes.get(pkt.tds.rpc_proc_id)
else:
pkt_data["rpc_proc_id"] = int(pkt.tds.rpc_proc_id)
except:
pass
# Look for a procedure name
try:
pkt_data["rpc_proc_name"] = str(pkt.tds.rpc_name)
except:
pass
else:
# Not a RPC or query
pass
cap_data.append(pkt_data)
except Exception as e:
print("Exception: " + str(e))
# Convert the data into a pandas dataframe
df = pd.DataFrame(cap_data).fillna(value="")
#df
df[['ts','src_ip','src_port','dst_ip','dst_port','tds_type','tds_type_str','query','rpc_proc_id','rpc_proc_name']]
# Output and count all queries in the dataset
df['query'].value_counts()
# Look at named remote procedure call functions called
df['rpc_proc_name'].value_counts()
# Look at remote procedure call function values
df['rpc_proc_id'].value_counts()
df.loc[df['rpc_proc_name'] == "p_GetBogusData"][['ts','src_ip','dst_ip','rpc_proc_name']]
df2 = df.loc[(df['rpc_proc_name'] == "p_SetBogusSample") | (df['rpc_proc_name'] == "sp_executesql")]
df2[['ts','src_ip','dst_ip','rpc_proc_name']]
| 0.210848 | 0.376509 |
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# Vowpal Wabbit Deep Dive
<center>
<img src="https://github.com/VowpalWabbit/vowpal_wabbit/blob/master/logo_assets/vowpal-wabbits-github-logo.png?raw=true" height="30%" width="30%" alt="Vowpal Wabbit">
</center>
[Vowpal Wabbit](https://github.com/VowpalWabbit/vowpal_wabbit) is a fast online machine learning library that implements several algorithms relevant to the recommendation use case.
The main advantage of Vowpal Wabbit (VW) is that training is done in an online fashion typically using Stochastic Gradient Descent or similar variants, which allows it to scale well to very large datasets. Additionally, it is optimized to run very quickly and can support distributed training scenarios for extremely large datasets.
VW is best applied to problems where the dataset is too large to fit into memory but can be stored on disk in a single node, though distributed training is possible with additional setup and configuration of the nodes. The kinds of problems that VW handles well mostly fall into the supervised learning domain of machine learning (Linear Regression, Logistic Regression, Multiclass Classification, Support Vector Machines, Simple Neural Nets). It also supports Matrix Factorization approaches and Latent Dirichlet Allocation, as well as a few other algorithms (see the [wiki](https://github.com/VowpalWabbit/vowpal_wabbit/wiki) for more information).
A good example of a typical deployment use case is a Real Time Bidding scenario, where an auction to place an ad for a user is decided in a matter of milliseconds. Feature information about the user and items must be extracted and passed into a model to predict the likelihood of a click (or other interaction) in short order. And if the user and context features are constantly changing (e.g. user browser and local time of day), it may be infeasible to score every possible input combination beforehand. This is where VW provides value: as a platform to explore various algorithms offline, train a highly accurate model on a large set of historical data, and then deploy the model into production so it can generate rapid predictions in real time. Of course, this isn't the only way VW can be deployed; it is also possible to use it entirely online where the model is constantly updating, to use active learning approaches, or to work completely offline in a pre-scoring mode.
<h3>Vowpal Wabbit for Recommendations</h3>
In this notebook we demonstrate how to use the VW library to generate recommendations on the [MovieLens](https://grouplens.org/datasets/movielens/) dataset.
Several things are worth noting in how VW is being used in this notebook:
On an Azure Data Science Virtual Machine ([DSVM](https://azure.microsoft.com/en-us/services/virtual-machines/data-science-virtual-machines/)), VW comes pre-installed and can be used directly from the command line. If you are not using a DSVM, you must install VW yourself.
There are also Python bindings that allow VW to be used within a Python environment, and even a wrapper conforming to the scikit-learn Estimator API. However, the Python bindings must be installed as an additional package with Boost dependencies, so for simplicity's sake VW is executed here via a subprocess call, mimicking command-line execution of the model.
VW expects a specific [input format](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Input-format); in this notebook, `to_vw()` is a convenience function that converts the standard MovieLens dataset into the required format (e.g. a 4-star rating by user 196 on item 242 with tag 0 becomes `4 0|user 196 |item 242`). Data files are then written to disk and passed to VW for training.
The examples shown are to demonstrate functional capabilities of VW, not to indicate performance advantages of different approaches. There are several hyper-parameters (e.g. learning rate and regularization terms) that can greatly impact the performance of VW models and that can be adjusted using [command line options](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-Line-Arguments). To properly compare approaches, it is helpful to learn about and tune these parameters on the relevant dataset; an illustrative sketch of such options is shown below.
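As a purely illustrative sketch (the flag values here are hypothetical choices, not the settings used later in this notebook), a training command with an explicit learning rate, L2 regularization and multiple passes could be assembled as a parameter string like this:
```
# Hypothetical example: -l sets the learning rate, --l2 adds L2 regularization,
# and --passes (together with the cache flag -c) makes multiple passes over the data.
train_params = (
    "vw -d train.dat -f vw.model --loss_function squared "
    "-l 0.1 --l2 1e-6 --passes 5 -c -k --quiet"
)
```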
# 0. Global Setup
```
import sys
import os
from subprocess import run
from tempfile import TemporaryDirectory
from time import process_time
import pandas as pd
import papermill as pm
import scrapbook as sb
from reco_utils.common.notebook_utils import is_jupyter
from reco_utils.dataset.movielens import load_pandas_df
from reco_utils.dataset.python_splitters import python_random_split
from reco_utils.evaluation.python_evaluation import (rmse, mae, exp_var, rsquared, get_top_k_items,
map_at_k, ndcg_at_k, precision_at_k, recall_at_k)
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
def to_vw(df, output, logistic=False):
"""Convert Pandas DataFrame to vw input format
Args:
df (pd.DataFrame): input DataFrame
output (str): path to output file
logistic (bool): flag to convert label to logistic value
"""
with open(output, 'w') as f:
tmp = df.reset_index()
# we need to reset the rating type to an integer to simplify the vw formatting
tmp['rating'] = tmp['rating'].astype('int64')
# convert rating to binary value
if logistic:
tmp['rating'] = tmp['rating'].apply(lambda x: 1 if x >= 3 else -1)
# convert each row to VW input format (https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Input-format)
# [label] [tag]|[user namespace] [user id feature] |[item namespace] [movie id feature]
# label is the true rating, tag is a unique id for the example just used to link predictions to truth
# user and item namespaces separate the features to support interaction features through command line options
for _, row in tmp.iterrows():
f.write('{rating:d} {index:d}|user {userID:d} |item {itemID:d}\n'.format_map(row))
def run_vw(train_params, test_params, test_data, prediction_path, logistic=False):
"""Convenience function to train, test, and show metrics of interest
Args:
train_params (str): vw training parameters
test_params (str): vw testing parameters
        test_data (pd.DataFrame): test data
prediction_path (str): path to vw prediction output
logistic (bool): flag to convert label to logistic value
Returns:
(dict): metrics and timing information
"""
# train model
train_start = process_time()
run(train_params.split(' '), check=True)
train_stop = process_time()
# test model
test_start = process_time()
run(test_params.split(' '), check=True)
test_stop = process_time()
# read in predictions
pred_df = pd.read_csv(prediction_path, delim_whitespace=True, names=['prediction'], index_col=1).join(test_data)
pred_df.drop("rating", axis=1, inplace=True)
test_df = test_data.copy()
if logistic:
# make the true label binary so that the metrics are captured correctly
        test_df['rating'] = test_df['rating'].apply(lambda x: 1 if x >= 3 else -1)
else:
# ensure results are integers in correct range
pred_df['prediction'] = pred_df['prediction'].apply(lambda x: int(max(1, min(5, round(x)))))
# calculate metrics
result = dict()
result['RMSE'] = rmse(test_df, pred_df)
result['MAE'] = mae(test_df, pred_df)
result['R2'] = rsquared(test_df, pred_df)
result['Explained Variance'] = exp_var(test_df, pred_df)
result['Train Time (ms)'] = (train_stop - train_start) * 1000
result['Test Time (ms)'] = (test_stop - test_start) * 1000
return result
# create temp directory to maintain data files
tmpdir = TemporaryDirectory()
model_path = os.path.join(tmpdir.name, 'vw.model')
saved_model_path = os.path.join(tmpdir.name, 'vw_saved.model')
train_path = os.path.join(tmpdir.name, 'train.dat')
test_path = os.path.join(tmpdir.name, 'test.dat')
train_logistic_path = os.path.join(tmpdir.name, 'train_logistic.dat')
test_logistic_path = os.path.join(tmpdir.name, 'test_logistic.dat')
prediction_path = os.path.join(tmpdir.name, 'prediction.dat')
all_test_path = os.path.join(tmpdir.name, 'new_test.dat')
all_prediction_path = os.path.join(tmpdir.name, 'new_prediction.dat')
```
# 1. Load & Transform Data
```
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
TOP_K = 10
# load movielens data
df = load_pandas_df(MOVIELENS_DATA_SIZE)
# split data into train and test sets; by default 75% of each user's ratings go to train and 25% to test
train, test = python_random_split(df, 0.75)
# save train and test data in vw format
to_vw(df=train, output=train_path)
to_vw(df=test, output=test_path)
# save data for logistic regression (requires adjusting the label)
to_vw(df=train, output=train_logistic_path, logistic=True)
to_vw(df=test, output=test_logistic_path, logistic=True)
```
# 2. Regression Based Recommendations
When considering different approaches for solving a problem with machine learning, it is helpful to generate a baseline approach in order to understand how more complex solutions perform across dimensions of performance, time, and resource (memory or CPU) usage.
Regression based approaches are some of the simplest and fastest baselines to consider for many ML problems.
## 2.1 Linear Regression
As the data provides a numerical rating between 1 and 5, fitting those values with a linear regression model is an easy first approach. The model is trained on examples with the rating as the target variable and the corresponding user id and movie id as independent features.
By passing each user-item rating in as an example, the model learns weights that reflect the average rating for each user as well as the average rating per item.
This, however, can generate predicted ratings that are no longer integers, so some additional adjustment should be made at prediction time to convert them back to the integer scale of 1 to 5 if necessary. Here, this is handled in the `run_vw` helper, which rounds each prediction and clamps it to that range.
```
"""
Quick description of command line parameters used
Other optional parameters can be found here: https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-Line-Arguments
VW uses linear regression by default, so no extra command line options
-f <model_path>: indicates where the final model file will reside after training
-d <data_path>: indicates which data file to use for training or testing
--quiet: this runs vw in quiet mode silencing stdout (for debugging it's helpful to not use quiet mode)
-i <model_path>: indicates where to load the model file previously created during training
-t: this executes inference only (no learned updates to the model)
-p <prediction_path>: indicates where to store prediction output
"""
train_params = 'vw -f {model} -d {data} --quiet'.format(model=model_path, data=train_path)
# save these results for later use during top-k analysis
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
comparison = pd.DataFrame(result, index=['Linear Regression'])
comparison
```
## 2.2 Linear Regression with Interaction Features
Previously we treated the user features and item features independently, but taking into account interactions between features provides a mechanism to learn more fine-grained preferences of the users.
To generate interaction features, use the quadratic command line argument and specify the namespaces that should be combined: '-q ui' combines the user and item namespaces based on the first letter of each.
Currently the user ids and item ids are integers, which means the feature id is used directly; for instance, when user 123 rates movie 456, the training example puts a 1 in the values for features 123 and 456. However, when an interaction is specified (or if a feature is a string) the resulting interaction feature is hashed into the available feature space. Feature hashing is a way to take a very sparse, high-dimensional feature space and reduce it into a lower-dimensional space. This reduces memory usage while retaining fast computation of features and model weights.
The caveat with feature hashing is that it can lead to hash collisions, where separate features are mapped to the same location. In this case it can be beneficial to increase the size of the space to support interactions between features of high cardinality. The available feature space is dictated by the --bit_precision (-b) <N> argument, where the total available space for all features in the model is 2<sup>N</sup>.
See [Feature Hashing and Extraction](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Feature-Hashing-and-Extraction) for more details.
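To make the hashing idea concrete, here is a minimal sketch of mapping an interaction feature into a space of 2<sup>b</sup> slots. This is only an illustration: VW uses its own internal hash function, and the feature string format below is invented for the example.

```python
import zlib

def hashed_index(feature_name, b=26):
    # fold a deterministic hash of the feature string into 2**b buckets
    return zlib.crc32(feature_name.encode("utf-8")) % (2 ** b)

# the interaction between user 123 and item 456 lands in one of 2**26 slots
print(hashed_index("user^123*item^456"))
# a different interaction usually lands elsewhere, but collisions are possible
print(hashed_index("user^123*item^457"))
```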
```
"""
Quick description of command line parameters used
-b <N>: sets the memory size to 2<sup>N</sup> entries
-q <ab>: create quadratic feature interactions between features in namespaces starting with 'a' and 'b'
"""
train_params = 'vw -b 26 -q ui -f {model} -d {data} --quiet'.format(model=saved_model_path, data=train_path)
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=saved_model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
saved_result = result
comparison = comparison.append(pd.DataFrame(result, index=['Linear Regression w/ Interaction']))
comparison
```
## 2.3 Multinomial Logistic Regression
An alternative to linear regression is to leverage multinomial logistic regression, or multiclass classification, which treats each rating value as a distinct class.
This avoids any non-integer results, but it also reduces the training data available for each class, which could lead to poorer performance if the counts of the different rating levels are skewed.
Basic multiclass logistic regression can be accomplished using the One Against All approach, specified by the '--oaa N' option where N is the number of classes, and providing 'logistic' as the loss function.
```
"""
Quick description of command line parameters used
--loss_function logistic: sets the model loss function for logistic regression
--oaa <N>: trains N separate models using One-Against-All approach (all models are captured in the single model file)
This expects the labels to be contiguous integers starting at 1
--link logistic: converts the predicted output from logit to probability
The predicted output is the model (label) with the largest likelihood
"""
train_params = 'vw --loss_function logistic --oaa 5 -f {model} -d {data} --quiet'.format(model=model_path, data=train_path)
test_params = 'vw --link logistic -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
comparison = comparison.append(pd.DataFrame(result, index=['Multinomial Regression']))
comparison
```
## 2.4 Logistic Regression
Additionally, one might simply be interested in whether the user likes or dislikes an item, and we can adjust the input data to represent a binary outcome, where ratings below 3 are treated as dislikes (negative examples) and ratings of 3 and above as likes (positive examples).
This framing allows a simple logistic regression model to be applied. To perform logistic regression the loss_function parameter is changed to 'logistic' and the target label is switched to {-1, 1}. Also, be sure to set '--link logistic' during prediction to convert the logit output back to a probability value.
```
train_params = 'vw --loss_function logistic -f {model} -d {data} --quiet'.format(model=model_path, data=train_logistic_path)
test_params = 'vw --link logistic -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_logistic_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path,
logistic=True)
comparison = comparison.append(pd.DataFrame(result, index=['Logistic Regression']))
comparison
```
# 3. Matrix Factorization Based Recommendations
All of the above approaches train a regression model, but VW also supports matrix factorization, with two different approaches.
As opposed to learning direct weights for specific users, items, and interactions as in a regression model, matrix factorization attempts to learn latent factors that determine how a user rates an item. As an example of how this might work, suppose user preference and item characteristics could be represented by genre. Given a small set of genres, we can quantify how much each item belongs to each genre, and we can set weights for a user's preference for each genre. Both sets of weights can be represented as vectors whose inner product is the user-item rating. Matrix factorization approaches learn low-rank matrices of latent features for users and items such that those matrices can be combined to approximate the original user-item rating matrix.
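As a toy illustration (made-up numbers, three latent factors standing in for "genres"), the predicted rating is just the inner product of a user's factor vector and an item's factor vector, and stacking those vectors yields a low-rank approximation of the full rating matrix:

```python
import numpy as np

# toy latent factors: 2 users x 3 factors and 2 items x 3 factors (made-up numbers)
U = np.array([[0.9, 0.1, 0.3],    # user 0 mostly cares about factor 0
              [0.2, 0.8, 0.5]])   # user 1 mostly cares about factor 1
V = np.array([[1.0, 0.0, 0.5],    # item 0 is mostly factor 0
              [0.1, 1.0, 0.2]])   # item 1 is mostly factor 1

# predicted rating of user 0 for item 1 is an inner product of latent vectors
print(U[0] @ V[1])

# the full low-rank approximation of the user-item rating matrix
print(U @ V.T)
```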
## 3.1. Singular Value Decomposition Based Matrix Factorization
The first approach performs matrix factorization based on Singular Value Decomposition (SVD) to learn a low-rank approximation of the user-item rating matrix. It is invoked using the '--rank' command line argument.
See the [Matrix Factorization Example](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Matrix-factorization-example) for more detail.
```
"""
Quick description of command line parameters used
--rank <N>: sets the number of latent factors in the reduced matrix
"""
train_params = 'vw --rank 5 -q ui -f {model} -d {data} --quiet'.format(model=model_path, data=train_path)
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
comparison = comparison.append(pd.DataFrame(result, index=['Matrix Factorization (Rank)']))
comparison
```
## 3.2. Factorization Machine Based Matrix Factorization
An alternative approach based on [Rendle's factorization machines](https://cseweb.ucsd.edu/classes/fa17/cse291-b/reading/Rendle2010FM.pdf) is invoked using '--lrq' (low rank quadratic). More LRQ details can be found in this [demo](https://github.com/VowpalWabbit/vowpal_wabbit/tree/master/demo/movielens).
This learns two lower-rank matrices which are multiplied to generate an approximation of the user-item rating matrix. Compressing the matrix in this way leads to learning generalizable factors, which avoids some of the limitations of using regression models with extremely sparse interaction features. This can lead to better convergence and smaller on-disk models.
An additional option that can improve performance is --lrqdropout, which drops out columns during training. This, however, tends to increase the optimal rank size. Other parameters such as L2 regularization can help avoid overfitting.
```
"""
Quick description of command line parameters used
--lrq <abN>: learns approximations of rank N for the quadratic interaction between namespaces starting with 'a' and 'b'
--lrqdropout: performs dropout during training to improve generalization
"""
train_params = 'vw --lrq ui7 -f {model} -d {data} --quiet'.format(model=model_path, data=train_path)
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
comparison = comparison.append(pd.DataFrame(result, index=['Matrix Factorization (LRQ)']))
comparison
```
# 4. Conclusion
The table above shows a few of the approaches in the VW library that can be used for recommendation tasks. Their relative performance can change when applied to different datasets and properly tuned, but it is worth noting how quickly all of the approaches are able to train (75,000 examples) and test (25,000 examples).
# 5. Scoring
After training a model with any of the above approaches, the model can be used to score potential user-item pairs in an offline batch mode, or in a real-time scoring mode. The example below shows how to leverage the utilities in the reco_utils directory to generate top-k recommendations from offline scored output.
```
# First construct a test set of all items (except those seen during training) for each user
users = df[['userID']].drop_duplicates()
users['key'] = 1
items = df[['itemID']].drop_duplicates()
items['key'] = 1
all_pairs = pd.merge(users, items, on='key').drop(columns=['key'])
# now combine with training data and keep only entries that were not in training
merged = pd.merge(train[['userID', 'itemID', 'rating']], all_pairs, on=["userID", "itemID"], how="outer")
all_user_items = merged[merged['rating'].isnull()].fillna(0).astype('int64')
# save in vw format (this can take a while)
to_vw(df=all_user_items, output=all_test_path)
# run the saved model (linear regression with interactions) on the new dataset
test_start = process_time()
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=saved_model_path, data=all_test_path, pred=prediction_path)
run(test_params.split(' '), check=True)
test_stop = process_time()
test_time = test_stop - test_start
# load predictions and get top-k from previous saved results
pred_data = pd.read_csv(prediction_path, delim_whitespace=True, names=['prediction'], index_col=1).join(all_user_items)
top_k = get_top_k_items(pred_data, col_rating='prediction', k=TOP_K)[['prediction', 'userID', 'itemID']]
top_k.head()
# get ranking metrics
args = [test, top_k]
kwargs = dict(col_user='userID', col_item='itemID', col_rating='rating', col_prediction='prediction',
relevancy_method='top_k', k=TOP_K)
rank_metrics = {'MAP': map_at_k(*args, **kwargs),
'NDCG': ndcg_at_k(*args, **kwargs),
'Precision': precision_at_k(*args, **kwargs),
'Recall': recall_at_k(*args, **kwargs)}
# final results
all_results = ['{k}: {v}'.format(k=k, v=v) for k, v in saved_result.items()]
all_results += ['{k}: {v}'.format(k=k, v=v) for k, v in rank_metrics.items()]
print('\n'.join(all_results))
```
# 6. Cleanup
```
# record results for testing
if is_jupyter():
sb.glue('rmse', saved_result['RMSE'])
sb.glue('mae', saved_result['MAE'])
sb.glue('rsquared', saved_result['R2'])
sb.glue('exp_var', saved_result['Explained Variance'])
sb.glue("train_time", saved_result['Train Time (ms)'])
sb.glue("test_time", test_time)
sb.glue('map', rank_metrics['MAP'])
sb.glue('ndcg', rank_metrics['NDCG'])
sb.glue('precision', rank_metrics['Precision'])
sb.glue('recall', rank_metrics['Recall'])
tmpdir.cleanup()
```
## References
1. John Langford et al. Vowpal Wabbit Wiki. URL: https://github.com/VowpalWabbit/vowpal_wabbit/wiki
2. Steffen Rendle. Factorization Machines. 2010 IEEE International Conference on Data Mining.
3. Jake Hoffman. Matrix Factorization Example. URL: https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Matrix-factorization-example
4. Paul Minero. Low Rank Quadratic Example. URL: https://github.com/VowpalWabbit/vowpal_wabbit/tree/master/demo/movielens
<a href="https://colab.research.google.com/github/indranildchandra/ML101-Codelabs/blob/master/src/Keras_Fashion_MNIST_CPU_Example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%%capture
!pip install watermark
%load_ext watermark
%watermark -p tensorflow,numpy -m
```
(Adapted from https://github.com/tensorflow/tpu/blob/master/tools/colab/fashion_mnist.ipynb)
# Fashion MNIST with Keras and CPU
First, let's grab our dataset using `tf.keras.datasets`.
```
import os
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedShuffleSplit
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# add empty color dimension
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
```
Value distribution of X:
```
pd.set_option('display.float_format', lambda x: '%.3f' % x)
pd.Series(x_train.reshape(-1)).describe()
```
Value distribution of Y:
```
pd.Series(y_train.reshape(-1)).describe()
```
### Create a validation set
```
sss = StratifiedShuffleSplit(n_splits=5, random_state=0, test_size=1/6)
train_index, valid_index = next(sss.split(x_train, y_train))
x_valid, y_valid = x_train[valid_index], y_train[valid_index]
x_train, y_train = x_train[train_index], y_train[train_index]
print(x_train.shape, x_valid.shape, x_test.shape)
```
# Defining our model
We will use a standard conv-net for this example: three convolutional blocks, each consisting of batch normalization, a 5x5 convolution, max pooling, and dropout, followed by a dense layer and a softmax classification head.
```
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model.add(tf.keras.layers.Conv2D(64, (5, 5), padding='same', activation='elu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model.add(tf.keras.layers.Conv2D(128, (5, 5), padding='same', activation='elu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model.add(tf.keras.layers.Conv2D(256, (5, 5), padding='same', activation='elu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256))
model.add(tf.keras.layers.Activation('elu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(10))
model.add(tf.keras.layers.Activation('softmax'))
model.summary()
```
# Training on the CPU
Here we demonstrate that we can use a generator function and `fit_generator` to train the model. You can also pass `x_train` and `y_train` to `model.fit()` instead.
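For reference, the non-generator alternative would look roughly like the cell below; it is a sketch, with the batch size chosen to mirror the generator, and should be run instead of (not in addition to) the `fit_generator` cell.

```python
# Alternative: train directly on the in-memory arrays instead of a generator.
model.fit(
    x_train, y_train,
    batch_size=512,
    epochs=15,
    validation_data=(x_valid, y_valid)
)
```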
```
model.compile(
optimizer=tf.train.AdamOptimizer(learning_rate=1e-3, ),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['sparse_categorical_accuracy']
)
%%time
def train_gen(batch_size):
while True:
offset = np.random.randint(0, x_train.shape[0] - batch_size)
yield x_train[offset:offset+batch_size], y_train[offset:offset + batch_size]
model.fit_generator(
train_gen(512),
epochs=15,
steps_per_epoch=100,
validation_data=(x_valid, y_valid)
)
```
# Checking our results (inference)
```
LABEL_NAMES = ['t_shirt', 'trouser', 'pullover', 'dress', 'coat', 'sandal', 'shirt', 'sneaker', 'bag', 'ankle_boots']
from matplotlib import pyplot
%matplotlib inline
def plot_predictions(images, predictions, true_labels):
n = images.shape[0]
nc = int(np.ceil(n / 4))
fig = pyplot.figure(figsize=(4,3))
# axes = fig.add_subplot(nc, 4)
f, axes = pyplot.subplots(nc, 4)
f.tight_layout()
for i in range(nc * 4):
y = i // 4
x = i % 4
axes[x, y].axis('off')
        # skip leftover subplot slots when fewer than nc*4 images are provided
        if i >= n:
            continue
        label = LABEL_NAMES[np.argmax(predictions[i])]
        confidence = np.max(predictions[i])
        axes[x, y].imshow(images[i])
pred_label = np.argmax(predictions[i])
axes[x, y].set_title("{} ({})\n {:.3f}".format(
LABEL_NAMES[pred_label],
LABEL_NAMES[true_labels[i]],
confidence
), color=("green" if true_labels[i] == pred_label else "red"))
pyplot.gcf().set_size_inches(8, 8)
plot_predictions(
np.squeeze(x_test[:16]),
model.predict(x_test[:16]),
y_test[:16]
)
%%time
# Evaluate the model on valid set
score = model.evaluate(x_valid, y_valid, verbose=0)
# Print validation accuracy
print('\n', 'Valid accuracy:', score[1])
%%time
# Evaluate the model on test set
score = model.evaluate(x_test, y_test, verbose=0)
# Print test accuracy
print('\n', 'Test accuracy:', score[1])
```
# Figure 3: Learning dynamics for optimal discrete time learning rates
This notebook provides the code to produce Figure 2 in the paper: "Learning dynamics of linear denoising autoencoders". (ICML 2018)
```
import numpy as np
import matplotlib.pyplot as plt
def scalar_learning_dynamics(lam, var, gamma, N, n_epoch, learning_rate):
dynamics = []
tau = 1/learning_rate
u0 = 0.000001
g = N*gamma
xi = lam + N*var
for t in range(n_epoch):
E = np.exp(2*(lam - g)*t/tau)
num = (lam-g)*E
denom = xi*(E - 1) + (lam - g)/u0
uf = num/denom
dynamics.append(uf)
return dynamics
noise_dyns = []
reg_dyns = []
var = np.arange(0, 2, 0.2)
alphas = (var-np.min(var))/(np.max(var)-np.min(var))
l = 1
fig, [ax1, ax2, ax3] = plt.subplots(1, 3, figsize=(15, 4), sharex=False, sharey=False)
ax1.set_ylabel('$w(t, \lambda)$')
ax1.set_ylim(0, 1.1)
ax1.set_xlabel('t (epoch)')
for v, a in zip(var, alphas):
lr = 0.005*(1/(2*l + 3*v))
dyns = scalar_learning_dynamics(l, v, 0, 1, 20000, lr)
noise_dyns.append(dyns)
ax1.set_ylim(0, 1.1)
ax1.plot(dyns, c='darkgreen', alpha=a, label='Noise')
reg = var*l/(var+l)
lrs = 0.005*((l + var)/(2*l*(l + 2*var)))
for r, lr in zip(reg, lrs):
dyns = scalar_learning_dynamics(l, 0, r, 1, 20000, lr)
reg_dyns.append(dyns)
    ax1.plot(dyns, c='orange', alpha=r, label='Weight decay')
# plot optimal learning rates
var = np.arange(0, 10, 0.02)
ax2.set_ylabel('Optimal learning rate')
# compute optimal learning rates
dae_lr = 1/(2*l+3*var)
eq_gamma = l*var/(l+var)
aewd_lr = 1/(2*l + eq_gamma)
ratio = (2*l + eq_gamma)/(2*l + 3*var)
# plot learning rates
ax2.plot(var, dae_lr, label='DAE', c='green')
ax2.plot(var, aewd_lr, label='WDAE', c='orange')
ax2.plot(var, ratio, label='Ratio', c='purple', ls='dashed')
ax2.set_xlabel(r'$\varepsilon$ (Noise)')
ax2.legend()
# plot training time differences
for i in range(10):
diff_dyns = np.array(noise_dyns[i]) - np.array(reg_dyns[i])
ax3.plot(diff_dyns, alpha=alphas[i], c='red')
ax3.set_xlabel('t (epoch)')
ax3.set_ylabel('Difference in learned mapping')
plt.show()
```
**Left**: Dynamics of DAEs (green) vs. WDAEs (orange), where darker line colours correspond to larger amounts of noise or weight decay.
**Middle**: Optimal learning rate as a function of noise $\varepsilon$ for DAEs, and for WDAEs using an equivalent amount of regularisation $\gamma = \lambda\varepsilon/(\lambda + \varepsilon)$.
**Right**: Difference in mapping over time.
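For reference, the closed-form scalar dynamics evaluated by `scalar_learning_dynamics` above can be written as follows (read directly from the code; the symbol names are ours, with $\tilde{\gamma} = N\gamma$, $\xi = \lambda + N\varepsilon$, $\tau = 1/\eta$ for learning rate $\eta$, and initial weight $w_0$):

$$
w(t) = \frac{(\lambda - \tilde{\gamma})\, e^{2(\lambda - \tilde{\gamma}) t / \tau}}{\xi\left(e^{2(\lambda - \tilde{\gamma}) t / \tau} - 1\right) + (\lambda - \tilde{\gamma})/w_0}
$$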
```
# IMPORT ALL LIBRARIES
# IMPORT THE PANDAS LIBRARY
import pandas as pd
# IMPORT THE LIBRARIES FOR POSTGRESQL
from sqlalchemy import create_engine
import psycopg2
# IMPORT THE CHARTING LIBRARIES
from matplotlib import pyplot as plt
from matplotlib import style
# IMPORT THE LIBRARIES FOR PATH HANDLING
import os
import io
# IMPORT THE PDF LIBRARY
from fpdf import FPDF
# IMPORT THE LIBRARY TO ENCODE CHARTS AS BASE64
import base64
# IMPORT THE EXCEL LIBRARY
import xlsxwriter
# FUNCTION TO UPLOAD DATA FROM CSV TO POSTGRESQL
def uploadToPSQL(columns, table, filePath, engine):
    # READ THE CSV FILE
    df = pd.read_csv(
        os.path.abspath(filePath),
        names=columns,
        keep_default_na=False
    )
    # EMPTY FIELDS ARE FILTERED HERE
    df.fillna('')
    # DROP THE COLUMNS THAT ARE NOT USED
    del df['kategori']
    del df['jenis']
    del df['pengiriman']
    del df['satuan']
    # MOVE THE DATA FROM THE CSV INTO POSTGRESQL
    df.to_sql(
        table,
        engine,
        if_exists='replace'
    )
    # IF THE UPLOADED DATA IS NOT EMPTY, RETURN TRUE; OTHERWISE RETURN FALSE
    if len(df) == 0:
        return False
    else:
        return True
# FUNCTION TO BUILD THE CHARTS; THE DATA IS FETCHED FROM THE DATABASE ORDERED BY DATE WITH A LIMIT
# THIS FUNCTION ALSO CALLS THE MAKEEXCEL AND MAKEPDF FUNCTIONS
def makeChart(host, username, password, db, port, table, judul, columns, filePath, name, subjudul, limit, negara, basePath):
    # TEST THE DATABASE CONNECTION
    try:
        # CONNECT TO THE DATABASE
        connection = psycopg2.connect(user=username,password=password,host=host,port=port,database=db)
        cursor = connection.cursor()
        # FETCH DATA FROM THE TABLE DEFINED BELOW, ORDERED BY DATE
        # A LIMIT CAN BE ADDED SO THE QUERY DOES NOT RETURN TOO MUCH DATA
        postgreSQL_select_Query = "SELECT * FROM "+table+" ORDER BY tanggal ASC LIMIT " + str(limit)
        cursor.execute(postgreSQL_select_Query)
        mobile_records = cursor.fetchall()
        uid = []
        lengthx = []
        lengthy = []
        # LOOP OVER THE FETCHED ROWS
        # AND APPEND THE DATA TO THE VARIABLES ABOVE
        for row in mobile_records:
            uid.append(row[0])
            lengthx.append(row[1])
            if row[2] == "":
                lengthy.append(float(0))
            else:
                lengthy.append(float(row[2]))
        # BUILD THE CHARTS
        # bar
        style.use('ggplot')
        fig, ax = plt.subplots()
        # PASS IN THE ID DATA FROM THE DATABASE ALONG WITH THE DATE DATA
        ax.bar(uid, lengthy, align='center')
        # TITLE OF THE CHART
        ax.set_title(judul)
        ax.set_ylabel('Total')
        ax.set_xlabel('Tanggal')
        ax.set_xticks(uid)
        # THE TOTALS FETCHED FROM THE DATABASE GO HERE
        ax.set_xticklabels((lengthx))
        b = io.BytesIO()
        # SAVE THE CHART AS PNG
        plt.savefig(b, format='png', bbox_inches="tight")
        # THE PNG CHART IS CONVERTED TO BASE64 HERE
        barChart = base64.b64encode(b.getvalue()).decode("utf-8").replace("\n", "")
        # SHOW THE CHART
        plt.show()
        # line
        # PASS IN THE DATA FROM THE DATABASE
        plt.plot(lengthx, lengthy)
        plt.xlabel('Tanggal')
        plt.ylabel('Total')
        # TITLE OF THE CHART
        plt.title(judul)
        plt.grid(True)
        l = io.BytesIO()
        # SAVE THE CHART AS PNG
        plt.savefig(l, format='png', bbox_inches="tight")
        # THE PNG CHART IS CONVERTED TO BASE64 HERE
        lineChart = base64.b64encode(l.getvalue()).decode("utf-8").replace("\n", "")
        # SHOW THE CHART
        plt.show()
        # pie
        # TITLE OF THE CHART
        plt.title(judul)
        # PASS IN THE DATA FROM THE DATABASE
        plt.pie(lengthy, labels=lengthx, autopct='%1.1f%%',
                shadow=True, startangle=180)
        plt.axis('equal')
        p = io.BytesIO()
        # SAVE THE CHART AS PNG
        plt.savefig(p, format='png', bbox_inches="tight")
        # THE PNG CHART IS CONVERTED TO BASE64 HERE
        pieChart = base64.b64encode(p.getvalue()).decode("utf-8").replace("\n", "")
        # SHOW THE CHART
        plt.show()
        # READ THE CSV AGAIN; IT IS USED AS THE TABLE HEADER FOR THE EXCEL AND PDF FILES
        header = pd.read_csv(
            os.path.abspath(filePath),
            names=columns,
            keep_default_na=False
        )
        # DROP THE COLUMNS THAT ARE NOT USED
        header.fillna('')
        del header['tanggal']
        del header['total']
        # CALL THE EXCEL FUNCTION
        makeExcel(mobile_records, header, name, limit, basePath)
        # CALL THE PDF FUNCTION
        makePDF(mobile_records, header, judul, barChart, lineChart, pieChart, name, subjudul, limit, basePath)
    # IF THE DATABASE CONNECTION FAILS, THE ERROR IS PRINTED HERE
    except (Exception, psycopg2.Error) as error :
        print (error)
    # CLOSE THE CONNECTION
    finally:
        if(connection):
            cursor.close()
            connection.close()
# THE MAKEEXCEL FUNCTION TURNS THE DATA FROM THE DATABASE INTO AN EXCEL FILE (TABLE F2 LAYOUT)
# THE PLUGIN USED IS XLSXWRITER
def makeExcel(datarow, dataheader, name, limit, basePath):
    # CREATE THE EXCEL FILE
    workbook = xlsxwriter.Workbook(basePath+'jupyter/BLOOMBERG/SektorEksternal/excel/'+name+'.xlsx')
    # ADD A WORKSHEET TO THE EXCEL FILE
    worksheet = workbook.add_worksheet('sheet1')
    # FORMAT SETTINGS: ADD BORDERS AND MAKE THE HEADER FONT BOLD
    row1 = workbook.add_format({'border': 2, 'bold': 1})
    row2 = workbook.add_format({'border': 2})
    # TURN THE DATA INTO LISTS
    data=list(datarow)
    isihead=list(dataheader.values)
    header = []
    body = []
    # LOOP OVER THE DATA AND COLLECT IT INTO THE VARIABLES ABOVE
    for rowhead in dataheader:
        header.append(str(rowhead))
    for rowhead2 in datarow:
        header.append(str(rowhead2[1]))
    for rowbody in isihead[1]:
        body.append(str(rowbody))
    for rowbody2 in data:
        body.append(str(rowbody2[2]))
    # WRITE THE DATA FROM THE VARIABLES ABOVE INTO THE EXCEL COLUMNS AND ROWS
    for col_num, data in enumerate(header):
        worksheet.write(0, col_num, data, row1)
    for col_num, data in enumerate(body):
        worksheet.write(1, col_num, data, row2)
    # CLOSE THE EXCEL FILE
    workbook.close()
# FUNCTION TO CREATE THE PDF; THE DATA COMES FROM THE DATABASE AND IS LAID OUT AS TABLE F2
# THE PLUGIN USED IS FPDF
def makePDF(datarow, dataheader, judul, bar, line, pie, name, subjudul, lengthPDF, basePath):
    # SET THE PAPER SIZE; HERE A4 IN LANDSCAPE ORIENTATION IS USED
    pdf = FPDF('L', 'mm', [210,297])
    # ADD A PAGE TO THE PDF
    pdf.add_page()
    # SETTINGS FOR PADDING AND FONT SIZE
    pdf.set_font('helvetica', 'B', 20.0)
    pdf.set_xy(145.0, 15.0)
    # WRITE THE TITLE INTO THE PDF
    pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=judul, border=0)
    # SETTINGS FOR FONT SIZE AND PADDING
    pdf.set_font('arial', '', 14.0)
    pdf.set_xy(145.0, 25.0)
    # WRITE THE SUBTITLE INTO THE PDF
    pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=subjudul, border=0)
    # DRAW A LINE UNDER THE SUBTITLE
    pdf.line(10.0, 30.0, 287.0, 30.0)
    pdf.set_font('times', '', 10.0)
    pdf.set_xy(17.0, 37.0)
    # SETTINGS FOR FONT SIZE AND PADDING
    pdf.set_font('Times','',10.0)
    # GET THE PDF HEADER DATA THAT WAS PREPARED EARLIER
    datahead=list(dataheader.values)
    pdf.set_font('Times','B',12.0)
    pdf.ln(0.5)
    th1 = pdf.font_size
    # BUILD THE TABLE IN THE PDF AND SHOW THE DATA PASSED IN THROUGH THE ARGUMENTS
    pdf.cell(100, 2*th1, "Kategori", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][0], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Jenis", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][1], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Pengiriman", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][2], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Satuan", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][3], border=1, align='C')
    pdf.ln(2*th1)
    # PADDING SETTINGS
    pdf.set_xy(17.0, 75.0)
    # SETTINGS FOR FONT SIZE AND PADDING
    pdf.set_font('Times','B',11.0)
    data=list(datarow)
    epw = pdf.w - 2*pdf.l_margin
    col_width = epw/(lengthPDF+1)
    # PADDING SETTINGS
    pdf.ln(0.5)
    th = pdf.font_size
    # WRITE THE HEADER DATA PASSED IN THROUGH THE ARGUMENTS INTO THE PDF
    pdf.cell(50, 2*th, str("Negara"), border=1, align='C')
    for row in data:
        pdf.cell(40, 2*th, str(row[1]), border=1, align='C')
    pdf.ln(2*th)
    # WRITE THE BODY DATA PASSED IN THROUGH THE ARGUMENTS INTO THE PDF
    pdf.set_font('Times','B',10.0)
    pdf.set_font('Arial','',9)
    # note: 'negara' (country name) is read from the global variable defined at the bottom of the notebook
    pdf.cell(50, 2*th, negara, border=1, align='C')
    for row in data:
        pdf.cell(40, 2*th, str(row[2]), border=1, align='C')
    pdf.ln(2*th)
    # TAKE THE CHART DATA, DECODE IT BACK TO PNG AND SAVE IT IN THE DIRECTORIES BELOW
    # BAR CHART
    bardata = base64.b64decode(bar)
    barname = basePath+'jupyter/BLOOMBERG/SektorEksternal/img/'+name+'-bar.png'
    with open(barname, 'wb') as f:
        f.write(bardata)
    # LINE CHART
    linedata = base64.b64decode(line)
    linename = basePath+'jupyter/BLOOMBERG/SektorEksternal/img/'+name+'-line.png'
    with open(linename, 'wb') as f:
        f.write(linedata)
    # PIE CHART
    piedata = base64.b64decode(pie)
    piename = basePath+'jupyter/BLOOMBERG/SektorEksternal/img/'+name+'-pie.png'
    with open(piename, 'wb') as f:
        f.write(piedata)
    # SETTINGS FOR FONT SIZE AND PADDING
    pdf.set_xy(17.0, 75.0)
    col = pdf.w - 2*pdf.l_margin
    widthcol = col/3
    # LOAD THE CHART IMAGES FROM THE DIRECTORIES ABOVE
    pdf.image(barname, link='', type='',x=8, y=100, w=widthcol)
    pdf.set_xy(17.0, 75.0)
    col = pdf.w - 2*pdf.l_margin
    pdf.image(linename, link='', type='',x=103, y=100, w=widthcol)
    pdf.set_xy(17.0, 75.0)
    col = pdf.w - 2*pdf.l_margin
    pdf.image(piename, link='', type='',x=195, y=100, w=widthcol)
    pdf.ln(2*th)
    # WRITE THE PDF FILE
    pdf.output(basePath+'jupyter/BLOOMBERG/SektorEksternal/pdf/'+name+'.pdf', 'F')
# THIS IS WHERE THE VARIABLES ARE DEFINED BEFORE BEING PASSED TO THE FUNCTIONS
# FIRST CALL UPLOADTOPSQL; IF IT SUCCEEDS, CALL MAKECHART,
# WHICH IN TURN CALLS MAKEEXCEL AND MAKEPDF
# DEFINE THE COLUMNS BASED ON THE CSV FIELDS
columns = [
    "kategori",
    "jenis",
    "tanggal",
    "total",
    "pengiriman",
    "satuan",
]
# FILE NAME
name = "SektorEksternal3_1"
# VARIABLES FOR THE DATABASE CONNECTION
host = "localhost"
username = "postgres"
password = "1234567890"
port = "5432"
database = "bloomberg_sektoreksternal"
table = name.lower()
# TITLE AND SUBTITLE FOR THE PDF AND EXCEL FILES
judul = "Data Sektor Eksternal"
subjudul = "Badan Perencanaan Pembangunan Nasional"
# LIMIT FOR THE SELECT QUERY ON THE DATABASE
limitdata = int(8)
# COUNTRY NAME TO SHOW IN THE EXCEL AND PDF FILES
negara = "Indonesia"
# BASE PATH DIRECTORY
basePath = 'C:/Users/ASUS/Documents/bappenas/'
# CSV FILE
filePath = basePath+ 'data mentah/BLOOMBERG/SektorEksternal/' +name+'.csv';
# CONNECT TO THE DATABASE
engine = create_engine('postgresql://'+username+':'+password+'@'+host+':'+port+'/'+database)
# CALL THE UPLOAD-TO-PSQL FUNCTION
checkUpload = uploadToPSQL(columns, table, filePath, engine)
# CHECK THE RESULT OF THE UPLOAD; IF IT SUCCEEDED BUILD THE CHARTS, OTHERWISE PRINT AN ERROR MESSAGE
if checkUpload == True:
    makeChart(host, username, password, database, port, table, judul, columns, filePath, name, subjudul, limitdata, negara, basePath)
else:
    print("Error When Upload CSV")
```
with open(linename, 'wb') as f:
f.write(linedata)
#PIE CHART
piedata = base64.b64decode(pie)
piename = basePath+'jupyter/BLOOMBERG/SektorEksternal/img/'+name+'-pie.png'
with open(piename, 'wb') as f:
f.write(piedata)
#PENGATURAN UNTUK UKURAN FONT DAN JUGA JARAK PADDING
pdf.set_xy(17.0, 75.0)
col = pdf.w - 2*pdf.l_margin
widthcol = col/3
#MEMANGGIL DATA GAMBAR DARI DIREKTORY DIATAS
pdf.image(barname, link='', type='',x=8, y=100, w=widthcol)
pdf.set_xy(17.0, 75.0)
col = pdf.w - 2*pdf.l_margin
pdf.image(linename, link='', type='',x=103, y=100, w=widthcol)
pdf.set_xy(17.0, 75.0)
col = pdf.w - 2*pdf.l_margin
pdf.image(piename, link='', type='',x=195, y=100, w=widthcol)
pdf.ln(2*th)
#MEMBUAT FILE PDF
pdf.output(basePath+'jupyter/BLOOMBERG/SektorEksternal/pdf/'+name+'.pdf', 'F')
#DISINI TEMPAT AWAL UNTUK MENDEFINISIKAN VARIABEL VARIABEL SEBELUM NANTINYA DIKIRIM KE FUNGSI
#PERTAMA MANGGIL FUNGSI UPLOADTOPSQL DULU, KALAU SUKSES BARU MANGGIL FUNGSI MAKECHART
#DAN DI MAKECHART MANGGIL FUNGSI MAKEEXCEL DAN MAKEPDF
#DEFINISIKAN COLUMN BERDASARKAN FIELD CSV
columns = [
"kategori",
"jenis",
"tanggal",
"total",
"pengiriman",
"satuan",
]
#UNTUK NAMA FILE
name = "SektorEksternal3_1"
#VARIABLE UNTUK KONEKSI KE DATABASE
host = "localhost"
username = "postgres"
password = "1234567890"
port = "5432"
database = "bloomberg_sektoreksternal"
table = name.lower()
#JUDUL PADA PDF DAN EXCEL
judul = "Data Sektor Eksternal"
subjudul = "Badan Perencanaan Pembangunan Nasional"
#LIMIT DATA UNTUK SELECT DI DATABASE
limitdata = int(8)
#NAMA NEGARA UNTUK DITAMPILKAN DI EXCEL DAN PDF
negara = "Indonesia"
#BASE PATH DIRECTORY
basePath = 'C:/Users/ASUS/Documents/bappenas/'
#FILE CSV
filePath = basePath+ 'data mentah/BLOOMBERG/SektorEksternal/' +name+'.csv';
#KONEKSI KE DATABASE
engine = create_engine('postgresql://'+username+':'+password+'@'+host+':'+port+'/'+database)
#MEMANGGIL FUNGSI UPLOAD TO PSQL
checkUpload = uploadToPSQL(columns, table, filePath, engine)
#MENGECEK FUNGSI DARI UPLOAD PSQL, JIKA BERHASIL LANJUT MEMBUAT FUNGSI CHART, JIKA GAGAL AKAN MENAMPILKAN PESAN ERROR
if checkUpload == True:
makeChart(host, username, password, database, port, table, judul, columns, filePath, name, subjudul, limitdata, negara, basePath)
else:
print("Error When Upload CSV")
| 0.154759 | 0.082957 |
# Assignment Data Structure and Solution
#### We can use this Jupyter Notebook to analysis the data structure of different file and variables to help us to solve these three model construction and analysis.
# First Class(CommandLine)
<big>**Calss Function**</big>:
1. Judge the command line to accord with the stationary Formate.
2. Print Help Inforamtion if the command formate is wrong.
3. Default:<br>
1)'index_nostoplist_nostemming.txt' / queries_nostoplist_nostemming.txt'. <br>
2) For the First dataset(index?), it's the key vocabularly in the text. <br>
3) For the second dataset(queries?), it's the summary of all the search queris. <br>
**Example Command Formate**:
```
> python ir_engine.py -o result.txt
```
```
#==============================================================================
# Importing
import sys, getopt, re
import time
#==============================================================================
# Command line processing
class CommandLine:
def __init__(self):
opts, args = getopt.getopt(sys.argv[1:], 'hspw:o:')
opts = dict(opts)
self.exit = True
if '-h' in opts:
self.printHelp()
return
if len(args) > 0:
print("*** ERROR: no arg files - only options! ***", file=sys.stderr)
self.printHelp()
return
if '-w' in opts:
if opts['-w'] in ('binary', 'tf', 'tfidf'):
self.termWeighting = opts['-w']
else:
warning = (
"*** ERROR: term weighting label (opt: -w LABEL)! ***\n"
" -- value (%s) not recognised!\n"
" -- must be one of: binary / tf / tfidf"
) % (opts['-w'])
print(warning, file=sys.stderr)
self.printHelp()
return
else:
self.termWeighting = 'binary'
if '-o' in opts:
self.outfile = opts['-o']
else:
print("*** ERROR: must specify output file (opt: -o FILE) ***",
file=sys.stderr)
self.printHelp()
return
if '-s' in opts and '-p' in opts:
self.indexFile = 'index_withstoplist_withstemming.txt'
self.queriesFile = 'queries_withstoplist_withstemming.txt'
elif '-s' in opts:
self.indexFile = 'index_withstoplist_nostemming.txt'
self.queriesFile = 'queries_withstoplist_nostemming.txt'
elif '-p' in opts:
self.indexFile = 'index_nostoplist_withstemming.txt'
self.queriesFile = 'queries_nostoplist_withstemming.txt'
else:
self.indexFile = 'index_nostoplist_nostemming.txt'
self.queriesFile = 'queries_nostoplist_nostemming.txt'
self.exit = False
def printHelp(self):
progname = sys.argv[0]
progname = progname.split('/')[-1] # strip off extended path
help = __doc__.replace('<PROGNAME>', progname, 1)
print(help, file=sys.stderr)
```
# Second Class(IndexLoader)
<big>**Calss Function**</big>:
1. Use the **Regular Expression** to extract the information from original file and index in a new dictionary.
2. The New Generator dictionary has 2 dimensions.
<big>**Data Structure(Python Dictionary)**</big>:<br>
1. {(term name)->{(term name)-> Count}}
```
#==============================================================================
# Load (precomputed) Index File for (preprocessed) Document Collection
class IndexLoader:
def __init__(self, indexFile):
self.index = {}
docidCountRE = re.compile('(\d+):(\d+)')
f = open(indexFile, 'r')
for line in f:
term = line.split(' ', 1)[0]
self.index[term] = {}
for (docid, count) in docidCountRE.findall(line):
docid = int(docid)
self.index[term][docid] = int(count)
def getIndex(self):
return self.index
```
# Third Class(Queries)
<big>**Calss Function**</big>:
1. Use the **Regular Expression** to extract the information from original file and index in a new dictionary.
2. The New Generator dictionary has 2 or more dimensions.
3. This class is to extract the queries and get the same dataset formate(python dictionary) as IndexLoader
<big>**Data Structure(Dictionary)**</big>:<br>
1. {(qid name)->{(qid name)-> count_num}}
```
class Queries:
def __init__(self, queriesFile):
self.qStore = {}
termCountRE = re.compile('(\w+):(\d+)')
f = open(queriesFile, 'r')
for line in f:
qid = int(line.split(' ', 1)[0])
self.qStore[qid] = {}
for (term, count) in termCountRE.findall(line):
self.qStore[qid][term] = int(count)
def getQuery(self, qid):
if qid in self.qStore:
return self.qStore[qid]
else:
print("*** ERROR: unknown query identifier (\"%s\") ***" % qid, file=sys.stderr)
if type(qid) == type(''):
print('WARNING: query identifiers should be of type: integer', file=sys.stderr)
print(' -- your query identifier is of type: string', file=sys.stderr)
print(' -- program exiting', file=sys.stderr)
def qids(self):
return sorted(self.qStore)
```
# Forth Class(ResultStore)
<big>**Calss Function**</big>:<br>
<big>**Data Structure(Dictionary)**</big>:<br>
```
class ResultStore:
def __init__(self, outfile):
self.outfile = outfile
self.results = []
def store(self, qid, docids):
if len(docids) > 10:
docids = docids[:10]
self.results.append((qid, docids))
def output(self):
with open(self.outfile, 'w') as out:
for (qid, docids) in self.results:
for docid in docids:
print(qid, docid, file=out)
```
# Target Class(Retrieve)
```
# TODO: Finish the assignment part
class Retrieve:
# Create new Retrieve object storing index and termWeighting scheme
def __init__(self,index, termWeighting):
self.index = index
self.termWeighting = termWeighting
# Method performing retrieval for specified query
def forQuery(self, query):
return range(1,11)
```
<big>**Below Code is the Formate of Command(Please customize with on the basis of your requirements)**</big><br><br>
**How to generate command line As you wish**<br>
`sys.argv` is the python reserved variables and applied to `ArgumentParser.parse_args(args=None, namespace=None)` to be the one of the parameters. This is a list of strings, each of which is obviously derived from command line arguments, separated by spaces. We can redefine this list by below command.
We should delete the key parameters like `python` and put the remains into the list.
```
# Example Command Formate:
# > python ir_engine.py -o result.txt
sys.argv = ['ir_engine.py', '-o', 'result.txt']
config = CommandLine()
index = IndexLoader(config.indexFile).getIndex()
```
|
github_jupyter
|
> python ir_engine.py -o result.txt
#==============================================================================
# Importing
import sys, getopt, re
import time
#==============================================================================
# Command line processing
class CommandLine:
def __init__(self):
opts, args = getopt.getopt(sys.argv[1:], 'hspw:o:')
opts = dict(opts)
self.exit = True
if '-h' in opts:
self.printHelp()
return
if len(args) > 0:
print("*** ERROR: no arg files - only options! ***", file=sys.stderr)
self.printHelp()
return
if '-w' in opts:
if opts['-w'] in ('binary', 'tf', 'tfidf'):
self.termWeighting = opts['-w']
else:
warning = (
"*** ERROR: term weighting label (opt: -w LABEL)! ***\n"
" -- value (%s) not recognised!\n"
" -- must be one of: binary / tf / tfidf"
) % (opts['-w'])
print(warning, file=sys.stderr)
self.printHelp()
return
else:
self.termWeighting = 'binary'
if '-o' in opts:
self.outfile = opts['-o']
else:
print("*** ERROR: must specify output file (opt: -o FILE) ***",
file=sys.stderr)
self.printHelp()
return
if '-s' in opts and '-p' in opts:
self.indexFile = 'index_withstoplist_withstemming.txt'
self.queriesFile = 'queries_withstoplist_withstemming.txt'
elif '-s' in opts:
self.indexFile = 'index_withstoplist_nostemming.txt'
self.queriesFile = 'queries_withstoplist_nostemming.txt'
elif '-p' in opts:
self.indexFile = 'index_nostoplist_withstemming.txt'
self.queriesFile = 'queries_nostoplist_withstemming.txt'
else:
self.indexFile = 'index_nostoplist_nostemming.txt'
self.queriesFile = 'queries_nostoplist_nostemming.txt'
self.exit = False
def printHelp(self):
progname = sys.argv[0]
progname = progname.split('/')[-1] # strip off extended path
help = __doc__.replace('<PROGNAME>', progname, 1)
print(help, file=sys.stderr)
#==============================================================================
# Load (precomputed) Index File for (preprocessed) Document Collection
class IndexLoader:
def __init__(self, indexFile):
self.index = {}
docidCountRE = re.compile('(\d+):(\d+)')
f = open(indexFile, 'r')
for line in f:
term = line.split(' ', 1)[0]
self.index[term] = {}
for (docid, count) in docidCountRE.findall(line):
docid = int(docid)
self.index[term][docid] = int(count)
def getIndex(self):
return self.index
class Queries:
def __init__(self, queriesFile):
self.qStore = {}
termCountRE = re.compile('(\w+):(\d+)')
f = open(queriesFile, 'r')
for line in f:
qid = int(line.split(' ', 1)[0])
self.qStore[qid] = {}
for (term, count) in termCountRE.findall(line):
self.qStore[qid][term] = int(count)
def getQuery(self, qid):
if qid in self.qStore:
return self.qStore[qid]
else:
print("*** ERROR: unknown query identifier (\"%s\") ***" % qid, file=sys.stderr)
if type(qid) == type(''):
print('WARNING: query identifiers should be of type: integer', file=sys.stderr)
print(' -- your query identifier is of type: string', file=sys.stderr)
print(' -- program exiting', file=sys.stderr)
def qids(self):
return sorted(self.qStore)
class ResultStore:
def __init__(self, outfile):
self.outfile = outfile
self.results = []
def store(self, qid, docids):
if len(docids) > 10:
docids = docids[:10]
self.results.append((qid, docids))
def output(self):
with open(self.outfile, 'w') as out:
for (qid, docids) in self.results:
for docid in docids:
print(qid, docid, file=out)
# TODO: Finish the assignment part
class Retrieve:
# Create new Retrieve object storing index and termWeighting scheme
def __init__(self,index, termWeighting):
self.index = index
self.termWeighting = termWeighting
# Method performing retrieval for specified query
def forQuery(self, query):
return range(1,11)
# Example Command Formate:
# > python ir_engine.py -o result.txt
sys.argv = ['ir_engine.py', '-o', 'result.txt']
config = CommandLine()
index = IndexLoader(config.indexFile).getIndex()
| 0.292494 | 0.835316 |
# Coronagraph Basics
This set of exercises guides the user through a step-by-step process of simulating NIRCam coronagraphic observations of the HR 8799 exoplanetary system. The goal is to familiarize the user with basic `pynrc` classes and functions relevant to coronagraphy.
```
# Import the usual libraries
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# Enable inline plotting at lower left
%matplotlib inline
```
We will start by first importing `pynrc` along with the `obs_hci` (High Contrast Imaging) class, which lives in the `pynrc.obs_nircam` module.
```
import pynrc
from pynrc import nrc_utils # Variety of useful functions and classes
from pynrc.obs_nircam import obs_hci # High-contrast imaging observation class
# Progress bar
from tqdm.auto import tqdm, trange
# Disable informational messages and only include warnings and higher
pynrc.setup_logging(level='WARN')
```
## Source Definitions
The `obs_hci` class first requires two arguments describing the spectra of the science and reference sources (`sp_sci` and `sp_ref`, respectively. Each argument should be a Pysynphot spectrum already normalized to some known flux. `pynrc` includes built-in functions for generating spectra. The user may use either of these or should feel free to supply their own as long as it meets the requirements.
1. The `pynrc.stellar_spectrum` function provides the simplest way to define a new spectrum:
```python
bp_k = pynrc.bp_2mass('k') # Define bandpass to normalize spectrum
sp_sci = pynrc.stellar_spectrum('F0V', 5.24, 'vegamag', bp_k)
```
You can also be more specific about the stellar properties with `Teff`, `metallicity`, and `log_g` keywords.
```python
sp_sci = pynrc.stellar_spectrum('F0V', 5.24, 'vegamag', bp_k,
Teff=7430, metallicity=-0.47, log_g=4.35)
```
2. Alternatively, the `pynrc.source_spectrum` class ingests spectral information of a given target and generates a model fit to the known photometric SED. Two model routines can be fit. The first is a very simple scale factor that is applied to the input spectrum, while the second takes the input spectrum and adds an IR excess modeled as a modified blackbody function. The user can find the relevant photometric data at http://vizier.u-strasbg.fr/vizier/sed/ and click download data as a VOTable.
```
# Define 2MASS Ks bandpass and source information
bp_k = pynrc.bp_2mass('k')
# Science source, dist, age, sptype, Teff, [Fe/H], log_g, mag, band
args_sources = [('HR 8799', 39.0, 30, 'F0V', 7430, -0.47, 4.35, 5.24, bp_k)]
# References source, sptype, Teff, [Fe/H], log_g, mag, band
ref_sources = [('HD 220657', 'F8III', 5888, -0.01, 3.22, 3.04, bp_k)]
name_sci, dist_sci, age, spt_sci, Teff_sci, feh_sci, logg_sci, mag_sci, bp_sci = args_sources[0]
name_ref, spt_ref, Teff_ref, feh_ref, logg_ref, mag_ref, bp_ref = ref_sources[0]
# For the purposes of simplicity, we will use pynrc.stellar_spectrum()
sp_sci = pynrc.stellar_spectrum(spt_sci, mag_sci, 'vegamag', bp_sci,
Teff=Teff_sci, metallicity=feh_sci, log_g=logg_sci)
sp_sci.name = name_sci
# And the refernece source
sp_ref = pynrc.stellar_spectrum(spt_ref, mag_ref, 'vegamag', bp_ref,
Teff=Teff_ref, metallicity=feh_ref, log_g=logg_ref)
sp_ref.name = name_ref
# Plot the two spectra
fig, ax = plt.subplots(1,1, figsize=(8,5))
xr = [2.5,5.5]
for sp in [sp_sci, sp_ref]:
w = sp.wave / 1e4
ind = (w>=xr[0]) & (w<=xr[1])
sp.convert('Jy')
f = sp.flux / np.interp(4.0, w, sp.flux)
ax.semilogy(w[ind], f[ind], lw=1.5, label=sp.name)
ax.set_ylabel('Flux (Jy) normalized at 4 $\mu m$')
sp.convert('flam')
ax.set_xlim(xr)
ax.set_xlabel(r'Wavelength ($\mu m$)')
ax.set_title('Spectral Sources')
# Overplot Filter Bandpass
bp = pynrc.read_filter('F444W', 'CIRCLYOT', 'MASK430R')
ax2 = ax.twinx()
ax2.plot(bp.wave/1e4, bp.throughput, color='C2', label=bp.name+' Bandpass')
ax2.set_ylim([0,0.8])
ax2.set_xlim(xr)
ax2.set_ylabel('Bandpass Throughput')
ax.legend(loc='upper left')
ax2.legend(loc='upper right')
fig.tight_layout()
```
## Initialize Observation
Now we will initialize the high-contrast imaging class `pynrc.obs_hci` using the spectral objects and various other settings. The `obs_hci` object is a subclass of the more generalized `NIRCam` class. It implements new settings and functions specific to high-contrast imaging observations for corongraphy and direct imaging.
For this tutorial, we want to observe these targets using the `MASK430R` coronagraph in the `F444W` filter. All circular coronagraphic masks such as the `430R` (R=round) should be paired with the `CIRCLYOT` pupil element, whereas wedge/bar masks are paired with `WEDGELYOT` pupil. Observations in the LW channel are most commonly observed in `WINDOW` mode with a 320x320 detector subarray size. Full detector sizes are also available.
The PSF simulation size (`fov_pix` keyword) should also be of similar size as the subarray window (recommend avoiding anything above `fov_pix=1024` due to computation time and memory usage). Use odd numbers to center the PSF in the middle of the pixel. If `fov_pix` is specified as even, then PSFs get centered at the corners. This distinction really only matter for unocculted observations, (ie., where the PSF flux is concentrated in a tight central core).
The `obs_hci` class also allows one to specify WFE drift values in terms of nm RMS. The `wfe_ref_drift` parameter defaults to 2nm between
We also need to specify a WFE drift value (`wfe_ref_drift` parameter), which defines the anticipated drift in nm between the science and reference sources. For the moment, let's intialize with a value of 0nm. This prevents an initially long process by which `pynrc` calculates changes made to the PSF over a wide range of drift values. This process only happens once, then stores the resulting coefficient residuals to disk for future quick retrieval.
Extended disk models can also be specified upon initialization using the `disk_params` keyword (which should be a dictionary).
The `large_grid` keyword controls the quality of PSF variations near and under the corongraphic masks. If False, then a sparse grid is used (faster to generate during initial calculations; less disk space and memory). If True, then a higher density grid is calculated (~2.5 hrs for initial creation; ~3.5x larger sizes), which produces improved PSFs at the SGD positions. For purposes of this demo, we set it to False.
```
# The initial call make take some time, as it will need to generate coefficients
# to calculate PSF variations across wavelength, WFE drift, and mask location
filt, mask, pupil = ('F444W', 'MASK430R', 'CIRCLYOT')
wind_mode, subsize = ('WINDOW', 320)
fov_pix, oversample = (321, 2)
obs = pynrc.obs_hci(sp_sci, dist_sci, sp_ref=sp_ref, use_ap_info=False,
filter=filt, image_mask=mask, pupil_mask=pupil,
wind_mode=wind_mode, xpix=subsize, ypix=subsize,
fov_pix=fov_pix, oversample=oversample, large_grid=True)
```
Some information for the reference observation is stored in the attribute `obs.Detector_ref`, which is a separate NIRCam `DetectorOps` class that we use to keep track of the detector a multiaccum configurations, which may differe between science and reference observations. Settings for the reference observation can be updated using the `obs.gen_ref()` function.
```
# Set default WFE drift values between Roll1, Roll2, and Ref
# WFE drift amount between rolls
obs.wfe_roll_drift = 2
# Drift amount between Roll 1 and Reference.
obs.wfe_ref_drift = 5
```
## Exposure Settings
Optimization of exposure settings are demonstrated in another tutorial, so we will not repeat that process here. We can assume the optimization process was performed elsewhere to choose the `DEEP8` pattern with 16 groups and 5 total integrations. These settings apply to each roll position of the science observation sequence as well as the for the reference observation.
```
# Update both the science and reference observations
obs.update_detectors(read_mode='DEEP8', ngroup=16, nint=5, verbose=True)
obs.gen_ref_det(read_mode='DEEP8', ngroup=16, nint=5)
```
## Add Planets
There are four known giant planets orbiting HR 8799. Ideally, we would like to position them at their predicted locations on the anticipated observation date. For this case, we choose a plausible observation date of November 1, 2022. To convert between $(x,y)$ and $(r,\theta)$, use the `nrc_utils.xy_to_rtheta` and `nrc_utils.rtheta_to_xy` functions.
When adding the planets, it doesn't matter too much which exoplanet model spectrum we decide to use since the spectra are still fairly unconstrained at these wavelengths. We do know roughly the planets' luminosities, so we can simply choose some reasonable model and renormalize it to the appropriate filter brightness.
Their are a few exoplanet models available to `pynrc` (SB12, BEX, COND), but let's choose those from Spiegel & Burrows (2012).
```
# Projected locations for date 11/01/2022
# These are prelimary positions, but within constrained orbital parameters
loc_list = [(-1.625, 0.564), (0.319, 0.886), (0.588, -0.384), (0.249, 0.294)]
# Estimated magnitudes within F444W filter
pmags = [16.0, 15.0, 14.6, 14.7]
# Add planet information to observation class.
# These are stored in obs.planets.
# Can be cleared using obs.delete_planets().
obs.delete_planets()
for i, loc in enumerate(loc_list):
obs.add_planet(model='SB12', mass=10, entropy=13, age=age, xy=loc, runits='arcsec',
renorm_args=(pmags[i], 'vegamag', obs.bandpass))
# Generate and plot a noiseless slope image to verify orientation
PA1 = 85 # Telescope V3 PA
PA_offset = -1*PA1 # Image field is rotated opposite direction
im_planets = obs.gen_planets_image(PA_offset=PA_offset, return_oversample=False)
from matplotlib.patches import Circle
from pynrc.nrc_utils import plotAxes
from pynrc.obs_nircam import get_cen_offsets
fig, ax = plt.subplots(figsize=(6,6))
xasec = obs.det_info['xpix'] * obs.pixelscale
yasec = obs.det_info['ypix'] * obs.pixelscale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
xylim = 4
vmin = 0
vmax = 0.5*im_planets.max()
ax.imshow(im_planets, extent=extent, vmin=vmin, vmax=vmax)
# Overlay the coronagraphic mask
detid = obs.Detector.detid
im_mask = obs.mask_images['DETSAMP']
# Do some masked transparency overlays
masked = np.ma.masked_where(im_mask>0.98*im_mask.max(), im_mask)
ax.imshow(1-masked, extent=extent, alpha=0.3, cmap='Greys_r', vmin=-0.5)
for loc in loc_list:
xc, yc = get_cen_offsets(obs, idl_offset=loc, PA_offset=PA_offset)
circle = Circle((xc,yc), radius=xylim/20., alpha=0.7, lw=1, edgecolor='red', facecolor='none')
ax.add_artist(circle)
xlim = ylim = np.array([-1,1])*xylim
xlim = xlim + obs.bar_offset
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.set_title('{} planets -- {} {}'.format(sp_sci.name, obs.filter, obs.image_mask))
color = 'grey'
ax.tick_params(axis='both', color=color, which='both')
for k in ax.spines.keys():
ax.spines[k].set_color(color)
plotAxes(ax, width=1, headwidth=5, alength=0.15, angle=PA_offset,
position=(0.1,0.1), label1='E', label2='N')
fig.tight_layout()
```
As we can see, even with "perfect PSF subtraction" and no noise, it's difficult to make out planet _e_ despite providing a similiar magnitude as _d_. This is primarily due to its location relative to the occulting mask reducing throughput along with confusion of bright diffraction spots from the other nearby sources.
**Note**: the circled regions of the expected planet positions don't perfectly align with the PSFs, because the LW wavelengths have a slight dispersion through the Lyot mask material.
## Estimated Performance
Now we are ready to determine contrast performance and sensitivites as a function of distance from the star.
### 1. Roll-Subtracted Images
First, we will create a quick simulated roll-subtracted image using the in `gen_roll_image` method. For the selected observation date of 11/1/2022, APT shows a PA range of 84$^{\circ}$ to 96$^{\circ}$. So, we'll assume Roll 1 has PA1=85, while Roll 2 has PA2=95. In this case, "roll subtraction" simply creates two science images observed at different parallactic angles, then subtracts the same reference observation from each. The two results are then de-rotated to a common PA=0 and averaged.
There is also the option to create ADI images, where the other roll position becomes the reference star by setting `no_ref=True`.
Images generated with the `gen_roll_image` method will also include random pointing offsets described in the `pointing_info` dictionary. These can be generated by calling `obs.gen_pointing_offsets()`.
```
# Create pointing offset with a random seed for reproducibility
obs.gen_pointing_offsets(rand_seed=1234, verbose=True)
# Cycle through a few WFE drift values
wfe_list = [0,5,10]
# PA values for each roll
PA1, PA2 = (85,95)
# A dictionary of HDULists
hdul_dict = {}
for wfe_drift in tqdm(wfe_list):
# Assume drift between Roll1 and Roll2 is 2 nm WFE
wfe_roll_drift = 0 if wfe_drift<2 else 2
hdulist = obs.gen_roll_image(PA1=PA1, PA2=PA2,
wfe_ref_drift=wfe_drift, wfe_roll_drift=wfe_roll_drift)
hdul_dict[wfe_drift] = hdulist
from pynrc.nb_funcs import plot_hdulist
from matplotlib.patches import Circle
fig, axes = plt.subplots(1,3, figsize=(14,4.3))
xylim = 2.5
xlim = ylim = np.array([-1,1])*xylim
for j, wfe_drift in enumerate(wfe_list):
ax = axes[j]
hdul = hdul_dict[wfe_drift]
plot_hdulist(hdul, xr=xlim, yr=ylim, ax=ax, vmin=0, vmax=10)
# Location of planet
for loc in loc_list:
circle = Circle(loc, radius=xylim/15., lw=1, edgecolor='red', facecolor='none')
ax.add_artist(circle)
ax.set_title('$\Delta$WFE = {:.0f} nm'.format(wfe_drift))
nrc_utils.plotAxes(ax, width=1, headwidth=5, alength=0.15, position=(0.9,0.1), label1='E', label2='N')
fig.suptitle('{} -- {} {}'.format(name_sci, obs.filter, obs.image_mask), fontsize=14)
fig.tight_layout()
fig.subplots_adjust(top=0.85)
```
The majority of the speckle noise here originates from small pointing offsets between the roll positions and reference observation. These PSF centering mismatches dominate the subtraction residuals compared to the WFE drift variations. Small-grid dithers acquired during the reference observations should produce improved subtraction performance through PCA/KLIP algorithms. To get a better idea of the post-processing performance, we re-run these observations assuming perfect target acquisition.
```
hdul_dict = {}
for wfe_drift in tqdm(wfe_list):
# Assume drift between Roll1 and Roll2 is 2 nm WFE
wfe_roll_drift = 0 if wfe_drift<2 else 2
# Assume perfect centering by setting xyoff_***=(0,0)
hdulist = obs.gen_roll_image(PA1=PA1, PA2=PA2,
wfe_ref_drift=wfe_drift, wfe_roll_drift=wfe_roll_drift,
xyoff_roll1=(0,0), xyoff_roll2=(0,0), xyoff_ref=(0,0))
hdul_dict[wfe_drift] = hdulist
from pynrc.nb_funcs import plot_hdulist
from matplotlib.patches import Circle
fig, axes = plt.subplots(1,3, figsize=(14,4.3))
xylim = 2.5
xlim = ylim = np.array([-1,1])*xylim
for j, wfe_drift in enumerate(wfe_list):
ax = axes[j]
hdul = hdul_dict[wfe_drift]
plot_hdulist(hdul, xr=xlim, yr=ylim, ax=ax, vmin=0, vmax=10)
# Location of planet
for loc in loc_list:
circle = Circle(loc, radius=xylim/15., lw=1, edgecolor='red', facecolor='none')
ax.add_artist(circle)
ax.set_title('$\Delta$WFE = {:.0f} nm'.format(wfe_drift))
nrc_utils.plotAxes(ax, width=1, headwidth=5, alength=0.15, position=(0.9,0.1), label1='E', label2='N')
fig.suptitle('Ideal TA ({} -- {} {})'.format(name_sci, obs.filter, obs.image_mask), fontsize=14)
fig.tight_layout()
fig.subplots_adjust(top=0.85)
```
### 2. Contrast Curves
Next, we will cycle through a few WFE drift values to get an idea of potential predicted sensitivity curves. The `calc_contrast` method returns a tuple of three arrays:
1. The radius in arcsec.
2. The n-sigma contrast.
3. The n-sigma magnitude sensitivity limit (vega mag).
In order to better understand the relative contributes of WFE drift to contrast loss, we're going to ignore telescope pointing offsets by explicitly passing `xoff_* = (0,0)` keywords for Roll 1, Roll 2, and Ref observations.
```
nsig = 5
roll_angle = np.abs(PA2 - PA1)
curves = []
for wfe_drift in tqdm(wfe_list):
# Assume drift between Roll1 and Roll2 is 2 nm WFE
wfe_roll_drift = 0 if wfe_drift<2 else 2
# Generate contrast curves
result = obs.calc_contrast(roll_angle=roll_angle, nsig=nsig,
wfe_ref_drift=wfe_drift, wfe_roll_drift=wfe_roll_drift,
xyoff_roll1=(0,0), xyoff_roll2=(0,0), xyoff_ref=(0,0))
curves.append(result)
from pynrc.nb_funcs import plot_contrasts, plot_planet_patches, plot_contrasts_mjup, update_yscale
import matplotlib.patches as mpatches
# fig, ax = plt.subplots(figsize=(8,5))
fig, axes = plt.subplots(1,2, figsize=(14,4.5))
xr=[0,5]
yr=[24,8]
# 1a. Plot contrast curves and set x/y limits
ax = axes[0]
ax, ax2, ax3 = plot_contrasts(curves, nsig, wfe_list, obs=obs,
xr=xr, yr=yr, ax=ax, return_axes=True)
# 1b. Plot the locations of exoplanet companions
label = 'Companions ({})'.format(filt)
planet_dist = [np.sqrt(x**2+y**2) for x,y in loc_list]
ax.plot(planet_dist, pmags, marker='o', ls='None', label=label, color='k', zorder=10)
# 1c. Plot Spiegel & Burrows (2012) exoplanet fluxes (Hot Start)
plot_planet_patches(ax, obs, age=age, entropy=13, av_vals=None)
ax.legend(ncol=2)
# 2. Plot in terms of MJup using COND models
ax = axes[1]
ax1, ax2, ax3 = plot_contrasts_mjup(curves, nsig, wfe_list, obs=obs, age=age,
ax=ax, twin_ax=True, xr=xr, yr=None, return_axes=True)
yr = [0.03,100]
for xval in planet_dist:
ax.plot([xval,xval],yr, lw=1, ls='--', color='k', alpha=0.7)
update_yscale(ax1, 'log', ylim=yr)
yr_temp = np.array(ax1.get_ylim()) * 318.0
update_yscale(ax2, 'log', ylim=yr_temp)
# ax.set_yscale('log')
# ax.set_ylim([0.08,100])
ax.legend(loc='upper right', title='BEX ({:.0f} Myr)'.format(age))
fig.suptitle('{} ({} + {})'.format(name_sci, obs.filter, obs.image_mask), fontsize=16)
fig.tight_layout()
fig.subplots_adjust(top=0.85, bottom=0.1 , left=0.05, right=0.97)
```
### 3. Saturation Levels
Create an image showing level of saturation for each pixel. For NIRCam, saturation is important to track for purposes of accurate slope fits and persistence correction. In this case, we will plot the saturation levels both at `NGROUP=2` and `NGROUP=obs.det_info['ngroup']`. Saturation is defined at 80% well level, but can be modified using the `well_frac` keyword.
We want to perform this analysis for both science and reference targets.
```
# Saturation limits
ng_max = obs.det_info['ngroup']
sp_flat = pynrc.stellar_spectrum('flat')
print('NGROUP=2')
_ = obs.sat_limits(sp=sp_flat,ngroup=2,verbose=True)
print('')
print(f'NGROUP={ng_max}')
_ = obs.sat_limits(sp=sp_flat,ngroup=ng_max,verbose=True)
mag_sci = obs.star_flux('vegamag')
mag_ref = obs.star_flux('vegamag', sp=obs.sp_ref)
print('')
print(f'{obs.sp_sci.name} flux at {obs.filter}: {mag_sci:0.2f} mags')
print(f'{obs.sp_ref.name} flux at {obs.filter}: {mag_ref:0.2f} mags')
```
In this case, we don't expect HR 8799 to saturate. However, the reference source should have some saturated pixels before the end of an integration.
```
# Well level of each pixel for science source
sci_levels1 = obs.saturation_levels(ngroup=2, exclude_planets=True)
sci_levels2 = obs.saturation_levels(ngroup=ng_max, exclude_planets=True)
# Which pixels are saturated? Assume sat level at 90% full well.
sci_mask1 = sci_levels1 > 0.9
sci_mask2 = sci_levels2 > 0.9
# Well level of each pixel for reference source
ref_levels1 = obs.saturation_levels(ngroup=2, do_ref=True)
ref_levels2 = obs.saturation_levels(ngroup=ng_max, do_ref=True)
# Which pixels are saturated? Assume sat level at 90% full well.
ref_mask1 = ref_levels1 > 0.9
ref_mask2 = ref_levels2 > 0.9
# How many saturated pixels?
nsat1_sci = len(sci_levels1[sci_mask1])
nsat2_sci = len(sci_levels2[sci_mask2])
print(obs.sp_sci.name)
print('{} saturated pixel at NGROUP=2'.format(nsat1_sci))
print('{} saturated pixel at NGROUP={}'.format(nsat2_sci,ng_max))
# How many saturated pixels?
nsat1_ref = len(ref_levels1[ref_mask1])
nsat2_ref = len(ref_levels2[ref_mask2])
print('')
print(obs.sp_ref.name)
print('{} saturated pixel at NGROUP=2'.format(nsat1_ref))
print('{} saturated pixel at NGROUP={}'.format(nsat2_ref,ng_max))
# Saturation Mask for science target
nsat1, nsat2 = (nsat1_sci, nsat2_sci)
sat_mask1, sat_mask2 = (sci_mask1, sci_mask2)
sp = obs.sp_sci
# Only display saturation masks if there are saturated pixels
if nsat2 > 0:
fig, axes = plt.subplots(1,2, figsize=(10,5))
xasec = obs.det_info['xpix'] * obs.pixelscale
yasec = obs.det_info['ypix'] * obs.pixelscale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
axes[0].imshow(sat_mask1, extent=extent)
axes[1].imshow(sat_mask2, extent=extent)
axes[0].set_title('{} Saturation (NGROUP=2)'.format(sp.name))
axes[1].set_title('{} Saturation (NGROUP={})'.format(sp.name,ng_max))
for ax in axes:
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.tick_params(axis='both', color='white', which='both')
for k in ax.spines.keys():
ax.spines[k].set_color('white')
fig.tight_layout()
else:
print('No saturation detected.')
# Saturation Mask for reference
nsat1, nsat2 = (nsat1_ref, nsat2_ref)
sat_mask1, sat_mask2 = (ref_mask1, ref_mask2)
sp = obs.sp_ref
# Only display saturation masks if there are saturated pixels
if nsat2 > 0:
fig, axes = plt.subplots(1,2, figsize=(10,5))
xasec = obs.det_info['xpix'] * obs.pixelscale
yasec = obs.det_info['ypix'] * obs.pixelscale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
axes[0].imshow(sat_mask1, extent=extent)
axes[1].imshow(sat_mask2, extent=extent)
axes[0].set_title(f'{sp.name} Saturation (NGROUP=2)')
axes[1].set_title(f'{sp.name} Saturation (NGROUP={ng_max})')
for ax in axes:
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.tick_params(axis='both', color='white', which='both')
for k in ax.spines.keys():
ax.spines[k].set_color('white')
fig.tight_layout()
else:
print('No saturation detected.')
```
|
github_jupyter
|
# Import the usual libraries
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# Enable inline plotting at lower left
%matplotlib inline
import pynrc
from pynrc import nrc_utils # Variety of useful functions and classes
from pynrc.obs_nircam import obs_hci # High-contrast imaging observation class
# Progress bar
from tqdm.auto import tqdm, trange
# Disable informational messages and only include warnings and higher
pynrc.setup_logging(level='WARN')
bp_k = pynrc.bp_2mass('k') # Define bandpass to normalize spectrum
sp_sci = pynrc.stellar_spectrum('F0V', 5.24, 'vegamag', bp_k)
sp_sci = pynrc.stellar_spectrum('F0V', 5.24, 'vegamag', bp_k,
Teff=7430, metallicity=-0.47, log_g=4.35)
# Define 2MASS Ks bandpass and source information
bp_k = pynrc.bp_2mass('k')
# Science source, dist, age, sptype, Teff, [Fe/H], log_g, mag, band
args_sources = [('HR 8799', 39.0, 30, 'F0V', 7430, -0.47, 4.35, 5.24, bp_k)]
# References source, sptype, Teff, [Fe/H], log_g, mag, band
ref_sources = [('HD 220657', 'F8III', 5888, -0.01, 3.22, 3.04, bp_k)]
name_sci, dist_sci, age, spt_sci, Teff_sci, feh_sci, logg_sci, mag_sci, bp_sci = args_sources[0]
name_ref, spt_ref, Teff_ref, feh_ref, logg_ref, mag_ref, bp_ref = ref_sources[0]
# For the purposes of simplicity, we will use pynrc.stellar_spectrum()
sp_sci = pynrc.stellar_spectrum(spt_sci, mag_sci, 'vegamag', bp_sci,
Teff=Teff_sci, metallicity=feh_sci, log_g=logg_sci)
sp_sci.name = name_sci
# And the refernece source
sp_ref = pynrc.stellar_spectrum(spt_ref, mag_ref, 'vegamag', bp_ref,
Teff=Teff_ref, metallicity=feh_ref, log_g=logg_ref)
sp_ref.name = name_ref
# Plot the two spectra
fig, ax = plt.subplots(1,1, figsize=(8,5))
xr = [2.5,5.5]
for sp in [sp_sci, sp_ref]:
w = sp.wave / 1e4
ind = (w>=xr[0]) & (w<=xr[1])
sp.convert('Jy')
f = sp.flux / np.interp(4.0, w, sp.flux)
ax.semilogy(w[ind], f[ind], lw=1.5, label=sp.name)
ax.set_ylabel('Flux (Jy) normalized at 4 $\mu m$')
sp.convert('flam')
ax.set_xlim(xr)
ax.set_xlabel(r'Wavelength ($\mu m$)')
ax.set_title('Spectral Sources')
# Overplot Filter Bandpass
bp = pynrc.read_filter('F444W', 'CIRCLYOT', 'MASK430R')
ax2 = ax.twinx()
ax2.plot(bp.wave/1e4, bp.throughput, color='C2', label=bp.name+' Bandpass')
ax2.set_ylim([0,0.8])
ax2.set_xlim(xr)
ax2.set_ylabel('Bandpass Throughput')
ax.legend(loc='upper left')
ax2.legend(loc='upper right')
fig.tight_layout()
# The initial call make take some time, as it will need to generate coefficients
# to calculate PSF variations across wavelength, WFE drift, and mask location
filt, mask, pupil = ('F444W', 'MASK430R', 'CIRCLYOT')
wind_mode, subsize = ('WINDOW', 320)
fov_pix, oversample = (321, 2)
obs = pynrc.obs_hci(sp_sci, dist_sci, sp_ref=sp_ref, use_ap_info=False,
filter=filt, image_mask=mask, pupil_mask=pupil,
wind_mode=wind_mode, xpix=subsize, ypix=subsize,
fov_pix=fov_pix, oversample=oversample, large_grid=True)
# Set default WFE drift values between Roll1, Roll2, and Ref
# WFE drift amount between rolls
obs.wfe_roll_drift = 2
# Drift amount between Roll 1 and Reference.
obs.wfe_ref_drift = 5
# Update both the science and reference observations
obs.update_detectors(read_mode='DEEP8', ngroup=16, nint=5, verbose=True)
obs.gen_ref_det(read_mode='DEEP8', ngroup=16, nint=5)
# Projected locations for date 11/01/2022
# These are prelimary positions, but within constrained orbital parameters
loc_list = [(-1.625, 0.564), (0.319, 0.886), (0.588, -0.384), (0.249, 0.294)]
# Estimated magnitudes within F444W filter
pmags = [16.0, 15.0, 14.6, 14.7]
# Add planet information to observation class.
# These are stored in obs.planets.
# Can be cleared using obs.delete_planets().
obs.delete_planets()
for i, loc in enumerate(loc_list):
obs.add_planet(model='SB12', mass=10, entropy=13, age=age, xy=loc, runits='arcsec',
renorm_args=(pmags[i], 'vegamag', obs.bandpass))
# Generate and plot a noiseless slope image to verify orientation
PA1 = 85 # Telescope V3 PA
PA_offset = -1*PA1 # Image field is rotated opposite direction
im_planets = obs.gen_planets_image(PA_offset=PA_offset, return_oversample=False)
from matplotlib.patches import Circle
from pynrc.nrc_utils import plotAxes
from pynrc.obs_nircam import get_cen_offsets
fig, ax = plt.subplots(figsize=(6,6))
xasec = obs.det_info['xpix'] * obs.pixelscale
yasec = obs.det_info['ypix'] * obs.pixelscale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
xylim = 4
vmin = 0
vmax = 0.5*im_planets.max()
ax.imshow(im_planets, extent=extent, vmin=vmin, vmax=vmax)
# Overlay the coronagraphic mask
detid = obs.Detector.detid
im_mask = obs.mask_images['DETSAMP']
# Do some masked transparency overlays
masked = np.ma.masked_where(im_mask>0.98*im_mask.max(), im_mask)
ax.imshow(1-masked, extent=extent, alpha=0.3, cmap='Greys_r', vmin=-0.5)
for loc in loc_list:
xc, yc = get_cen_offsets(obs, idl_offset=loc, PA_offset=PA_offset)
circle = Circle((xc,yc), radius=xylim/20., alpha=0.7, lw=1, edgecolor='red', facecolor='none')
ax.add_artist(circle)
xlim = ylim = np.array([-1,1])*xylim
xlim = xlim + obs.bar_offset
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.set_title('{} planets -- {} {}'.format(sp_sci.name, obs.filter, obs.image_mask))
color = 'grey'
ax.tick_params(axis='both', color=color, which='both')
for k in ax.spines.keys():
ax.spines[k].set_color(color)
plotAxes(ax, width=1, headwidth=5, alength=0.15, angle=PA_offset,
position=(0.1,0.1), label1='E', label2='N')
fig.tight_layout()
# Create pointing offset with a random seed for reproducibility
obs.gen_pointing_offsets(rand_seed=1234, verbose=True)
# Cycle through a few WFE drift values
wfe_list = [0,5,10]
# PA values for each roll
PA1, PA2 = (85,95)
# A dictionary of HDULists
hdul_dict = {}
for wfe_drift in tqdm(wfe_list):
# Assume drift between Roll1 and Roll2 is 2 nm WFE
wfe_roll_drift = 0 if wfe_drift<2 else 2
hdulist = obs.gen_roll_image(PA1=PA1, PA2=PA2,
wfe_ref_drift=wfe_drift, wfe_roll_drift=wfe_roll_drift)
hdul_dict[wfe_drift] = hdulist
from pynrc.nb_funcs import plot_hdulist
from matplotlib.patches import Circle
fig, axes = plt.subplots(1,3, figsize=(14,4.3))
xylim = 2.5
xlim = ylim = np.array([-1,1])*xylim
for j, wfe_drift in enumerate(wfe_list):
ax = axes[j]
hdul = hdul_dict[wfe_drift]
plot_hdulist(hdul, xr=xlim, yr=ylim, ax=ax, vmin=0, vmax=10)
# Location of planet
for loc in loc_list:
circle = Circle(loc, radius=xylim/15., lw=1, edgecolor='red', facecolor='none')
ax.add_artist(circle)
ax.set_title('$\Delta$WFE = {:.0f} nm'.format(wfe_drift))
nrc_utils.plotAxes(ax, width=1, headwidth=5, alength=0.15, position=(0.9,0.1), label1='E', label2='N')
fig.suptitle('{} -- {} {}'.format(name_sci, obs.filter, obs.image_mask), fontsize=14)
fig.tight_layout()
fig.subplots_adjust(top=0.85)
hdul_dict = {}
for wfe_drift in tqdm(wfe_list):
# Assume drift between Roll1 and Roll2 is 2 nm WFE
wfe_roll_drift = 0 if wfe_drift<2 else 2
# Assume perfect centering by setting xyoff_***=(0,0)
hdulist = obs.gen_roll_image(PA1=PA1, PA2=PA2,
wfe_ref_drift=wfe_drift, wfe_roll_drift=wfe_roll_drift,
xyoff_roll1=(0,0), xyoff_roll2=(0,0), xyoff_ref=(0,0))
hdul_dict[wfe_drift] = hdulist
from pynrc.nb_funcs import plot_hdulist
from matplotlib.patches import Circle
fig, axes = plt.subplots(1,3, figsize=(14,4.3))
xylim = 2.5
xlim = ylim = np.array([-1,1])*xylim
for j, wfe_drift in enumerate(wfe_list):
ax = axes[j]
hdul = hdul_dict[wfe_drift]
plot_hdulist(hdul, xr=xlim, yr=ylim, ax=ax, vmin=0, vmax=10)
# Location of planet
for loc in loc_list:
circle = Circle(loc, radius=xylim/15., lw=1, edgecolor='red', facecolor='none')
ax.add_artist(circle)
ax.set_title('$\Delta$WFE = {:.0f} nm'.format(wfe_drift))
nrc_utils.plotAxes(ax, width=1, headwidth=5, alength=0.15, position=(0.9,0.1), label1='E', label2='N')
fig.suptitle('Ideal TA ({} -- {} {})'.format(name_sci, obs.filter, obs.image_mask), fontsize=14)
fig.tight_layout()
fig.subplots_adjust(top=0.85)
nsig = 5
roll_angle = np.abs(PA2 - PA1)
curves = []
for wfe_drift in tqdm(wfe_list):
# Assume drift between Roll1 and Roll2 is 2 nm WFE
wfe_roll_drift = 0 if wfe_drift<2 else 2
# Generate contrast curves
result = obs.calc_contrast(roll_angle=roll_angle, nsig=nsig,
wfe_ref_drift=wfe_drift, wfe_roll_drift=wfe_roll_drift,
xyoff_roll1=(0,0), xyoff_roll2=(0,0), xyoff_ref=(0,0))
curves.append(result)
from pynrc.nb_funcs import plot_contrasts, plot_planet_patches, plot_contrasts_mjup, update_yscale
import matplotlib.patches as mpatches
# fig, ax = plt.subplots(figsize=(8,5))
fig, axes = plt.subplots(1,2, figsize=(14,4.5))
xr=[0,5]
yr=[24,8]
# 1a. Plot contrast curves and set x/y limits
ax = axes[0]
ax, ax2, ax3 = plot_contrasts(curves, nsig, wfe_list, obs=obs,
xr=xr, yr=yr, ax=ax, return_axes=True)
# 1b. Plot the locations of exoplanet companions
label = 'Companions ({})'.format(filt)
planet_dist = [np.sqrt(x**2+y**2) for x,y in loc_list]
ax.plot(planet_dist, pmags, marker='o', ls='None', label=label, color='k', zorder=10)
# 1c. Plot Spiegel & Burrows (2012) exoplanet fluxes (Hot Start)
plot_planet_patches(ax, obs, age=age, entropy=13, av_vals=None)
ax.legend(ncol=2)
# 2. Plot in terms of MJup using COND models
ax = axes[1]
ax1, ax2, ax3 = plot_contrasts_mjup(curves, nsig, wfe_list, obs=obs, age=age,
ax=ax, twin_ax=True, xr=xr, yr=None, return_axes=True)
yr = [0.03,100]
for xval in planet_dist:
ax.plot([xval,xval],yr, lw=1, ls='--', color='k', alpha=0.7)
update_yscale(ax1, 'log', ylim=yr)
yr_temp = np.array(ax1.get_ylim()) * 318.0
update_yscale(ax2, 'log', ylim=yr_temp)
# ax.set_yscale('log')
# ax.set_ylim([0.08,100])
ax.legend(loc='upper right', title='BEX ({:.0f} Myr)'.format(age))
fig.suptitle('{} ({} + {})'.format(name_sci, obs.filter, obs.image_mask), fontsize=16)
fig.tight_layout()
fig.subplots_adjust(top=0.85, bottom=0.1 , left=0.05, right=0.97)
# Saturation limits
ng_max = obs.det_info['ngroup']
sp_flat = pynrc.stellar_spectrum('flat')
print('NGROUP=2')
_ = obs.sat_limits(sp=sp_flat,ngroup=2,verbose=True)
print('')
print(f'NGROUP={ng_max}')
_ = obs.sat_limits(sp=sp_flat,ngroup=ng_max,verbose=True)
mag_sci = obs.star_flux('vegamag')
mag_ref = obs.star_flux('vegamag', sp=obs.sp_ref)
print('')
print(f'{obs.sp_sci.name} flux at {obs.filter}: {mag_sci:0.2f} mags')
print(f'{obs.sp_ref.name} flux at {obs.filter}: {mag_ref:0.2f} mags')
# Well level of each pixel for science source
sci_levels1 = obs.saturation_levels(ngroup=2, exclude_planets=True)
sci_levels2 = obs.saturation_levels(ngroup=ng_max, exclude_planets=True)
# Which pixels are saturated? Assume sat level at 90% full well.
sci_mask1 = sci_levels1 > 0.9
sci_mask2 = sci_levels2 > 0.9
# Well level of each pixel for reference source
ref_levels1 = obs.saturation_levels(ngroup=2, do_ref=True)
ref_levels2 = obs.saturation_levels(ngroup=ng_max, do_ref=True)
# Which pixels are saturated? Assume sat level at 90% full well.
ref_mask1 = ref_levels1 > 0.9
ref_mask2 = ref_levels2 > 0.9
# How many saturated pixels?
nsat1_sci = len(sci_levels1[sci_mask1])
nsat2_sci = len(sci_levels2[sci_mask2])
print(obs.sp_sci.name)
print('{} saturated pixel at NGROUP=2'.format(nsat1_sci))
print('{} saturated pixel at NGROUP={}'.format(nsat2_sci,ng_max))
# How many saturated pixels?
nsat1_ref = len(ref_levels1[ref_mask1])
nsat2_ref = len(ref_levels2[ref_mask2])
print('')
print(obs.sp_ref.name)
print('{} saturated pixel at NGROUP=2'.format(nsat1_ref))
print('{} saturated pixel at NGROUP={}'.format(nsat2_ref,ng_max))
# Saturation Mask for science target
nsat1, nsat2 = (nsat1_sci, nsat2_sci)
sat_mask1, sat_mask2 = (sci_mask1, sci_mask2)
sp = obs.sp_sci
# Only display saturation masks if there are saturated pixels
if nsat2 > 0:
fig, axes = plt.subplots(1,2, figsize=(10,5))
xasec = obs.det_info['xpix'] * obs.pixelscale
yasec = obs.det_info['ypix'] * obs.pixelscale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
axes[0].imshow(sat_mask1, extent=extent)
axes[1].imshow(sat_mask2, extent=extent)
axes[0].set_title('{} Saturation (NGROUP=2)'.format(sp.name))
axes[1].set_title('{} Saturation (NGROUP={})'.format(sp.name,ng_max))
for ax in axes:
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.tick_params(axis='both', color='white', which='both')
for k in ax.spines.keys():
ax.spines[k].set_color('white')
fig.tight_layout()
else:
print('No saturation detected.')
# Saturation Mask for reference
nsat1, nsat2 = (nsat1_ref, nsat2_ref)
sat_mask1, sat_mask2 = (ref_mask1, ref_mask2)
sp = obs.sp_ref
# Only display saturation masks if there are saturated pixels
if nsat2 > 0:
fig, axes = plt.subplots(1,2, figsize=(10,5))
xasec = obs.det_info['xpix'] * obs.pixelscale
yasec = obs.det_info['ypix'] * obs.pixelscale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
axes[0].imshow(sat_mask1, extent=extent)
axes[1].imshow(sat_mask2, extent=extent)
axes[0].set_title(f'{sp.name} Saturation (NGROUP=2)')
axes[1].set_title(f'{sp.name} Saturation (NGROUP={ng_max})')
for ax in axes:
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.tick_params(axis='both', color='white', which='both')
for k in ax.spines.keys():
ax.spines[k].set_color('white')
fig.tight_layout()
else:
print('No saturation detected.')
| 0.696784 | 0.97957 |
# Exploratory Data Analysis - Univariate Analysis
```
import json
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from ipywidgets import widgets
from scipy.stats import shapiro
import statsmodels.api as sm
import plotly.io as pio
pio.renderers.default = "vscode"
df = pd.read_csv('./../../../data/cleaned_data.csv')
# Load lists of numerical and categorical columns from the static file
with open('./../../../data/statics.json') as f:
statics = json.load(f)
categorical_columns = statics['categorical_columns']
numerical_columns = statics['numerical_columns']
# Separate out the dataframe intro numerical and categorical dataframe
num_df = df[numerical_columns]
cat_df = df[categorical_columns]
```
## Numerical Columns
### Distribution
```
# Descriptive statics for numerical variables
num_df.describe()
```
From the above table, it can be observed that some of the highly skewed columns include $MonthlyIncome$, $YearsAtCompany$, and $YearsSinceLastPromotion$. More information can be obtained by observing the distribution of all the variables.
```
# Create interactive plots
# Create a widget for selecting column
numcols = widgets.Dropdown(options = numerical_columns, value = numerical_columns[0], description="Numerial columns")
# Create plotly trace of histogram
num_trace1 = go.Histogram(x=num_df[numerical_columns[0]],
histnorm='probability',
name = 'Distribution')
# Create plotly trace of boc plot
num_trace2 = go.Box(x=num_df[numerical_columns[0]],
boxpoints='outliers', name = 'Quartiles representation')
# Create a widget for histogram
ng1 = go.FigureWidget(data=[num_trace1],
layout = go.Layout(
title = dict(text='Distribution of features')
))
# Create a widget for box plot
ng2 = go.FigureWidget(data=[num_trace2],
layout = go.Layout(
title = dict(text='Quartiles representation of features')
))
# Create a function for observing the change in the selection
def num_response(change):
"""
Function to update the values in the graph based on the selected column.
"""
with ng1.batch_update():
ng1.data[0].x = num_df[numcols.value]
ng1.layout.xaxis.title = 'Distribution of ' + str(numcols.value) + ' variable'
with ng2.batch_update():
ng2.data[0].x = num_df[numcols.value]
ng2.layout.xaxis.title = numcols.value
numcols.observe(num_response, names='value')
num_container = widgets.VBox([numcols, ng1, ng2])
```
```{margin}
You need to run the noteboon in order to see the graphs. Jupyter book is not capable of rendering plotly plots with ipywidgets.
```
```
display(num_container)
```
From the above distributions following observations can be noted:
- The average age of the participants is 37 years while the median age is rests at 36 years of age. We have representation of almost all sorts of working population right from the age of 18 to the age of 60. There are no outliers that exist in the dataset as far as age is concerned.
- Variables that approximately follows uniform distribution are variables representing daily rate, hourly rate with exception for values greater than 100, and monthly rate.
- There are variables which are positively skewed that includes distance from home, monthly income, number of companies worked, percentage hike, total working years, and years at a company.
- There are 2 variables which have double peaks. The variables represents years in current role and years since last promotion.
- Only 1 variable representing number of training in last year seems to be following normal distribution.
- There are outliers present in variables such as monthly income, number of companies worked, total working years, number of trainings in last year, years at company, years in current role, years since last promotion, and years with current manager. In order to decide whether to keep or remove the outliers a more closer look into variables are required.
### Normality check
```
sw_df = pd.DataFrame(columns=['Name of the column', 'SW Statistics', 'P-value', 'Is Normal'])
for column in numerical_columns:
result = shapiro(num_df[column])
# Alpha is set to 5%
is_norm = True if result[1]>0.05 else False
sw_df = sw_df.append(pd.Series({
'Name of the column': column,
'SW Statistics': result[0],
'P-value': result[1],
'Is Normal': is_norm
}),
ignore_index=True)
sw_df
```
Since the dataset is not huge, it is safe for us to trust these values and conclude that not a single variable follow normal distribution.
## Categorical variable
### Distribution
```
# Create interactive plots
# Create widget for selecting column
catcols = widgets.Dropdown(options=categorical_columns, value=categorical_columns[0], description='Categorical columns')
# Create bar plot trace for histogram
cat_trace1 = go.Bar(x = cat_df[categorical_columns[0]].value_counts().index,
y = cat_df[categorical_columns[0]].value_counts().values)
# Create a widget for bar plot
cg = go.FigureWidget(data=[cat_trace1],
layout=go.Layout(
title = dict(text="Distribution of features")
))
# Create function for observing the change in the column name
def cat_response(change):
with cg.batch_update():
cg.data[0].x = cat_df[catcols.value].value_counts().index
cg.data[0].y = cat_df[catcols.value].value_counts().values
cg.layout.xaxis.title = 'Distribution of ' + str(catcols.value) + ' variable'
catcols.observe(cat_response, names='value')
cat_container = widgets.VBox([catcols, cg])
display(cat_container)
```
From the above bar charts, the following observations can be noted:
- The target variable is highly imbalanced (quantified in the sketch after this list).
- Most of the employees travel rarely; frequent travellers and non-travellers are far fewer than rare travellers.
- Most of the employees belong to the Research and Development department, followed by Sales and then Human Resources.
- The largest group of employees has completed a Bachelor's degree, followed by those who have completed a Master's degree.
- The largest number of employees majored in Life Sciences and Medical. Employees with majors in Marketing, Technical Degree, Human Resources, and Other are far fewer than in the top two fields.
- People are quite content with the environment in which they are working.
- The dataset contains more males than females.
- Employees are also content with their involvement in their respective jobs.
- Most of the employees belong to the lower levels of the hierarchy, mostly levels 1 and 2.
- The top 5 roles in the current sample are sales executive, research scientist, laboratory technician, manufacturing director, and healthcare representative.
- Most of the employees are satisfied with their jobs, but a significant number are not.
- Most employees are married, but a significant portion are divorced.
- Around one-third of employees do overtime.
- Performance ratings for all employees lie in only 2 bands, i.e. excellent and outstanding.
- Most of the employees are satisfied with their relationship with the company, but a significant portion does not feel so.
- More than 75% of the population owns stock options at levels 0 and 1.
- More than 80% of employees feel that work-life balance is available.
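The imbalance in the target variable noted above can be quantified directly. A short sketch follows; the target column name `Attrition` is an assumption about this dataset and may need to be adjusted.
```
# Relative frequency of each target class ('Attrition' is an assumed column name)
target_share = cat_df['Attrition'].value_counts(normalize=True)
print(target_share)
print('Imbalance ratio (majority/minority):', round(target_share.max() / target_share.min(), 2))
```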
### Convex feasibility problem for random balls
$\newcommand{\n}[1]{\left\|#1 \right\|}$
$\renewcommand{\a}{\alpha} $
$\renewcommand{\b}{\beta} $
$\renewcommand{\c}{\gamma} $
$\renewcommand{\d}{\delta} $
$\newcommand{\D}{\Delta} $
$\newcommand{\la}{\lambda} $
$\renewcommand{\t}{\tau} $
$\newcommand{\s}{\sigma} $
$\newcommand{\e}{\varepsilon} $
$\renewcommand{\th}{\theta} $
$\newcommand{\x}{\bar x} $
$\newcommand{\R}{\mathbb R} $
$\newcommand{\N}{\mathbb N} $
$\newcommand{\Z}{\mathbb Z} $
$\newcommand{\E}{\mathcal E} $
$\newcommand{\lr}[1]{\left\langle #1\right\rangle}$
$\newcommand{\nf}[1]{\nabla f(#1)} $
$\newcommand{\hx}{\hat x} $
$\newcommand{\hy}{\hat y} $
$\DeclareMathOperator{\prox}{prox} $
$\DeclareMathOperator{\argmin}{argmin} $
$\DeclareMathOperator{\dom}{dom} $
$\DeclareMathOperator{\id}{Id} $
$\DeclareMathOperator{\conv}{conv} $
We want to find a point $x\in \bigcap_{i=1}^m S_i$, where $S_i \subset \R^n$ are random balls.
This Jupyter notebook illustrates the performance of the simultaneous projection algorithm and adaptive GRAAL (aGRAAL). If you want to compare these methods on many random problems, it might be better to run `convex_feasibility_problem_for_random_balls.py`.
```
# fixed_points is assumed to also expose numpy as np and numpy.linalg as LA via this star import
from fixed_points import *
import matplotlib.pyplot as plt
# Comment the next line if seaborn is not installed.
# It is only for nicer plots
import seaborn as sns
%load_ext autoreload
%autoreload 2
%matplotlib inline
```
Generate the data
```
#n, m = 30, 100
n, m = 1000, 2000
# fix a random generator
gen = 0
np.random.seed(gen)
# define balls: centers C and radii R
C = np.random.normal(0, 100, (m, n))
# each radius exceeds the distance from the center to the origin,
# so every ball contains the origin and the intersection is nonempty
R = LA.norm(C, axis=1) + 0.1
# starting point
x0 = np.mean(C, axis=0)
# define operator T
def T(x):
dist = LA.norm(x - C, axis=1)
ind = np.where(dist > R)
    # number of projections that actually need to be computed
n_ind = ind[0].shape[0]
C_ind = C[ind]
# compute projections only for those balls that are needed
Y = (R[ind] / dist[ind] * (x - C_ind).T).T + C_ind
return ((np.sum(Y, axis=0) + (m - n_ind)*x)) / m
```
Run the algorithms. The Krasnoselskii–Mann algorithm with weight $\alpha=0$ coincides with the simultaneous projection algorithm.
```
N = 1000
ans1 = krasn_mann(T, x0, 0, numb_iter=N)
ans2 = fixed_point_agraal(T, x0, numb_iter=N, phi=1.5, output=False)
```
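For intuition, here is a hand-rolled version of the $\alpha=0$ iteration $x^{k+1}=Tx^k$ together with a feasibility check of its final iterate. It is only a sketch: it reuses `T`, `C`, `R`, `m` and `x0` defined above and assumes `np` and `LA` come from the `fixed_points` import; the benchmarked results below come from the library routines `krasn_mann` and `fixed_point_agraal`.
```
# Plain fixed-point iteration x^{k+1} = T x^k (a sketch, not the benchmarked code)
x = x0.copy()
for _ in range(200):
    x = T(x)

# Feasibility check: how many ball constraints ||x - C_i|| <= R_i are violated?
violations = np.sum(LA.norm(x - C, axis=1) > R)
print("violated constraints:", violations, "out of", m)
print("residual ||x - T(x)||:", LA.norm(x - T(x)))
```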
Show the results:
```
sns.set() # comment if seaborn is not imported
plt.plot(ans1[0],'b',label="KM: $x^{k+1}=Tx^k$")
plt.plot(ans2[0],'#FFD700', label="aGRAAL")
plt.yscale('log')
plt.legend()
plt.xlabel(u'iterations, $k$')
plt.ylabel('residual')
plt.legend()
#plt.savefig('figures/KM-Graal.pdf',bbox_inches='tight')
plt.show()
plt.clf()
```
Let us look at how large the stepsizes for aGRAAL were:
```
plt.plot(ans2[2], '.', color='#FFD700', label="aGRAAL")
plt.legend()
plt.xlim(0,N)
plt.ylim(0)
plt.xlabel(u'iterations, $k$')
plt.ylabel('stepsize $\lambda_k$')
plt.legend()
#plt.grid()
#plt.savefig('figures/KM-Graal-steps2.pdf',bbox_inches='tight')
plt.show()
plt.clf()
```
<table style="width:100%; background-color: #EBF5FB">
<tr>
<td style="border: 1px solid #CFCFCF">
<b>Time series: Main Notebook</b>
<ul>
<li>Main Notebook</li>
<li><a href="processing.ipynb">Processing Notebook</a></li>
</ul>
<br>This Notebook is part of the <a href="http://data.open-power-system-data.org/time_series">Time series Data Package</a> of <a href="http://open-power-system-data.org">Open Power System Data</a>.
</td>
</tr>
</table>
# 1. About Open Power System Data
This notebook is part of the project [Open Power System Data](http://open-power-system-data.org). Open Power System Data develops a platform for free and open data for electricity system modeling. We collect, check, process, document, and provide data that are publicly available but currently inconvenient to use.
More info on Open Power System Data:
- [Information on the project on our website](http://open-power-system-data.org)
- [Data and metadata on our data platform](http://data.open-power-system-data.org)
- [Data processing scripts on our GitHub page](https://github.com/Open-Power-System-Data)
# 2. About Jupyter Notebooks and GitHub
This file is a [Jupyter Notebook](http://jupyter.org/). A Jupyter Notebook is a file that combines executable programming code with visualizations and comments in markdown format, allowing for an intuitive documentation of the code. We use Jupyter Notebooks for combined coding and documentation. We use Python 3 as the programming language. All Notebooks are stored on [GitHub](https://github.com/), a platform for software development, and are publicly available. More information on our IT concept can be found [here](http://open-power-system-data.org/it). See also our [step-by-step manual](http://open-power-system-data.org/step-by-step) on how to use the data platform.
# 3. About this datapackage
We provide data in different chunks, or [data packages](http://frictionlessdata.io/data-packages/). The one you are looking at right now, [Time series](http://data.open-power-system-data.org/time_series/), contains various kinds of time series data in 15min, 30min or 60min resolution, namely:
- electricity consumption (load)
- wind and solar power: capacity, generation forecast, actual generation
- day-ahead spot prices
The main focus of this datapackage is German data, but we include data from other countries wherever possible.
The time series become available at different points in time, depending on the source. The full dataset is only available from 2015 onwards.
The data has been downloaded from the sources, resampled and merged in a large CSV file with hourly resolution. Additionally, the data available at a higher resolution (some renewables in-feed, 15 minutes) is provided in a separate file.
# 4. Data sources
The main data sources are the various European Transmission System Operators (TSOs) and the [ENTSO-E Data Portal](https://www.entsoe.eu/data/data-portal/Pages/default.aspx). Where no data is available from the TSOs directly, data are taken from the [ENTSO-E Transparency Platform](https://transparency.entsoe.eu). A complete list of data sources is provided on the [datapackage information website](http://data.open-power-system-data.org/time_series/). They are also contained in the JSON file that contains all metadata.
# 5. Naming conventions
```
import pandas as pd; pd.read_csv('input/notation.csv', index_col=list(range(4)))
```
# 6. License
This notebook as well as all other documents in this repository is published under the [MIT License](LICENSE.md).
```
import numpy as np
import pandas as pd
import scipy
import nltk
import sklearn
import random
import re
from sklearn.feature_extraction.text import CountVectorizer,TfidfTransformer
from sklearn.preprocessing import OneHotEncoder,scale, MinMaxScaler, binarize
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.decomposition import PCA, RandomizedPCA
from sklearn import svm
from sklearn.neural_network import BernoulliRBM
from sklearn.grid_search import GridSearchCV,ParameterGrid
from sklearn.pipeline import Pipeline
from sklearn.base import BaseEstimator,TransformerMixin
nltk.download('reuters')
nltk.download('punkt') # needed for tokenization
dataset = nltk.corpus.reuters
# http://scikit-learn.org/stable/modules/feature_extraction.html#text-feature-extraction
corpus_train = []
corpus_test = []
for fileid in dataset.fileids():
document = dataset.raw(fileid)
if re.match('training/',fileid):
corpus_train.append(document)
else:
corpus_test.append(document)
def preprocessor(string):
repl = re.sub('<','',string)
return repl.lower()
%%time
# Build multi-label target strings ('*'-joined category names) for the train/test splits
Y_train = []
Y_test = []
for (idx,fileid) in enumerate(dataset.fileids()):
categories = '*'.join(dataset.categories(fileid))
if re.match('training/',fileid):
Y_train.append(categories)
else:
Y_test.append(categories)
series_train = pd.Series(Y_train)
Y_train_df = series_train.str.get_dummies(sep='*')
series_test = pd.Series(Y_test)
Y_test_df = series_test.str.get_dummies(sep='*')
Y_train = Y_train_df.values
Y_test = Y_test_df.values
# Pipeline helper: converts a sparse matrix to a dense one so that MinMaxScaler and BernoulliRBM can consume it
class DenseTransformer(BaseEstimator,TransformerMixin):
def transform(self, X, y=None, **fit_params):
return X.todense()
def fit_transform(self, X, y=None, **fit_params):
self.fit(X, y, **fit_params)
return self.transform(X)
def fit(self, X, y=None, **fit_params):
return self
# One-vs-rest multi-label classifier: bag-of-words -> tf-idf -> dense -> min-max scaling -> RBM features -> linear SVM
clf = OneVsRestClassifier(Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('to_dense',DenseTransformer()),
('minmax', MinMaxScaler()),
('rbm', BernoulliRBM() ),
('clf', svm.LinearSVC()),
]))
parameters = [
{
"estimator__vect__min_df": [5],
"estimator__vect__preprocessor":[preprocessor],
"estimator__vect__stop_words": ['english'],
"estimator__vect__strip_accents":['ascii'],
"estimator__minmax__copy":[False],
"estimator__clf__penalty": ["l1"],
"estimator__clf__dual":[False],
"estimator__clf__multi_class":["crammer_singer"],
"estimator__clf__tol": [0.001],
}
]
# parameters = {
# 'rbm__n_components':[2,5,10,25,30,50],
# 'rbm__n_iter':[5,10,20,50,100],
# 'rbm__batch_size': [10,50,100,500],
# 'rbm__learning_rate': [0.1,0.2,0.3,0.6]}
best_score = float("-inf")
# I had to manually search over the parameter grid because, since we have a mod-apte split
# we cannot do any cross-validations selecting random train/test sets.
# GridSearchCV does not let one do grid search *without* also doing cross validation so we need to do this
for g in ParameterGrid(parameters):
clf.set_params(**g)
clf.fit(corpus_train,Y_train)
Y_pred = clf.predict(corpus_test)
current_score = f1_score(Y_test,Y_pred,average='micro')
print("current_score was {} and the current grid was {}".format(current_score,g))
if current_score > best_score:
best_score = current_score
best_grid = g
best_score
best_grid
```
# Intro to Holoclean: Data Loading and Denial Constraints
The first in our set of tutorials introduces the infrastructure of `HoloClean` and presents the initial steps needed to get your data interacting with `HoloClean`. We'll also discuss Denial Constraints, the primary source of information that `HoloClean` uses to perform repairs.
## Part 1: Setup & Loading Data
### Connecting to the Database
Without further ado, let's see some code! We begin by initializing `HoloClean` and `Session` objects. Our `Session` manages a connection to the Postgres database automatically and allows us to save intermediate results.
```
from holoclean.holoclean import HoloClean, Session
holo = HoloClean(
holoclean_path="..", # path to holoclean package
verbose=False,
# to limit possible values for training data
use_dask=True,
pruning_threshold1=0.1,
# to limit possible values for training data to less than k values
pruning_clean_breakoff=6,
# to limit possible values for dirty data (applied after
# Threshold 1)
pruning_threshold2=0,
# to limit possible values for dirty data to less than k values
pruning_dk_breakoff=6,
# learning parameters
learning_iterations=30,
learning_rate=0.001,
batch_size=5
)
session = Session(holo)
```
### Loading Data
Next, we ingest the hospital data we'd like to clean. This is a commonly used research dataset that we'll be using for all of our introductory tutorials.
```
data_path = "data/hospital.csv"
data = session.load_data(data_path)
```
At this time, we only support .csv files for our data format.
The data is then loaded into the database and a representation is returned. `HoloClean` uses PySpark DataFrames as its internal data structure and so any PySpark operations can be used.
For Example:
```
data[['HospitalName', 'City']].head(5)
data.columns
len(data)
```
Please see the [Apache Spark website](https://spark.apache.org/docs/latest/sql-programming-guide.html) for a full guide through DataFrames and their functionality.
## Part 2: Introduction to Denial Constraints
HoloClean's goal is to clean your data, and the system is driven by a description of what clean data *should* be like. These descriptions are expressed in the form of Denial Constraints, which are similar to [functional dependencies](https://en.wikipedia.org/wiki/Functional_dependency). However, while a functional dependency expresses a property that should hold for your data, a denial constraint expresses what clean data should *not* look like.
### An Example: The Hospital Dataset
This tutorial will walk through one of the Denial Constraints used in the Hospital Dataset. The data has the following fields:
`
index,
ProviderNumber,
HospitalName,
Address1,
Address2,
Address3,
City,
State,
ZipCode,
CountyName,
PhoneNumber,
HospitalType,
HospitalOwner,
EmergencyService,
Condition,
MeasureCode,
MeasureName,
Score,
Sample,
Stateavg`
And we know that there are some errors in our data. For example, some people have mistyped the city name, so we see results like:
```
data[['City', 'ZipCode']].head(10)
```
Clearly we have an issue with a city called `BIRMGINxAM`. However, we know that whenever the zip codes are the same, the city should be the same. In the language of functional dependencies we could write this as: for any records $t_1, t_2$
$$t_1.ZipCode = t_2.ZipCode \implies t_1.City = t_2.City$$
However, the corresponding HoloClean denial constraint is:
`t1&t2&EQ(t1.ZipCode,t2.ZipCode)&IQ(t1.City,t2.City)`
Let's break down how this works: `t1&t2` specifies that two records will be involved in the error. `EQ(t1.ZipCode, t2.ZipCode)&IQ(t1.City, t2.City)` says that the records will have equal zip codes but unequal cities. Now any pair of records in the hospital dataset which makes this true will be marked as potentially dirty.
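As a sanity check outside of HoloClean, we can enumerate such offending pairs directly. The sketch below is purely illustrative and assumes `data` is the PySpark DataFrame loaded above, converted to pandas via `toPandas()` just for this check.
```
# Find record pairs that violate EQ(t1.ZipCode, t2.ZipCode) & IQ(t1.City, t2.City)
pdf = data[['City', 'ZipCode']].toPandas().reset_index()
pairs = pdf.merge(pdf, on='ZipCode', suffixes=('_1', '_2'))
violations = pairs[(pairs['index_1'] < pairs['index_2']) &
                   (pairs['City_1'] != pairs['City_2'])]
violations.head()
```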
## Adding Denial Constraints to HoloClean
There are multiple ways to add denial constraints to the system; the first is to load them from a text file:
```
#Load a set of denial contstraints
dc_path = "data/hospital_constraints.txt"
dcs = session.load_denial_constraints(dc_path)
dcs
```
## Adding/Removing Constraints one-by-one
```
dcs = session.add_denial_constraint('t1&t2&EQ(t1.ZipCode,t2.ZipCode)&IQ(t1.Stateavg,t2.Stateavg)')
dcs
```
# Denial Constraint Operators
If you want a thorough introduction to denial constraints, refer to the [HoloClean Paper](https://arxiv.org/pdf/1702.00820.pdf). For a brief introduction, the logical operators available are:

|Operator|Meaning|
|--------|-----|
|`EQ(x.y,z.w)`| `x.y == z.w` |
|`IQ(x.y,z.w)`| `x.y != z.w` |
|`GT(x.y, z.w)`| `x.y > z.w`|
|`GTE(x.y, z.w)`| `x.y >= z.w`|
|`LT(x.y, z.w)`| `x.y < z.w`|
|`LTE(x.y, z.w)`| `x.y <= z.w`|
All denial constraints are of the form `t1&t2&<X>&<Y>&...` where `<X>` and `<Y>` are logical operators mentioned above.
# Next Steps
Denial Constraints are just one of HoloClean's error detectors that it uses for learning, if you'd like to write your own check out our [Error Detectors](Tutorial_3.ipynb) tutorial. If you want to learn about the next steps in the HoloClean pipeline, check out our [Complete Pipeline](Tutorial_2.ipynb) tutorial.
## Imports
```
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import *
from tensorflow.keras.callbacks import *
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.utils import class_weight
from wandb.keras import WandbCallback
from ast import literal_eval
from typing import Union
from utils import utils
import tensorflow as tf
import numpy as np
import wandb
import time
```
## Data loading
```
def load_data(filename:str) -> np.ndarray:
content = np.load(filename, allow_pickle=True)
return content
X_train, y_train = load_data('data/X_train.npy'), load_data('data/y_train.npy')
X_test, y_test = load_data('data/X_test.npy'), load_data('data/y_test.npy')
X_train.shape, X_test.shape
```
## Data preprocessing
```
clean_title = np.vectorize(utils.clean_title)
X_train = clean_title(X_train)
X_test = clean_title(X_test)
# Preview
X_train[:10]
def init_wandb(name):
wandb.init(project='text-prediction-logger', sync_tensorboard=True, name=name)
config = wandb.config
return config
def init_hyperparams(config):
config.filter_length = 300
config.max_words = 3000
config.maxlen = 300
config.batch_size = 32
config.embedding_dims = 30
config.filters = 10
config.kernel_size = 3
config.hidden_dims = 10
config.epochs = 10
return config
config = init_wandb("cnn")
config = init_hyperparams(config)
tokenizer = Tokenizer(num_words=config.max_words, lower=True)
tokenizer.fit_on_texts(X_train)
def get_features(text_sequence: np.ndarray) -> np.ndarray:
sequences = tokenizer.texts_to_sequences(text_sequence)
return pad_sequences(sequences, maxlen=config.maxlen)
train_features = get_features(X_train)
test_features = get_features(X_test)
train_features.shape, test_features.shape
y_train[:10]
# Label binarization
list_preprocessed = [literal_eval(i) for i in y_train]
mlb = MultiLabelBinarizer()
y_train_binarized = mlb.fit_transform(list_preprocessed)
mlb.classes_
```
## Derive class weights and model training
```
# Note: compute_sample_weight returns one weight per training sample
# (and this assignment shadows the imported sklearn `class_weight` module)
class_weight = class_weight.compute_sample_weight('balanced', y_train)
class_weight
# Helper function to return a compiled CNN-based model
def get_a_cnn_model(config: wandb.wandb_config.Config) -> tf.keras.models.Sequential:
model = Sequential()
model.add(Embedding(config.max_words, config.embedding_dims,
input_length=config.maxlen))
model.add(Dropout(0.1))
model.add(Conv1D(config.filter_length, config.kernel_size,
padding='valid', activation='relu', strides=1))
model.add(GlobalMaxPool1D())
model.add(Dense(32, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['categorical_accuracy'])
return model
# A helper training script
def train_model(model:tf.keras.models.Sequential,
config: wandb.wandb_config.Config,
class_weight=None,
epochs=config.epochs,
batch_size=config.batch_size,
callbacks=None) -> (tf.keras.callbacks.History, str):
start = time.time()
history = model.fit(train_features, y_train_binarized,
class_weight=class_weight,
epochs=epochs,
batch_size=batch_size,
validation_split=0.1,
callbacks=callbacks)
time_message = 'It took {} seconds'.format(time.time()-start)
return (history, time_message)
# Helper function to process the predictions
def generate_predictions(model:tf.keras.models.Sequential, article_title: str) -> list:
labels = []
title = np.array([article_title])
cleaned_title = clean_title(title)
tokenized = get_features(cleaned_title)
probabilities = model.predict(tokenized)
probabilities = probabilities.reshape(32,)
idxs = np.argsort(probabilities)[::-1][:2]
for (i, j) in enumerate(idxs):
label = "{}: {:.2f}%".format(mlb.classes_[j], probabilities[j] * 100)
labels.append(label)
return (labels)
# Define a few paper titles for our custom callback
sample_paper_titles = {"On the Variance of the Adaptive Learning Rate and Beyond": "cs.LG, stat.ML",
"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding": "cs.CL",
"MultiFiT: Efficient Multi-lingual Language Model Fine-tuning": "cs.CL, cs.LG"}
# A custom callback to view predictions on the above samples in real-time
class TextLogger(tf.keras.callbacks.Callback):
def __init__(self):
super(TextLogger, self).__init__()
    def on_epoch_end(self, epoch, logs=None):
samples = []
for (title, true_label) in sample_paper_titles.items():
predicted_label = generate_predictions(self.model, title)
sample = [title, predicted_label, true_label]
samples.append(sample)
wandb.log({"text": wandb.Table(data=samples,
columns=["Text", "Predicted Label", "True Label"])})
# Define the callbacks
callbacks = [
TextLogger(),
WandbCallback()
]
# Kickstart the model training
cnn_model = get_a_cnn_model(config)
(history, time_message) = train_model(cnn_model, config, callbacks=callbacks)
print(time_message)
```
You can visit [this run page](https://app.wandb.ai/sayakpaul/text-prediction-logger/runs/eendlfxo) to see all the real-time predictions. Here's a snap:

## Homework 2. Simple text processing.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
### Prohibited Comment Classification
This part of the assignment is fully based on a YSDA NLP_course homework. Special thanks to the YSDA team for making it available on GitHub.

__In this part__ you will build an algorithm that classifies social media comments into normal or toxic.
Like in many real-world cases, you only have a small (~10^3) dataset of hand-labeled examples to work with. We'll tackle this problem using both classical NLP methods and an embedding-based approach.
```
# In colab uncomment this cell
# ! wget https://raw.githubusercontent.com/ml-mipt/ml-mipt/basic/homeworks/homework2_texts/comments.tsv
import pandas as pd
data = pd.read_csv("comments.tsv", sep='\t')
texts = data['comment_text'].values
target = data['should_ban'].values
data[50::200]
from sklearn.model_selection import train_test_split
texts_train, texts_test, y_train, y_test = train_test_split(texts, target, test_size=0.5, random_state=42)
```
__Note:__ it is generally a good idea to split data into train/test before anything is done to them.
It guards you against possible data leakage in the preprocessing stage. For example, should you decide to select words present in obscene tweets as features, you should only count those words over the training set. Otherwise your algorithm can cheat evaluation.
### Preprocessing and tokenization
Comments contain raw text with punctuation, upper/lowercase letters and even newline symbols.
To simplify all further steps, we'll split text into space-separated tokens using one of nltk tokenizers.
Generally, the `nltk` library [link](https://www.nltk.org) is widely used in NLP. It is not strictly necessary here, but we mention it to introduce it to you.
```
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
preprocess = lambda text: ' '.join(tokenizer.tokenize(text.lower()))
text = 'How to be a grown-up at work: replace "I don\'t want to do that" with "Ok, great!".'
print("before:", text,)
print("after:", preprocess(text),)
# task: preprocess each comment in train and test
texts_train = pd.DataFrame(texts_train)[0].apply(preprocess).values
texts_test = pd.DataFrame(texts_test)[0].apply(preprocess).values
# Small check that everything is done properly
assert texts_train[5] == 'who cares anymore . they attack with impunity .'
assert texts_test[89] == 'hey todds ! quick q ? why are you so gay'
assert len(texts_test) == len(y_test)
```
### Solving it: bag of words

One traditional approach to such problem is to use bag of words features:
1. build a vocabulary of frequent words (use train data only)
2. for each training sample, count the number of times a word occurs in it (for each word in vocabulary).
3. consider this count a feature for some classifier
__Note:__ in practice, you can compute such features using sklearn. __Please don't do that in the current assignment, though.__
* `from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer`
```
import re
import operator
all_words_train = ' '.join(texts_train).split()
# Note: list.count inside a comprehension is O(|vocab| * |corpus|); collections.Counter does the same in one pass
word_frequency = {word: all_words_train.count(word) for word in set(all_words_train)}
word_frequency_sorted = dict(sorted(word_frequency.items(), key = operator.itemgetter(1), reverse = True))
# task: find up to k most frequent tokens in texts_train,
# sort them by number of occurences (highest first)
k = min(10000, len(set(' '.join(texts_train).split())))
bow_vocabulary = list(word_frequency_sorted)[:k]
print('example features:', sorted(bow_vocabulary)[::100])
def text_to_bow(text):
""" convert text string to an array of token counts. Use bow_vocabulary. """
word_freq_text = [text.split().count(word) for word in bow_vocabulary]
return np.array(word_freq_text, 'float32')
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
# Small check that everything is done properly
k_max = len(set(' '.join(texts_train).split()))
assert X_train_bow.shape == (len(texts_train), min(k, k_max))
assert X_test_bow.shape == (len(texts_test), min(k, k_max))
assert np.all(X_train_bow[5:10].sum(-1) == np.array([len(s.split()) for s in texts_train[5:10]]))
assert len(bow_vocabulary) <= min(k, k_max)
assert X_train_bow[6, bow_vocabulary.index('.')] == texts_train[6].split().count('.')
```
Machine learning stuff: fit, predict, evaluate. You know the drill.
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
bow_model = LogisticRegression(solver = 'liblinear', max_iter=1000).fit(X_train_bow, y_train)
from sklearn.metrics import roc_auc_score, roc_curve
def plotting(X_train_bow, X_test_bow, y_train, y_test, bow_model):
for name, X, y, model in [
('train', X_train_bow, y_train, bow_model),
('test ', X_test_bow, y_test, bow_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
plotting(X_train_bow, X_test_bow, y_train, y_test, bow_model)
```
Try to vary the number of tokens `k` and check how the model performance changes. Show it on a plot.
```
import tqdm
aucs = []
tokens = np.arange(4500, 6000, 100)
for token in tqdm.tqdm_notebook(tokens):
bow_vocabulary = list(word_frequency_sorted)[:token]
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
bow_model = LogisticRegression(solver = 'liblinear', max_iter=1000).fit(X_train_bow, y_train)
proba = bow_model.predict_proba(X_test_bow)[:, 1]
aucs.append(roc_auc_score(y_test, proba))
plt.plot(tokens, aucs, label = 'Test')
plt.xlabel('Number of tokens')
plt.ylabel('AUC Score')
plt.title('AUC')
plt.legend()
plt.grid()
```
#### Task: implement TF-IDF features
Not all words are equally useful. One can prioritize rare words and downscale words like "and"/"or" by using __tf-idf features__. This abbreviation stands for __text frequency/inverse document frequence__ and means exactly that:
$$ feature_i = Count(word_i \in x) \times \log { \frac{N}{Count(word_i \in D) + \alpha} }, $$
where x is a single text, D is your dataset (a collection of texts), N is a total number of documents and $\alpha$ is a smoothing hyperparameter (typically 1).
And $Count(word_i \in D)$ is the number of documents where $word_i$ appears.
It may also be a good idea to normalize each data sample after computing tf-idf features.
__Your task:__ implement tf-idf features, train a model and evaluate ROC curve. Compare it with basic BagOfWords model from above.
__Please don't use sklearn/nltk builtin tf-idf vectorizers in your solution :)__ You can still use 'em for debugging though.
Blog post about implementing the TF-IDF features from scratch: https://triton.ml/blog/tf-idf-from-scratch
```
# Your beautiful code here
alpha = 0.95

def idf(word):
    freq = sum(1 for text in texts_train if word in text.split())
    return np.log(len(texts_train) / (freq + alpha))

# Precompute idf once per vocabulary word (much faster than recomputing it for every document)
idf_values = {word: idf(word) for word in bow_vocabulary}

def tf_idf(text):
    tokens = text.split()
    features = [tokens.count(word) * idf_values[word] if word in tokens else 0
                for word in bow_vocabulary]
    return np.array(features)
X_train_feat = np.stack(list(map(tf_idf, texts_train)))
X_test_feat = np.stack(list(map(tf_idf, texts_test)))
feat_model = LogisticRegression(solver = 'liblinear', max_iter=1000).fit(X_train_feat, y_train)
plotting(X_train_feat, X_test_feat, y_train, y_test, feat_model)
```
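The note above also suggests normalizing each sample after computing the tf-idf features. A minimal sketch (not required by the assignment) applying L2 row normalization and re-fitting the same model:
```
# L2-normalize each tf-idf row and retrain (optional extra step)
def l2_normalize(X, eps=1e-12):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.maximum(norms, eps)

X_train_feat_norm = l2_normalize(X_train_feat)
X_test_feat_norm = l2_normalize(X_test_feat)

feat_model_norm = LogisticRegression(solver='liblinear', max_iter=1000).fit(X_train_feat_norm, y_train)
plotting(X_train_feat_norm, X_test_feat_norm, y_train, y_test, feat_model_norm)
```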
# Explaining text sentiment analysis using SageMaker Clarify
1. [Overview](#Overview)
1. [Prerequisites and Data](#Prerequisites-and-Data)
1. [Initialize SageMaker](#Initialize-SageMaker)
1. [Loading the data: Women's Ecommerce clothing reviews Dataset](#Loading-the-data:-Women's-ecommerce-clothing-reviews-dataset)
1. [Data preparation for model training](#Data-preparation-for-model-training)
1. [Train and Deploy Hugging Face Model](#Train-and-Deploy-Hugging-Face-Model)
1. [Train model with Hugging Face estimator](#Train-model-with-Hugging-Face-estimator)
1. [Deploy Model to Endpoint](#Deploy-Model)
1. [Model Explainability with SageMaker Clarify for text features](#Model-Explainability-with-SageMaker-Clarify-for-text-features)
1. [Explaining Predictions](#Explaining-Predictions)
1. [Visualize local explanations](#Visualize-local-explanations)
1. [Clean Up](#Clean-Up)
## Overview
Amazon SageMaker Clarify helps improve your machine learning models by detecting potential bias and helping explain how these models make predictions. The fairness and explainability functionality provided by SageMaker Clarify takes a step towards enabling AWS customers to build trustworthy and understandable machine learning models. The product comes with the tools to help you with the following tasks.
* Measure biases that can occur during each stage of the ML lifecycle (data collection, model training and tuning, and monitoring of ML models deployed for inference).
* Generate model governance reports targeting risk and compliance teams and external regulators.
* Provide explanations of the data, models, and monitoring used to assess predictions for input containing data of various modalities like numerical data, categorical data, text, and images.
Learn more about SageMaker Clarify [here](https://aws.amazon.com/sagemaker/clarify/). This sample notebook walks you through:
1. Key terms and concepts needed to understand SageMaker Clarify
1. The incremental updates required to explain text features, along with other tabular features.
1. Explaining the importance of the various new input features on the model's decision
In doing so, the notebook will first train a [Hugging Face model](https://huggingface.co/models) using the [Hugging Face Estimator](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html) in the SageMaker Python SDK on the training dataset, then use SageMaker Clarify to analyze a test dataset in CSV format, and then visualize the results.
## Prerequisites and Data
We require the following AWS resources to be able to successfully run this notebook.
1. Kernel: Python 3 (Data Science) kernel on SageMaker Studio or `conda_python3` kernel on notebook instances
2. Instance type: Any GPU instance. Here, we use `ml.g4dn.xlarge`
3. [SageMaker Python SDK](https://pypi.org/project/sagemaker/) version 2.70.0 or greater
4. [Transformers](https://pypi.org/project/transformers/) > 4.6.1
5. [Datasets](https://pypi.org/project/datasets/) > 1.6.2
```
!pip --quiet install "transformers==4.6.1" "datasets[s3]==1.6.2" "captum" --upgrade
```
Let's start by installing preview wheels of the Python SDK, boto, and the AWS CLI:
```
# Fallback in case wheels are unavailable
! pip install sagemaker botocore boto3 awscli --upgrade
import subprocess
def execute_cmd(cmd):
print(cmd)
output = subprocess.getstatusoutput(cmd)
return output
def _download_from_s3(_file_path):
_path = f"s3://reinvent21-sm-rc-wheels/{_file_path}"
print(f"Path is {_path}")
ls_cmd = f"aws s3 ls {_path}"
print(execute_cmd(ls_cmd))
cmd = f"aws s3 cp {_path} /tmp/"
print("Downloading: ", cmd)
return execute_cmd(cmd)
def _install_wheel(wheel_name):
cmd = f"pip install --no-deps --log /tmp/output3.log /tmp/{wheel_name} --force-reinstall"
ret = execute_cmd(cmd)
_name = wheel_name.split(".")[0]
_, _version = execute_cmd(f"python -c 'import {_name}; print({_name}.__version__)'")
for package in ["botocore", "sagemaker", "boto3", "awscli"]:
print(execute_cmd(f"python -c 'import {package}; print({package}.__version__)'"))
print(f"Installed {_name}:{_version}")
return ret
def install_sm_py_sdk():
pySDK_name = "sagemaker.tar.gz"
exit_code, _ = _download_from_s3("dist/sagemaker.tar.gz")
if not exit_code:
_install_wheel(pySDK_name)
else:
print(f"'{pySDK_name}' is not present in S3 Bucket. Installing from public PyPi...")
execute_cmd("pip install sagemaker")
def install_boto_wheels():
WHEELS = ["botocore.tar.gz", "boto3.tar.gz", "awscli.tar.gz"]
for wheel_name in WHEELS:
_path = f"boto3/{wheel_name}"
exit_code, _ = _download_from_s3(_path)
if not exit_code:
_install_wheel(wheel_name)
else:
print(f"'{wheel_name}' is not present in S3 Bucket. Ignoring...")
install_boto_wheels()
install_sm_py_sdk()
```
#### Initialize SageMaker
```
# Import libraries for data loading and pre-processing
import os
import numpy as np
import pandas as pd
import json
import botocore
import sagemaker
import tarfile
from sagemaker.huggingface import HuggingFace
from sagemaker.pytorch import PyTorchModel
from sagemaker import get_execution_role, clarify
from captum.attr import visualization
from sklearn.model_selection import train_test_split
from datasets import Dataset
from datasets.filesystems import S3FileSystem
# SageMaker session bucket is used to upload the dataset, model and model training logs
sess = sagemaker.Session()
sess = sagemaker.Session(default_bucket=sess.default_bucket())
region = sess.boto_region_name
bucket = sess.default_bucket()
prefix = "sagemaker/DEMO-sagemaker-clarify-text"
# Define the IAM role
role = sagemaker.get_execution_role()
# SageMaker Clarify model directory name
model_path = "model/"
```
If you change the value of `model_path` variable above, please be sure to update the `model_path` in [`code/inference.py`](./code/inference.py) script as well.
### Loading the data: Women's ecommerce clothing reviews dataset
#### Download the dataset
Data Source: `https://www.kaggle.com/nicapotato/womens-ecommerce-clothing-reviews/`
The Women’s E-Commerce Clothing Reviews dataset has been made available under a Creative Commons Public Domain license. A copy of the dataset has been saved in a sample data Amazon S3 bucket. In the first section of the notebook, we’ll walk through how to download the data and get started with building the ML workflow.
```
!aws s3 cp s3://sagemaker-sample-files/datasets/tabular/womens_clothing_ecommerce/Womens_Clothing_E-Commerce_Reviews.csv womens_clothing_reviews_dataset.csv
# If the above does not work, please uncomment the following lines to download the dataset from s3
#! curl https://sagemaker-sample-files.s3.amazonaws.com/datasets/tabular/womens_clothing_ecommerce/Womens_Clothing_E-Commerce_Reviews.csv > womens_clothing_reviews_dataset.csv
```
#### Load the dataset
```
df = pd.read_csv("womens_clothing_reviews_dataset.csv", index_col=[0])
df.head()
```
**Context**
The Women’s Clothing E-Commerce dataset contains reviews written by customers. Because the dataset contains real commercial data, it has been anonymized, and any references to the company in the review text and body have been replaced with “retailer”.
**Content**
The dataset contains 23486 rows and 10 columns. Each row corresponds to a customer review.
The columns include:
* Clothing ID: Integer Categorical variable that refers to the specific piece being reviewed.
* Age: Positive Integer variable of the reviewer's age.
* Title: String variable for the title of the review.
* Review Text: String variable for the review body.
* Rating: Positive Ordinal Integer variable for the product score granted by the customer from 1 Worst, to 5 Best.
* Recommended IND: Binary variable stating where the customer recommends the product where 1 is recommended, 0 is not recommended.
* Positive Feedback Count: Positive Integer documenting the number of other customers who found this review positive.
* Division Name: Categorical name of the product high level division.
* Department Name: Categorical name of the product department name.
* Class Name: Categorical name of the product class name.
**Goal**
To predict the sentiment of a review based on the text, and then explain the predictions using SageMaker Clarify.
### Data preparation for model training
#### Target Variable Creation
Since the dataset does not contain a column that indicates the sentiment of the customer reviews, let's create one. To do this, let's assume that reviews with a `Rating` of 4 or higher indicate positive sentiment and reviews with a `Rating` of 2 or lower indicate negative sentiment. Let's also assume that a `Rating` of 3 indicates neutral sentiment and exclude these rows from the dataset. Additionally, to predict the sentiment of a review, we are going to use the `Review Text` column; therefore, let's remove rows that are empty in the `Review Text` column of the dataset.
```
def create_target_column(df, min_positive_score, max_negative_score):
neutral_values = [i for i in range(max_negative_score + 1, min_positive_score)]
for neutral_value in neutral_values:
df = df[df["Rating"] != neutral_value]
df["Sentiment"] = df["Rating"] >= min_positive_score
return df.replace({"Sentiment": {True: 1, False: 0}})
df = create_target_column(df, 4, 2)
df = df[~df["Review Text"].isna()]
```
#### Train-Validation-Test splits
The most common approach for model evaluation is the train/validation/test split. Although this approach can be very effective in general, it can produce misleading estimates and potentially fail when used on classification problems with a severe class imbalance. Instead, the technique must be modified to stratify the sampling by the class label, as below. Stratification ensures that all classes are well represented across the train, validation and test datasets.
```
target = "Sentiment"
cols = "Review Text"
X = df[cols]
y = df[target]
# Data split: 11% (val) of the remaining 90% of the dataset ~ 10% of the total, resulting in an 80:10:10 split
test_dataset_size = 0.10
val_dataset_size = 0.11
RANDOM_STATE = 42
# Stratified train-val-test split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=test_dataset_size, stratify=y, random_state=RANDOM_STATE
)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=val_dataset_size, stratify=y_train, random_state=RANDOM_STATE
)
print(
"Dataset: train ",
X_train.shape,
y_train.shape,
y_train.value_counts(dropna=False, normalize=True).to_dict(),
)
print(
"Dataset: validation ",
X_val.shape,
y_val.shape,
y_val.value_counts(dropna=False, normalize=True).to_dict(),
)
print(
"Dataset: test ",
X_test.shape,
y_test.shape,
y_test.value_counts(dropna=False, normalize=True).to_dict(),
)
# Combine the independent columns with the label
df_train = pd.concat([X_train, y_train], axis=1).reset_index(drop=True)
df_test = pd.concat([X_test, y_test], axis=1).reset_index(drop=True)
df_val = pd.concat([X_val, y_val], axis=1).reset_index(drop=True)
```
We have split the dataset into train, test, and validation datasets. We use the train and validation datasets during the training process, and run Clarify on the test dataset.
In the cell below, we convert the Pandas DataFrames into Hugging Face Datasets for downstream modeling.
```
train_dataset = Dataset.from_pandas(df_train)
# Note: the validation split is passed to the training job as its "test" channel
test_dataset = Dataset.from_pandas(df_val)
```
#### Upload the prepared datasets to S3
Here, we upload the prepared datasets to an S3 bucket so that we can train the model with the Hugging Face Estimator.
```
# S3 key prefix for the datasets
s3_prefix = "samples/datasets/womens_clothing_ecommerce_reviews"
s3 = S3FileSystem()
# save train_dataset to s3
training_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/train"
train_dataset.save_to_disk(training_input_path, fs=s3)
# save test_dataset to s3
test_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/test"
test_dataset.save_to_disk(test_input_path, fs=s3)
```
## Train and Deploy Hugging Face Model
In this step of the workflow, we use the [Hugging Face Estimator](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html) to load the pre-trained `distilbert-base-uncased` model and fine-tune the model on our dataset.
### Train model with Hugging Face estimator
The hyperparameters defined below are parameters that are passed to the custom PyTorch code in [`scripts/train.py`](./scripts/train.py). The only required parameter is `model_name`. The other parameters, such as `epochs` and `train_batch_size`, have default values that can be overridden by setting their values here.
```
# Hyperparameters passed into the training job
hyperparameters = {"epochs": 1, "model_name": "distilbert-base-uncased"}
huggingface_estimator = HuggingFace(
entry_point="train.py",
source_dir="scripts",
instance_type="ml.g4dn.xlarge",
instance_count=1,
transformers_version="4.6.1",
pytorch_version="1.7.1",
py_version="py36",
role=role,
hyperparameters=hyperparameters,
)
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit({"train": training_input_path, "test": test_input_path})
```
### Download the trained model files for model inference
```
! aws s3 cp {huggingface_estimator.model_data} model.tar.gz
! mkdir -p {model_path}
! tar -xvf model.tar.gz -C {model_path}/
```
### Deploy Model
We are going to use the trained model files along with the PyTorch Inference container to deploy the model to a SageMaker endpoint.
```
with tarfile.open("hf_model.tar.gz", mode="w:gz") as archive:
archive.add(model_path, recursive=True)
archive.add("code/")
prefix = s3_prefix.split("/")[-1]
zipped_model_path = sess.upload_data(path="hf_model.tar.gz", key_prefix=prefix + "/hf-model-sm")
model_name = "womens-ecommerce-reviews-model"
endpoint_name = "womens-ecommerce-reviews-endpoint"
model = PyTorchModel(
entry_point="inference.py",
name=model_name,
model_data=zipped_model_path,
role=get_execution_role(),
framework_version="1.7.1",
py_version="py3",
)
predictor = model.deploy(
initial_instance_count=1, instance_type="ml.g4dn.xlarge", endpoint_name=endpoint_name
)
```
#### Test the model endpoint
Let's test the model endpoint to ensure that deployment was successful.
```
test_sentence1 = "A very versatile and cozy top. would look great dressed up or down for a casual comfy fall day. what a fun piece for my wardrobe!"
test_sentence2 = "Love the color! very soft. unique look. can't wait to wear it this fall"
test_sentence3 = (
"These leggings are loose fitting and the quality is just not there.. i am returning the item."
)
test_sentence4 = "Very disappointed the back of this blouse is plain, not as displayed."
predictor = sagemaker.predictor.Predictor(endpoint_name, sess)
predictor.serializer = sagemaker.serializers.CSVSerializer()
predictor.deserializer = sagemaker.deserializers.CSVDeserializer()
predictor.predict([[test_sentence1], [test_sentence2], [test_sentence3], [test_sentence4]])
```
## Model Explainability with SageMaker Clarify for text features
Now that the model is deployed and we are able to get predictions, we are ready to get explanations for text data from a Clarify processing job. For a detailed example that showcases how to use the Clarify processing job, please refer to [this example](https://github.com/aws/amazon-sagemaker-examples/blob/master/sagemaker_processing/fairness_and_explainability/fairness_and_explainability.ipynb); this notebook focuses on getting explanations for text data.
In the cell below, we create the CSV file to pass to the Clarify job. We use 10 samples here to keep the job fast, but you could use the entire dataset. We also filter out any reviews with fewer than 500 characters, since longer reviews visualize better with `sentence`-level granularity (when granularity is `sentence`, each sentence is a feature, and we need a few sentences per review for a good visualization).
```
file_path = "clarify_data.csv"
num_examples = 10
df_test["len"] = df_test["Review Text"].apply(lambda ele: len(ele))
df_test_clarify = pd.DataFrame(
df_test[df_test["len"] > 500].sample(n=num_examples, random_state=RANDOM_STATE),
columns=["Review Text"],
)
df_test_clarify.to_csv(file_path, header=True, index=False)
df_test_clarify
```
### Explaining Predictions
There are expanding business needs and legislative regulations that require explanations of _why_ a model made the decision it did. SageMaker Clarify uses SHAP to explain the contribution that each input feature makes to the final decision.
How does the Kernel SHAP algorithm work? Kernel SHAP is a local explanation method; that is, it explains one instance (row) of the dataset at a time. To explain an instance, it perturbs the feature values - that is, it changes the values of some features to a baseline (or non-informative) value - and then gets predictions from the model for the perturbed samples. It does this a number of times per instance (determined by the optional parameter `num_samples` in `SHAPConfig`) and computes the importance of each feature based on how the model prediction changed.
We are now extending this functionality to text data. In order to explain text, we need the `TextConfig`, an optional parameter of `SHAPConfig` that you must provide if you want explanations for the text features in your dataset. `TextConfig` in turn takes three parameters:
1. `granularity` (required): To explain text features, Clarify further breaks down text into smaller text units, and considers each such text unit as a feature. The parameter `granularity` informs the level to which Clarify will break down the text: `token`, `sentence`, or `paragraph` are the allowed values for `granularity`.
2. `language` (required): the language of the text features. This is required to tokenize the text to break them down to their granular form.
3. `max_top_tokens` (optional): the number of top token attributions that will be shown in the output (needed because the vocabulary can be very large). It defaults to 50.
The Kernel SHAP algorithm requires a baseline (also known as a background dataset). For tabular features, the baseline value for a feature is ideally a non-informative or least informative value for that feature. For text features, however, the baseline is the value you want to replace each individual text unit (token, sentence, or paragraph) with. For instance, in the example below we have chosen `<UNK>` as the baseline for the review text, and `granularity` is `sentence`; every time a sentence has to be replaced in the perturbed inputs, it is replaced with `<UNK>`.
If a baseline is not provided, SageMaker Clarify calculates one automatically for tabular features using K-means or K-prototypes on the input dataset. For text features, if a baseline is not provided, the default replacement value is the string `<PAD>`.
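To make the perturbation idea concrete, here is a small illustration (independent of Clarify, and not how Clarify is invoked) of how a review could be split into sentences and how individual sentences are swapped for the `<UNK>` baseline in perturbed copies. The period-based sentence split below is a deliberately naive stand-in for Clarify's language-aware tokenization.
```
# Illustrative sketch only: mimic sentence-level perturbation with an <UNK> baseline.
import itertools

review = "Love the color! very soft. unique look. can't wait to wear it this fall"
baseline = "<UNK>"

# Naive sentence split; Clarify performs proper language-aware tokenization internally.
sentences = [s.strip() for s in review.replace("!", ".").split(".") if s.strip()]

def perturb(sentences, keep_mask, baseline):
    # Replace every sentence whose mask entry is 0 with the baseline value.
    return " ".join(s if keep else baseline for s, keep in zip(sentences, keep_mask))

# A few example coalitions (1 = keep the sentence, 0 = replace it with the baseline).
for mask in list(itertools.product([0, 1], repeat=len(sentences)))[:6]:
    print(mask, "->", perturb(sentences, mask, baseline))
```
Each perturbed string would be scored by the model endpoint, and Kernel SHAP aggregates how the prediction changes as sentences are kept or replaced into the per-sentence attributions reported below.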
```
clarify_processor = clarify.SageMakerClarifyProcessor(
role=role, instance_count=1, instance_type="ml.m5.xlarge", sagemaker_session=sess
)
model_config = clarify.ModelConfig(
model_name=model_name,
instance_type="ml.m5.xlarge",
instance_count=1,
accept_type="text/csv",
content_type="text/csv",
)
explainability_output_path = "s3://{}/{}/clarify-text-explainability".format(bucket, prefix)
explainability_data_config = clarify.DataConfig(
s3_data_input_path=file_path,
s3_output_path=explainability_output_path,
headers=["Review Text"],
dataset_type="text/csv",
)
shap_config = clarify.SHAPConfig(
baseline=[["<UNK>"]],
num_samples=1000,
agg_method="mean_abs",
save_local_shap_values=True,
text_config=clarify.TextConfig(granularity="sentence", language="english"),
)
# Running the Clarify explainability job involves spinning up a processing job and a model endpoint, which may take a few minutes.
# After that you will see a progress bar for the SHAP computation.
# The size of the dataset (num_examples) and num_samples for SHAP affect the running time.
clarify_processor.run_explainability(
data_config=explainability_data_config,
model_config=model_config,
explainability_config=shap_config,
)
```
### Visualize local explanations
We use Captum to visualize the feature importances computed by Clarify.
First, let's load the local explanations. Local text explanations can be found in the analysis results folder, in a file named `out.jsonl` in the `explanations_shap` directory.
```
local_feature_attributions_file = "out.jsonl"
analysis_results = []
analysis_result = sagemaker.s3.S3Downloader.download(
explainability_output_path + "/explanations_shap/" + local_feature_attributions_file,
local_path="./",
)
shap_out = []
file = sagemaker.s3.S3Downloader.read_file(
explainability_output_path + "/explanations_shap/" + local_feature_attributions_file
)
for line in file.split("\n"):
if line:
shap_out.append(json.loads(line))
```
The local explanations file is a JSON Lines file that contains the explanation of one instance per row. Let's examine the output format of the explanations.
```
print(json.dumps(shap_out[0], indent=2))
```
At the top level of each JSON Line there are two keys: `explanations` and `join_source_value` (the latter is not present here because we did not include a `joinsource` column in the input dataset). `explanations` contains a list of attributions for each feature in the dataset; in this case there is a single element, because the input dataset has a single feature. It also contains details such as `feature_name` and the `data_type` of the feature (indicating whether Clarify inferred the column as numerical, categorical, or text). Each token attribution also contains a `description` field with the token itself and the starting index of the token in the original input, which allows you to reconstruct the original sentence from the output.
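Before building the full visualization, a quick way to sanity-check one record is to print each text unit next to its attribution, sorted by absolute contribution. The sketch below relies only on the `attribution` and `description.partial_text` fields that are also used later in this notebook.
```
# Inspect the first explanation: rank sentences by the magnitude of their attribution.
first_attrs = shap_out[0]["explanations"][0]["attributions"]
ranked = sorted(first_attrs, key=lambda attr: abs(attr["attribution"][0]), reverse=True)
for attr in ranked:
    print(f'{attr["attribution"][0]:+.4f}  {attr["description"]["partial_text"]}')
```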
In the following block, we create a list of attributions and a list of tokens for use in visualizations.
```
attributions_dataset = [
np.array([attr["attribution"][0] for attr in expl["explanations"][0]["attributions"]])
for expl in shap_out
]
tokens_dataset = [
np.array(
[attr["description"]["partial_text"] for attr in expl["explanations"][0]["attributions"]]
)
for expl in shap_out
]
```
We obtain predictions as well so that they can be displayed alongside the feature attributions.
```
preds = predictor.predict([t for t in df_test_clarify.values])
# This function is a wrapper around Captum's visualization utilities that produces local-explanation
# visualizations: token attributions are highlighted in red (negative) or green (positive).
def visualization_record(
attributions, # list of attributions for the tokens
text, # list of tokens
pred, # the prediction value obtained from the endpoint
delta,
true_label, # the true label from the dataset
normalize=True, # normalizes the attributions so that the max absolute value is 1. Yields stronger colors.
max_frac_to_show=0.05, # what fraction of tokens to highlight, set to 1 for all.
match_to_pred=False, # whether to limit highlights to red for negative predictions and green for positive ones.
# By enabling `match_to_pred` you show what tokens contribute to a high/low prediction not those that oppose it.
):
if normalize:
attributions = attributions / max(max(attributions), max(-attributions))
if max_frac_to_show is not None and max_frac_to_show < 1:
num_show = int(max_frac_to_show * attributions.shape[0])
sal = attributions
if pred < 0.5:
sal = -sal
if not match_to_pred:
sal = np.abs(sal)
top_idxs = np.argsort(-sal)[:num_show]
mask = np.zeros_like(attributions)
mask[top_idxs] = 1
attributions = attributions * mask
return visualization.VisualizationDataRecord(
attributions,
pred,
int(pred > 0.5),
true_label,
attributions.sum() > 0,
attributions.sum(),
text,
delta,
)
# You can customize the following display settings
normalize = True
max_frac_to_show = 1
match_to_pred = False
# Pull the true labels for the same rows that were sampled into df_test_clarify
# (same filter and random_state, so the sample is identical).
labels = df_test[df_test["len"] > 500].sample(n=num_examples, random_state=RANDOM_STATE)["Sentiment"].tolist()
vis = []
for attr, token, pred, label in zip(attributions_dataset, tokens_dataset, preds, labels):
vis.append(
visualization_record(
attr, token, float(pred[0]), 0.0, label, normalize, max_frac_to_show, match_to_pred
)
)
```
Now that we have compiled the records, we are ready to render the visualization.
We see a row per review in the selected dataset. For each row we have the prediction, the label, and the highlighted text. Additionally, we show the total sum of attributions (as attribution score) and its label (as attribution label), which indicates whether it is greater than zero.
```
_ = visualization.visualize_text(vis)
```
# Cleanup
Finally, please remember to delete the Amazon SageMaker endpoint to avoid charges:
```
predictor.delete_endpoint()
```
```
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
import pydotplus
from six import StringIO
from IPython.display import Image
from sklearn.tree import export_graphviz
from sklearn.naive_bayes import BernoulliNB , GaussianNB ,MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.svm import SVC
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import RidgeClassifier
from sklearn.linear_model import LassoCV
from sklearn.metrics import classification_report, confusion_matrix
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv('IPL Matches 2008-2020.csv')
df = df[['team1' , 'team2' , 'venue' , 'toss_winner' ,'toss_decision' , 'winner' , 'neutral_venue']]
df.head()
df.team1.unique()
df.venue.unique()
# Keeping only consistent teams
consistent_teams = ['Kolkata Knight Riders', 'Chennai Super Kings', 'Rajasthan Royals',
'Mumbai Indians', 'Kings XI Punjab', 'Royal Challengers Bangalore',
'Delhi Daredevils', 'Sunrisers Hyderabad' , 'Delhi Capitals']
consistent_venues = ['M Chinnaswamy Stadium',
'Punjab Cricket Association Stadium, Mohali', 'Feroz Shah Kotla', 'Wankhede Stadium', 'Sawai Mansingh Stadium',
'MA Chidambaram Stadium, Chepauk', 'Eden Gardens', 'Dr DY Patil Sports Academy', 'Brabourne Stadium',
'Sardar Patel Stadium, Motera', 'Himachal Pradesh Cricket Association Stadium', 'Subrata Roy Sahara Stadium',
'Rajiv Gandhi International Stadium, Uppal', 'Shaheed Veer Narayan Singh International Stadium',
'JSCA International Stadium Complex', 'Barabati Stadium', 'Maharashtra Cricket Association Stadium',
'Dr. Y.S. Rajasekhara Reddy ACA-VDCA Cricket Stadium', 'Punjab Cricket Association IS Bindra Stadium, Mohali',
'M.Chinnaswamy Stadium', 'Holkar Cricket Stadium', 'Vidarbha Cricket Association Stadium, Jamtha', 'Nehru Stadium',
'Saurashtra Cricket Association Stadium']
df = df[(df['team1'].isin(consistent_teams)) & (df['team2'].isin(consistent_teams)) & (df['neutral_venue']==0) & (df['venue'].isin(consistent_venues))]
df.drop('neutral_venue', inplace=True , axis=1)
# Delhi Capitals and Delhi Daredevils are the same franchise, so merge them
df.team1.unique()
df['team1'] = np.where(df.team1 == 'Delhi Daredevils' , 'Delhi Capitals' , df.team1)
df['team2'] = np.where(df.team2 == 'Delhi Daredevils' , 'Delhi Capitals' , df.team2)
df['toss_winner'] = np.where(df.toss_winner == 'Delhi Daredevils' , 'Delhi Capitals' , df.toss_winner)
df['winner'] = np.where(df.winner == 'Delhi Daredevils' , 'Delhi Capitals' , df.winner)
# 'M Chinnaswamy Stadium' and 'M.Chinnaswamy Stadium' are the same venue
# 'Punjab Cricket Association IS Bindra Stadium, Mohali' and 'Punjab Cricket Association Stadium, Mohali' are the same venue
df.venue.unique()
df['venue'] = np.where(df.venue == 'M.Chinnaswamy Stadium' , 'M Chinnaswamy Stadium' , df.venue)
df['venue'] = np.where(df.venue == 'Punjab Cricket Association IS Bindra Stadium, Mohali' , 'Punjab Cricket Association Stadium, Mohali' , df.venue)
df.venue.unique()
df.head()
def getNumber_team(x):
if x=='Royal Challengers Bangalore':
return 0
elif x=='Kings XI Punjab':
return 1
elif x=='Delhi Capitals':
return 2
elif x=='Mumbai Indians':
return 3
elif x=='Rajasthan Royals':
return 4
elif x=='Chennai Super Kings':
return 5
elif x=='Kolkata Knight Riders':
return 6
else:
return 7
df['team1'] = df['team1'].apply(getNumber_team)
df['team2'] = df['team2'].apply(getNumber_team)
df['toss_winner'] = df['toss_winner'].apply(getNumber_team)
df['winner'] = df['winner'].apply(getNumber_team)
df.venue.unique()
def getNumber_venue(x):
if x=='M Chinnaswamy Stadium':
return 1
elif x=='Punjab Cricket Association Stadium, Mohali':
return 2
elif x=='Feroz Shah Kotla':
return 3
elif x=='Wankhede Stadium':
return 4
elif x=='Sawai Mansingh Stadium':
return 5
elif x=='MA Chidambaram Stadium, Chepauk':
return 6
elif x=='Eden Gardens':
return 7
elif x=='Dr DY Patil Sports Academy':
return 8
elif x=='Brabourne Stadium':
return 9
elif x=='Sardar Patel Stadium, Motera':
return 10
elif x=='Himachal Pradesh Cricket Association Stadium':
return 11
elif x=='Subrata Roy Sahara Stadium':
return 12
elif x=='Rajiv Gandhi International Stadium, Uppal':
return 13
elif x=='Shaheed Veer Narayan Singh International Stadium':
return 14
elif x=='JSCA International Stadium Complex':
return 15
elif x=='Maharashtra Cricket Association Stadium':
return 16
elif x=='Dr. Y.S. Rajasekhara Reddy ACA-VDCA Cricket Stadium':
return 17
elif x=='Barabati Stadium':
return 18
else:
return 19
df['venue'] = df['venue'].apply(getNumber_venue)
def getNumber_tossDecision(x):
if x=='field':
return 0
else:
return 1
df['toss_decision'] = df['toss_decision'].apply(getNumber_tossDecision)
df.dtypes
df
df.corr()
import seaborn as sns
sns.heatmap(df.corr())
X = df.drop(labels='winner', axis=1)
y = df['winner']
X = np.array(X)
y = np.array(y)
```
```
# Convert the multiclass 'winner' label into a binary target:
#   0 -> the team in column 0 (team1) won, 1 -> the team in column 1 (team2) won.
# To keep the classes balanced, only roughly the first 250 team1 wins are kept as class 0;
# for the rest, team1 and team2 are swapped so the winner becomes class 1.
zeros = 0
for i in range(len(X)):
    if y[i] == X[i][0]:  # team1 won this match
        if zeros <= 250:
            y[i] = 0
            zeros = zeros + 1
        else:
            y[i] = 1
            t = X[i][0]  # swap team1 and team2 so the winner sits in column 1
            X[i][0] = X[i][1]
            X[i][1] = t
    else:  # team2 won this match
        y[i] = 1
# Re-encode toss_winner relative to the two teams: 0 if team1 won the toss, else 1
for i in range(len(X)):
    if X[i][3] == X[i][0]:
        X[i][3] = 0
    else:
        X[i][3] = 1
X = np.array(X , dtype='int32')
y = np.array(y , dtype='int32')
y = y.ravel()
print(np.unique(y, return_counts=True))
# The two classes are now roughly balanced
from sklearn.model_selection import train_test_split
X_train, X_test, y_train,y_test = train_test_split(X, y, test_size=0.2 , random_state=0)
alg1 = LogisticRegression(solver='liblinear')
start = time.time()
alg1.fit(X_train , y_train)
end = time.time()
total_time1 = end - start
y_pred1 = alg1.predict(X_test)
print('accuracy : ', alg1.score(X_test , y_test))
print('time : ' , total_time1)
print(classification_report(y_test , y_pred1))
print(confusion_matrix(y_test , y_pred1))
alg2 = RandomForestClassifier(n_estimators=60)
start = time.time()
alg2.fit(X_train , y_train)
end = time.time()
total_time2 = end - start
y_pred2 = alg2.predict(X_test)
print('accuracy : ', alg2.score(X_test , y_test))
print('time : ' , total_time2)
print(classification_report(y_test , y_pred2))
print(confusion_matrix(y_test , y_pred2))
alg3 = DecisionTreeClassifier(max_depth=1 , criterion='gini')
start = time.time()
alg3.fit(X_train , y_train)
end = time.time()
total_time3 = end - start
y_pred3 = alg3.predict(X_test)
print('accuracy : ', alg3.score(X_test , y_test))
print('time : ' , total_time3)
print(classification_report(y_test , y_pred3))
print(confusion_matrix(y_test , y_pred3))
# Render the decision tree along with its feature names
dot_data = StringIO()
export_graphviz(alg3, out_file=dot_data, filled=True, rounded=True, special_characters=True, feature_names = ['team1', 'team2', 'venue','toss_winner', 'toss_decision'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
dot_data = export_graphviz(alg3, out_file=None,
feature_names=['team1', 'team2', 'venue', 'toss_winner', 'toss_decision'])
graph = pydotplus.graph_from_dot_data(dot_data)
graph.write_pdf("ipl_winner_decision_tree.pdf")
alg4 = BernoulliNB()
start = time.time()
alg4.fit(X_train,y_train)
end = time.time()
total_time4 = end - start
y_pred4 = alg4.predict(X_test)
print('accuracy : ', alg4.score(X_test , y_test))
print('time : ' , total_time4)
print(classification_report(y_test , y_pred4))
print(confusion_matrix(y_test , y_pred4))
alg5 = GaussianNB()
start = time.time()
alg5.fit(X_train,y_train)
end = time.time()
total_time5 = end - start
y_pred5 = alg5.predict(X_test)
print('accuracy : ', alg5.score(X_test , y_test))
print('time : ' , total_time5)
print(classification_report(y_test , y_pred5))
print(confusion_matrix(y_test , y_pred5))
alg6 = MultinomialNB()
start = time.time()
alg6.fit(X_train,y_train)
end = time.time()
total_time6 = end - start
y_pred6 = alg6.predict(X_test)
print('accuracy : ', alg6.score(X_test , y_test))
print('time : ' , total_time6)
print(classification_report(y_test , y_pred6))
print(confusion_matrix(y_test , y_pred6))
x_axis = []
y_axis = []
for k in range(1, 26, 2):
clf = KNeighborsClassifier(n_neighbors = k)
score = cross_val_score(clf, X_train, y_train, cv = KFold(n_splits=5, shuffle=True, random_state=0))
x_axis.append(k)
y_axis.append(score.mean())
import matplotlib.pyplot as plt
plt.plot(x_axis, y_axis)
plt.xlabel("k")
plt.ylabel("cross_val_score")
plt.title("variation of score on different values of k")
plt.show()
alg7 = KNeighborsClassifier(n_neighbors=19, weights='distance', algorithm='auto', p=2, metric='minkowski')
start = time.time()
alg7.fit(X_train, y_train)
end = time.time()
total_time7 = end - start
y_pred7 = alg7.predict(X_test)
print('accuracy : ', alg7.score(X_test , y_test))
print('time : ' , total_time7)
print(classification_report(y_test , y_pred7))
print(confusion_matrix(y_test , y_pred7))
clf = SVC(kernel='rbf')
grid = {'C': [1e2,1e3, 5e3, 1e4, 5e4, 1e5],
'gamma': [1e-3, 5e-4, 1e-4, 5e-3]}
alg8 = GridSearchCV(clf, grid)
start = time.time()
alg8.fit(X_train, y_train)
end = time.time()
total_time8 = end - start
y_pred8 = alg8.predict(X_test)
print(alg8.best_estimator_)
print('accuracy : ', alg8.score(X_test , y_test))
print('time : ' , total_time8)
print(classification_report(y_test , y_pred8))
print(confusion_matrix(y_test , y_pred8))
alg9 = LinearSVC(multi_class='crammer_singer')
start = time.time()
alg9.fit(X_train, y_train)
end = time.time()
total_time9 = end - start
y_pred9 = alg9.predict(X_test)
print('accuracy : ', alg9.score(X_test , y_test))
print('time : ' , total_time9)
print(classification_report(y_test , y_pred9))
print(confusion_matrix(y_test , y_pred9))
ridge = RidgeClassifier()
parameters={'alpha':[1e-15,1e-10,1e-8,1e-3,1e-2,1,5,10,20,30,35,40]}
alg10=GridSearchCV(ridge,parameters)
start = time.time()
alg10.fit(X_train, y_train)
end = time.time()
total_time10 = end - start
y_pred10 = alg10.predict(X_test)
print('accuracy : ', alg10.score(X_test , y_test))
print('time : ' , total_time10)
print(classification_report(y_test , y_pred10))
print(confusion_matrix(y_test , y_pred10))
# Feature order: [team1, team2, venue, toss_winner (0 = team1 won the toss), toss_decision (1 = bat)]
test = np.array([2, 4, 1, 1, 1]).reshape(1,-1)
print('alg1 : ' , alg1.predict(test))
print('alg2 : ' , alg2.predict(test))
print('alg3 : ' , alg3.predict(test))
print('alg4 : ' , alg4.predict(test))
print('alg5 : ' , alg5.predict(test))
print('alg6 : ' , alg6.predict(test))
print('alg7 : ' , alg7.predict(test))
print('alg8 : ' , alg8.predict(test))
print('alg9 : ' , alg9.predict(test))
print('alg10 :' , alg10.predict(test))
test = np.array([4, 2, 1, 0, 1]).reshape(1,-1)
print('alg1 : ' , alg1.predict(test))
print('alg2 : ' , alg2.predict(test))
print('alg3 : ' , alg3.predict(test))
print('alg4 : ' , alg4.predict(test))
print('alg5 : ' , alg5.predict(test))
print('alg6 : ' , alg6.predict(test))
print('alg7 : ' , alg7.predict(test))
print('alg8 : ' , alg8.predict(test))
print('alg9 : ' , alg9.predict(test))
print('alg10 :' , alg10.predict(test))
df_model=pd.DataFrame({
'Model_Applied':['Logistic_Regression', 'Random_Forest', 'Decision_tree', 'BernoulliNB', 'GaussianNB', 'MultinomialNB', 'KNN', 'SVC', 'Linear_SVC', 'Ridge_Classifier'],
'Accuracy':[alg1.score(X_test,y_test), alg2.score(X_test,y_test), alg3.score(X_test,y_test), alg4.score(X_test,y_test),
alg5.score(X_test,y_test), alg6.score(X_test,y_test), alg7.score(X_test,y_test), alg8.score(X_test,y_test),
alg9.score(X_test,y_test), alg10.score(X_test,y_test)],
'Training_Time':[total_time1, total_time2, total_time3, total_time4, total_time5, total_time6, total_time7, total_time8,
total_time9, total_time10]})
df_model
df_model.plot(kind='bar',x='Model_Applied', ylim=[0,1] , y='Accuracy', figsize=(10,10) , ylabel='Accuracy', title='Accuracy comparison of different Models')
df_model.plot(kind='bar',x='Model_Applied', ylim=[0,0.14] , y='Training_Time', figsize=(10,10), ylabel='Training Time', title='Training time comparison of different Models')
import pickle as pkl
with open('winner.pkl', 'wb') as f:
pkl.dump(alg3, f)
with open('winner.pkl', 'rb') as f:
model = pkl.load(f)
model.predict(test)
```
# Installation
- Run these commands
- git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
- cd Monk_Object_Detection/6_cornernet_lite/installation
- Select the right requirements file and run
- chmod +x install.sh
- ./install.sh
# About the network
1. Paper on CornerNet: https://arxiv.org/abs/1808.01244
2. Paper on CornerNet-Lite: https://arxiv.org/abs/1904.08900
3. Blog 1 on CornerNet: https://joshua19881228.github.io/2019-01-20-CornerNet/
4. Blog 2 on CornerNet: https://zhangtemplar.github.io/anchor-free-detection/
5. Blog 3 on CornerNet: https://opencv.org/latest-trends-of-object-detection-from-cornernet-to-centernet-explained-part-i-cornernet/
6. Blog 4 on CornerNet: https://towardsdatascience.com/centernet-keypoint-triplets-for-object-detection-review-a314a8e4d4b0
7. Blog 5 on CornerNet: https://medium.com/@andersasac/the-end-of-anchors-improving-object-detection-models-and-annotations-73828c7b39f6
# COCO Format - 1
## Dataset Directory Structure
../sample_dataset (root_dir)
|
|------ship (coco_dir)
| |
| |----images (img_dir)
| |
| |------Train (set_dir) (Train)
| |
| |---------img1.jpg
| |---------img2.jpg
| |---------..........(and so on)
|
|
| |---annotations
| |----|
| |--------------------instances_Train.json (instances_<set_dir>.json)
| |--------------------classes.txt
- instances_Train.json -> In proper COCO format
- classes.txt -> A list of classes in alphabetical order
For TrainSet
- root_dir = "../sample_dataset";
- coco_dir = "ship";
- img_dir = "images";
- set_dir = "Train";
Note: The annotation file name must match the set_dir, i.e. instances_<set_dir>.json (here, instances_Train.json)
# COCO Format - 2
## Dataset Directory Structure
../sample_dataset (root_dir)
|
|------ship (coco_dir)
| |
| |---ImagesTrain (set_dir)
| |----|
| |-------------------img1.jpg
| |-------------------img2.jpg
| |-------------------.........(and so on)
|
|
| |---annotations
| |----|
| |--------------------instances_ImagesTrain.json (instances_<set_dir>.json)
| |--------------------classes.txt
- instances_ImagesTrain.json -> In proper COCO format
- classes.txt -> A list of classes in alphabetical order
For TrainSet
- root_dir = "../sample_dataset";
- coco_dir = "ship";
- img_dir = "./";
- set_dir = "ImagesTrain";
Note: The annotation file name must match the set_dir, i.e. instances_<set_dir>.json (here, instances_ImagesTrain.json)
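Before launching training, it can help to sanity-check that a dataset actually matches one of the two layouts above. The hypothetical helper below (not part of Monk Object Detection) simply verifies that the image folder, the matching instances_<set_dir>.json, and classes.txt exist for the root_dir/coco_dir/img_dir/set_dir values you intend to pass.
```
import os

def check_coco_layout(root_dir, coco_dir, img_dir, set_dir):
    # Hypothetical helper: verify the directory layout described above.
    image_folder = os.path.join(root_dir, coco_dir, img_dir, set_dir)
    ann_file = os.path.join(root_dir, coco_dir, "annotations", "instances_{}.json".format(set_dir))
    classes_file = os.path.join(root_dir, coco_dir, "annotations", "classes.txt")
    for path in (image_folder, ann_file, classes_file):
        print(("OK      " if os.path.exists(path) else "MISSING ") + path)

# COCO Format - 1 example values from above
check_coco_layout("../sample_dataset", "ship", "images", "Train")
# COCO Format - 2 example values from above
check_coco_layout("../sample_dataset", "ship", "./", "ImagesTrain")
```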
# Sample Dataset Credits
credits: https://github.com/experiencor/kangaroo
```
import os
import sys
sys.path.append("../../6_cornernet_lite/lib/")
from train_detector import Detector
gtf = Detector();
root_dir = "../sample_dataset";
coco_dir = "kangaroo"
img_dir = "/"
set_dir = "Images"
gtf.Train_Dataset(root_dir, coco_dir, img_dir, set_dir, batch_size=4, use_gpu=True, num_workers=4)
gtf.Model(model_name="CornerNet_Saccade")
gtf.Hyper_Params(lr=0.00025, total_iterations=1000)
gtf.Setup();
gtf.Train();
```
# Inference
```
import os
import sys
sys.path.append("../../6_cornernet_lite/lib/")
from infer_detector import Infer
gtf = Infer();
class_list = ["kangaroo"]
gtf.Model(class_list,
base="CornerNet_Saccade",
model_path="./cache/nnet/CornerNet_Saccade/CornerNet_Saccade_final.pkl")
boxes = gtf.Predict("../sample_dataset/kangaroo/test/kg1.jpeg", vis_thresh=0.3, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
boxes = gtf.Predict("../sample_dataset/kangaroo/test/kg4.jpeg", vis_thresh=0.15, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
boxes = gtf.Predict("../sample_dataset/kangaroo/test/kg3.jpeg", vis_thresh=0.3, output_img="output.jpg")
from IPython.display import Image
Image(filename='output.jpg')
```
```
# Good resources
# https://colab.research.google.com/github/deepmind/deepmind-research/blob/master/polygen/training.ipynb
# https://towardsdatascience.com/generating-3d-models-with-polygen-and-pytorch-4895f3f61a2e
# https://pytorch3d.org/tutorials/render_textured_meshes
```
## Attempt 2 (Using PyTorch3D)
```
import sys
import torch
pyt_version_str=torch.__version__.split("+")[0].replace(".", "")
# print(pyt_version_str)
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
f"_pyt{pyt_version_str}"
])
!pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
import pytorch3d as p3d
print(p3d.__version__)
!pip uninstall pytorch3d
import os
import torch
import numpy as np
from tqdm.notebook import tqdm
import imageio
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage import img_as_ubyte
# io utils
from pytorch3d.io import load_obj,load_objs_as_meshes
# datastructures
from pytorch3d.structures import Meshes
# 3D transformations functions
from pytorch3d.transforms import Rotate, Translate
# rendering components
from pytorch3d.renderer import (
FoVPerspectiveCameras, look_at_view_transform, look_at_rotation,
RasterizationSettings, MeshRenderer, MeshRasterizer, BlendParams,
SoftSilhouetteShader, HardPhongShader, PointLights, TexturesVertex,
)
# verts_l, faces_l, aux_l = load_obj(f'{data_path}/{fname}/{fname}_lower.obj')
# verts_u, faces_u, aux_u = load_obj(f'{data_path}/{fname}/{fname}_upper.obj')
# verts_u, faces_u, _= load_obj('/content/016KWDMV_upper.obj')
!wget https://dl.fbaipublicfiles.com/pytorch3d/data/teapot/teapot.obj
# verts_u, faces_u, _ = load_obj("/content/teapot.obj")
verts_u, faces_u, _= load_obj('/content/016KWDMV_upper.obj')
print(verts_u.shape)
print(faces_u.textures_idx.shape)
# print(aux_u)
# !rm -rf /content/data
# device = torch.device("cuda:0")
# verts_u, faces_u, _= load_obj('/content/016KWDMV_upper.obj')
faces = faces_u.verts_idx
# Initialize each vertex to be white in color.
verts_rgb = torch.ones_like(verts_u)[None] # (1, V, 3)
textures = TexturesVertex(verts_features=verts_rgb.to('cuda'))
# Create a Meshes object for the teapot. Here we have only one mesh in the batch.
mesh = Meshes(
verts=[verts_u.to('cuda')],
faces=[faces.to('cuda')],
textures=textures
)
print(verts_u)
print(verts_rgb)
# plt.figure(figsize=(7,7))
# print(mesh.textures)
# # texturesuv_image_matplotlib(mesh.textures, subsample=None)
# plt.axis("off");
# Initialize a perspective camera.
cameras = FoVPerspectiveCameras(device='cuda')
# To blend the 100 faces we set a few parameters which control the opacity and the sharpness of
# edges. Refer to blending.py for more details.
blend_params = BlendParams(sigma=1e-4, gamma=1e-4)
# Define the settings for rasterization and shading. Here we set the output image to be of size
# 256x256. To form the blended image we use 100 faces for each pixel. Leaving bin_size and max_faces_per_bin at their default of None ensures that
# the faster coarse-to-fine rasterization method is used. Refer to rasterize_meshes.py for
# explanations of these parameters. Refer to docs/notes/renderer.md for an explanation of
# the difference between naive and coarse-to-fine rasterization.
raster_settings = RasterizationSettings(
image_size=256,
blur_radius=np.log(1. / 1e-4 - 1.) * blend_params.sigma,
faces_per_pixel=100,
)
# Create a silhouette mesh renderer by composing a rasterizer and a shader.
silhouette_renderer = MeshRenderer(
rasterizer=MeshRasterizer(
cameras=cameras,
raster_settings=raster_settings
),
shader=SoftSilhouetteShader(blend_params=blend_params)
)
# We will also create a Phong renderer. This is simpler and only needs to render one face per pixel.
raster_settings = RasterizationSettings(
image_size=256,
blur_radius=0.0,
faces_per_pixel=1,
)
# We can add a point light in front of the object.
lights = PointLights(device='cuda', location=((2.0, 2.0, -2.0),))
phong_renderer = MeshRenderer(
rasterizer=MeshRasterizer(
cameras=cameras,
raster_settings=raster_settings
),
shader=HardPhongShader(device='cuda', cameras=cameras, lights=lights)
)
# Select the viewpoint using spherical angles
distance = 10 # distance from camera to the object
elevation = 0.0 # angle of elevation in degrees
azimuth = 0.0 # No rotation so the camera is positioned on the +Z axis.
# Get the position of the camera based on the spherical angles
R, T = look_at_view_transform(distance, elevation, azimuth, device='cuda')
# Render the teapot providing the values of R and T.
silhouette = silhouette_renderer(meshes_world=mesh, R=R, T=T)
image_ref = phong_renderer(meshes_world=mesh, R=R, T=T)
silhouette = silhouette.cpu().numpy()
image_ref = image_ref.cpu().numpy()
plt.figure(figsize=(16, 16))
plt.subplot(1, 2, 1)
plt.imshow(silhouette.squeeze()[..., 3]) # only plot the alpha channel of the RGBA image
plt.grid(False)
plt.subplot(1, 2, 2)
plt.imshow(image_ref.squeeze())
plt.grid(False)
```
# "Backtesting with your own custom strategy"
> "Write your own buy and sell signals from custom indicators and built-in indicators"
- toc: true
- branch: master
- badges: true
- comments: true
- author: Benj Del Mundo
- categories: [backtest, custom strategy]
# Overview
In this example, we will
1. Create a new indicator outside fastquant (this could be anything from time-series methods to machine-learning-based methods)
2. Combine our new indicator with some built-in indicators in our strategy
3. Use multiple conditions on buy and sell signals
```
# uncomment to install in colab
# !pip3 install fastquant
from fastquant import backtest,get_stock_data
import numpy as np
df = get_stock_data("AAPL",start_date='2019-01-01',end_date='2019-06-01')
df.head()
```
### Create our own custom indicator. In this case, we'll implement the Arnaud Legoux Moving Average (ALMA) using SciPy's 1-D convolution
> Arnaud Legoux Moving Average (ALMA) removes small price fluctuations and enhances the trend by applying a moving average twice, once from left to right, and once from right to left. At the end of this process the phase shift (price lag) commonly associated with moving averages is significantly reduced
(https://www.interactivebrokers.com/en/software/tws/usersguidebook/technicalanalytics/arnaudlegoux.htm)
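For reference, the weights built in the code below are the usual Gaussian-window definition of ALMA (written here in its common textbook form; the implementation below applies the same weights with a centered convolution via `scipy.ndimage.convolve1d`, so the edge handling differs slightly from the trailing-window version):

$$m = \text{offset}\,(n-1), \qquad s = \frac{n}{\sigma}, \qquad w_j = \exp\!\left(-\frac{(j-m)^2}{2s^2}\right), \quad j = 0,\dots,n-1$$

$$\text{ALMA}_t = \frac{\sum_{j=0}^{n-1} w_j\,P_{t-(n-1)+j}}{\sum_{j=0}^{n-1} w_j}$$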
```
from scipy.ndimage import convolve1d as conv
def alma_indicator(data,window=9,offset=0.85,sigma=6):
    m = int(offset * (window - 1))  # center of the Gaussian window (standard ALMA definition)
s = window/sigma
dss = 2*s*s
wtds = np.exp(-(np.arange(window) - m)**2/dss)
return conv(data, weights=wtds/wtds.sum(),axis=0, mode='nearest')
%matplotlib inline
df["alma"] = alma_indicator(df.close, window=9,offset=0.85,sigma=6)
df["sma"] = df.close.rolling(9).mean()
df[["close","alma","sma"]].plot(figsize=(30,10),title="Comparison of SMA(9) vs ALMA(9)")
df.head()
```
## Implementing our custom strategy
In this strategy we will have the following signals:
Buy on:
- Closing price is above ALMA
- MACD crosses above the MACD signal line
Sell on:
- Closing price falls below ALMA
```
from fastquant import CustomStrategy, BaseStrategy
from fastquant.indicators import MACD, CrossOver
from fastquant.indicators.custom import CustomIndicator
# Create a subclass of BaseStrategy; we call it MAMAStrategy (MACD + ALMA)
class MAMAStrategy(BaseStrategy):
params = (
("alma_column", "alma"), # name for the ALMA column from the dataframe
("macd_fast_period", 12), # period for the MACD
("macd_slow_period", 16),
("macd_signal_period",9)
)
def __init__(self):
# Initialize global variables
super().__init__()
# Setup MACD indicator parameters
self.macd_fast_period = self.params.macd_fast_period
self.macd_slow_period = self.params.macd_slow_period
self.macd_signal_period = self.params.macd_signal_period
# Setup MACD indicator, macd line and macd signal line, and macd signal line crossover
self.macd_ind = MACD(
period_me1=self.macd_fast_period,
period_me2=self.macd_slow_period,
period_signal=self.macd_signal_period
)
self.macd = self.macd_ind.macd
self.macd_signal = self.macd_ind.signal
# Add signal line cross over
self.macd_signal_crossover = CrossOver(
self.macd_ind, self.macd_signal
)
# Assign ALMA column from the dataframe
self.alma_column = self.params.alma_column
# Set ALMA indicator from the alma column of data
self.alma = CustomIndicator(
self.data, custom_column=self.alma_column,
)
# Plot the ALMA indicator along with the price instead of a separate plot
self.alma.plotinfo.subplot = False
self.alma.plotinfo.plotname = "ALMA"
print("===Strategy level arguments===")
print("PARAMS: ", self.params)
    # Buy when the close is above ALMA and MACD crosses above its signal line; sell when the close falls below ALMA
def buy_signal(self):
alma_buy = self.dataclose[0] > self.alma[0] # Close is above ALMA
macd_buy = self.macd_signal_crossover > 0 # MACD crosses signal line upward
return alma_buy and macd_buy
def sell_signal(self):
return self.alma[0] > self.dataclose[0]
%matplotlib inline
result, history = backtest(MAMAStrategy,df, verbose=False, return_history=True)
# result
result
history['orders']
history['indicators']
```
```
from pyaccelerator import *
import numpy as np
import matplotlib.pyplot as plt
drift_l = 5 # m
focal_length = 10 # meters
FODO_cell = [QuadrupoleThin(focal_length * 2),
Drift(drift_l),
QuadrupoleThin(-focal_length),
Drift(drift_l),
QuadrupoleThin(focal_length * 2)]
fodo = Lattice(FODO_cell)
fodo
fodo.plot()
n_fodo = 8
n_dipole = n_fodo # one dipole after each fodo
curve_perimeter = 120 # m
dip_theta = 2 * np.pi / n_dipole
dip_rho = curve_perimeter / (n_dipole * dip_theta)
drift_l = 10
dip = Dipole(dip_rho, dip_theta)
sequence = (FODO_cell + [dip, Drift(drift_l)]) * n_fodo
lattice = Lattice(sequence)
lattice.plot()
lattice.plot.top_down()
lattice.dispersion_solution()
tracked = lattice.dispersion()
tracked.plot()
lattice.plot()
```
# FODO dispersion
```
circumference = 1000 # meters
proton_energy = 15 # GeV
dipole_length = 5 # meters
dipole_B_max = 2 # T
n_cells = 8 # ??
dipole_angle = np.pi / 16 # ??
quad_length = 3
quad_strength = 8.89e-3 / quad_length
dipole_length = 5
dipole_angle = np.pi / 16
dipole_bending_radius = dipole_length / dipole_angle
# reduce the drift lengths to compensate for the now thick elements
drift_length = (circumference / n_cells - (2 * quad_length) - (4 * dipole_length)) / 6
half_quad_f = Quadrupole(quad_strength, quad_length/2, name="quad_f")
quad_d = Quadrupole(-quad_strength, quad_length, name="quad_d")
dipole = Dipole(dipole_bending_radius, dipole_angle)
drift = Drift(drift_length)
# We take the same FODO as in exercise 1 and add some quadrupoles
FODO_thick = Lattice([half_quad_f, drift, dipole, drift, dipole, drift,
quad_d, drift, dipole, drift, dipole, drift, half_quad_f])
FODO_thick.plot()
FODO_thick.m
FODO_thick.dispersion_solution()
(FODO_thick * 8).plot.top_down()
tracked = FODO_thick.dispersion()
plt.plot(tracked.s, tracked.x, label="dispersion")
plt.legend()
```
# particles with energy spread
```
beam = Beam(n_particles=50, sigma_energy=1e-6)
beam
transported = FODO_thick.transport(beam.match(FODO_thick.twiss_solution()))
plt.plot(transported.s, transported.x.T);
s, disp, *_ = FODO_thick.dispersion()
plt.plot(s, disp)
FODO_thick.plot()
```
# dp/p orbit
```
lat = Lattice([Dipole(1, np.pi/2)])
tracked = lat.slice(Dipole, 100).transport([[0, 0.1], [0, 0], [0, 0], [0, 0], [0, 0.1]])
plt.plot(tracked.s, tracked.x.T)
# top down view, projecting around the bend
x_circle = np.cos(tracked.s) + tracked.x * np.cos(tracked.s)
y_circle = np.sin(tracked.s) + tracked.x * np.sin(tracked.s)
fig, ax = plt.subplots(1, 1)
ax.plot(x_circle.T, y_circle.T)
ax.set_aspect("equal")
```
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys
from pathlib import Path
sys.path.append(str(Path.cwd().parent))
from typing import Tuple
import numpy as np
import pandas as pd
from statsmodels.graphics import tsaplots
from load_dataset import Dataset
import matplotlib.pyplot as plt
import plotting
from statsmodels.tsa.stattools import adfuller
```
### Example of how STL works
Do not import or run the cells below until the "Task" block
```
from stl import detect_ts, extract_trend, extract_seasonality
```
#### Let's take a typical example of a series with a trend and seasonality
```
dataset = Dataset('../data/dataset/')
ts = dataset['stl_example.csv']
ts.plot(figsize=(10, 5))
```
#### Extract the linear trend
```
trend = extract_trend(ts)[0]
trend.plot()
```
#### Subtract the trend from the original series
```
ts_detrended = ts - trend
ts_detrended.plot()
```
#### Extract the seasonality from the resulting series
```
season = extract_seasonality(ts_detrended, period=6)
season
season.plot(figsize=(10, 5))
```
#### Subtract the seasonality from ts_detrended to obtain the residuals
```
resid = ts_detrended - season
plotting.plot_ts(resid)
```
#### Since we have removed the trend and seasonality from the series, the resulting residuals should in principle be stationary. Let's check this with the Dickey-Fuller test (see the note after the next cell).
```
adfuller(resid.dropna())
```
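The p-value is the second element of the tuple returned by `adfuller`; a quick sanity check (assuming the conventional 5% significance level):

```
# Assumption: the usual 5% significance level for the ADF test
p_value = adfuller(resid.dropna())[1]
print("stationary" if p_value < 0.05 else "non-stationary", p_value)
```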
### Task 1 - implement a "naive" STL decomposition:
Series - stl_example.csv
1. Approximate the series with a linear trend.
2. Subtract the linear trend from the series.
3. Find the seasonality period of the resulting series from the correlogram.
4. Extract the seasonality of the detrended series in one of two ways:
   a) with a median filter whose window equals period/k, with k tuned by hand (usually 2-3)
   b) by differencing the series with a lag equal to the seasonality period found above and subtracting the resulting series from the original
   c)* think about the pros and cons of each approach
5. Subtract the trend and seasonality to obtain the residuals.
6. Check the residuals for stationarity.
detect_ts should return a tuple of: (trend, seasonality, residuals); a sketch assembling these pieces is shown right after the solution cells below.
```
def extract_trend(ts: pd.Series):
"""
    Extracts the linear trend from a time series
    """
    # <your code here>
k, b = np.polyfit(range(len(ts)), ts.values, 1)
trend = pd.Series(k * np.array(range(len(ts))) + b, index=ts.index)
return trend, k, b
trend = extract_trend(ts)[0]
detrended_ts = ts - trend
plotting.plot_ts(detrended_ts)
tsaplots.plot_acf(detrended_ts);
period = 6
def extract_seasonality(ts_detrended, period=None):
"""
    Extracts the seasonal component
    """
    # <your code here>
smoothing_window = period // 3
season = ts_detrended.rolling(smoothing_window, center=True).median()
return season
season = extract_seasonality(ts_detrended, period=period)
plotting.plot_ts(season)
def extract_seasonality_diff(ts_detrended, period=None):
"""
    Extracts the seasonal component using differencing
    """
    # <your code here>
resid = ts_detrended.diff(period)
season = ts_detrended - resid
return season
season = extract_seasonality_diff(ts_detrended, period=6)
season.plot()
res = ts - trend - season
plotting.plot_ts(res)
plotting.plot_ts(ts)
adfuller(res.dropna())[1]
```
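Putting the pieces together, a minimal `detect_ts` that returns the tuple asked for in the task. This sketch simply chains the helper functions defined above (it is not necessarily identical to the implementation imported from the `stl` module earlier); the period is assumed to be the one read off the correlogram:

```
def detect_ts(ts: pd.Series, period: int = 6):
    """Naive STL: linear trend + median-filter seasonality + residuals."""
    trend = extract_trend(ts)[0]
    detrended = ts - trend
    season = extract_seasonality(detrended, period=period)
    resid = detrended - season
    return trend, season, resid

trend_c, season_c, resid_c = detect_ts(ts, period=6)
```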
### Task 2 - find anomalies in the time series using the residuals obtained above
1. Compute the standard deviation of the residuals, std
2. Compute a threshold on the residuals with the formula `threshold = k * std`; k is usually taken between 2 and 3.
3. Flag as anomalies the points of the series whose absolute residual values exceed this threshold
```
k = 2.7
threshold = k * res.std()
indexes = np.where(abs(res) > threshold)[0]
anomalies = ts[indexes]
anomalies
```
### Task 3 - forecast the series 6 periods ahead (36 points)
1. Extrapolate the linear trend.
2. Make a recursive forecast of the seasonal component with the rule y(t) = y(t-6)
3. Ideally the residuals should be modelled with an ARMA model, but here simply forecast them with their mean value. (Since in our case the mean is 0, the residuals can be ignored altogether)
4. Add up the resulting components to get the final forecast
5. profit!
```
from datetime import timedelta
index = pd.date_range(start=ts.index[-1]+timedelta(hours=1), freq='h', periods=36)
# forecast the trend by extrapolating the linear trend over the next 36 points
_, k, b = extract_trend(ts)
trend_predictions = pd.Series(
data=k * np.arange(len(ts), len(ts)+36) + b,
index=index
)
season_predictions = pd.Series(
data=season.shift(6)[-36:].values,
index=index
)
predictions = trend_predictions + season_predictions
plotting.plot_ts(ts, predictions)
```
### Out-of-the-box STL decomposition with statsmodels
```
from statsmodels.tsa.seasonal import seasonal_decompose
decomp = seasonal_decompose(ts, period=6)
decomp.seasonal.plot()
decomp.trend.plot()
decomp.resid.plot()
adfuller(decomp.resid.dropna())
```
### Other packages
- stldecompose (also a "naive" implementation)
- pyloess (no updates in a long time)
```
#default_exp stats
#hide
%load_ext autoreload
%autoreload 2 #Reload the code automatically
%config Completer.use_jedi = False
#hide
import sys, os
from pathlib import Path
# Insert in Path Project Directory
sys.path.insert(0, str(Path().cwd().parent))
#export
from typing import Iterable
import pandas as pd
import numpy as np
from fastcore.foundation import L, listify
from rich.console import Console
from rich.theme import Theme
from rich.progress import Progress
from rfpye.utils import get_files
from rfpye.parser import parse_bin
from loguru import logger
custom_theme = Theme({"info": "dim cyan", "warning": "magenta", "danger": "bold red"})
console = Console(theme=custom_theme)
#exporti
# For scripts
config = {
"handlers": [
{"sink": "stats.log", "serialize": True, 'rotation': "1 month", 'compression' :'zip', 'backtrace': True, 'diagnose': True},
],
}
logger.configure(**config)
```
## Statistical Summary
The following function receives a DataFrame whose rows are the different spectrum sweeps, each with its timestamp, and whose columns are the different measured center frequencies. It is called by `extract_bin_stats`; a toy usage example follows the function definition below.
```
#export
@logger.catch
def filter_spectrum(
df: pd.DataFrame,
time_start: str = None,
time_stop: str = None,
freq_start: str = None,
freq_stop: str = None,
) -> pd.DataFrame:
"""Recebe o arquivo de espectro df e retorna de acordo com os filtros
Args:
df (pd.DataFrame): Arquivo de espectro. Timestamp como linhas e frequências como colunas
time_start (str): Timestamp de início. Se None filtra desde o início do arquivo
time_stop (str): Timestamp de fim. Se None filtra até o fim do arquivo
freq_start (str): Filtro inicial de frequência. Se None retorna desde a menor frequências
freq_stop (str): Filtro Final de frequência. Se None retorna até a maior frequência.
Returns:
pd.DataFrame: DataFrame com Frequência, min, max e mean após os filtros aplicados.
"""
df = df.copy()
if time_start is None:
time_start = "01/01/2000"
if time_stop is None:
time_stop = "31/12/2100"
try:
time_start = pd.to_datetime(time_start)
time_stop = pd.to_datetime(time_stop)
except pd.errors.ParserError:
logger.error(
f"[bold red blink] Datas inválidas! Verifique as strings de data {freq_start} e {freq_stop}"
)
try:
df.set_index("index", inplace=True)
df.index = pd.to_datetime(df.index)
    except KeyError:
if not isinstance(df.index, pd.DatetimeIndex):
logger.warning(
f"Não foi passado uma coluna ou índice com datetime a ser filtrado, todas as linhas serão processadas",
exc_info=True,
)
time_start = 0
time_stop = df.shape[0]
cols = df.columns.values.astype("float")
rows = df.index.values
if freq_start is None:
freq_start = 0
if freq_stop is None:
freq_stop = np.inf
filtered_cols = df.columns[(float(freq_start) <= cols) & (cols <= float(freq_stop))]
filtered_rows = df.index[(time_start <= rows) & (rows <= time_stop)]
if len(filtered_cols) == 0 or len(filtered_rows) == 0:
return None
count = filtered_rows.shape[0]
array = df.loc[filtered_rows, filtered_cols].values
freq = filtered_cols.values.astype("float32")
min_ = array.min(axis=0)
max_ = array.max(axis=0)
mean = array.mean(axis=0)
return pd.DataFrame(
{"Frequency": freq, "Min": min_, "Max": max_, "Mean": mean, "Count": count}
)
```
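A minimal usage sketch with a synthetic spectrum DataFrame (the layout mirrors the one described above: one row per sweep, one column per center frequency; all values here are made up):

```
rng = np.random.default_rng(0)
toy = pd.DataFrame(
    rng.normal(-90, 5, size=(10, 3)),  # 10 sweeps x 3 center frequencies (dBm-like values)
    index=pd.date_range("2020-12-01 15:00", periods=10, freq="min"),
    columns=["88.0", "98.0", "108.0"],  # frequencies in MHz as column labels
)
filter_spectrum(toy, freq_start=90, freq_stop=110)  # keeps only the 98 and 108 MHz columns
```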
# Extraction, Statistics and Saving of Files
> In this module we have a few functions that extract statistics given certain filters
```
#exporti
def read_meta(filename):
ext = filename.suffix
if ext == ".csv":
df = pd.read_csv(filename)
elif ext == ".xlsx":
df = pd.read_excel(filename, engine="openpyxl")
elif ext == ".fth":
df = pd.read_feather(filename)
if "wallclock_datetime" in df.columns:
df.set_index("wallclock_datetime", inplace=True)
else:
raise ValueError(f"Extension {ext} not implemented")
return df
```
## Processing and Extraction
The following function is a wrapper around all the functionality of this library. It receives the path `entrada` to a `.bin` file or to a folder containing several `.bin` files, and extracts the metadata and spectrum data. It merges the metadata timestamps into the spectrum file and saves both in the `saida` folder. That folder is used as a repository and cache of the processed data consumed by `extract_bin_stats`. A minimal call sketch follows the function definition below.
```
#export
def process_bin(
entrada: str,
saida: str,
recursivo: bool = False,
pastas: Iterable[str] = None,
levels: bool = False,
substituir: bool = False,
dtype: str = "float16",
) -> None:
"""Recebe uma pasta ou arquivo bin, processa e salva os metadados e espectro na saida.
Args:
entrada (str): Caminho para a Pasta ou Arquivo .bin
saida (str): Pasta onde salvar os arquivos processados
recursivo (bool, optional): Buscar os arquivos de entrada recursivamente. Defaults to False.
pastas (Iterable[str], optional): Limitar a busca a essas pastas. Defaults to None.
levels (bool, optional): Extrair e salvar os dados de espectro. Defaults to False.
substituir (bool, optional): Reprocessar arquivos já processados?. Defaults to False.
dtype (str, optional): Tipo de dados a salvar o espectro. Defaults to "float16".
"""
entrada = Path(entrada)
if entrada.is_file():
lista_bins = [entrada]
else:
lista_bins = get_files(
entrada, extensions=[".bin"], recurse=recursivo, folders=pastas
)
parsed_bins = {}
meta_path = Path(f"{saida}/meta")
levels_path = Path(f"{saida}/levels")
meta_path.mkdir(exist_ok=True, parents=True)
levels_path.mkdir(exist_ok=True, parents=True)
log_meta = Path(f"{saida}/log_meta.txt")
log_levels = Path(f"{saida}/log_levels.txt")
if substituir:
done_meta = set()
done_levels = set()
else:
done_meta = (
set(log_meta.read_text().split("\n")) if log_meta.exists() else set()
)
done_levels = (
set(log_levels.read_text().split("\n")) if log_levels.exists() else set()
)
console.rule("Lista de Arquivos a serem processados", style="bold red")
console.print(
[f.name for f in lista_bins],
style="bold white",
overflow="fold",
justify="left",
)
if not lista_bins:
console.print(":sleeping: Nenhum arquivo .bin a processar :zzz:")
return
if not levels:
lista_bins = [f for f in lista_bins if f.name not in done_meta]
else:
lista_bins = [f for f in lista_bins if f.name not in done_levels]
if not lista_bins:
console.print(":sleeping: Nenhum arquivo novo a processar :zzz:")
console.print(
":point_up: use --substituir no terminal ou substituir=True na chamada caso queira reprocessar os bins e sobrepôr os arquivos existentes :wink:"
)
return
try:
with Progress(transient=True, auto_refresh=False) as progress:
bins = progress.track(
lista_bins,
total=len(lista_bins),
description="[green]Processando Blocos Binários",
)
for file in bins:
progress.console.print(f"[cyan]Processando Blocos de: [red]{file.name}")
parsed_bins[file.name] = parse_bin(file)
progress.refresh()
lista_meta = [(k, v) for k, v in parsed_bins.items() if k not in done_meta]
if lista_meta:
blocks = progress.track(
lista_meta,
total=len(lista_meta),
description="[cyan]Exportando Metadados",
)
            for filename, block_dict in blocks:
                progress.console.print(f"[cyan]Extraindo Metadados de: [red]{filename}")
                export_metadata(filename, block_dict, meta_path, ext=".fth")
                done_meta.add(filename)
progress.refresh()
if levels:
                lista_levels = [
(k, v) for k, v in parsed_bins.items() if k not in done_levels
]
if lista_levels:
bins = progress.track(
lista_levels,
total=len(lista_levels),
description="[grey]Exportando Dados de Espectro",
)
for file, block_obj in bins:
progress.console.print(
f"[grey]Extraindo Espectro de: [red]{file}"
)
meta_index = []
blocks = block_obj["blocks"]
for (tipo, tid) in blocks.keys():
if tipo not in SPECTRAL_BLOCKS:
continue
meta_file = Path(
f"{meta_path}/{file}-B_{tipo}_TId_{tid}.fth"
)
if not meta_file.exists():
export_meta(
file,
block_obj,
meta_path,
ext=".fth",
)
done_meta.add(file)
meta_df = read_meta(meta_file)
meta_index.append(meta_df.index.tolist())
export_level(
file,
block_obj,
levels_path,
ext=".fth",
index=meta_index,
dtype=dtype,
)
done_levels.add(file)
progress.refresh()
console.print("kbô :satisfied:")
finally:
log_meta.write_text("\n".join(sorted(list(done_meta))))
log_levels.write_text("\n".join(sorted(list(done_levels))))
```
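A minimal call sketch (the paths below are placeholders, not files shipped with this project; point `entrada` at a real CRFS `.bin` file or a folder containing them):

```
# Placeholder paths - adjust to your own data
process_bin(
    entrada="path/to/file.bin",  # .bin file or folder with .bin files
    saida=".cache",              # output folder, also used as a cache
    levels=True,                 # also export the spectrum levels, not only metadata
)
```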
The following function is the one most commonly called by other modules that use this lib. It receives the path to a `.bin` file and returns a DataFrame with the Frequency, Maximum, Minimum and Mean of the spectrum data contained in the `.bin` file.
```
#export
def extract_bin_stats(
filename: str,
time_start: str = None,
time_stop: str = None,
freq_start: str = None,
freq_stop: str = None,
cache: str = CACHE_FOLDER,
) -> pd.DataFrame:
"""Recebe o caminho para um arquivo CRFS bin e retorna um dataframe com o resumo estatístico dos dados de espectro
Args:
filename (str): Caminho para o arquivo bin
time_start (str): Timestamp de início. Se None filtra desde o início do arquivo
time_stop (str): Timestamp de fim. Se None filtra até o fim do arquivo
freq_start (str): Filtro inicial de frequência. Se None retorna desde a menor frequências
freq_stop (str): Filtro Final de frequência. Se None retorna até a maior frequência.
cache (str, optional): Caminho para a pasta de cache. Default é criar uma pasta oculta .cache no diretório atual.
Returns:
pd.DataFrame: Dataframe contendo o resumo estatístico do arquivo
"""
cache = Path(cache)
cache.mkdir(exist_ok=True, parents=True)
filename = Path(filename)
if filename.is_dir():
filenames = get_files(filename, extensions=[".bin"])
else:
filenames = listify(filename)
cached_files = get_files(cache / "levels")
files = L()
for filename in filenames:
        while True:
            # TODO filter based on metadata
            subset = cached_files.filter(lambda name: filename.stem in str(name))
            if not len(subset):
                process_bin(entrada=filename, saida=cache, levels=True)
                # refresh the cache listing so the newly processed levels are picked up
                cached_files = get_files(cache / "levels")
            else:
                break
files += subset
subset = L()
dfs = files.map(pd.read_feather)
tids = files.map(lambda x: x.stem.split("_")[-1])
spectra = dfs.map(
filter_spectrum,
time_start=time_start,
time_stop=time_stop,
freq_start=freq_start,
freq_stop=freq_stop,
)
spectra = [(i, s) for i, s in zip(tids, spectra) if s is not None]
columns = ["Tid", "Frequency", "Min", "Max", "Mean"]
out = pd.DataFrame(columns=columns)
if not spectra:
        logger.warning(
f"Os parâmetros repassados não correspondem a nenhum dado espectral do arquivo",
exc_info=True,
)
return out
for i, df in spectra:
df["Tid"] = i
spectra = [s for i, s in spectra]
spectra = pd.concat(spectra)
if len(spectra.Frequency) == len(spectra.Frequency.unique()):
return spectra[columns]
    gb = spectra.groupby(["Tid", "Frequency"])
    Min = gb.min()["Min"]
    Max = gb.max()["Max"]
    Mean = gb.apply(appended_mean)
    out = pd.concat([Min, Max, Mean], axis=1).reset_index()
out.columns = columns
return out
```
Calling the function providing only the path of the `.bin` file.
From the output of the code above we see that the `.bin` file was processed and its metadata and spectrum were extracted and saved. Since we did not pass an output folder, a local `.cache` folder is created and the files are saved in it. The cached spectrum file is then read and the statistical summary of the frequencies over time is returned.
If we call the function again with the same arguments, this time the execution will be faster thanks to the cache, so the `.bin` file does not need to be processed again.
```
dados = extract_bin_stats(binfile) ; dados
#exporti
def appended_mean(df: pd.Series) -> float:
"""Recebe um agrupamento do DataFrame e retorna sua média ponderada pela coluna Count
Args:
df (pd.DataFrame): Groupby do DataFrame
Returns:
float: Média Ponderada da linha pela coluna Count
"""
return (df["Count"] * df["Mean"]).sum() / df["Count"].sum()
```
We see that the file contains frequencies from 70 MHz to 110 MHz. If we are only interested in narrower bands, we can filter them. For example, let's filter just the FM band, `88 to 108`:
```
dados = extract_bin_stats(binfile, freq_start=88, freq_stop=108) ; dados
```
To filter the statistics for a specific time window, we need to know beforehand which period the `.bin` file covers; if we pass an invalid time period, an empty DataFrame is returned and a warning message is written to the log.
```
dados = extract_bin_stats(binfile, time_start='2021-05-21') ; dados
dados = extract_bin_stats(binfile, time_stop='2020-05-12') ; dados
```
This particular file covers the period from `Timestamp('2020-12-01 15:34:21.578869')` to `Timestamp('2020-12-01 16:13:53.920250')`, a span of less than one hour.
It is enough to pass a valid date string; hours, minutes and seconds are optional.
```
dados = extract_bin_stats(binfile, time_start='2020-12-01 16:00') ; dados
dados = extract_bin_stats(binfile, time_start='01/12/2020 16:00') ; dados
dados = extract_bin_stats(binfile, time_stop='2020-12-01 16:00') ; dados
```
If we want to keep only the FM band, and only the 15 minutes from 15:45 to 16:00 on 2020-12-01:
```
dados = extract_bin_stats(binfile,
time_start='01/12/2020 15:45',
time_stop='2020-12-01 16:00',
freq_start=88,
freq_stop=108)
dados
```
# The network is built
Here we run a couple of tests. In particular, I am interested in testing whether the network can learn more than one orthogonal sequence and how the parameters affect learning.
```
import sys
sys.path.append('../')
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
%matplotlib inline
sns.set(font_scale=3.0)
def bernoulli_mask(size_from, size_to, p, binomial=True):
if binomial:
return np.random.binomial(n=1, p=p, size=(size_to, size_from))
else:
return np.random.choice(2, size=(size_to, size_from), replace=True, p=[1 - p, p])
def pre_synaptic_simple(epsilon, w, z_post, z_pre):
increase = np.zeros_like(w)
n = w.shape[0]
for i in range(n):
for j in range(n):
increase[i, j] = z_pre[j] * z_post[i] - z_pre[j] * w[i, j]
return epsilon * increase
def pre_synaptic(epsilon, w, z_post, z_pre):
return epsilon * (np.outer(z_post, z_pre) - w * z_pre)
def post_synaptic(epsilon, w, z_post, z_pre):
return epsilon * (np.outer(z_pre, z_post) - z_post * w)
def update_activity(k, z, x_i, c, weight, Is, Ir, s, m):
inhibition = Is * s + Ir * m
recurrent_excitation = np.dot(c * weight, z)
input_excitation = k * x_i
return input_excitation, recurrent_excitation, inhibition
def build_pattern_dictionary(number_of_patterns, sparsity, input_size):
patterns_dictionary = {}
for pattern_number in range(number_of_patterns):
# Initialize the pattern with zero
pattern = np.zeros(input_size)
# Chose some indexes and set them to 1
indexes = [pattern_number * sparsity + i for i in range(sparsity)]
pattern[indexes] = 1
        # Create the pattern entry in the dictionary
patterns_dictionary[pattern_number] = pattern
return patterns_dictionary
def train_network(N_input, N_recurrent, p, v, b, theta, phi, Ki, Kr, Ci, Cr, epsilon, training_time,
sequence, uniform_w=True, save_quantities=False):
save_dictionary = {}
if uniform_w:
small_value = 0.1
w = np.ones((N_recurrent, N_recurrent)) * small_value
else:
        w = np.random.rand(N_recurrent, N_recurrent)
a = np.zeros((N_input, N_recurrent))
m_history = []
w_history = []
a_history = []
excitation_r_history = []
excitation_out_history = []
inhibition_r_history = []
inhibition_out_history = []
input_r_history = []
input_out_history = []
c1 = bernoulli_mask(size_from=N_input, size_to=N_recurrent, p=p, binomial=True)
c2 = bernoulli_mask(size_from=N_input, size_to=N_recurrent, p=p, binomial=True)
for _ in range(training_time):
y_r = np.zeros(N_recurrent)
z_r = np.zeros(N_recurrent)
m = np.sum(z_r)
y_out = np.zeros(N_input)
z_out = np.zeros(N_input)
for sequence_number in sequence:
# Input variables
x = patterns_dictionary[sequence_number]
s = np.sum(x)
modified_input = np.zeros(N_recurrent)
modified_input[np.where(x == 1)[0]] = 1.0
# Update values for the C3
input_excitation_r, recurrent_excitation_r, inhibition_r = update_activity(v, z_r, modified_input, c1, w,
Ki, Kr, s, m)
y_r = input_excitation_r + recurrent_excitation_r - inhibition_r
z_r_pre = np.copy(z_r)
z_r = (y_r > theta).astype('float')
# Update values for C1
input_excitation_out, recurrent_excitation_out, inhibition_out = update_activity(b, z_r_pre, x, c2, a,
Ci, Cr, s, m)
y_out = input_excitation_out + recurrent_excitation_out - inhibition_out
z_out = (y_out > phi).astype('float')
# Update dynamical values
m = np.sum(z_r)
# Presynaptic rules
w += pre_synaptic(epsilon=epsilon, w=w, z_post=z_r, z_pre=z_r_pre)
a += pre_synaptic(epsilon=epsilon, w=a, z_post=z_out, z_pre=z_r_pre)
# post-synaptic rules
            #w = post_synaptic(epsilon=epsilon, w=w, z_post=z_r, z_pre=z_r_pre)
# Save history
if save_quantities:
m_history.append(m)
w_history.append(w)
a_history.append(a)
excitation_r_history.append(recurrent_excitation_r)
excitation_out_history.append(recurrent_excitation_out)
inhibition_r_history.append(inhibition_r)
inhibition_out_history.append(inhibition_out)
input_r_history.append(input_excitation_r)
input_out_history.append(input_excitation_out)
# Let's store the saved values and return the weight matrixes
if save_quantities:
save_dictionary['m'] = m_history
save_dictionary['a'] = a_history
save_dictionary['excitation_r'] = excitation_r_history
save_dictionary['excitation_out'] = excitation_out_history
save_dictionary['inhibition_r'] = inhibition_r_history
save_dictionary['inhibition_out'] = inhibition_out_history
save_dictionary['input_r'] = input_r_history
save_dictionary['input_out'] = input_out_history
return w, c1, a, c2, save_dictionary
def recall(N_input, N_recurrent, w, a, v, b, theta, phi, Ki, Kr, Ci, Cr, recall_time, cue, verbose = False):
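    # Note: this function relies on the connectivity masks c1 and c2 being available
    # as globals (they are returned by train_network in the example below).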
x = cue
recall_history = np.zeros((recall_time, N_input))
# Initialize the variables
y_r = np.zeros(N_recurrent)
z_r = np.zeros(N_recurrent)
y_out = np.zeros(N_input)
z_out = np.zeros(N_input)
m = 0
    for _ in range(recall_time):
s = np.sum(x)
modified_input = np.zeros(N_recurrent)
modified_input[np.where(x == 1)[0]] = 1.0
if verbose:
print('------')
print(_)
print('----')
print('s')
print(s)
print('m')
print(m)
# Update values for the C3
input_excitation_r, recurrent_excitation_r, inhibition_r = update_activity(v, z_r, modified_input, c1, w,
Ki, Kr, s, m)
y_r = input_excitation_r + recurrent_excitation_r - inhibition_r
z_r_pre = np.copy(z_r)
z_r = (y_r > theta).astype('float')
# Update values for C1
input_excitation_out, recurrent_excitation_out, inhibition_out = update_activity(b, z_r_pre, x, c2, a, Ci, Cr,
s, m)
y_out = input_excitation_out + recurrent_excitation_out - inhibition_out
z_out_pre = np.copy(z_out)
z_out = (y_out > phi).astype('float')
# Update dynamical values
m = np.sum(z_r)
# History
recall_history[_, ...] = z_out
# Eliminate the input
x = np.zeros(N_input)
if verbose:
print('C3 layer')
print('recurrent excitation')
print(recurrent_excitation_r.astype('int'))
print('---- inhibition')
print(inhibition_r)
print('excitation input')
print(input_excitation_r)
print('y_r')
print(y_r)
print('z_r')
print(z_r)
print('C1 layer')
print('recurrent excitation_out')
print(recurrent_excitation_out.astype('int'))
print('inhibition_out')
print(inhibition_out)
print('excitation input_out')
print(input_excitation_out)
print('y_out')
print(y_out)
print('z_out')
print(z_out)
return recall_history
```
## An example
Let's create some parameters:
```
### Structure parameters
N_input = 200 # Inputs size
N_recurrent = 200 # C3 size
v = 21.0 # Input - C3 connection
b = 21.0 # Input - C1 connection
Kr = 0.5 # Recurrent self-inhibition gain
Ki = 1.0 # Input - C3 inhibition
Ci = 1.0 # Inhibition from the input to C1
Cr = 0.5 # Inhibition from C3 to C1
p = 1.0 # Sparseness parameter
# Dynamical parameters
theta = 0.0
phi = 0
# Training parameters
training_time = 100
epsilon = 0.1
# Patterns
number_of_patterns = 20
sparsity = 10
patterns_dictionary = build_pattern_dictionary(number_of_patterns, sparsity, N_input)
sequence = [0, 1, 2, 3, 4, 5]
w, c1, a, c2, aux = train_network(N_input, N_recurrent, p, v, b, theta, phi, Ki, Kr, Ci, Cr,
epsilon, training_time, sequence, save_quantities=False)
fig = plt.figure(figsize=(16, 12))
fig.suptitle('Connectivities (w left, a right)')
ax1 = fig.add_subplot(121)
im1 = ax1.imshow(w, aspect='auto')
ax1.grid()
ax2 = fig.add_subplot(122)
im2 = ax2.imshow(a, aspect='auto')
ax2.grid()
fig.colorbar(im1, ax=ax1);
fig.colorbar(im2, ax=ax2);
cue = patterns_dictionary[0]
recall_time = 8
recall_history = recall(N_input, N_recurrent, w, a, v, b, theta, phi, Ki, Kr, Ci,
Cr, recall_time, cue, verbose = False)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
im = ax.imshow(recall_history, aspect='auto')
ax.grid()
fig.colorbar(im, ax=ax);
```
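To judge how faithfully the recall reproduces the trained sequence, we can compare each recalled frame with the stored patterns. This is a minimal sketch, assuming the variables from the example above (`recall_history`, `patterns_dictionary`, `sequence`) are still in scope.
```
# Overlap between each recalled frame and each pattern of the trained sequence.
# A value of 1.0 means every active unit of that pattern is active in the recalled frame.
overlaps = np.zeros((recall_history.shape[0], len(sequence)))
for t, recalled in enumerate(recall_history):
    for j, pattern_number in enumerate(sequence):
        pattern = patterns_dictionary[pattern_number]
        overlaps[t, j] = np.dot(recalled, pattern) / max(np.sum(pattern), 1)
print(np.round(overlaps, 2))
```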
# Continuous Control
---
In this notebook, you will learn how to use the Unity ML-Agents environment for the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program.
### 1. Start the Environment
We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
!pip -q install ./python
import os
from os import path
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from functools import partial
import datetime as dt
from unityagents import UnityEnvironment
from src.ac_agent import AgentDDPG, GaussianProcess, OUNoise
import src.utils as utils
from unity_reacher_utils import train
# Experiment
RND_SEED = 123
EXP_NAME = 'ddpg-20a:v02'
EXP_FOLDER = 'ddpg2_20a'
# Problem
N_EPISODES = 1000
MAX_STEPS = 1000
SOLVED_AT = 30
GAMMA = 0.99 # Discount factor
# Noise
NOISE_MU = 0
NOISE_SIGMA = 0.1
NOISE_DECAY = 0.992
NOISE_MIN_WEIGHT = 0.2
# Agent
ACT_HID_LAYERS = (256, 128)
CRIT_HID_LAYERS = (256, 128)
ACT_ADD_BN = True
CRIT_ADD_BN = True
GRAD_CLIP = (False, 1.)
BATCH_SIZE = 128
LEARNING_RATES = (1e-4, 1e-3)
WEIGHT_DECAY = (0, 0)
SOFT_UPD_PARAM = 1e-3
UPDATE_EVERY = 1
BUFFER_SIZE = int(1e6)
LEARN_EVERY = 20
LEARN_NUM = 10
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Reacher.app"`
- **Windows** (x86): `"path/to/Reacher_Windows_x86/Reacher.exe"`
- **Windows** (x86_64): `"path/to/Reacher_Windows_x86_64/Reacher.exe"`
- **Linux** (x86): `"path/to/Reacher_Linux/Reacher.x86"`
- **Linux** (x86_64): `"path/to/Reacher_Linux/Reacher.x86_64"`
- **Linux** (x86, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86"`
- **Linux** (x86_64, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86_64"`
For instance, if you are using a Mac, then you downloaded `Reacher.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Reacher.app")
```
```
from unityagents import UnityEnvironment
import numpy as np
# select this option to load version 1 (with a single agent) of the environment
#env = UnityEnvironment(file_name='/data/Reacher_One_Linux_NoVis/Reacher_One_Linux_NoVis.x86_64')
# select this option to load version 2 (with 20 agents) of the environment
env = UnityEnvironment(file_name='/data/Reacher_Linux_NoVis/Reacher.x86_64')
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
In this environment, a double-jointed arm can move to target locations. A reward of `+0.1` is provided for each step that the agent's hand is in the goal location. Thus, the goal of your agent is to maintain its position at the target location for as many time steps as possible.
The observation space consists of `33` variables corresponding to position, rotation, velocity, and angular velocities of the arm. Each action is a vector with four numbers, corresponding to torque applicable to two joints. Every entry in the action vector must be a number between `-1` and `1`.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
action_scaler = partial(utils.action_scaler_fn, lower=-1., upper=1.)
action_scaler(np.array(-0.5)), action_scaler(np.array(-3)), action_scaler(np.array(.5)), action_scaler(np.array(3)),
g_noise = GaussianProcess(action_size, RND_SEED, mu=NOISE_MU, sigma=NOISE_SIGMA)
g_noise_sched_df = utils.get_noise_schedulling(N_EPISODES, decay=NOISE_DECAY, noise=g_noise)
g_noise_sched_df.plot()
plt.axhline(NOISE_MIN_WEIGHT)
plt.show()
```
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
Once this cell is executed, you will watch the agent's performance as it selects an action at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment.
Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
```
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
    actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
    actions = np.clip(actions, -1, 1)                  # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
    next_states = env_info.vector_observations         # get next state (for each agent)
    rewards = env_info.rewards                         # get reward (for each agent)
    dones = env_info.local_done                        # see if episode finished
    scores += env_info.rewards                         # update the score (for each agent)
    states = next_states                               # roll over states to next time step
    if np.any(dones):                                  # exit loop if episode finished
        break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
```
When finished, you can close the environment.
```
env.close()
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
```
ddpg_agent = AgentDDPG(
state_size=state_size, action_size=action_size, gamma=GAMMA,
actor_hidden_layers=ACT_HID_LAYERS, critic_hidden_layers=CRIT_HID_LAYERS,
actor_add_bn=ACT_ADD_BN, critic_add_bn=CRIT_ADD_BN,
grad_clipping=GRAD_CLIP,
batch_size=BATCH_SIZE, learning_rates=LEARNING_RATES, weight_decay=WEIGHT_DECAY,
soft_upd_param=SOFT_UPD_PARAM, update_every=UPDATE_EVERY, buffer_size=BUFFER_SIZE,
noise=g_noise, learn_every=LEARN_EVERY, learn_num=LEARN_NUM,
seed=RND_SEED, action_dtype='float')
path_ddpg_agent = os.path.join('models', EXP_FOLDER)
from workspace_utils import active_session
with active_session():
# do long-running work here
scores_ddpg = train(env, brain_name, ddpg_agent, n_episodes=N_EPISODES, max_t=MAX_STEPS, solved=SOLVED_AT,
action_scaler_fn=action_scaler, add_noise=True, noise_decay=NOISE_DECAY, min_noise_weight=NOISE_MIN_WEIGHT,
model_save_path=path_ddpg_agent)
scores_ddpg['experiment'] = EXP_NAME
checkpoint_metadata = pd.Series(
index=['N_episodes', 'gamma', 'actor_hidden_layers', 'critic_hidden_layers',
'grad_clipping', 'batch_size', 'learning_rates',
'soft_upd_param', 'update_every', 'buffer_size', 'noise', 'learn_every', 'learn_num',
'solved', 'checkpoint_folder'],
data = [len(scores_ddpg), GAMMA, ACT_HID_LAYERS, CRIT_HID_LAYERS,
GRAD_CLIP, BATCH_SIZE, LEARNING_RATES,
SOFT_UPD_PARAM, UPDATE_EVERY, BUFFER_SIZE, 'g-noise', LEARN_EVERY, LEARN_NUM,
True, EXP_FOLDER],
name=f'experiment:{EXP_NAME}')
checkpoint_metadata
experiment_dt = dt.datetime.strftime(dt.datetime.now(), "%Y%m%d%H%M%S")
checkpoint_metadata.to_json(f'models/experiments/hparams_{experiment_dt}.json')
scores_ddpg.to_csv(f'models/experiments/scores_{experiment_dt}.csv')
```
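As a quick check on training progress, we can plot the per-episode scores against the target average of 30. This is a minimal sketch, assuming `train` returns one row per episode with a numeric column named `score`; the column name is an assumption and may differ.
```
# Hypothetical progress plot; the 'score' column name is an assumption.
rolling = scores_ddpg['score'].rolling(window=100, min_periods=1).mean()
plt.figure(figsize=(12, 6))
plt.plot(scores_ddpg['score'].values, alpha=0.4, label='episode score')
plt.plot(rolling.values, label='100-episode moving average')
plt.axhline(SOLVED_AT, color='red', linestyle='--', label='solved threshold')
plt.xlabel('episode')
plt.ylabel('score')
plt.legend()
plt.show()
```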
```
import pandas as pd
BlendDF = pd.read_csv('BlendedReviews.csv')
import numpy as np
import pandas as pd
import nltk
import matplotlib.pyplot as plt
import multiprocessing
from sklearn import utils
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.ensemble import StackingClassifier
"""
Five Emotions approach combined with other variables.
The first group of models are binary models predicting a positive or negative rating.
SVM models have been excluded because the large number of continuous variables makes the processing time overwhelming.
"""
#Split data into training and test sets with an 80/20 split for all binary models
#Based on the very low coefficients for both WordCount and vote, these variables were left out of the models.
X = BlendDF[['Joy','Anger','Sadness','Fear','Disgust','Short','Verified','Long','IsImage']] #set independent variables for regression
Y = BlendDF['BinaryRating'] #set dependent variable for regression
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=1) #Split into 80/20 train and test sets
#Run Naive Bayes Classifier
NB = GaussianNB()
NB.fit(X_train, Y_train)
#Look at ability of model to predict test set
NBScore = round((NB.score(X_test, Y_test))*100,2)
print('Naive Bayes Classifier Score for Five Emotions Model: ',NBScore,'%','\n')
Y_pred = NB.predict(X_test)
print(classification_report(Y_test, Y_pred, zero_division=0), '\n')
#Run binary logistic regression
LR = linear_model.LogisticRegression(solver='lbfgs',max_iter=10000)
LR.fit(X_train, Y_train)
#Look at ability of model to predict test set
LRScore = round((LR.score(X_test, Y_test))*100,2)
print('Binary Logistic Model Score for Five Emotions Model: ',LRScore,'%','\n')
Y_pred = LR.predict(X_test)
print(classification_report(Y_test, Y_pred, zero_division=0), '\n')
from yellowbrick.classifier import ClassificationReport
viz = ClassificationReport(GaussianNB())
viz.fit(X_train, Y_train)
viz.score(X_test, Y_test)
viz.show()
from yellowbrick.classifier import ClassificationReport
from sklearn.linear_model import LogisticRegression
viz = ClassificationReport(LogisticRegression())
viz.fit(X_train, Y_train)
viz.score(X_test, Y_test)
viz.show()
from sklearn.model_selection import TimeSeriesSplit
from yellowbrick.target import ClassBalance
# Create the training and test data
tscv = TimeSeriesSplit()
for train_index, test_index in tscv.split(X):
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    Y_train, Y_test = Y.iloc[train_index], Y.iloc[test_index]
# Instantiate the visualizer
visualizer = ClassBalance()
visualizer.fit(Y_train, Y_test)
visualizer.show()
"""
Five Emotions approach combined with other variables.
The second group of models are multiclass models for the 1-5 rating.
SVM models have been excluded because the large number of continuous variables makes the processing time overwhelming.
"""
#Split data into training and test sets with an 80/20 split for multiclass models
#Based on the very low coefficients for both WordCount, vote and categories, these variables were left out of the models.
X = BlendDF[['Joy','Anger','Sadness','Fear','Disgust','Short','Verified','Long','IsImage']] #set independent variables for regression
Y = BlendDF['overall'] #set dependent variable for regression
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=1) #Split into 80/20 train and test sets
#Run multinomial logistic regression
MLR = linear_model.LogisticRegression(multi_class='multinomial', solver='lbfgs',max_iter=10000)
MLR.fit(X_train, Y_train)
#Look at ability of model to predict test set
MLRScore = round((MLR.score(X_test, Y_test))*100,2)
print('Multinomial Logistic Model Score for Five Emotions Model: ',MLRScore,'%','\n')
Y_pred = MLR.predict(X_test)
print(classification_report(Y_test, Y_pred, zero_division=0), '\n')
#Run K Nearest Neighbors Algorithm
KNN = KNeighborsClassifier(n_neighbors = 15)
KNN.fit(X_train, Y_train)
#Look at ability of model to predict test set
KNNScore = round((KNN.score(X_test, Y_test))*100,2)
print('K Nearest Neighbors Algorithm Model Score for Five Emotions Model: ',KNNScore,'%','\n')
Y_pred = KNN.predict(X_test)
print(classification_report(Y_test, Y_pred, zero_division=0), '\n')
#Run Random Forest Algorithm
RF = RandomForestClassifier(n_estimators=5, random_state=0)
RF.fit(X_train, Y_train)
#Look at ability of model to predict test set
RFScore = round((RF.score(X_test, Y_test))*100,2)
print('Random Forest Classifier Model Score for Five Emotions Model: ',RFScore,'%','\n')
Y_pred = RF.predict(X_test)
print(classification_report(Y_test, Y_pred, zero_division=0), '\n')
from yellowbrick.classifier import ClassificationReport
from sklearn.linear_model import LogisticRegression
viz = ClassificationReport(LogisticRegression())
viz.fit(X_train, Y_train)
viz.score(X_test, Y_test)
viz.show()
from yellowbrick.classifier import ClassificationReport
viz = ClassificationReport(KNeighborsClassifier(n_neighbors = 15))
viz.fit(X_train, Y_train)
viz.score(X_test, Y_test)
viz.show()
from yellowbrick.classifier import ClassificationReport
viz = ClassificationReport(RandomForestClassifier(n_estimators=5, random_state=0))
viz.fit(X_train, Y_train)
viz.score(X_test, Y_test)
viz.show()
from sklearn.model_selection import TimeSeriesSplit
from yellowbrick.target import ClassBalance
# Create the training and test data
tscv = TimeSeriesSplit()
for train_index, test_index in tscv.split(X):
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    Y_train, Y_test = Y.iloc[train_index], Y.iloc[test_index]
# Instantiate the visualizer
visualizer = ClassBalance()
visualizer.fit(Y_train, Y_test)
visualizer.show()
```
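To compare the models at a glance, we can collect the accuracy scores computed above into a small summary table. This is a minimal sketch that only reuses the score variables already defined (NBScore, LRScore, MLRScore, KNNScore, RFScore).
```
# Summary of the accuracy scores computed above (values in percent).
summary = pd.DataFrame({
    'Model': ['Naive Bayes (binary)', 'Logistic Regression (binary)',
              'Multinomial Logistic (1-5)', 'KNN k=15 (1-5)', 'Random Forest (1-5)'],
    'Accuracy (%)': [NBScore, LRScore, MLRScore, KNNScore, RFScore],
})
print(summary.sort_values('Accuracy (%)', ascending=False).to_string(index=False))
```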
## Advanced Graphs
Prerequisites:
- A running Kubernetes cluster
- [Git clone of Seldon Core](https://github.com/SeldonIO/seldon-core)
- [seldon-core Python package](https://pypi.org/project/seldon-core/) (```pip install seldon-core```)
- [Helm](https://github.com/kubernetes/helm)
In this notebook we will illustrate the different types of microservices that can be deployed in Seldon:
* Model
* Transformer
* Router
* Combiner
* Output Transformer
We will deploy graphs of increasing complexity. But first we need to install seldon on the cluster.
If running via Minikube you may need to ensure you have cluster-admin rights
```
!!kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
!kubectl -n kube-system create sa tiller
!kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
!helm init --service-account tiller
!helm install ../helm-charts/seldon-core-crd --name seldon-core-crd --set usage_metrics.enabled=true
!kubectl create namespace graphs
!helm install ../helm-charts/seldon-core --name seldon-core \
--set cluster_manager.rbac=true \
--namespace graphs
```
## Set up REST and gRPC methods
**Ensure you port forward the seldon api-server REST and GRPC ports**:
REST:
```
kubectl port-forward $(kubectl get pods -n graphs -l app=seldon-apiserver-container-app -o jsonpath='{.items[0].metadata.name}') -n graphs 8002:8080
```
GRPC:
```
kubectl port-forward $(kubectl get pods -n graphs -l app=seldon-apiserver-container-app -o jsonpath='{.items[0].metadata.name}') -n graphs 8003:5000
```
```
from visualizer import get_graph
import json
from seldon_utils import *
API_GATEWAY_REST="localhost:8002"
API_GATEWAY_GRPC="localhost:8003"
```
## Simple Model
```
get_graph("resources/model.json")
```
First we will check that everything works by running a simple model
```
!kubectl apply -f resources/model.json -n graphs
!kubectl get seldondeployments seldon-model -o jsonpath="{.status}" -n graphs
r = rest_request_api_gateway("oauth-key","oauth-secret",None,API_GATEWAY_REST)
print(r.text)
grpc_request_api_gateway("oauth-key","oauth-secret",None,API_GATEWAY_REST,API_GATEWAY_GRPC)
!kubectl delete -f resources/model.json -n graphs
```
## Random AB Test
```
get_graph("resources/random_ab_test.json")
```
In this example we will deploy 2 models under an AB test router. The Random AB Test we will use is implemented directly in Seldon; it is not a microservice, so no Docker image needs to be specified.
The json graph is as follows:
```
json.load(open("./resources/random_ab_test.json",'r')).get("spec").get("predictors")[0].get("graph")
```
We specify ``` "implementation": "RANDOM_ABTEST" ``` to get the AB Test router implemented in Seldon.
We pass the parameter `ratioA`, which corresponds to the fraction of requests that will be routed to the first child of the router.
```
!kubectl apply -f resources/random_ab_test.json -n graphs
!kubectl get seldondeployments seldon-deployment -o jsonpath='{.status}' -n graphs
r = rest_request_api_gateway("oauth-key","oauth-secret",None,API_GATEWAY_REST)
print(r.text)
grpc_request_api_gateway("oauth-key","oauth-secret",None,API_GATEWAY_REST,API_GATEWAY_GRPC)
!kubectl delete -f resources/random_ab_test.json -n graphs
```
## Average Combiner
```
get_graph("resources/ensemble.json")
```
In this example we will again use a service implemented in Seldon, the Average Combiner. It takes the outputs of several models and returns their arithmetic mean.
The json is as follows:
```
json.load(open("./resources/ensemble.json",'r')).get("spec").get("predictors")[0].get("graph")
!kubectl apply -f resources/ensemble.json -n graphs
!kubectl get seldondeployments seldon-deployment -o jsonpath='{.status}' -n graphs
r = rest_request_api_gateway("oauth-key","oauth-secret",None,API_GATEWAY_REST)
print(r.text)
grpc_request_api_gateway("oauth-key","oauth-secret",None,API_GATEWAY_REST,API_GATEWAY_GRPC)
!kubectl delete -f resources/ensemble.json -n graphs
```
## Feature Transformer
```
get_graph("resources/feature_transform.json")
```
In this example we deploy a simple model under a feature transformation microservice. For the transformer we will use the docker image seldonio/mock_transformer:1.0
Since this is not implemented in Seldon, the type of predictive unit (TRANSFORMER) needs to be specified in the graph so that Seldon Core knows which API this microservice implements.
The json is as follows:
```
json.load(open("./resources/feature_transform.json",'r')).get("spec").get("predictors")[0].get("graph")
!kubectl apply -f resources/feature_transform.json -n graphs
!kubectl get seldondeployments seldon-deployment -o jsonpath='{.status}' -n graphs
r = rest_request_api_gateway("oauth-key","oauth-secret",None,API_GATEWAY_REST)
print(r.text)
grpc_request_api_gateway("oauth-key","oauth-secret",None,API_GATEWAY_REST,API_GATEWAY_GRPC)
!kubectl delete -f resources/feature_transform.json -n graphs
```
## Outlier Detector
```
get_graph("resources/outlier_detector.json")
```
In this example we will have four different components:
* A transformer, the outlier detector
* A router, the random AB test
* Two models
The outlier detector is a special kind of transformer that will populate a tag in the response metadata with the outlier score it has calculated.
We use the docker image seldonio/outlier_mahalanobis:0.2 for the outlier detector.
The json is as follows:
```
json.load(open("./resources/outlier_detector.json",'r')).get("spec").get("predictors")[0].get("graph")
!kubectl apply -f resources/outlier_detector.json -n graphs
!kubectl get seldondeployments seldon-deployment -o jsonpath='{.status}' -n graphs
r = rest_request_api_gateway("oauth-key","oauth-secret",None,API_GATEWAY_REST,data_size=10,rows=2)
print(r.text)
grpc_request_api_gateway("oauth-key","oauth-secret",None,API_GATEWAY_REST,API_GATEWAY_GRPC,data_size=10,rows=2)
!kubectl delete -f resources/outlier_detector.json -n graphs
```
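Because the outlier detector populates a tag in the response metadata, we can inspect the returned score directly from the REST response captured above. This is a minimal sketch; the exact keys (`meta` and `tags`) are an assumption and may differ between Seldon versions.
```
# Hypothetical inspection of the outlier-score tag; the 'meta'/'tags' keys are assumptions.
response = json.loads(r.text)
tags = response.get("meta", {}).get("tags", {})
print("tags attached by the graph:", tags)
```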
## Complex Graph
```
get_graph("resources/complex_graph.json")
```
In this final example we will deploy a complex graph with all of the components that have been used so far.
```
!kubectl apply -f resources/complex_graph.json -n graphs
!kubectl get seldondeployments seldon-deployment -o jsonpath='{.status}' -n graphs
r = rest_request_api_gateway("oauth-key","oauth-secret",None,API_GATEWAY_REST)
print(r.text)
grpc_request_api_gateway("oauth-key","oauth-secret",None,API_GATEWAY_REST,API_GATEWAY_GRPC)
!kubectl delete -f resources/complex_graph.json -n graphs
```
## Tear Down
```
!helm delete seldon-core --purge
!helm delete seldon-core-crd --purge
```
### For Capstone Project
### Data Collection
We will look for a suitable place for our business based on neighborhoods in the City of Chicago. For this we need relevant data to go ahead with our analysis: neighborhood names, ZIP codes, and latitude/longitude coordinates for map markers. Data will be collected from the following sources.
- **For neighborhood names**: https://en.wikipedia.org/wiki/List_of_neighborhoods_in_Chicago
- **For Zip Codes** https://data.cityofchicago.org/api/views/unjd-c2ca/rows.csv?accessType=DOWNLOAD
- **For Lat Lng** https://simplemaps.com/data/us-zips
- **FourSquare API for Venues** https://developer.foursquare.com/docs/resources/categories
After that we will form clusters and analyze which clusters have space/land for commercial activity; the data for this will be obtained from the following source.
- **Chicago City Owned Lands** https://data.cityofchicago.org/Community-Economic-Development/City-Owned-Land-Inventory/aksk-kvfp/data
### 1. Setting up the environment
```
import numpy as np
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
import json
from geopy.geocoders import Nominatim
from bs4 import BeautifulSoup
from urllib.request import urlopen
import requests
from pandas.io.json import json_normalize
import geocoder
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.colors as colors
import seaborn as sns
from sklearn.cluster import KMeans
import folium
```
## Getting Neighborhoods for the City of Chicago
### 1. Parsing the html
```
url = 'https://en.wikipedia.org/wiki/List_of_neighborhoods_in_Chicago'
page = urlopen(url).read().decode('utf-8')
soup = BeautifulSoup(page, 'html.parser')
wiki_table = soup.body.table.tbody
```
### 2. Extracting data from the table to the data frame
```
def get_cell(element):
    cells = element.find_all('td')
    row = []
    for cell in cells:
        if cell.a:
            if (cell.a.text):
                row.append(cell.a.text)
                continue
        row.append(cell.string.strip())
    return row
def get_row():
    data = []
    for tr in wiki_table.find_all('tr'):
        row = get_cell(tr)
        if len(row) != 2:
            continue
        data.append(row)
    return data
data = get_row()
columns = ['Neighborhood', 'Community Area']
df = pd.DataFrame(data, columns=columns)
df.head()
df.shape
```
### 3. Cleaning the data
```
df = df[df.Neighborhood != 'Not assigned']
df = df.sort_values(by=['Neighborhood','Community Area'])
df.reset_index(inplace=True)
df.drop('index',axis=1,inplace=True)
df.head()
df.shape
```
We have our neighborhoods, but we need more information to get their geographical locations. One way is to use ZIP codes, and [this City of Chicago website](https://data.cityofchicago.org/api/views/unjd-c2ca/rows.csv?accessType=DOWNLOAD) provides the relevant data.
```
df_zip=pd.read_csv('Zip_Codes.csv')
df_zip.head()
```
We only need the ZIP column.
```
df_zip=df_zip['ZIP']
df_zip=pd.DataFrame(df_zip)
df_zip.head()
```
We have ZIP codes, but we don't know which ZIP codes fall in which neighborhood or what their latitude and longitude are, so we need data that can help us map these ZIP codes to coordinates and then to neighborhoods. The dataset at [SimpleMaps](https://simplemaps.com/data/us-zips) provides this information, so we will use it.
```
df3=pd.read_csv('uszips.csv')
df3.head()
df3['city'].unique()
```
We only need data related to Chicago
```
df3=df3[df3['city']=='Chicago']
df3.rename(columns={'zip':'ZIP'}, inplace=True) # renaming column for merging
chicago_df=pd.merge(df_zip, df3, how='left')
chicago_df
```
Drop Null Entries
```
chicago_df = chicago_df[np.isfinite(chicago_df['lat'])]
chicago_df.drop(['zcta','parent_zcta','county_fips','all_county_weights','imprecise','military','timezone'],axis=1,inplace=True )
chicago_df
chicago_df['coord_pairs']=chicago_df[['lat', 'lng']].values.round(4).tolist()
chicago_df.head()
```
### Getting Neighborhood Names for Each ZIP Code
We will now use geocoder to extract neighborhood names for the lat/lng pairs, which are already mapped to ZIP codes.
```
def get_neighbor(latlng):
g=geocoder.mapbox(latlng, method='reverse',key='pk.eyJ1IjoiaGNkNzQ5ODYiLCJhIjoiY2sxejh6OGNuMG82YzNjbnNjNjAxbXd4ayJ9.UyAu6s5crbE_QpzNpGg4fw')
a=g.json['raw']['neighborhood']
return a
# df['Neighbour']=df['new'].apply(lambda x : get_neighbour(x))
chicago_df['Neighborhood'] = chicago_df['coord_pairs'].apply(get_neighbor)
chicago_df.head()
chicago_df.describe()
chicago_df['Neighborhood'].value_counts()
```
Some neighborhoods appear to have more than one ZIP code; the duplicates would be surplus for us and may affect our clusters and their analysis, so we need to drop them.
```
chicago_df.drop_duplicates(subset ="Neighborhood", keep = 'first', inplace = True)
```
Saving the cleaned data for further analysis.
```
chicago_df.to_csv('Chicago.csv')
```
**Getting Lat,Lng for the city of Chicago**
```
address = 'Chicago, IL'
geolocator = Nominatim(user_agent="ch_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geographical coordinates of Chicago City are {}, {}.'.format(latitude, longitude))
```
## Create a Map of Chicago and Place Markers to Identify Neighborhoods
**Folium** is a great visualization library. We can zoom into the map below and click on each circle marker to reveal the name of the neighborhood.
```
# create map of Chicago using latitude and longitude values
map_Chicago = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map (the column created above is named 'Neighborhood')
for lat, lng, neighborhood in zip(chicago_df['lat'], chicago_df['lng'], chicago_df['Neighborhood']):
    label = '{}'.format(neighborhood)
    label = folium.Popup(label, parse_html=True)
    folium.CircleMarker(
        [lat, lng],
        radius=5,
        popup=label,
        color='blue',
        fill=True,
        fill_color='#3186cc',
        fill_opacity=0.7,
        parse_html=False).add_to(map_Chicago)
map_Chicago
```
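The data-collection plan above also calls for pulling venues for each neighborhood from the Foursquare API. The sketch below shows what such a request could look like for the first neighborhood; it is only illustrative, the `CLIENT_ID` and `CLIENT_SECRET` placeholders are assumptions you must replace with your own credentials, and it assumes the v2 `explore` endpoint.
```
# Hypothetical Foursquare request for the first neighborhood; credentials are placeholders.
CLIENT_ID = 'YOUR_CLIENT_ID'          # assumption: supply your own Foursquare client id
CLIENT_SECRET = 'YOUR_CLIENT_SECRET'  # assumption: supply your own Foursquare client secret
VERSION = '20180605'
lat, lng = chicago_df.iloc[0]['lat'], chicago_df.iloc[0]['lng']
url = ('https://api.foursquare.com/v2/venues/explore'
       '?client_id={}&client_secret={}&v={}&ll={},{}&radius=500&limit=100').format(
           CLIENT_ID, CLIENT_SECRET, VERSION, lat, lng)
results = requests.get(url).json()
items = results['response']['groups'][0]['items']
print('{} venues returned near {}'.format(len(items), chicago_df.iloc[0]['Neighborhood']))
```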
Hypothesis Testing
==================
Copyright 2016 Allen Downey
License: [Creative Commons Attribution 4.0 International](http://creativecommons.org/licenses/by/4.0/)
```
from __future__ import print_function, division
import numpy
import scipy.stats
import matplotlib.pyplot as pyplot
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
import first
# seed the random number generator so we all get the same results
numpy.random.seed(19)
# some nicer colors from http://colorbrewer2.org/
COLOR1 = '#7fc97f'
COLOR2 = '#beaed4'
COLOR3 = '#fdc086'
COLOR4 = '#ffff99'
COLOR5 = '#386cb0'
%matplotlib inline
```
## Part One
Suppose you observe an apparent difference between two groups and you want to check whether it might be due to chance.
As an example, we'll look at differences between first babies and others. The `first` module provides code to read data from the National Survey of Family Growth (NSFG).
```
live, firsts, others = first.MakeFrames()
```
We'll look at a couple of variables, including pregnancy length and birth weight. The effect size we'll consider is the difference in the means.
Other examples might include a correlation between variables or a coefficient in a linear regression. The number that quantifies the size of the effect is called the "test statistic".
```
def TestStatistic(data):
    group1, group2 = data
    test_stat = abs(group1.mean() - group2.mean())
    return test_stat
```
For the first example, I extract the pregnancy length for first babies and others. The results are pandas Series objects.
```
group1 = firsts.prglngth
group2 = others.prglngth
```
The actual difference in the means is 0.078 weeks, which is only 13 hours.
```
actual = TestStatistic((group1, group2))
actual
```
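The conversion from weeks to hours is simple arithmetic, which we can verify with the `actual` value computed above.
```
# express the difference in means in hours (about 13)
print(actual * 7 * 24)
```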
The null hypothesis is that there is no difference between the groups. We can model that by forming a pooled sample that includes first babies and others.
```
n, m = len(group1), len(group2)
pool = numpy.hstack((group1, group2))
```
Then we can simulate the null hypothesis by shuffling the pool and dividing it into two groups, using the same sizes as the actual sample.
```
def RunModel():
    numpy.random.shuffle(pool)
    data = pool[:n], pool[n:]
    return data
```
The result of running the model is two NumPy arrays with the shuffled pregnancy lengths:
```
RunModel()
```
Then we compute the same test statistic using the simulated data:
```
TestStatistic(RunModel())
```
If we run the model 1000 times and compute the test statistic, we can see how much the test statistic varies under the null hypothesis.
```
test_stats = numpy.array([TestStatistic(RunModel()) for i in range(1000)])
test_stats.shape
```
Here's the sampling distribution of the test statistic under the null hypothesis, with the actual difference in means indicated by a gray line.
```
pyplot.vlines(actual, 0, 300, linewidth=3, color='0.8')
pyplot.hist(test_stats, color=COLOR5)
pyplot.xlabel('difference in means')
pyplot.ylabel('count')
None
```
The p-value is the probability that, under the null hypothesis, the test statistic is as big as or bigger than the observed value.
```
pvalue = sum(test_stats >= actual) / len(test_stats)
pvalue
```
In this case the result is about 15%, which means that even if there is no difference between the groups, it is plausible that we could see a sample difference as big as 0.078 weeks.
We conclude that the apparent effect might be due to chance, so we are not confident that it would appear in the general population, or in another sample from the same population.
STOP HERE
---------
Part Two
========
We can take the pieces from the previous section and organize them in a class that represents the structure of a hypothesis test.
```
class HypothesisTest(object):
"""Represents a hypothesis test."""
def __init__(self, data):
"""Initializes.
data: data in whatever form is relevant
"""
self.data = data
self.MakeModel()
self.actual = self.TestStatistic(data)
self.test_stats = None
def PValue(self, iters=1000):
"""Computes the distribution of the test statistic and p-value.
iters: number of iterations
returns: float p-value
"""
self.test_stats = numpy.array([self.TestStatistic(self.RunModel())
for _ in range(iters)])
count = sum(self.test_stats >= self.actual)
return count / iters
def MaxTestStat(self):
"""Returns the largest test statistic seen during simulations.
"""
return max(self.test_stats)
def PlotHist(self, label=None):
"""Draws a Cdf with vertical lines at the observed test stat.
"""
ys, xs, patches = pyplot.hist(ht.test_stats, color=COLOR4)
pyplot.vlines(self.actual, 0, max(ys), linewidth=3, color='0.8')
pyplot.xlabel('test statistic')
pyplot.ylabel('count')
def TestStatistic(self, data):
"""Computes the test statistic.
data: data in whatever form is relevant
"""
raise NotImplementedError()
def MakeModel(self):
"""Build a model of the null hypothesis.
"""
pass
def RunModel(self):
"""Run the model of the null hypothesis.
returns: simulated data
"""
raise NotImplementedError()
```
`HypothesisTest` is an abstract parent class that encodes the template. Child classes fill in the missing methods. For example, here's the test from the previous section.
```
class DiffMeansPermute(HypothesisTest):
"""Tests a difference in means by permutation."""
def TestStatistic(self, data):
"""Computes the test statistic.
data: data in whatever form is relevant
"""
group1, group2 = data
test_stat = abs(group1.mean() - group2.mean())
return test_stat
def MakeModel(self):
"""Build a model of the null hypothesis.
"""
group1, group2 = self.data
self.n, self.m = len(group1), len(group2)
self.pool = numpy.hstack((group1, group2))
def RunModel(self):
"""Run the model of the null hypothesis.
returns: simulated data
"""
numpy.random.shuffle(self.pool)
data = self.pool[:self.n], self.pool[self.n:]
return data
```
Now we can run the test by instantiating a DiffMeansPermute object:
```
data = (firsts.prglngth, others.prglngth)
ht = DiffMeansPermute(data)
p_value = ht.PValue(iters=1000)
print('\nmeans permute pregnancy length')
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
```
And we can plot the sampling distribution of the test statistic under the null hypothesis.
```
ht.PlotHist()
```
### Difference in standard deviation
**Exercise 1**: Write a class named `DiffStdPermute` that extends `DiffMeansPermute` and overrides `TestStatistic` to compute the difference in standard deviations. Is the difference in standard deviations statistically significant?
```
# Solution goes here
```
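For reference, here is one possible solution, shown only as a hedged sketch (try the exercise yourself first). It reuses everything from `DiffMeansPermute` and overrides only `TestStatistic`:
```
class DiffStdPermute(DiffMeansPermute):
    """Tests a difference in standard deviations by permutation."""

    def TestStatistic(self, data):
        """Computes the absolute difference in standard deviations."""
        group1, group2 = data
        return abs(group1.std() - group2.std())
```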
Here's the code to test your solution to the previous exercise.
```
data = (firsts.prglngth, others.prglngth)
ht = DiffStdPermute(data)
p_value = ht.PValue(iters=1000)
print('\nstd permute pregnancy length')
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
```
### Difference in birth weights
Now let's run DiffMeansPermute again to see if there is a difference in birth weight between first babies and others.
```
data = (firsts.totalwgt_lb.dropna(), others.totalwgt_lb.dropna())
ht = DiffMeansPermute(data)
p_value = ht.PValue(iters=1000)
print('\nmeans permute birthweight')
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
```
In this case, after 1000 attempts we never see a simulated difference as big as the observed difference, so we conclude that the apparent effect is unlikely under the null hypothesis. Under normal circumstances, we can also infer that the apparent effect is unlikely to be caused by random sampling.
One final note: in this case I would report that the p-value is less than 1/1000, or less than 0.001. I would not report p=0, because the apparent effect is not impossible under the null hypothesis, just unlikely.
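As a small sketch of that reporting convention (the `report_pvalue` helper below is hypothetical, not part of the notebook above):
```
def report_pvalue(p_value, iters=1000):
    # Report an upper bound instead of zero when no simulated value reaches the observed one.
    if p_value == 0:
        print('p-value <', 1 / iters)
    else:
        print('p-value =', p_value)

report_pvalue(p_value, iters=1000)
```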
```
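# Python 2 scraping notebook: drives a Chrome session (with saved cookies) via Selenium to open
# real-estate listing pages, parses the listing details with BeautifulSoup, collects the fields
# into rows, writes them to an Excel workbook with openpyxl, and saves the browser cookies to a
# JSON file for reuse.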
from openpyxl import load_workbook
from bs4 import BeautifulSoup
from selenium import webdriver
from time import sleep
import csv
from random import randint
import json, io
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
from selenium.webdriver.common.action_chains import ActionChains
import urllib
import urllib3
import requests
import json, io
from bs4 import BeautifulSoup
urllib3.disable_warnings()
header = {'User-Agent':'Mozilla/5.0'}
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--user-agent=Mozilla/5.0')
chrome_options.add_argument("user-data-dir=selenium")
driver = webdriver.Chrome(chrome_options=chrome_options, executable_path=r'/Users/Name/Downloads/Compressed/chromedrives/chromedriver.exe')
cookies = json.load(open('cookiesdict.txt'))
for cookie in cookies:
driver.add_cookie(cookie)
i=10
driver.switch_to.window(driver.window_handles[0])
# driver.close()
i=0
about0=driver.find_elements_by_class_name('col-md-12')
while True:
driver.switch_to.window(driver.window_handles[0])
if i%10==0:
driver.execute_script("window.scrollTo(0, 200000)")
about0=driver.find_elements_by_class_name('col-md-12')
sleep(1)
about0=driver.find_elements_by_class_name('col-md-12')
actions = ActionChains(driver)
about=about0[i].find_element_by_tag_name('a')
actions.key_down(Keys.CONTROL).click(about).key_up(Keys.CONTROL).perform()
sleep(3)
driver.switch_to.window(driver.window_handles[1])
driver.close()
i=i+1
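# Parse whichever listing page the driver currently has loaded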
so1=BeautifulSoup(driver.page_source, 'lxml')
data_stor1=[]
data_stor0=['-']*21
address=so1.find_all(id="MainContent_lblFullAddress")[0].text
address
data_stor0[0]=address
price=so1.find_all(id="MainContent_lblListPrice")[0].text
price
data_stor0[1]=price
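# Walk the detail-table cells and pull bedrooms, bathrooms, residential size, lot and year built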
tds0=so1.find_all('td')
for ii in range(len(tds0)):
if tds0[ii].text=='Bedrooms:':
Bedrooms=tds0[ii+1].text.replace('\n',' ').replace('\t','')
if Bedrooms[0]==' ':
Bedrooms=Bedrooms[1:len(Bedrooms)]
print Bedrooms
data_stor0[2]=Bedrooms
if tds0[ii].text=='Bathrooms:':
Bathrooms=tds0[ii+1].text.replace('\n',' ').replace('\t','')
if Bathrooms[0]==' ':
Bathrooms=Bathrooms[1:len(Bathrooms)]
print Bathrooms
data_stor0[3]=Bathrooms
if tds0[ii].text=='\nResidential:':
Residential=tds0[ii+1].text.replace('\n',' ').replace('\t','')
if Residential[0]==' ':
Residential=Residential[1:len(Residential)]
print Residential
data_stor0[4]=Residential
if tds0[ii].text=='Lot:':
Lot=tds0[ii+1].text.replace('\n','')
print Lot
data_stor0[5]=Lot
if tds0[ii].text=='Year Built:':
Year_Built=tds0[ii+1].text.replace('\n','')
print Year_Built
data_stor0[6]=Year_Built
Description=so1.find_all('span',id="MainContent_lblDescription")[0].text
Description
data_stor0[7]=Description
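# Scan the feature boxes for architecture style, amenities, floor count, parking spaces, pool, porch and room count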
box11=so1.find_all('div',class_="box1 margin-top10")
boxinner1=box11[0].find_all('div',class_="col-md-4")
for mm in range(0,len(boxinner1)):
if boxinner1[mm].find_all('span')[0].text=='Architecture Style':
Architecture_Style=boxinner1[mm].find_all('span')[1].text
print Architecture_Style
data_stor0[8]=Architecture_Style
if boxinner1[mm].find_all('span')[0].text=='Has Ceiling Fan(s)':
Has_Ceiling_Fan=boxinner1[mm].find_all('img')[0].get('alt')
print Has_Ceiling_Fan
data_stor0[9]=Has_Ceiling_Fan
if boxinner1[mm].find_all('span')[0].text=='Has Deck':
Has_Deck=boxinner1[mm].find_all('img')[0].get('alt')
data_stor0[10]=Has_Deck
print Has_Deck
if boxinner1[mm].find_all('span')[0].text=='Double-paned Windows':
Double_paned_Windows=boxinner1[mm].find_all('img')[0].get('alt')
data_stor0[11]= Double_paned_Windows
print Double_paned_Windows
if boxinner1[mm].find_all('span')[0].text=='# of Floors':
Nos_Floors=boxinner1[mm].find_all('span')[1].text
data_stor0[12]=Nos_Floors
print Nos_Floors
if boxinner1[mm].find_all('span')[0].text=='# of Parking Spaces':
Parking_Spaces=boxinner1[mm].find_all('span')[1].text
print Parking_Spaces
data_stor0[13]=Parking_Spaces
if boxinner1[mm].find_all('span')[0].text=='Has Pool':
Has_Pool=boxinner1[mm].find_all('img')[0].get('alt')
print Has_Pool
data_stor0[14]=Has_Pool
if boxinner1[mm].find_all('span')[0].text=='Has Porch':
Has_Porch=boxinner1[mm].find_all('img')[0].get('alt')
print Has_Porch
data_stor0[15]=Has_Porch
if boxinner1[mm].find_all('span')[0].text=='Room Count':
Parking_Spaces=boxinner1[mm].find_all('span')[1].text
print Parking_Spaces
data_stor0[16]=Parking_Spaces
if so1.find_all('h2')[2].text=='Listing info':
courtasyof=so1.find_all('h2')[2].next_element.next_element.next_element
print courtasyof.text
data_stor0[17]=courtasyof.text
courtasy_des=courtasyof.next_element.next_element.next_element.next_element
courtasy_des1=courtasy_des.replace('\t','').replace('\n',' ')
if courtasy_des1[0]==' ':
courtasy_des1=courtasy_des1[1:len(courtasy_des1)]
print courtasy_des1
data_stor0[18]=courtasy_des1
box22=so1.find_all('p',class_="small")[0].text
box23=box22.replace('\n',' ').replace('\t','').replace(' ',' ').replace(' ',' ')
if box23[0]==' ':
box23=box23[1:len(box23)]
if box23[0]==' ':
box23=box23[1:len(box23)]
print box23
data_stor0[19]=box23
current_link=driver.current_url
name=current_link.split('/')[-2]
data_stor0[20]=name
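# Collect the gallery image URLs and build local filenames (the actual download call is commented out)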
images0=so1.find_all(id="galleria")[0].find_all('img')
photos_name_Stor=[]
for gg in range(0,len(images0)):
photoslinks=images0[gg].get('src')
# print photoslinks
phot_name=name+'_'+str(gg)+'.jpg'
# urllib.urlretrieve(photoslinks,phot_name)
photos_name_Stor.append(phot_name)
data_stor0=data_stor0+photos_name_Stor
print data_stor0
data_stor1.append(data_stor0)
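# Write the collected rows to an Excel workbook with openpyxl in write-only mode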
import warnings
from openpyxl import Workbook
wb = Workbook(write_only=True)
ws = wb.create_sheet()
# now we'll fill it with 100 rows x 200 columns
for irow in data_stor1:
ws.append(irow)
# save the file
wb.save('home_des1.xlsx')
cookiesdict=driver.get_cookies()
cookiesdict
import json, io
with io.open('cookiesdict.txt', 'w', encoding='utf8') as json_file:
data3 = json.dumps(cookiesdict, ensure_ascii=False, encoding='utf8',indent=4, sort_keys=True)
json_file.write(unicode(data3))
```
# RadarCOVID-Report
## Data Extraction
```
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.get("PWD")
if current_working_directory:
os.chdir(current_working_directory)
sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
```
### Constants
```
from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
```
### Parameters
```
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
report_backend_identifier = environment_backend_identifier
else:
report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download = \
os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
report_backend_identifiers = None
else:
report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers
environment_invalid_shared_diagnoses_dates = \
os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
```
### COVID-19 Cases
```
report_backend_client = \
exposure_notification_io.get_backend_client_with_identifier(
backend_identifier=report_backend_identifier)
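# Download worldwide daily COVID-19 case counts from Our World in Data, retrying on transient failures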
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe():
return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv")
confirmed_df_ = download_cases_dataframe()
confirmed_df_.iloc[0]
confirmed_df = confirmed_df_.copy()
confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]]
confirmed_df.rename(
columns={
"date": "sample_date",
"iso_code": "country_code",
},
inplace=True)
def convert_iso_alpha_3_to_alpha_2(x):
try:
return pycountry.countries.get(alpha_3=x).alpha_2
except Exception as e:
logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}")
return None
confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)
confirmed_df.dropna(inplace=True)
confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)
confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_df.sort_values("sample_date", inplace=True)
confirmed_df.tail()
confirmed_days = pd.date_range(
start=confirmed_df.iloc[0].sample_date,
end=extraction_datetime)
confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"])
confirmed_days_df["sample_date_string"] = \
confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_days_df.tail()
def sort_source_regions_for_display(source_regions: list) -> list:
if report_backend_identifier in source_regions:
source_regions = [report_backend_identifier] + \
list(sorted(set(source_regions).difference([report_backend_identifier])))
else:
source_regions = list(sorted(source_regions))
return source_regions
report_source_regions = report_backend_client.source_regions_for_date(
date=extraction_datetime.date())
report_source_regions = sort_source_regions_for_display(
source_regions=report_source_regions)
report_source_regions
def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):
source_regions_at_date_df = confirmed_days_df.copy()
source_regions_at_date_df["source_regions_at_date"] = \
source_regions_at_date_df.sample_date.apply(
lambda x: source_regions_for_date_function(date=x))
source_regions_at_date_df.sort_values("sample_date", inplace=True)
source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \
source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x)))
source_regions_at_date_df.tail()
#%%
source_regions_for_summary_df_ = \
source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy()
source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True)
source_regions_for_summary_df_.tail()
#%%
confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"]
confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)
for source_regions_group, source_regions_group_series in \
source_regions_at_date_df.groupby("_source_regions_group"):
source_regions_set = set(source_regions_group.split(","))
confirmed_source_regions_set_df = \
confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()
confirmed_source_regions_group_df = \
confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \
.reset_index().sort_values("sample_date")
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df.merge(
confirmed_days_df[["sample_date_string"]].rename(
columns={"sample_date_string": "sample_date"}),
how="right")
confirmed_source_regions_group_df["new_cases"] = \
confirmed_source_regions_group_df["new_cases"].clip(lower=0)
confirmed_source_regions_group_df["covid_cases"] = \
confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[confirmed_output_columns]
confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)
confirmed_source_regions_group_df.fillna(method="ffill", inplace=True)
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[
confirmed_source_regions_group_df.sample_date.isin(
source_regions_group_series.sample_date_string)]
confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)
result_df = confirmed_output_df.copy()
result_df.tail()
#%%
result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True)
result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left")
result_df.sort_values("sample_date_string", inplace=True)
result_df.fillna(method="ffill", inplace=True)
result_df.tail()
#%%
result_df[["new_cases", "covid_cases"]].plot()
if columns_suffix:
result_df.rename(
columns={
"new_cases": "new_cases_" + columns_suffix,
"covid_cases": "covid_cases_" + columns_suffix},
inplace=True)
return result_df, source_regions_for_summary_df_
confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(
report_backend_client.source_regions_for_date)
confirmed_es_df, _ = get_cases_dataframe(
lambda date: [spain_region_country_code],
columns_suffix=spain_region_country_code.lower())
```
### Extract API TEKs
```
raw_zip_path_prefix = "Data/TEKs/Raw/"
base_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_error_backend_identifiers=base_backend_identifiers,
save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
columns={
"generation_datetime": "sample_datetime",
"generation_date_string": "sample_date_string",
},
inplace=True)
multi_backend_exposure_keys_df.head()
early_teks_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
"sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()
active_regions = \
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
["sample_date_string", "region"]).key_data.nunique().reset_index() \
.pivot(index="sample_date_string", columns="region") \
.sort_index(ascending=False)
multi_backend_summary_df.rename(
columns={"key_data": "shared_teks_by_generation_date"},
inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
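# For each pair of backends, compute the TEKs they have in common and the fraction of backend A's TEKs also present in backend B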
def compute_keys_cross_sharing(x):
teks_x = x.key_data_x.item()
common_teks = set(teks_x).intersection(x.key_data_y.item())
common_teks_fraction = len(common_teks) / len(teks_x)
return pd.Series(dict(
common_teks=common_teks,
common_teks_fraction=common_teks_fraction,
))
multi_backend_exposure_keys_by_region_df = \
multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index()
multi_backend_exposure_keys_by_region_df["_merge"] = True
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_df.merge(
multi_backend_exposure_keys_by_region_df, on="_merge")
multi_backend_exposure_keys_by_region_combination_df.drop(
columns=["_merge"], inplace=True)
if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_combination_df[
multi_backend_exposure_keys_by_region_combination_df.region_x !=
multi_backend_exposure_keys_by_region_combination_df.region_y]
multi_backend_exposure_keys_cross_sharing_df = \
multi_backend_exposure_keys_by_region_combination_df \
.groupby(["region_x", "region_y"]) \
.apply(compute_keys_cross_sharing) \
.reset_index()
multi_backend_cross_sharing_summary_df = \
multi_backend_exposure_keys_cross_sharing_df.pivot_table(
values=["common_teks_fraction"],
columns="region_x",
index="region_y",
aggfunc=lambda x: x.item())
multi_backend_cross_sharing_summary_df
multi_backend_without_active_region_exposure_keys_df = \
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region
exposure_keys_summary_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.region == report_backend_identifier]
exposure_keys_summary_df.drop(columns=["region"], inplace=True)
exposure_keys_summary_df = \
exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()
```
### Dump API TEKs
```
tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + "Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"
for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
os.makedirs(os.path.dirname(path), exist_ok=True)
tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]
tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
tek_list_current_path,
lines=True, orient="records")
tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json(
tek_list_daily_path,
lines=True, orient="records")
tek_list_base_df.to_json(
tek_list_hourly_path,
lines=True, orient="records")
tek_list_base_df.head()
```
### Load TEK Dumps
```
import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame(columns=["region"])
file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
if limit:
file_paths = file_paths[:limit]
for file_path in file_paths:
logging.info(f"Loading TEKs from '{file_path}'...")
iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
extracted_teks_df = extracted_teks_df.append(
iteration_extracted_teks_df, sort=False)
extracted_teks_df["region"] = \
extracted_teks_df.region.fillna(spain_region_country_code).copy()
if region:
extracted_teks_df = \
extracted_teks_df[extracted_teks_df.region == region]
return extracted_teks_df
daily_extracted_teks_df = load_extracted_teks(
mode="Daily",
region=report_backend_identifier,
limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()
exposure_keys_summary_df_ = daily_extracted_teks_df \
.sort_values("extraction_date", ascending=False) \
.groupby("sample_date").tek_list.first() \
.to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
.rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
.sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()
```
### Daily New TEKs
```
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
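# For a given extraction (upload) date, find the TEKs first seen on that date and group them by their generation date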
def compute_teks_by_generation_and_upload_date(date):
day_new_teks_set_df = tek_list_df.copy().diff()
try:
day_new_teks_set = day_new_teks_set_df[
day_new_teks_set_df.index == date].tek_list.item()
except ValueError:
day_new_teks_set = None
if pd.isna(day_new_teks_set):
day_new_teks_set = set()
day_new_teks_df = daily_extracted_teks_df[
daily_extracted_teks_df.extraction_date == date].copy()
day_new_teks_df["shared_teks"] = \
day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
day_new_teks_df["shared_teks"] = \
day_new_teks_df.shared_teks.apply(len)
day_new_teks_df["upload_date"] = date
day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
day_new_teks_df = day_new_teks_df[
["upload_date", "generation_date", "shared_teks"]]
day_new_teks_df["generation_to_upload_days"] = \
(pd.to_datetime(day_new_teks_df.upload_date) -
pd.to_datetime(day_new_teks_df.generation_date)).dt.days
day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
return day_new_teks_df
shared_teks_generation_to_upload_df = pd.DataFrame()
for upload_date in daily_extracted_teks_df.extraction_date.unique():
shared_teks_generation_to_upload_df = \
shared_teks_generation_to_upload_df.append(
compute_teks_by_generation_and_upload_date(date=upload_date))
shared_teks_generation_to_upload_df \
.sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()
today_new_teks_df = \
shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
if not today_new_teks_df.empty:
today_new_teks_df.set_index("generation_to_upload_days") \
.sort_index().shared_teks.plot.bar()
generation_to_upload_period_pivot_df = \
shared_teks_generation_to_upload_df[
["upload_date", "generation_to_upload_days", "shared_teks"]] \
.pivot(index="upload_date", columns="generation_to_upload_days") \
.sort_index(ascending=False).fillna(0).astype(int) \
.droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "shared_teks_by_upload_date",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.tail()
shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \
[["upload_date", "shared_teks"]].rename(
columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_teks_uploaded_on_generation_date",
})
shared_teks_uploaded_on_generation_date_df.head()
estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \
.groupby(["upload_date"]).shared_teks.max().reset_index() \
.sort_values(["upload_date"], ascending=False) \
.rename(columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_diagnoses",
})
invalid_shared_diagnoses_dates_mask = \
estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)
estimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0
estimated_shared_diagnoses_df.head()
```
### Hourly New TEKs
```
hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
.sort_index(ascending=True)
hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
"new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
"extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()
hourly_summary_df = hourly_new_tek_count_df.copy()
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
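# Drop the first hour: its TEK-count diff has no earlier hourly dump to compare against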
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()
```
### Official Statistics
```
import requests
import pandas.io.json
official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics")
official_stats_response.raise_for_status()
official_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json())
official_stats_df = official_stats_df_.copy()
official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True)
official_stats_df.head()
official_stats_column_map = {
"date": "sample_date",
"applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated",
"communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated",
}
accumulated_suffix = "_accumulated"
accumulated_values_columns = \
list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))
interpolated_values_columns = \
list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))
official_stats_df = \
official_stats_df[official_stats_column_map.keys()] \
.rename(columns=official_stats_column_map)
official_stats_df["extraction_date"] = extraction_date
official_stats_df.head()
official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json"
previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True)
previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True)
official_stats_df = official_stats_df.append(previous_official_stats_df)
official_stats_df.head()
official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]
official_stats_df.sort_values("extraction_date", ascending=False, inplace=True)
official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True)
official_stats_df.head()
official_stats_stored_df = official_stats_df.copy()
official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d")
official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True)
official_stats_df.drop(columns=["extraction_date"], inplace=True)
official_stats_df = confirmed_days_df.merge(official_stats_df, how="left")
official_stats_df.sort_values("sample_date", ascending=False, inplace=True)
official_stats_df.head()
official_stats_df[accumulated_values_columns] = \
official_stats_df[accumulated_values_columns] \
.astype(float).interpolate(limit_area="inside")
official_stats_df[interpolated_values_columns] = \
official_stats_df[accumulated_values_columns].diff(periods=-1)
official_stats_df.drop(columns="sample_date", inplace=True)
official_stats_df.head()
```
### Data Merge
```
result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
official_stats_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left")
result_summary_df.set_index(["sample_date", "source_regions"], inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()
with pd.option_context("mode.use_inf_as_na", True):
result_summary_df = result_summary_df.fillna(0).astype(int)
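# Derived ratios: TEKs uploaded per shared diagnosis, and shared diagnoses per COVID-19 case (the "usage ratio")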
result_summary_df["teks_per_shared_diagnosis"] = \
(result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case"] = \
(result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)
result_summary_df.head(daily_plot_days)
def compute_aggregated_results_summary(days) -> pd.DataFrame:
aggregated_result_summary_df = result_summary_df.copy()
aggregated_result_summary_df["covid_cases_for_ratio"] = \
aggregated_result_summary_df.covid_cases.mask(
aggregated_result_summary_df.shared_diagnoses == 0, 0)
aggregated_result_summary_df["covid_cases_for_ratio_es"] = \
aggregated_result_summary_df.covid_cases_es.mask(
aggregated_result_summary_df.shared_diagnoses_es == 0, 0)
aggregated_result_summary_df = aggregated_result_summary_df \
.sort_index(ascending=True).fillna(0).rolling(days).agg({
"covid_cases": "sum",
"covid_cases_es": "sum",
"covid_cases_for_ratio": "sum",
"covid_cases_for_ratio_es": "sum",
"shared_teks_by_generation_date": "sum",
"shared_teks_by_upload_date": "sum",
"shared_diagnoses": "sum",
"shared_diagnoses_es": "sum",
}).sort_index(ascending=False)
with pd.option_context("mode.use_inf_as_na", True):
aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)
aggregated_result_summary_df["teks_per_shared_diagnosis"] = \
(aggregated_result_summary_df.shared_teks_by_upload_date /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \
(aggregated_result_summary_df.shared_diagnoses /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(aggregated_result_summary_df.shared_diagnoses_es /
aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)
return aggregated_result_summary_df
aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)
aggregated_result_with_7_days_window_summary_df.head()
last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1]
last_7_days_summary
aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=13)
last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1]
last_14_days_summary
```
## Report Results
```
display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Backend\u00A0(A)",
"region_y": "Backend\u00A0(B)",
"common_teks": "Common TEKs Shared Between Backends",
"common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)",
"covid_cases": "COVID-19 Cases (Source Countries)",
"shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)",
"shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)",
"shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)",
"shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)",
"teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)",
"shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)",
"covid_cases_es": "COVID-19 Cases (Spain)",
"app_downloads_es": "App Downloads (Spain – Official)",
"shared_diagnoses_es": "Shared Diagnoses (Spain – Official)",
"shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)",
}
summary_columns = [
"covid_cases",
"shared_teks_by_generation_date",
"shared_teks_by_upload_date",
"shared_teks_uploaded_on_generation_date",
"shared_diagnoses",
"teks_per_shared_diagnosis",
"shared_diagnoses_per_covid_case",
"covid_cases_es",
"app_downloads_es",
"shared_diagnoses_es",
"shared_diagnoses_per_covid_case_es",
]
summary_percentage_columns= [
"shared_diagnoses_per_covid_case_es",
"shared_diagnoses_per_covid_case",
]
```
### Daily Summary Table
```
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
```
### Daily Summary Plots
```
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
title=f"Daily Summary",
rot=45, subplots=True, figsize=(15, 30), legend=False)
ax_ = summary_ax_list[0]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
for percentage_column in summary_percentage_columns:
percentage_column_index = summary_columns.index(percentage_column)
summary_ax_list[percentage_column_index].yaxis \
.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
```
### Daily Generation to Upload Period Table
```
display_generation_to_upload_period_pivot_df = \
generation_to_upload_period_pivot_df \
.head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
.head(backend_generation_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping)
fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
"Shared TEKs Generation to Upload Period Table")
sns.heatmap(
data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping)
.rename_axis(index=display_column_name_mapping),
fmt=".0f",
annot=True,
ax=generation_to_upload_period_pivot_table_ax)
generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
```
### Hourly Summary Plots
```
hourly_summary_ax_list = hourly_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.plot.bar(
title=f"Last 24h Summary",
rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.9)
_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
```
### Publish Results
```
github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "",
}
general_columns = \
list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))
general_formatter = lambda x: f"{x}" if x != 0 else ""
display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))
daily_summary_table_html = result_summary_with_display_names_df \
.head(daily_plot_days) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.to_html(formatters=display_formatters)
multi_backend_summary_table_html = multi_backend_summary_df \
.head(daily_plot_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(formatters=display_formatters)
def format_multi_backend_cross_sharing_fraction(x):
if pd.isna(x):
return "-"
elif round(x * 100, 1) == 0:
return ""
else:
return f"{x:.1%}"
multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(
classes="table-center",
formatters=display_formatters,
float_format=format_multi_backend_cross_sharing_fraction)
multi_backend_cross_sharing_summary_table_html = \
multi_backend_cross_sharing_summary_table_html \
.replace("<tr>","<tr style=\"text-align: center;\">")
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
covid_cases = \
extraction_date_result_summary_df.covid_cases.item()
shared_teks_by_generation_date = \
extraction_date_result_summary_df.shared_teks_by_generation_date.item()
shared_teks_by_upload_date = \
extraction_date_result_summary_df.shared_teks_by_upload_date.item()
shared_diagnoses = \
extraction_date_result_summary_df.shared_diagnoses.item()
teks_per_shared_diagnosis = \
extraction_date_result_summary_df.teks_per_shared_diagnosis.item()
shared_diagnoses_per_covid_case = \
extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()
shared_teks_by_upload_date_last_hour = \
extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)
display_source_regions = ", ".join(report_source_regions)
if len(report_source_regions) == 1:
display_brief_source_regions = report_source_regions[0]
else:
display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺"
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
import dataframe_image as dfi
df = df.copy()
df_styler = df.style.format(display_formatters)
media_path = get_temporary_image_path()
dfi.export(df_styler, media_path)
return media_path
summary_plots_image_path = save_temporary_plot_image(
ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(
df=result_summary_with_display_names_df)
hourly_summary_plots_image_path = save_temporary_plot_image(
ax=hourly_summary_ax_list)
multi_backend_summary_table_image_path = save_temporary_dataframe_image(
df=multi_backend_summary_df)
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image(
ax=generation_to_upload_period_pivot_table_ax)
```
### Save Results
```
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Table.csv")
multi_backend_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Summary-Table.csv")
multi_backend_cross_sharing_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv")
generation_to_upload_period_pivot_df.to_csv(
report_resources_path_prefix + "Generation-Upload-Period-Table.csv")
_ = shutil.copyfile(
summary_plots_image_path,
report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(
summary_table_image_path,
report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(
hourly_summary_plots_image_path,
report_resources_path_prefix + "Hourly-Summary-Plots.png")
_ = shutil.copyfile(
multi_backend_summary_table_image_path,
report_resources_path_prefix + "Multi-Backend-Summary-Table.png")
_ = shutil.copyfile(
generation_to_upload_period_pivot_table_image_path,
report_resources_path_prefix + "Generation-Upload-Period-Table.png")
```
### Publish Results as JSON
```
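# Build one API record per day plus aggregate blocks (last hour, today, last 7/14 days) and dump everything to Summary-Results.json.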
def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="records")
summary_api_results = \
generate_summary_api_results(df=result_summary_df)
today_summary_api_results = \
generate_summary_api_results(df=extraction_date_result_summary_df)[0]
summary_results = dict(
backend_identifier=report_backend_identifier,
source_regions=report_source_regions,
extraction_datetime=extraction_datetime,
extraction_date=extraction_date,
extraction_date_with_hour=extraction_date_with_hour,
last_hour=dict(
shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,
shared_diagnoses=0,
),
today=today_summary_api_results,
last_7_days=last_7_days_summary,
last_14_days=last_14_days_summary,
daily_results=summary_api_results)
summary_results = \
json.loads(pd.Series([summary_results]).to_json(orient="records"))[0]
with open(report_resources_path_prefix + "Summary-Results.json", "w") as f:
json.dump(summary_results, f, indent=4)
```
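The published JSON can be consumed directly. Below is a minimal sketch (assuming the default output path written by the cell above) that loads it back and reads a few of its fields:
```
# Sketch only: load the JSON produced above and inspect a few fields.
import json

with open("Data/Resources/Current/RadarCOVID-Report-Summary-Results.json") as f:
    summary = json.load(f)

print(summary["extraction_date_with_hour"])
print(summary["last_14_days"]["shared_diagnoses_per_covid_case"])
print(len(summary["daily_results"]))
```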
### Publish on README
```
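# Fill the README template with the generated HTML tables and metadata, then overwrite the repository README.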
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_html=multi_backend_summary_table_html,
multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,
display_source_regions=display_source_regions)
with open("README.md", "w") as f:
f.write(readme_contents)
```
### Publish on Twitter
```
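# Tweet only on scheduled CI runs, and only when new TEKs were uploaded in the last hour or the daily results are already complete.
# RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS is expected as "consumer_key:consumer_secret:access_token:access_token_secret".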
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
generation_to_upload_period_pivot_table_image_media.media_id,
]
if are_today_results_partial:
today_addendum = " (Partial)"
else:
today_addendum = ""
def format_shared_diagnoses_per_covid_case(value) -> str:
if value == 0:
return "–"
return f"≤{value:.2%}"
display_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)
display_last_14_days_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"])
display_last_14_days_shared_diagnoses_per_covid_case_es = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"])
status = textwrap.dedent(f"""
#RadarCOVID – {extraction_date_with_hour}
Today{today_addendum}:
- Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)
- Shared Diagnoses: ≤{shared_diagnoses:.0f}
- Usage Ratio: {display_shared_diagnoses_per_covid_case}
Last 14 Days:
- Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case}
- Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es}
Info: {github_project_base_url}#documentation
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
```
# Ear Record Generator
```
%matplotlib inline
import sys
sys.path.append('/homes/yz4009/wd/gitdev/TFNet/')
import numpy as np
import menpo.io as mio
import scipy.io as sio
from io import BytesIO
from scipy.sparse import csr_matrix
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import xml.etree.ElementTree as ET
import json
import glob
import cv2
import scipy
import utils
import os
from menpo.image import Image
from menpo.visualize import print_dynamic, print_progress
from dAAMs.lineerror import interpolate
from dAAMs.tools import loadmatToDict, multi_channel_svs
from scipy.spatial.distance import pdist
from pathlib import Path
from menpo.shape import PointCloud, PointUndirectedGraph
from menpo.transform import Translation
from menpofit.transform import DifferentiableAlignmentSimilarity
from menpowidgets import visualize_images, visualize_pointclouds
from IPython.html.widgets import interact
from IPython.html.widgets import Button
from IPython.display import display, clear_output
from dAAMs.svs import SVS, MultiSVS
from pycocotools.coco import COCO
import skimage.io as io
import pylab
import data_provider
import tensorflow as tf
slim = tf.contrib.slim
def get_jpg_string(im):
# Gets the serialized jpg from a menpo `Image`.
fp = BytesIO()
mio.export_image(im, fp, extension='jpg')
fp.seek(0)
return fp.read()
def _int_feauture(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def _bytes_feauture(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feauture(value):
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
```
### Ear Record
```
store_path = Path('/homes/yz4009/wd/databases/tfrecords')
load_path = Path('/vol/atlas/databases/ear/UERC/UERC 2017 Dataset/Train Dataset')
record_name = '%s.tfrecords'%'UERC_train'
print(record_name)
def data_iterator():
database_path = load_path
id_no = 1
for identity_path in print_progress(list(load_path.glob('*'))):
if identity_path.is_dir():
images = mio.import_images(identity_path)
for img in images:
cimgs = utils.crop_image(img, img.centre(), img.diagonal()/350, [256,256], base=384)[0]
img_height = 256
img_width = 256
id_no = int(identity_path.stem)
yield cimgs, img_height, img_width, id_no
id_no += 1
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
writer = tf.python_io.TFRecordWriter(str(store_path/record_name))
for img_all, img_height, img_width, id_no in iterator:
example = tf.train.Example(
features=tf.train.Features(
# Features contains a map of string to Feature proto objects
feature={
# images
'image': _bytes_feauture(get_jpg_string(img_all)),
'height': _int_feauture(img_height),
'width': _int_feauture(img_width),
'id_no': _int_feauture(id_no)
}))
# use the proto object to serialize the example to a string
serialized = example.SerializeToString()
# write the serialized object to disk
writer.write(serialized)
writer.close()
generate(data_iterator())
```
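To sanity-check the output, here is a minimal sketch (using the same TF 1.x API as above; `store_path` and `record_name` are the ones defined in the cell just above) that reads the first serialized example back and prints its features:
```
# Sketch only: read back the first example from UERC_train.tfrecords and inspect its features.
record_path = str(store_path / record_name)
serialized = next(tf.python_io.tf_record_iterator(record_path))
example = tf.train.Example.FromString(serialized)
features = example.features.feature
print(features['id_no'].int64_list.value[0],
      features['height'].int64_list.value[0],
      features['width'].int64_list.value[0],
      len(features['image'].bytes_list.value[0]), 'jpg bytes')
```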
#### save to file
```
store_path = Path('/homes/yz4009/wd/databases/UERC_160')
load_path = Path('/vol/atlas/databases/ear/UERC/UERC 2017 Dataset/Train Dataset')
record_name = '%s.tfrecords'%'UERC_train'
print(record_name)
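# This variant skips TFRecords and instead exports each 160x160 crop as a PNG under one folder per identity.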
def data_iterator():
database_path = load_path
id_no = 1
for identity_path in print_progress(list(load_path.glob('*'))):
if identity_path.is_dir():
images = mio.import_images(identity_path)
for img_id,img in enumerate(images):
cimgs = utils.crop_image(img, img.centre(), img.diagonal()/350, [160,160], base=384)[0]
img_height = 160
img_width = 160
id_no = int(identity_path.stem)
yield cimgs, img_height, img_width, id_no, img_id
id_no += 1
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
for img_all, img_height, img_width, id_no, img_id in iterator:
d_path = str(store_path/str(id_no))
if not os.path.exists(d_path):
os.mkdir(d_path)
mio.export_image(img_all, store_path/str(id_no)/('%04d.png'%img_id))
generate(data_iterator())
store_path = Path('/homes/yz4009/wd/databases/UERC_160_generate')
load_path = Path('/vol/atlas/databases/ear/UERC/UERC 2017 Dataset/Test Dataset')
record_name = '%s.tfrecords'%'UERC_test'
print(record_name)
def data_iterator():
database_path = load_path
for identity_path in print_progress(list(load_path.glob('*'))):
if identity_path.is_dir():
images = mio.import_images(identity_path)
for img_id,img in enumerate(images):
cimgs = utils.crop_image(img, img.centre(), img.diagonal()/350, [160,160], base=384)[0]
img_height = 160
img_width = 160
id_no = identity_path.stem
image_name = img.path.name
yield cimgs, img_height, img_width, id_no, image_name
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
for img_all, img_height, img_width, id_no, image_name in iterator:
d_path = str(store_path/str(id_no))
if not os.path.exists(d_path):
os.mkdir(d_path)
mio.export_image(img_all, store_path/str(id_no)/image_name)
generate(data_iterator())
```
#### vgg ear
```
store_path = Path('/homes/yz4009/wd/databases/VGGEAR_160')
load_path = Path('/homes/yz4009/wd/databases/ear/VGGEers-Recognition')
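# Labels start at 167 and increment per identity folder; single-channel images are stacked to three channels before cropping to 160x160.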
def data_iterator():
database_path = load_path
id_no = 167
for identity_path in print_progress(list(load_path.glob('*'))):
if identity_path.is_dir():
images = mio.import_images(identity_path)
for img_id,img in enumerate(images):
img = img.crop_to_landmarks_proportion(0.1)
if img.n_channels == 1:
img = Image(np.stack([img.pixels.squeeze() for _ in range(3)]))
cimgs = utils.crop_image(img, img.centre(), img.diagonal()/350, [160,160], base=384)[0]
img_height = 160
img_width = 160
yield cimgs, img_height, img_width, id_no, img_id
id_no += 1
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
for img_all, img_height, img_width, id_no, img_id in iterator:
d_path = str(store_path/str(id_no))
if not os.path.exists(d_path):
os.mkdir(d_path)
mio.export_image(img_all, store_path/str(id_no)/('%d_%04d.png'%(id_no,img_id)))
generate(data_iterator())
```
### Face Record
#### train
```
store_path = Path('/homes/yz4009/wd/databases/tfrecords')
load_path = Path('/vol/atlas/homes/jiankang/code/facenet/data/CASIA_182_multi/')
record_name = '%s.tfrecords'%'CASIA_182'
print(record_name)
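# All images of an identity are stacked and flattened into one tall (n_img*182, 182, 3) strip so each record holds a whole identity;
# identities with fewer than 16 images are resampled (with replacement) up to 16, and at most 354 images are kept per identity.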
def data_iterator():
database_path = load_path
image_id = 1
for tpath in print_progress(list(load_path.glob('*'))):
if tpath.is_dir():
img_height = 182
img_width = 182
img_all = np.stack([img.pixels_with_channels_at_back() for img in mio.import_images(tpath)])
if len(img_all) < 16:
img_all = img_all[np.random.choice(len(img_all),16)]
n_img = np.min([len(img_all), 354])
img_all = img_all.reshape(-1,img_height,img_width,3)
img_all = img_all[:n_img].reshape(-1,img_width,3)
yield Image.init_from_channels_at_back(img_all), img_height, img_width, n_img, image_id
image_id += 1
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
writer = tf.python_io.TFRecordWriter(str(store_path/record_name))
for img_all, img_height, img_width, n_img, id_no in iterator:
try:
example = tf.train.Example(
features=tf.train.Features(
# Features contains a map of string to Feature proto objects
feature={
# images
'image': _bytes_feauture(get_jpg_string(img_all)),
'height': _int_feauture(img_height),
'width': _int_feauture(img_width),
'n_image': _int_feauture(n_img),
'id_no': _int_feauture(id_no)
}))
# use the proto object to serialize the example to a string
serialized = example.SerializeToString()
# write the serialized object to disk
writer.write(serialized)
except Exception as e:
print(e)
writer.close()
generate(data_iterator())
store_path = Path('/homes/yz4009/wd/databases/tfrecords')
load_path = Path('/vol/atlas/homes/jiankang/data/recognition/data/CASIA_112/')
record_name = '%s.tfrecords'%'CASIA'
print(record_name)
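# Here each source image is already a grid of 112x112 crops: split it into individual crops, keep at most 36*16 per identity, then flatten back into one tall strip.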
def data_iterator():
database_path = load_path
for timg in print_progress(mio.import_images(load_path)):
img_height = 112
img_width = 112
id_no = int(timg.path.stem)
nh,nw = np.array(timg.shape) // 112
n_img = np.min([nh*nw, 36*16])
img_all = timg.pixels.reshape(3,nh,112,nw,112).transpose(1,3,2,4,0).reshape(-1,112,112,3)
img_all = img_all[:n_img].reshape(-1,112,3)
yield Image.init_from_channels_at_back(img_all), img_height, img_width, n_img, id_no
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
writer = tf.python_io.TFRecordWriter(str(store_path/record_name))
for img_all, img_height, img_width, n_img, id_no in iterator:
try:
example = tf.train.Example(
features=tf.train.Features(
# Features contains a map of string to Feature proto objects
feature={
# images
'image': _bytes_feauture(get_jpg_string(img_all)),
'height': _int_feauture(img_height),
'width': _int_feauture(img_width),
'n_image': _int_feauture(n_img),
'id_no': _int_feauture(id_no)
}))
# use the proto object to serialize the example to a string
serialized = example.SerializeToString()
# write the serialized object to disk
writer.write(serialized)
except Exception as e:
print(e)
writer.close()
generate(data_iterator())
```
#### evaluate
```
store_path = Path('/homes/yz4009/wd/databases/tfrecords')
load_path = Path('/vol/atlas/homes/jiankang/code/facenet/data/lfw_160/')
record_name = '%s.tfrecords'%'LFW_160'
print(record_name)
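# Note: img_height/img_width are set to 182 below, so the crops under lfw_160/ must actually be 182x182 for the reshape to succeed.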
def data_iterator():
database_path = load_path
image_id = 1
for tpath in print_progress(list(load_path.glob('*'))):
if tpath.is_dir():
img_height = 182
img_width = 182
img_all = np.stack([img.pixels_with_channels_at_back() for img in mio.import_images(tpath)])
n_img = np.min([len(img_all), 354])
img_all = img_all.reshape(-1,img_height,img_width,3)
img_all = img_all[:n_img].reshape(-1,img_width,3)
yield Image.init_from_channels_at_back(img_all), img_height, img_width, n_img, image_id
image_id += 1
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
writer = tf.python_io.TFRecordWriter(str(store_path/record_name))
for img_all, img_height, img_width, n_img, id_no in iterator:
try:
example = tf.train.Example(
features=tf.train.Features(
# Features contains a map of string to Feature proto objects
feature={
# images
'image': _bytes_feauture(get_jpg_string(img_all)),
'height': _int_feauture(img_height),
'width': _int_feauture(img_width),
'n_image': _int_feauture(n_img),
'id_no': _int_feauture(id_no)
}))
# use the proto object to serialize the example to a string
serialized = example.SerializeToString()
# write the serialized object to disk
writer.write(serialized)
except Exception as e:
print(e)
writer.close()
generate(data_iterator())
store_path = Path('/homes/yz4009/wd/databases/tfrecords')
load_path = Path('/vol/atlas/homes/jiankang/data/recognition/data/lfw_112/')
record_name = '%s.tfrecords'%'LFW'
print(record_name)
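# lfw_pairs.txt starts with a tab-separated "<n_folds> <n_pairs>" line; within each fold the first n_pairs lines are matched pairs (id_no=1)
# and the next n_pairs are mismatched pairs (id_no=0). Each record stores the two 112x112 crops concatenated vertically.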
def data_iterator():
database_path = load_path
img_height = 112
img_width = 112
n_img=2
with open('/homes/yz4009/Desktop/lfw_pairs.txt') as f:
pairs = f.readlines()
n_fold, n_pairs = map(int, pairs[0].strip().split('\t'))
pairs = pairs[1:]
for fold in print_progress(range(n_fold)):
for p in range(n_pairs):
name,id1,id2=pairs[fold*n_pairs*2+p].strip().split('\t')
img1 = mio.import_image(database_path/name/('%s_%04d.jpg'%(name, int(id1))))
img2 = mio.import_image(database_path/name/('%s_%04d.jpg'%(name, int(id2))))
img_all = Image(np.concatenate([img1.pixels, img2.pixels], axis=1))
yield img_all, img_height, img_width, n_img, 1
for p in range(n_pairs, n_pairs*2):
name1,id1, name2,id2=pairs[fold*n_pairs*2+p].strip().split('\t')
img1 = mio.import_image(database_path/name1/('%s_%04d.jpg'%(name1, int(id1))))
img2 = mio.import_image(database_path/name2/('%s_%04d.jpg'%(name2, int(id2))))
img_all = Image(np.concatenate([img1.pixels, img2.pixels], axis=1))
yield img_all, img_height, img_width, n_img, 0
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
writer = tf.python_io.TFRecordWriter(str(store_path/record_name))
for img_all, img_height, img_width, n_img, id_no in iterator:
try:
example = tf.train.Example(
features=tf.train.Features(
# Features contains a map of string to Feature proto objects
feature={
# images
'image': _bytes_feauture(get_jpg_string(img_all)),
'height': _int_feauture(img_height),
'width': _int_feauture(img_width),
'n_image': _int_feauture(n_img),
'id_no': _int_feauture(id_no)
}))
# use the proto object to serialize the example to a string
serialized = example.SerializeToString()
# write the serialized object to disk
writer.write(serialized)
except Exception as e:
print(e)
writer.close()
generate(data_iterator())
import struct
```
load_path = Path('/vol/atlas/databases/ear/UERC/UERC 2017 Dataset/Test Dataset')
record_name = '%s.tfrecords'%'UERC_test'
print(record_name)
def data_iterator():
database_path = load_path
for identity_path in print_progress(list(load_path.glob('*'))):
if identity_path.is_dir():
images = mio.import_images(identity_path)
for img_id,img in enumerate(images):
cimgs = utils.crop_image(img, img.centre(), img.diagonal()/350, [160,160], base=384)[0]
img_height = 160
img_width = 160
id_no = identity_path.stem
image_name = img.path.name
yield cimgs, img_height, img_width, id_no, image_name
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
for img_all, img_height, img_width, id_no, image_name in iterator:
d_path = str(store_path/str(id_no))
if not os.path.exists(d_path):
os.mkdir(d_path)
mio.export_image(img_all, store_path/str(id_no)/image_name)
generate(data_iterator())
store_path = Path('/homes/yz4009/wd/databases/VGGEAR_160')
load_path = Path('/homes/yz4009/wd/databases/ear/VGGEers-Recognition')
def data_iterator():
database_path = load_path
id_no = 167
for identity_path in print_progress(list(load_path.glob('*'))):
if identity_path.is_dir():
images = mio.import_images(identity_path)
for img_id,img in enumerate(images):
img = img.crop_to_landmarks_proportion(0.1)
if img.n_channels == 1:
img = Image(np.stack([img.pixels.squeeze() for _ in range(3)]))
cimgs = utils.crop_image(img, img.centre(), img.diagonal()/350, [160,160], base=384)[0]
img_height = 160
img_width = 160
yield cimgs, img_height, img_width, id_no, img_id
id_no += 1
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
for img_all, img_height, img_width, id_no, img_id in iterator:
d_path = str(store_path/str(id_no))
if not os.path.exists(d_path):
os.mkdir(d_path)
mio.export_image(img_all, store_path/str(id_no)/('%d_%04d.png'%(id_no,img_id)))
generate(data_iterator())
np.random.choice(3,5)
store_path = Path('/homes/yz4009/wd/databases/tfrecords')
load_path = Path('/vol/atlas/homes/jiankang/code/facenet/data/CASIA_182_multi/')
record_name = '%s.tfrecords'%'CASIA_182'
print(record_name)
def data_iterator():
database_path = load_path
image_id = 1
for tpath in print_progress(list(load_path.glob('*'))):
if tpath.is_dir():
img_height = 182
img_width = 182
img_all = np.stack([img.pixels_with_channels_at_back() for img in mio.import_images(tpath)])
if len(img_all) < 16:
img_all = img_all[np.random.choice(len(img_all),16)]
n_img = np.min([len(img_all), 354])
img_all = img_all.reshape(-1,img_height,img_width,3)
img_all = img_all[:n_img].reshape(-1,img_width,3)
yield Image.init_from_channels_at_back(img_all), img_height, img_width, n_img, image_id
image_id += 1
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
writer = tf.python_io.TFRecordWriter(str(store_path/record_name))
for img_all, img_height, img_width, n_img, id_no in iterator:
try:
example = tf.train.Example(
features=tf.train.Features(
# Features contains a map of string to Feature proto objects
feature={
# images
'image': _bytes_feauture(get_jpg_string(img_all)),
'height': _int_feauture(img_height),
'width': _int_feauture(img_width),
'n_image': _int_feauture(n_img),
'id_no': _int_feauture(id_no)
}))
# use the proto object to serialize the example to a string
serialized = example.SerializeToString()
# write the serialized object to disk
writer.write(serialized)
except Exception as e:
print(e)
writer.close()
generate(data_iterator())
store_path = Path('/homes/yz4009/wd/databases/tfrecords')
load_path = Path('/vol/atlas/homes/jiankang/data/recognition/data/CASIA_112/')
record_name = '%s.tfrecords'%'CASIA'
print(record_name)
def data_iterator():
database_path = load_path
for timg in print_progress(mio.import_images(load_path)):
img_height = 112
img_width = 112
id_no = int(timg.path.stem)
nh,nw = np.array(timg.shape) // 112
n_img = np.min([nh*nw, 36*16])
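        # Each imported image is a montage of nh x nw aligned 112x112 crops;
        # split it into individual crops, keep the first n_img, and re-stack
        # them vertically so the set can be stored as a single JPEG string.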
img_all = timg.pixels.reshape(3,nh,112,nw,112).transpose(1,3,2,4,0).reshape(-1,112,112,3)
img_all = img_all[:n_img].reshape(-1,112,3)
yield Image.init_from_channels_at_back(img_all), img_height, img_width, n_img, id_no
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
writer = tf.python_io.TFRecordWriter(str(store_path/record_name))
for img_all, img_height, img_width, n_img, id_no in iterator:
try:
example = tf.train.Example(
features=tf.train.Features(
# Features contains a map of string to Feature proto objects
feature={
# images
'image': _bytes_feauture(get_jpg_string(img_all)),
'height': _int_feauture(img_height),
'width': _int_feauture(img_width),
'n_image': _int_feauture(n_img),
'id_no': _int_feauture(id_no)
}))
# use the proto object to serialize the example to a string
serialized = example.SerializeToString()
# write the serialized object to disk
writer.write(serialized)
except Exception as e:
print(e)
writer.close()
generate(data_iterator())
store_path = Path('/homes/yz4009/wd/databases/tfrecords')
load_path = Path('/vol/atlas/homes/jiankang/code/facenet/data/lfw_160/')
record_name = '%s.tfrecords'%'LFW_160'
print(record_name)
def data_iterator():
database_path = load_path
image_id = 1
for tpath in print_progress(list(load_path.glob('*'))):
if tpath.is_dir():
img_height = 182
img_width = 182
img_all = np.stack([img.pixels_with_channels_at_back() for img in mio.import_images(tpath)])
n_img = np.min([len(img_all), 354])
img_all = img_all.reshape(-1,img_height,img_width,3)
img_all = img_all[:n_img].reshape(-1,img_width,3)
yield Image.init_from_channels_at_back(img_all), img_height, img_width, n_img, image_id
image_id += 1
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
writer = tf.python_io.TFRecordWriter(str(store_path/record_name))
for img_all, img_height, img_width, n_img, id_no in iterator:
try:
example = tf.train.Example(
features=tf.train.Features(
# Features contains a map of string to Feature proto objects
feature={
# images
'image': _bytes_feauture(get_jpg_string(img_all)),
'height': _int_feauture(img_height),
'width': _int_feauture(img_width),
'n_image': _int_feauture(n_img),
'id_no': _int_feauture(id_no)
}))
# use the proto object to serialize the example to a string
serialized = example.SerializeToString()
# write the serialized object to disk
writer.write(serialized)
except Exception as e:
print(e)
writer.close()
generate(data_iterator())
store_path = Path('/homes/yz4009/wd/databases/tfrecords')
load_path = Path('/vol/atlas/homes/jiankang/data/recognition/data/lfw_112/')
record_name = '%s.tfrecords'%'LFW'
print(record_name)
def data_iterator():
database_path = load_path
img_height = 112
img_width = 112
n_img=2
with open('/homes/yz4009/Desktop/lfw_pairs.txt') as f:
pairs = f.readlines()
n_fold, n_pairs = map(int, pairs[0].strip().split('\t'))
pairs = pairs[1:]
for fold in print_progress(range(n_fold)):
for p in range(n_pairs):
name,id1,id2=pairs[fold*n_pairs*2+p].strip().split('\t')
img1 = mio.import_image(database_path/name/('%s_%04d.jpg'%(name, int(id1))))
img2 = mio.import_image(database_path/name/('%s_%04d.jpg'%(name, int(id2))))
img_all = Image(np.concatenate([img1.pixels, img2.pixels], axis=1))
yield img_all, img_height, img_width, n_img, 1
for p in range(n_pairs, n_pairs*2):
name1,id1, name2,id2=pairs[fold*n_pairs*2+p].strip().split('\t')
img1 = mio.import_image(database_path/name1/('%s_%04d.jpg'%(name1, int(id1))))
img2 = mio.import_image(database_path/name2/('%s_%04d.jpg'%(name2, int(id2))))
img_all = Image(np.concatenate([img1.pixels, img2.pixels], axis=1))
yield img_all, img_height, img_width, n_img, 0
def generate(iterator,
store_path=store_path,
record_name=record_name,
base=384):
store_path = Path(store_path)
writer = tf.python_io.TFRecordWriter(str(store_path/record_name))
for img_all, img_height, img_width, n_img, id_no in iterator:
try:
example = tf.train.Example(
features=tf.train.Features(
# Features contains a map of string to Feature proto objects
feature={
# images
'image': _bytes_feauture(get_jpg_string(img_all)),
'height': _int_feauture(img_height),
'width': _int_feauture(img_width),
'n_image': _int_feauture(n_img),
'id_no': _int_feauture(id_no)
}))
# use the proto object to serialize the example to a string
serialized = example.SerializeToString()
# write the serialized object to disk
writer.write(serialized)
except Exception as e:
print(e)
writer.close()
generate(data_iterator())
import struct
| 0.417984 | 0.51501 |
<center>
<img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# **SpaceX Falcon 9 first stage Landing Prediction**
# Lab 1: Collecting the data
Estimated time needed: **45** minutes
In this capstone, we will predict if the Falcon 9 first stage will land successfully. SpaceX advertises Falcon 9 rocket launches on its website with a cost of 62 million dollars; other providers cost upward of 165 million dollars each, and much of the savings is because SpaceX can reuse the first stage. Therefore, if we can determine whether the first stage will land, we can determine the cost of a launch. This information can be used if an alternate company wants to bid against SpaceX for a rocket launch. In this lab, you will collect the data from an API and make sure it is in the correct format. The following is an example of a successful landing.

Several examples of an unsuccessful landing are shown here:

Most unsuccessful landings are planned. SpaceX performs controlled landings in the ocean.
## Objectives
In this lab, you will make a GET request to the SpaceX API. You will also do some basic data wrangling and formatting.
* Request to the SpaceX API
* Clean the requested data
***
## Import Libraries and Define Auxiliary Functions
We will import the following libraries into the lab
```
# Requests allows us to make HTTP requests which we will use to get data from an API
import requests
# Pandas is a software library written for the Python programming language for data manipulation and analysis.
import pandas as pd
# NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays
import numpy as np
# Datetime is a library that allows us to represent dates
import datetime
# Setting this option will print all columns of a dataframe
pd.set_option('display.max_columns', None)
# Setting this option will print all of the data in a feature
pd.set_option('display.max_colwidth', None)
```
Below we will define a series of helper functions that will help us use the API to extract information using identification numbers in the launch data.
From the <code>rocket</code> column we would like to learn the booster name.
```
# Takes the dataset and uses the rocket column to call the API and append the data to the list
def getBoosterVersion(data):
for x in data['rocket']:
response = requests.get("https://api.spacexdata.com/v4/rockets/"+str(x)).json()
BoosterVersion.append(response['name'])
```
From the <code>launchpad</code> we would like to know the name of the launch site being used, the longitude, and the latitude.
```
# Takes the dataset and uses the launchpad column to call the API and append the data to the list
def getLaunchSite(data):
for x in data['launchpad']:
response = requests.get("https://api.spacexdata.com/v4/launchpads/"+str(x)).json()
Longitude.append(response['longitude'])
Latitude.append(response['latitude'])
LaunchSite.append(response['name'])
```
From the <code>payload</code> we would like to learn the mass of the payload and the orbit that it is going to.
```
# Takes the dataset and uses the payloads column to call the API and append the data to the lists
def getPayloadData(data):
for load in data['payloads']:
response = requests.get("https://api.spacexdata.com/v4/payloads/"+load).json()
PayloadMass.append(response['mass_kg'])
Orbit.append(response['orbit'])
```
From <code>cores</code> we would like to learn the outcome of the landing, the type of the landing, number of flights with that core, whether gridfins were used, whether the core is reused, whether legs were used, the landing pad used, the block of the core which is a number used to separate versions of cores, the number of times this specific core has been reused, and the serial of the core.
```
# Takes the dataset and uses the cores column to call the API and append the data to the lists
def getCoreData(data):
for core in data['cores']:
if core['core'] != None:
response = requests.get("https://api.spacexdata.com/v4/cores/"+core['core']).json()
Block.append(response['block'])
ReusedCount.append(response['reuse_count'])
Serial.append(response['serial'])
else:
Block.append(None)
ReusedCount.append(None)
Serial.append(None)
Outcome.append(str(core['landing_success'])+' '+str(core['landing_type']))
Flights.append(core['flight'])
GridFins.append(core['gridfins'])
Reused.append(core['reused'])
Legs.append(core['legs'])
LandingPad.append(core['landpad'])
```
Now let's start requesting rocket launch data from SpaceX API with the following URL:
```
spacex_url="https://api.spacexdata.com/v4/launches/past"
response = requests.get(spacex_url)
```
Check the content of the response
```
print(response.content)
```
You should see that the response contains a massive amount of information about SpaceX launches. Next, let's try to discover some more relevant information for this project.
### Task 1: Request and parse the SpaceX launch data using the GET request
To make the requested JSON results more consistent, we will use the following static response object for this project:
```
static_json_url='https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/API_call_spacex_api.json'
```
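As a minimal sketch of the request this task asks for (an assumption here: we simply overwrite the <code>response</code> variable used by the cells below with the static JSON), one possibility is:
```
# Sketch: fetch the static JSON so the later cells work from a consistent snapshot
response = requests.get(static_json_url)
```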
We should see that the request was successful, with a 200 status response code
```
response.status_code
```
Now we decode the response content as JSON using <code>.json()</code> and turn it into a Pandas dataframe using <code>.json_normalize()</code>
```
# Use the json_normalize method to convert the JSON result into a dataframe
data = pd.json_normalize(response.json())
```
Using the dataframe <code>data</code> print the first 5 rows
```
# Get the head of the dataframe
data.head(5)
```
You will notice that a lot of the data are IDs. For example, the rocket column has no information about the rocket, just an identification number.
We will now use the API again to get information about the launches using the IDs given for each launch. Specifically we will be using columns <code>rocket</code>, <code>payloads</code>, <code>launchpad</code>, and <code>cores</code>.
```
# Lets take a subset of our dataframe keeping only the features we want and the flight number, and date_utc.
data = data[['rocket', 'payloads', 'launchpad', 'cores', 'flight_number', 'date_utc']]
# We will remove rows with multiple cores because those are falcon rockets with 2 extra rocket boosters and rows that have multiple payloads in a single rocket.
data = data[data['cores'].map(len)==1]
data = data[data['payloads'].map(len)==1]
# Since payloads and cores are lists of size 1 we will also extract the single value in the list and replace the feature.
data['cores'] = data['cores'].map(lambda x : x[0])
data['payloads'] = data['payloads'].map(lambda x : x[0])
# We also want to convert the date_utc to a datetime datatype and then extracting the date leaving the time
data['date'] = pd.to_datetime(data['date_utc']).dt.date
# Using the date we will restrict the dates of the launches
data = data[data['date'] <= datetime.date(2020, 11, 13)]
```
* From the <code>rocket</code> we would like to learn the booster name
* From the <code>payload</code> we would like to learn the mass of the payload and the orbit that it is going to
* From the <code>launchpad</code> we would like to know the name of the launch site being used, the longitude, and the latitude.
* From <code>cores</code> we would like to learn the outcome of the landing, the type of the landing, number of flights with that core, whether gridfins were used, whether the core is reused, whether legs were used, the landing pad used, the block of the core which is a number used to separate versions of cores, the number of times this specific core has been reused, and the serial of the core.
The data from these requests will be stored in lists and will be used to create a new dataframe.
```
#Global variables
BoosterVersion = []
PayloadMass = []
Orbit = []
LaunchSite = []
Outcome = []
Flights = []
GridFins = []
Reused = []
Legs = []
LandingPad = []
Block = []
ReusedCount = []
Serial = []
Longitude = []
Latitude = []
```
These functions will apply the outputs globally to the above variables. Let's take a look at the <code>BoosterVersion</code> variable. Before we apply <code>getBoosterVersion</code> the list is empty:
```
BoosterVersion
```
Now, let's apply the <code>getBoosterVersion</code> function to get the booster version
```
# Call getBoosterVersion
getBoosterVersion(data)
```
The list has now been updated:
```
BoosterVersion[0:5]
```
We can apply the rest of the functions here:
```
# Call getLaunchSite
getLaunchSite(data)
# Call getPayloadData
getPayloadData(data)
# Call getCoreData
getCoreData(data)
```
Finally, let's construct our dataset using the data we have obtained. We will combine the columns into a dictionary.
```
launch_dict = {'FlightNumber': list(data['flight_number']),
'Date': list(data['date']),
'BoosterVersion':BoosterVersion,
'PayloadMass':PayloadMass,
'Orbit':Orbit,
'LaunchSite':LaunchSite,
'Outcome':Outcome,
'Flights':Flights,
'GridFins':GridFins,
'Reused':Reused,
'Legs':Legs,
'LandingPad':LandingPad,
'Block':Block,
'ReusedCount':ReusedCount,
'Serial':Serial,
'Longitude': Longitude,
'Latitude': Latitude}
```
Then, we need to create a Pandas data frame from the dictionary launch_dict.
```
# Create a dataframe from launch_dict
data_new = pd.DataFrame(launch_dict)
```
Show the summary of the dataframe
```
# Show the head of the dataframe
data_new.head(5)
```
### Task 2: Filter the dataframe to only include `Falcon 9` launches
Finally we will remove the Falcon 1 launches keeping only the Falcon 9 launches. Filter the data dataframe using the <code>BoosterVersion</code> column to only keep the Falcon 9 launches. Save the filtered data to a new dataframe called <code>data_falcon9</code>.
```
# Hint data['BoosterVersion']!='Falcon 1'
data_falcon9 = data_new[data_new['BoosterVersion']=='Falcon 9']
```
Now that we have removed some values, we should reset the FlightNumber column.
```
data_falcon9.loc[:,'FlightNumber'] = list(range(1, data_falcon9.shape[0]+1))
data_falcon9
```
## Data Wrangling
We can see below that some of the rows are missing values in our dataset.
```
data_falcon9.isnull().sum()
```
Before we can continue we must deal with these missing values. The <code>LandingPad</code> column will retain None values to represent when landing pads were not used.
### Task 3: Dealing with Missing Values
Calculate below the mean for the <code>PayloadMass</code> using the <code>.mean()</code>. Then use the mean and the <code>.replace()</code> function to replace `np.nan` values in the data with the mean you calculated.
```
# Calculate the mean value of PayloadMass column
pm_mean = data_falcon9['PayloadMass'].mean()
# Replace the np.nan values with its mean value
temp = data_falcon9['PayloadMass'].replace(np.nan, pm_mean)
data_falcon9['PayloadMass'] = temp
data_falcon9
# Replace the np.nan values with its mean value
```
You should see the number of missing values of the <code>PayloadMass</code> column change to zero.
Now we should have no missing values in our dataset except for in <code>LandingPad</code>.
We can now export it to a <b>CSV</b> for the next section, but to make the answers consistent, in the next lab we will provide data in a pre-selected date range.
<code>data_falcon9.to_csv('dataset_part_1.csv', index=False)</code>
## Authors
<a href="https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">Joseph Santarcangelo</a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ---------- | ----------------------------------- |
| 2020-09-20 | 1.1 | Joseph | get result each time you run |
| 2020-09-20 | 1.1 | Azim | Created Part 1 Lab using SpaceX API |
| 2020-09-20 | 1.0 | Joseph | Modified Multiple Areas |
Copyright © 2021 IBM Corporation. All rights reserved.
|
github_jupyter
|
# Requests allows us to make HTTP requests which we will use to get data from an API
import requests
# Pandas is a software library written for the Python programming language for data manipulation and analysis.
import pandas as pd
# NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays
import numpy as np
# Datetime is a library that allows us to represent dates
import datetime
# Setting this option will print all columns of a dataframe
pd.set_option('display.max_columns', None)
# Setting this option will print all of the data in a feature
pd.set_option('display.max_colwidth', None)
# Takes the dataset and uses the rocket column to call the API and append the data to the list
def getBoosterVersion(data):
for x in data['rocket']:
response = requests.get("https://api.spacexdata.com/v4/rockets/"+str(x)).json()
BoosterVersion.append(response['name'])
# Takes the dataset and uses the launchpad column to call the API and append the data to the list
def getLaunchSite(data):
for x in data['launchpad']:
response = requests.get("https://api.spacexdata.com/v4/launchpads/"+str(x)).json()
Longitude.append(response['longitude'])
Latitude.append(response['latitude'])
LaunchSite.append(response['name'])
# Takes the dataset and uses the payloads column to call the API and append the data to the lists
def getPayloadData(data):
for load in data['payloads']:
response = requests.get("https://api.spacexdata.com/v4/payloads/"+load).json()
PayloadMass.append(response['mass_kg'])
Orbit.append(response['orbit'])
# Takes the dataset and uses the cores column to call the API and append the data to the lists
def getCoreData(data):
for core in data['cores']:
if core['core'] != None:
response = requests.get("https://api.spacexdata.com/v4/cores/"+core['core']).json()
Block.append(response['block'])
ReusedCount.append(response['reuse_count'])
Serial.append(response['serial'])
else:
Block.append(None)
ReusedCount.append(None)
Serial.append(None)
Outcome.append(str(core['landing_success'])+' '+str(core['landing_type']))
Flights.append(core['flight'])
GridFins.append(core['gridfins'])
Reused.append(core['reused'])
Legs.append(core['legs'])
LandingPad.append(core['landpad'])
spacex_url="https://api.spacexdata.com/v4/launches/past"
response = requests.get(spacex_url)
print(response.content)
static_json_url='https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/API_call_spacex_api.json'
response.status_code
# Use the json_normalize method to convert the JSON result into a dataframe
data = pd.json_normalize(response.json())
# Get the head of the dataframe
data.head(5)
# Lets take a subset of our dataframe keeping only the features we want and the flight number, and date_utc.
data = data[['rocket', 'payloads', 'launchpad', 'cores', 'flight_number', 'date_utc']]
# We will remove rows with multiple cores because those are falcon rockets with 2 extra rocket boosters and rows that have multiple payloads in a single rocket.
data = data[data['cores'].map(len)==1]
data = data[data['payloads'].map(len)==1]
# Since payloads and cores are lists of size 1 we will also extract the single value in the list and replace the feature.
data['cores'] = data['cores'].map(lambda x : x[0])
data['payloads'] = data['payloads'].map(lambda x : x[0])
# We also want to convert the date_utc to a datetime datatype and then extracting the date leaving the time
data['date'] = pd.to_datetime(data['date_utc']).dt.date
# Using the date we will restrict the dates of the launches
data = data[data['date'] <= datetime.date(2020, 11, 13)]
#Global variables
BoosterVersion = []
PayloadMass = []
Orbit = []
LaunchSite = []
Outcome = []
Flights = []
GridFins = []
Reused = []
Legs = []
LandingPad = []
Block = []
ReusedCount = []
Serial = []
Longitude = []
Latitude = []
BoosterVersion
# Call getBoosterVersion
getBoosterVersion(data)
BoosterVersion[0:5]
# Call getLaunchSite
getLaunchSite(data)
# Call getPayloadData
getPayloadData(data)
# Call getCoreData
getCoreData(data)
launch_dict = {'FlightNumber': list(data['flight_number']),
'Date': list(data['date']),
'BoosterVersion':BoosterVersion,
'PayloadMass':PayloadMass,
'Orbit':Orbit,
'LaunchSite':LaunchSite,
'Outcome':Outcome,
'Flights':Flights,
'GridFins':GridFins,
'Reused':Reused,
'Legs':Legs,
'LandingPad':LandingPad,
'Block':Block,
'ReusedCount':ReusedCount,
'Serial':Serial,
'Longitude': Longitude,
'Latitude': Latitude}
# Create a dataframe from launch_dict
data_new = pd.DataFrame(launch_dict)
# Show the head of the dataframe
data_new.head(5)
# Hint data['BoosterVersion']!='Falcon 1'
data_falcon9 = data_new[data_new['BoosterVersion']=='Falcon 9']
data_falcon9.loc[:,'FlightNumber'] = list(range(1, data_falcon9.shape[0]+1))
data_falcon9
data_falcon9.isnull().sum()
# Calculate the mean value of PayloadMass column
pm_mean = data_falcon9['PayloadMass'].mean()
# Replace the np.nan values with its mean value
temp = data_falcon9['PayloadMass'].replace(np.nan, pm_mean)
data_falcon9['PayloadMass'] = temp
data_falcon9
# Replace the np.nan values with its mean value
| 0.440469 | 0.990348 |
# Simulated CO<sub>2</sub> distributions
```
%load_ext autoreload
%autoreload 2
from itertools import product
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import cartopy.crs as ccrs
import figure_panels
import models
import obs_surface
import util
```
## Load data from [CarbonTracker (CT2019B)](http://dx.doi.org/10.25925/20201008)
Use the `models` interface to compute zonal means and surface mole fractions.
```
model_list = ['CT2019B',]
model_obj = {model: models.Model(model) for model in model_list}
model_obj
%%time
dset_srf = {
model: model_obj[model].open_derived_dataset('molefractions_surface_daily')
for model in model_list
}
ds_spo = {
model: model_obj[model].open_derived_dataset('spo_ts_daily')
for model in model_list
}
for model in model_list:
for v in ['CO2', 'CO2_OCN', 'CO2_LND', 'CO2_FFF']:
dset_srf[model][v] = dset_srf[model][v] - ds_spo[model][v]
dset_srf[model] = dset_srf[model].groupby('time.season').mean('time').compute()
dset_srf[model_list[0]].CO2.sel(season='DJF').plot(vmin=-2, vmax=2)
dset_srf
dsets_theta_bins = {}
for model in model_list:
ds = model_obj[model].open_derived_dataset(
'molefractions_theta_bins',
kwargs_name='SO-10K-bins-300K_275K',
lat_bounds=(-80., -45.),
theta_bins=[(295., 305.), (270., 280.),],
)
dsets_theta_bins[model] = ds.sel(time=slice('2009', '2020')).compute()
dsets_theta_bins
%%time
dset_za = {
model: model_obj[model].open_derived_dataset('molefractions_z_za')
for model in model_list
}
for model in model_list:
for v in ['CO2', 'CO2_OCN', 'CO2_LND', 'CO2_FFF']:
dset_za[model][v] = dset_za[model][v] - dsets_theta_bins[model][v].sel(theta_bins=300.) #ds_spo[model][v]
dset_za[model] = dset_za[model].groupby('time.season').mean('time').compute()
dset_za[model_list[0]].CO2.sel(season='DJF').plot(vmin=-2, vmax=2)
dset_za
```
## Zonal mean CO<sub>2</sub> distributions
```
tracer_name = dict(
CO2='Total',
CO2_OCN='Ocean',
CO2_LND='Land',
CO2_FFF='Fossil fuel',
)
def section_panel(model_list):
fig = plt.figure(figsize=(8, 12)) #dpi=300)
gs_outer = gridspec.GridSpec(
nrows=2, ncols=1,
left=0, right=0.95,
wspace=0.075, hspace=0.15,
)
ax_row = []
ax_col = []
ax_list = []
for j, model in enumerate(model_list):
for i, season in enumerate(['DJF', 'JJA']):
gs_inner = gs_outer[i, j].subgridspec(
nrows=2, ncols=2,
hspace=0.25, wspace=0.1
)
axs = np.array([
plt.subplot(gs_inner[0, 0]), plt.subplot(gs_inner[0, 1]),
plt.subplot(gs_inner[1, 0]), plt.subplot(gs_inner[1, 1]),
])
ds_xs = dset_za[model].sel(season=season)
lat = ds_xs.lat
zlev = ds_xs.zlev * 1e-3
theta = ds_xs.theta
for n, v in enumerate(['CO2', 'CO2_OCN', 'CO2_LND', 'CO2_FFF']):
title = f'{model} {season} {v}'
cf = figure_panels.model_CO2_xsection(lat, zlev, ds_xs[v].values, theta.values, axs[n])
axs[n].text(-89.5, 9.5, tracer_name[v], fontweight='bold', backgroundcolor='w', zorder=100)
if not (i == 1 and n >= 2):
axs[n].set_xlabel('')
if not (j == 0 and (n == 0 or n == 2)):
axs[n].set_ylabel('')
if j == 0 and n == 0:
ax_row.append(axs[n])
if i == 0 and n == 0:
ax_col.append(axs[n])
ax_list.append(axs[n])
cax = fig.add_axes([0.97, 0.2, 0.02, 0.6])
cb = plt.colorbar(cf, cax=cax)
cb.ax.set_title('$\Delta$CO$_2$ [ppm]', loc='left');
util.subplot_row_labels(ax_row, ['Summer (DJF)', 'Winter (JJA)'], yoff=-1)
util.label_plots(fig, ax_list, xoff=-0.01, yoff=0.005)
plt.suptitle(model_list[0], y=0.92,
fontsize='14', fontweight='bold', ha='center', va='center'
)
util.savefig(f'zonal-mean-sections-{model_list[0]}')
for m in model_list:
section_panel([m])
```
## Surface CO<sub>2</sub> distributions
```
fig = plt.figure(figsize=(16, 10)) #dpi=300)
gs = gridspec.GridSpec(
nrows=2, ncols=4,
left=0.1, right=0.97,
hspace=0.1, wspace=0.1)
prj = ccrs.SouthPolarStereo()
stninfo = obs_surface.get_stn_info('CO2')
stninfo = stninfo.loc[stninfo.stn.isin([s for s in stninfo.stn if 'LMG' not in s])]
axs = np.array(
[plt.subplot(gs[i, j], projection=prj)
for i, j in product(range(2), range(4))]
).reshape((2, 4))
util.label_plots(fig, [ax for ax in axs.ravel()], xoff=0., yoff=-0.01)
model = 'CT2019B'
for i, season in enumerate(['DJF', 'JJA']):
for j, flavor in enumerate(['CO2', 'CO2_OCN', 'CO2_LND', 'CO2_FFF']):
ds_map = dset_srf[model].sel(season=season)
lon = ds_map.lon
lat = ds_map.lat
field = ds_map[flavor]
cf = figure_panels.model_CO2_map(lon, lat, field, axs[i, j], stninfo=stninfo)
cax = fig.add_axes([0.99, 0.2, 0.01, 0.6])
cb = plt.colorbar(cf, cax=cax)
cb.ax.set_title('$\Delta$CO$_2$ [ppm]', loc='left');
util.subplot_row_labels(axs[:, 0], ['Summer (DJF)', 'Winter (JJA)'], xoff=60)
util.subplot_col_labels(axs[0, :], ['Total', 'Ocean', 'Land', 'Fossil'])
util.savefig('surface-co2-maps')
```
|
github_jupyter
|
%load_ext autoreload
%autoreload 2
from itertools import product
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import cartopy.crs as ccrs
import figure_panels
import models
import obs_surface
import util
model_list = ['CT2019B',]
model_obj = {model: models.Model(model) for model in model_list}
model_obj
%%time
dset_srf = {
model: model_obj[model].open_derived_dataset('molefractions_surface_daily')
for model in model_list
}
ds_spo = {
model: model_obj[model].open_derived_dataset('spo_ts_daily')
for model in model_list
}
for model in model_list:
for v in ['CO2', 'CO2_OCN', 'CO2_LND', 'CO2_FFF']:
dset_srf[model][v] = dset_srf[model][v] - ds_spo[model][v]
dset_srf[model] = dset_srf[model].groupby('time.season').mean('time').compute()
dset_srf[model_list[0]].CO2.sel(season='DJF').plot(vmin=-2, vmax=2)
dset_srf
dsets_theta_bins = {}
for model in model_list:
ds = model_obj[model].open_derived_dataset(
'molefractions_theta_bins',
kwargs_name='SO-10K-bins-300K_275K',
lat_bounds=(-80., -45.),
theta_bins=[(295., 305.), (270., 280.),],
)
dsets_theta_bins[model] = ds.sel(time=slice('2009', '2020')).compute()
dsets_theta_bins
%%time
dset_za = {
model: model_obj[model].open_derived_dataset('molefractions_z_za')
for model in model_list
}
for model in model_list:
for v in ['CO2', 'CO2_OCN', 'CO2_LND', 'CO2_FFF']:
dset_za[model][v] = dset_za[model][v] - dsets_theta_bins[model][v].sel(theta_bins=300.) #ds_spo[model][v]
dset_za[model] = dset_za[model].groupby('time.season').mean('time').compute()
dset_za[model_list[0]].CO2.sel(season='DJF').plot(vmin=-2, vmax=2)
dset_za
tracer_name = dict(
CO2='Total',
CO2_OCN='Ocean',
CO2_LND='Land',
CO2_FFF='Fossil fuel',
)
def section_panel(model_list):
fig = plt.figure(figsize=(8, 12)) #dpi=300)
gs_outer = gridspec.GridSpec(
nrows=2, ncols=1,
left=0, right=0.95,
wspace=0.075, hspace=0.15,
)
ax_row = []
ax_col = []
ax_list = []
for j, model in enumerate(model_list):
for i, season in enumerate(['DJF', 'JJA']):
gs_inner = gs_outer[i, j].subgridspec(
nrows=2, ncols=2,
hspace=0.25, wspace=0.1
)
axs = np.array([
plt.subplot(gs_inner[0, 0]), plt.subplot(gs_inner[0, 1]),
plt.subplot(gs_inner[1, 0]), plt.subplot(gs_inner[1, 1]),
])
ds_xs = dset_za[model].sel(season=season)
lat = ds_xs.lat
zlev = ds_xs.zlev * 1e-3
theta = ds_xs.theta
for n, v in enumerate(['CO2', 'CO2_OCN', 'CO2_LND', 'CO2_FFF']):
title = f'{model} {season} {v}'
cf = figure_panels.model_CO2_xsection(lat, zlev, ds_xs[v].values, theta.values, axs[n])
axs[n].text(-89.5, 9.5, tracer_name[v], fontweight='bold', backgroundcolor='w', zorder=100)
if not (i == 1 and n >= 2):
axs[n].set_xlabel('')
if not (j == 0 and (n == 0 or n == 2)):
axs[n].set_ylabel('')
if j == 0 and n == 0:
ax_row.append(axs[n])
if i == 0 and n == 0:
ax_col.append(axs[n])
ax_list.append(axs[n])
cax = fig.add_axes([0.97, 0.2, 0.02, 0.6])
cb = plt.colorbar(cf, cax=cax)
cb.ax.set_title('$\Delta$CO$_2$ [ppm]', loc='left');
util.subplot_row_labels(ax_row, ['Summer (DJF)', 'Winter (JJA)'], yoff=-1)
util.label_plots(fig, ax_list, xoff=-0.01, yoff=0.005)
plt.suptitle(model_list[0], y=0.92,
fontsize='14', fontweight='bold', ha='center', va='center'
)
util.savefig(f'zonal-mean-sections-{model_list[0]}')
for m in model_list:
section_panel([m])
fig = plt.figure(figsize=(16, 10)) #dpi=300)
gs = gridspec.GridSpec(
nrows=2, ncols=4,
left=0.1, right=0.97,
hspace=0.1, wspace=0.1)
prj = ccrs.SouthPolarStereo()
stninfo = obs_surface.get_stn_info('CO2')
stninfo = stninfo.loc[stninfo.stn.isin([s for s in stninfo.stn if 'LMG' not in s])]
axs = np.array(
[plt.subplot(gs[i, j], projection=prj)
for i, j in product(range(2), range(4))]
).reshape((2, 4))
util.label_plots(fig, [ax for ax in axs.ravel()], xoff=0., yoff=-0.01)
model = 'CT2019B'
for i, season in enumerate(['DJF', 'JJA']):
for j, flavor in enumerate(['CO2', 'CO2_OCN', 'CO2_LND', 'CO2_FFF']):
ds_map = dset_srf[model].sel(season=season)
lon = ds_map.lon
lat = ds_map.lat
field = ds_map[flavor]
cf = figure_panels.model_CO2_map(lon, lat, field, axs[i, j], stninfo=stninfo)
cax = fig.add_axes([0.99, 0.2, 0.01, 0.6])
cb = plt.colorbar(cf, cax=cax)
cb.ax.set_title('$\Delta$CO$_2$ [ppm]', loc='left');
util.subplot_row_labels(axs[:, 0], ['Summer (DJF)', 'Winter (JJA)'], xoff=60)
util.subplot_col_labels(axs[0, :], ['Total', 'Ocean', 'Land', 'Fossil'])
util.savefig('surface-co2-maps')
| 0.269326 | 0.812384 |
## Enap - Specialization in Data Science Applied to Public Policy
# D20 - Computer Science Tutoring Sessions
## Graded Activity 01
In this activity, you can assess your progress on the concepts and skills covered in the first cycle of the course, that is, the content of all classes taught in 2021.
To start your assessment, run the cell below, which enables your answers to be recorded on the grades dashboard. Then answer the questions by writing a Python function that satisfies each question's statement. Note that each question consists of three cells: the first cell is for your answer, the second cell is for tests, and the third is for validating and recording your answer.
```
#@title Célula de inicialização. Por favor, execute.
import sys, os, subprocess
```
### Question 01
Create a function that returns the string `"pelo menos um verdadeiro"` ("at least one true") when the result of the logical **conjunction** of its two parameters is true ($x \vee y = True$), or the empty string `""` when it is false ($x \vee y = False$):
```
def conjuncao(x, y):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
```
### Question 02
Create a function that returns the result of the logical **implication** between two parameters ($x \rightarrow y$):
```
def implicacao(x, y):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
```
### Question 03
Create a function that returns the result of the logical *exclusive disjunction* (XOR) between two parameters ($x \veebar y$)
```
def exclusao(x, y):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
```
### Question 04
Create a function that creates a file with the name given in the `nome` parameter:
* Hint: you can use the `os.system('cmd')` or `subprocess.getoutput('cmd')` methods to run terminal commands
```
def criar_arquivo(nome):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
```
### Question 05
Create a function that creates a directory with the name given in the `nome` parameter:
```
def criar_diretorio(nome):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
```
### Question 06
Create a function that creates a file `nomeArquivo` inside the directory `nomeDiretorio`. The function must create the directory if it does not exist yet.
* Hint 1: you can use the `os.chdir(path)` command to change directories
* Hint 2: you can also chain terminal commands with the `&&` expression, as in `os.system('cmd1 && cmd2')`:
```
def criar_arquivo_no_diretorio(nomeArquivo, nomeDiretorio):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
```
### Question 07
Create a function that creates a directory and, inside the newly created directory, creates a Python virtual environment
* Hint: use the `virtualenv` terminal command, as presented in class
```
def criar_projeto_python(nomeProjeto):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
```
### Question 08
Create a function that creates a directory and, inside the newly created directory, initializes a Git repository and configures a name and e-mail for the repository:
* Hint: use the `git init` and `git config` commands, as presented in class
```
def criar_repo_git(nomeRepositorio):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
```
### Question 09
Create a function that creates a file `nomeArquivo` inside the repository `nomeRepositorio` and commits that file to the repository:
* Note: the repository was already created in the previous exercise
* Hint 1: do not forget the `-m "comment"` option to add the commit description.
* Hint 2: you can also set the commit author with the `--author="Bruce Wayne <bruce@wayne.com>"` option
```
def create_add_commit(nomeArquivo, nomeRepositorio):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
```
### Question 10
Create a function that creates the branch `nomeBranch` inside the repository `nomeRepositorio`:
* Note: the repository was already created in Question 08
```
def create_branch(nomeBranch, nomeRepositorio):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
```
|
github_jupyter
|
#@title Célula de inicialização. Por favor, execute.
import sys, os, subprocess
def conjuncao(x, y):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
def implicacao(x, y):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
def exclusao(x, y):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
def criar_arquivo(nome):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
def criar_diretorio(nome):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
def criar_arquivo_no_diretorio(nomeArquivo, nomeDiretorio):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
def criar_projeto_python(nomeProjeto):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
def criar_repo_git(nomeRepositorio):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
def create_add_commit(nomeArquivo, nomeRepositorio):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
def create_branch(nomeBranch, nomeRepositorio):
# Escreva sua resposta aqui
# Utilize esta célula para testar sua resposta
| 0.095988 | 0.930395 |
# Introduction
In this notebook we demonstrate the use of the **BM25 (Best Matching 25)** information retrieval technique to recover trace links between Test Cases and Bug Reports.
We model our study as follows:
* Each bug report's title, summary, and description together compose a single query.
* We treat each test case's content as a document to be retrieved for the query (a rough scoring sketch follows below).
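As a rough, self-contained illustration of this setup (a sketch that assumes the third-party `rank_bm25` package and toy strings; the experiment below uses the project's own `BM_25` class instead):
```
# Toy BM25 scoring: test cases are the documents, a bug report is the query.
from rank_bm25 import BM25Okapi  # assumed installed separately (pip install rank-bm25)

test_cases = ["open a new tab and verify smooth scrolling",
              "check the bookmarks toolbar after a restart"]   # documents
bug_report = "scrolling broken when opening a new tab"         # query

bm25 = BM25Okapi([doc.split() for doc in test_cases])
scores = bm25.get_scores(bug_report.split())
print(scores)  # a higher score means the test case is more relevant to the bug report
```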
# Import Libraries
```
%load_ext autoreload
%autoreload 2
from mod_finder_util import mod_finder_util
mod_finder_util.add_modules_origin_search_path()
import pandas as pd
from modules.models_runner.tc_br_models_runner import TC_BR_Runner
from modules.models_runner.tc_br_models_runner import TC_BR_Models_Hyperp
from modules.utils import aux_functions
from modules.utils import firefox_dataset_p2 as fd
from modules.utils import tokenizers as tok
from modules.models.bm25 import BM_25
from IPython.display import display
import warnings; warnings.simplefilter('ignore')
```
# Load Datasets
```
tcs = [x for x in range(37,59)]
orc = fd.Tc_BR_Oracles.read_oracle_expert_df()
orc_subset = orc[orc.index.isin(tcs)]
#aux_functions.highlight_df(orc_subset)
tcs = [13,37,60]
brs = [1267501]
testcases = fd.Datasets.read_testcases_df()
testcases = testcases[testcases.TC_Number.isin(tcs)]
bugreports = fd.Datasets.read_selected_bugreports_df()
bugreports = bugreports[bugreports.Bug_Number.isin(brs)]
print('tc.shape: {}'.format(testcases.shape))
print('br.shape: {}'.format(bugreports.shape))
```
# Running BM25 Model
```
corpus = testcases.tc_desc
query = bugreports.br_desc
test_cases_names = testcases.tc_name
bug_reports_names = bugreports.br_name
bm25_hyperp = TC_BR_Models_Hyperp.get_bm25_model_hyperp()
bm25_model = BM_25(**bm25_hyperp)
bm25_model.set_name('BM25_Model_TC_BR')
bm25_model.recover_links(corpus, query, test_cases_names, bug_reports_names)
bm25_model.get_sim_matrix().shape
sim_matrix_normalized = bm25_model.get_sim_matrix()
aux_functions.highlight_df(sim_matrix_normalized)
sim_matrix_origin = bm25_model._sim_matrix_origin
aux_functions.highlight_df(sim_matrix_origin)
df = pd.DataFrame()
df['tc'] = corpus
df.index = test_cases_names
df.index.name = ''
#df = df.T
df.head(10)
```
Query Vector
```
tokenizer = tok.PorterStemmerBased_Tokenizer()
query_vec = [tokenizer.__call__(doc) for doc in query]
df_q = pd.DataFrame(query_vec)
df_q.index = bug_reports_names
df_q.index.name = ''
df_q = df_q.T
df_q.head(10)
```
Average Document Length
```
bm25_model.bm25.avgdl
```
Number of documents
```
bm25_model.bm25.corpus_size
```
Term frequency by document
```
bm25_model.bm25.df['apz']
```
Most Relevant Words
```
bm25_model.mrw_tcs
bm25_model.docs_feats_df
```
|
github_jupyter
|
%load_ext autoreload
%autoreload 2
from mod_finder_util import mod_finder_util
mod_finder_util.add_modules_origin_search_path()
import pandas as pd
from modules.models_runner.tc_br_models_runner import TC_BR_Runner
from modules.models_runner.tc_br_models_runner import TC_BR_Models_Hyperp
from modules.utils import aux_functions
from modules.utils import firefox_dataset_p2 as fd
from modules.utils import tokenizers as tok
from modules.models.bm25 import BM_25
from IPython.display import display
import warnings; warnings.simplefilter('ignore')
tcs = [x for x in range(37,59)]
orc = fd.Tc_BR_Oracles.read_oracle_expert_df()
orc_subset = orc[orc.index.isin(tcs)]
#aux_functions.highlight_df(orc_subset)
tcs = [13,37,60]
brs = [1267501]
testcases = fd.Datasets.read_testcases_df()
testcases = testcases[testcases.TC_Number.isin(tcs)]
bugreports = fd.Datasets.read_selected_bugreports_df()
bugreports = bugreports[bugreports.Bug_Number.isin(brs)]
print('tc.shape: {}'.format(testcases.shape))
print('br.shape: {}'.format(bugreports.shape))
corpus = testcases.tc_desc
query = bugreports.br_desc
test_cases_names = testcases.tc_name
bug_reports_names = bugreports.br_name
bm25_hyperp = TC_BR_Models_Hyperp.get_bm25_model_hyperp()
bm25_model = BM_25(**bm25_hyperp)
bm25_model.set_name('BM25_Model_TC_BR')
bm25_model.recover_links(corpus, query, test_cases_names, bug_reports_names)
bm25_model.get_sim_matrix().shape
sim_matrix_normalized = bm25_model.get_sim_matrix()
aux_functions.highlight_df(sim_matrix_normalized)
sim_matrix_origin = bm25_model._sim_matrix_origin
aux_functions.highlight_df(sim_matrix_origin)
df = pd.DataFrame()
df['tc'] = corpus
df.index = test_cases_names
df.index.name = ''
#df = df.T
df.head(10)
tokenizer = tok.PorterStemmerBased_Tokenizer()
query_vec = [tokenizer.__call__(doc) for doc in query]
df_q = pd.DataFrame(query_vec)
df_q.index = bug_reports_names
df_q.index.name = ''
df_q = df_q.T
df_q.head(10)
bm25_model.bm25.avgdl
bm25_model.bm25.corpus_size
bm25_model.bm25.df['apz']
bm25_model.mrw_tcs
bm25_model.docs_feats_df
| 0.302906 | 0.904903 |
<a href="https://colab.research.google.com/github/niegisc/ctcuc22/blob/main/2022_Python_Programming_Practical_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **2022 Python Programming Practical 1**
If you do not have one already, create a [GitHub](https://github.com) account.
Create a public repository ctcuc22
File --> Save a copy in GitHub under this repository
For the following questions,
* standard input: keyboard
* standard output: screen
**Q1. (Converting Fahrenheit to Celsius)**
Write a program that reads a Fahrenheit degree as a float from standard input, then converts it to Celsius and displays the result in standard output. The formula for the conversion is:
```
celsius = (5/9) * (fahrenheit - 32)
```
```
```
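One possible solution sketch (the variable names are just one choice):
```
# get input
fahrenheit = float(input("Enter Fahrenheit degree: "))
# process data
celsius = (5/9) * (fahrenheit - 32)
# output result
print(celsius)
```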
**Q2. (Computing the volume of a cylinder)**
Write a program that reads in the radius and length of a cylinder and computes its volume using the formulae:
```
area = radius * radius * pi
volume = area * length
```
```
# constants
PI = 3.14 # DRY - Don't Repeat Yourself
# get input
radius = float(input("Enter radius: "))
length = float(input("Enter length: "))
# process data
area = radius * radius * PI
volume = area * length
# output result
print(volume)
import math
radius = 3 # try try only for ease of testing, dun hard code :)
area = math.pi * radius * radius
circumference = 2 * math.pi * radius
print(area)
print(circumference)
print(math.pi)
from math import pi
area = pi * radius * radius
circumference = 2 * pi * radius
print(area)
print(circumference)
print(math.pi)
```
**Q3. (Converting miles into kilometers)**
Write a program that reads a number in miles, converts it to kilometres, and outputs the result. One mile is 1.60934 kilometres. Display your answer correct to 3 decimal places.
```
```
**Q4. (Summing the digits in an integer)**
Write a program that reads an integer between 0 and 1000 and adds all the digits in the integer. For example, if an integer is 932, the sum of all its digits is 14.
Hint: Use the % operator to extract digits, and use the // operator to remove the extracted digit. For instance, 932 % 10 = 2 and 932 // 10 = 93
```
```
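One possible approach following the hint (a sketch, not the only way):
```
# get input
number = int(input("Enter an integer between 0 and 1000: "))
# process data: peel off the last digit with % and drop it with //
total = 0
while number > 0:
    total += number % 10
    number //= 10
# output result
print(total)
```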
**Q5. (Converting an uppercase letter to lowercase)**
Write a program that converts an uppercase letter from standard input to a lowercase letter by making use of its ASCII value.
http://www.asciitable.com
```
upper = input("Enter uppercase letter: ")
lower = ord(upper) + 32 # ordinal
print(chr(lower))
```
**Q6. (Finding the character of an ASCII code)**
Write a program that receives an ASCII value (an integer between 0 and 127) and displays its character. For example, if the user enters 97, the program displays character a.
http://www.asciitable.com
```
```
**Q7. (Payroll)**
Write a program that reads in the following information and prints a payroll statement. A sample input and output session is as follows:
Sample input:
```
Enter name: Lim Ah Seng
Enter number of hours worked weekly: 10
Enter hourly pay rate: 6.75
Enter CPF contribution rate(%): 20
```
Sample output:
```
Payroll statement for Lim Ah Seng
Number of hours worked in week: 10
Hourly pay rate: $6.75
Gross pay = $67.50
CPF contribution at 20% = $13.50
Net pay = $54.00
```
```
```
|
github_jupyter
|
celsius = (5/9) * (fahrenheit - 32)
```
**Q2. (Computing the volume of a cylinder)**
Write a program that reads in the radius and length of a cylinder and computes its volume using the formulae:
**Q3. (Converting miles into kilometers)**
Write a program that reads a number in miles, converts it to kilometres, and outputs the result. One mile is 1.60934 kilometres. Display your answer correct to 3 decimal places.
**Q4. (Summing the digits in an integer)**
Write a program that reads an integer between 0 and 1000 and adds all the digits in the integer. For example, if an integer is 932, the sum of all its digits is 14.
Hint: Use the % operator to extract digits, and use the // operator to remove the extracted digit. For instance, 932 % 10 = 2 and 932 // 10 = 93
**Q5. (Converting an uppercase letter to lowercase)**
Write a program that converts an uppercase letter from standard input to a lowercase letter by making use of its ASCII value.
http://www.asciitable.com
**Q6. (Finding the character of an ASCII code)**
Write a program that receives an ASCII value (an integer between 0 and 127) and displays its character. For example, if the user enters 97, the program displays character a.
http://www.asciitable.com
**Q7. (Payroll)**
Write a program that reads in the following information and prints a payroll statement. A sample input and output session is as follows:
Sample input:
Sample output:
| 0.798815 | 0.986929 |
```
import pandas as pd
import numpy as np
from scipy import stats
import pymc3 as pm
import theano.tensor as tt
import matplotlib.pyplot as pl
from seaborn import heatmap
%matplotlib inline
```
### The model in PyMC3:
Each of the 3 component GPs is constructed separately
```
np.random.seed(42)
xe = np.linspace(0, 1, 10)
ye = np.random.normal(0, 1, len(xe))
pl.plot(xe, ye, 'o-', label='the first one')
ye = np.zeros_like(xe)
ye2 = np.random.multivariate_normal(np.r_[ye[-1], ye[:-1]], np.eye(len(xe)))
ye3 = np.random.multivariate_normal(np.zeros_like(xe), np.eye(len(xe)))
for i in range(len(xe)):
ye[i] = np.random.normal(ye[i-1], 1)
pl.plot(xe, ye, 'o-', label='the second one')
pl.plot(xe, ye2, 'o-', label='the third one')
pl.plot(xe, ye3, 'o-', label='the fourth one')
pl.legend()
```
In practice, covariance matrices are specified using functions known as kernels. You may find more than one definition of kernel in the statistical literature, with slightly different mathematical properties. For the purpose of our discussion, we are going to say that a kernel is basically a symmetric function that takes two inputs and returns a value of zero if the inputs are the same, or a positive value otherwise. If these conditions are met, we can interpret the output of a kernel function as a measure of similarity between the two inputs.
$$K = \exp\left( -\frac{\Vert x-x'\Vert^2}{2 \ell^2}\right)$$
where $\Vert x-x'\Vert^2$ is the squared Euclidean distance:
$$\Vert x-x'\Vert^2 = (x_1 - x'_1)^2 + (x_2 - x'_2)^2 + ... + (x_n -x_n')^2$$
$\ell$ is the length scale (or bandwidth or variance) that controls the width of the kernel.
```
def exp_quad_kernel(x, knots, ℓ=1):
"""exponentiated quadratic kernel"""
return np.array([np.exp(-(x-k)**2 / (2*ℓ**2)) for k in knots])
data = np.arange(-1, 3, dtype='i')
cov = exp_quad_kernel(data, data,)
f, ax = pl.subplots(nrows=1, ncols=2, figsize=(12, 5))
ax[0].plot(data, np.zeros_like(data), 'ko')
ax[0].grid()
heatmap(cov, ax=ax[1], cmap='viridis', annot=True, cbar=False, fmt='.2f')
ax[1].xaxis.tick_top()
np.random.seed(24)
test_points= np.linspace(0, 10, 200)
f, ax = pl.subplots(2, 2, figsize=(12, 6), sharex=True, sharey=True,
constrained_layout=True)
ax = ax.ravel()
for idx, ℓ in enumerate((0.2, 1 , 2, 10)):
cov = exp_quad_kernel(test_points, test_points, ℓ)
ax[idx].plot(test_points, stats.multivariate_normal.rvs(cov=cov, size=2).T)
ax[idx].set_title(f'ℓ = {ℓ}')
ax[idx].grid()
f.text(0.51, -0.03, 'x', fontsize=16)
f.text(-0.03, 0.5, 'f(x)', fontsize=16);
```
Gaussian processes are useful for building Bayesian non-parametric models, using them as prior distributions over functions.
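As a minimal sketch of that idea (the input array `X` is built here purely for illustration and nothing is fit to data), a GP prior over functions can be declared in PyMC3 like this:
```
# Minimal sketch: a latent GP prior with an exponentiated quadratic kernel
X = np.linspace(0, 10, 20)[:, None]       # PyMC3 expects a 2-D (n, 1) input array
with pm.Model() as gp_prior_model:
    ℓ = pm.Gamma('ℓ', alpha=2, beta=1)    # prior on the length scale
    cov = pm.gp.cov.ExpQuad(1, ls=ℓ)      # same kernel family as exp_quad_kernel above
    gp = pm.gp.Latent(cov_func=cov)       # the GP used purely as a prior over functions
    f = gp.prior('f', X=X)                # f ~ GP(0, K(X, X))
```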
# Training in mixed precision
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_10b import *
```
## A little bit of theory
Continuing the documentation on the fastai_v1 development here is a brief piece about mixed precision training. A very nice and clear introduction to it is [this video from NVIDIA](http://on-demand.gputechconf.com/gtc/2018/video/S81012/).
### What's half precision?
In neural nets, all the computations are usually done in single precision, which means all the floats in all the arrays that represent inputs, activations, weights... are 32-bit floats (FP32 in the rest of this post). An idea to reduce memory usage (and avoid those annoying CUDA out-of-memory errors) has been to try and do the same thing in half precision, which means using 16-bit floats (or FP16 in the rest of this post). By definition, they take half the space in RAM, and in theory could allow you to double the size of your model and double your batch size.
Another very nice feature is that NVIDIA developed its latest GPUs (the Volta generation) to take full advantage of half-precision tensors. Basically, if you give half-precision tensors to those, they'll stack them so that each core can do more operations at the same time, which theoretically gives an 8x speed-up (sadly, just in theory).
So training at half precision is better for your memory usage, and way faster if you have a Volta GPU (still a tiny bit faster if you don't, since the computations are easier). How do we do it? Super easily in PyTorch: we just have to put `.half()` everywhere, on the inputs of our model and on all the parameters. The problem is that you usually won't see the same accuracy in the end (though it does happen sometimes) because half precision is... well... not as precise ;).
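For reference, the naive all-FP16 approach mentioned above looks roughly like this (a sketch only, assuming `model` is an `nn.Module` and `x` an input batch; it is not what we will end up using):
```python
model = model.half()        # convert every parameter and buffer to FP16
pred = model(x.half())      # the inputs have to be converted as well
```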
### Problems with half-precision:
To understand the problems with half precision, let's look briefly at what an FP16 looks like (more information [here](https://en.wikipedia.org/wiki/Half-precision_floating-point_format)).

The sign bit gives us +1 or -1, then we have 5 bits to code an exponent between -14 and 15, while the fraction part has the remaining 10 bits. Compared to FP32, we have a smaller range of possible values (roughly $2^{-14}$ to $2^{15}$, compared to roughly $2^{-126}$ to $2^{127}$ for FP32) but also less precision, since only 10 bits are left for the fraction (against 23 for FP32).
For instance, between 1 and 2, the FP16 format only represents the numbers 1, $1+2^{-10}$, $1+2 \times 2^{-10}$... which means that 1 + 0.0001 = 1 in half precision. That's what will cause a certain number of problems, specifically three that can occur and mess up your training.
1. The weight update is imprecise: inside your optimizer, you basically do w = w - lr * w.grad for each weight of your network. The problem in performing this operation in half precision is that very often, w.grad is several orders of magnitude below w, and the learning rate is also small. The situation where w=1 and lr*w.grad is 0.0001 (or lower) is therefore very common, but the update doesn't do anything in those cases.
2. Your gradients can underflow. In FP16, your gradients can easily be replaced by 0 because they are too low.
3. Your activations or loss can overflow. The opposite problem from the gradients: it's easier to hit nan (or infinity) in FP16 precision, and your training might more easily diverge.
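A quick standalone check of the rounding behaviour behind the first problem (the outputs in the comments are what we expect, not captured from this notebook):
```python
import torch

one = torch.tensor(1., dtype=torch.float16)
print(one + 0.0001)   # expected: tensor(1., dtype=torch.float16) -- the update is lost
print(one + 0.001)    # expected: roughly 1.001 -- spacing near 1 is 2**-10, so this survives
```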
### The solution: mixed precision training
To address those three problems, we don't fully train in FP16 precision. As the name mixed precision training implies, some of the operations will be done in FP16, others in FP32. This is mainly to take care of the first problem listed above. For the next two there are additional tricks.
The main idea is that we want to do the forward pass and the gradient computation in half precision (to go fast) but the update in single precision (to be more precise). It's okay if w and grad are both half floats, but when we do the operation w = w - lr * grad, we need to compute it in FP32. That way our 1 + 0.0001 is going to be 1.0001.
This is why we keep a copy of the weights in FP32 (called master model). Then, our training loop will look like:
1. compute the output with the FP16 model, then the loss
2. back-propagate the gradients in half-precision.
3. copy the gradients in FP32 precision
4. do the update on the master model (in FP32 precision)
5. copy the master model in the FP16 model.
Note that we lose precision during step 5, and that the 1.0001 in one of the weights will go back to 1. But if the next update corresponds to adding 0.0001 again, since the optimizer step is done on the master model, the 1.0001 will become 1.0002, and if we eventually go like this up to 1.0005, the FP16 model will be able to tell the difference.
That takes care of problem 1. For the second problem, we use something called gradient scaling: to avoid the gradients getting zeroed by the FP16 precision, we multiply the loss by a scale factor (scale=512 for instance). That way we can push the gradients to the right in the next figure, and have them not become zero.

Of course we don't want those 512-scaled gradients to be in the weight update, so after converting them into FP32, we can divide them by this scale factor (once they have no risks of becoming 0). This changes the loop to:
1. compute the output with the FP16 model, then the loss.
2. multiply the loss by scale then back-propagate the gradients in half-precision.
3. copy the gradients in FP32 precision then divide them by scale.
4. do the update on the master model (in FP32 precision).
5. copy the master model in the FP16 model.
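Written out by hand, one such training step looks roughly like the sketch below (a hypothetical helper, shown only to make the five steps concrete; the actual implementation is the callback later in this notebook, and the sketch assumes `opt` was built on the FP32 `master_params`):
```python
import torch

def mixed_precision_step(model, master_params, opt, xb, yb, loss_func, scale=512.):
    "Sketch of steps 1-5 above for a model already converted to FP16."
    pred = model(xb.half())                                # 1. forward pass in FP16
    loss = loss_func(pred.float(), yb)                     #    loss computed in FP32
    (loss * scale).backward()                              # 2. scaled backward pass
    model_params = [p for p in model.parameters() if p.requires_grad]
    for master, p in zip(master_params, model_params):
        if p.grad is None: continue
        if master.grad is None: master.grad = torch.zeros_like(master)
        master.grad.copy_(p.grad.float()).div_(scale)      # 3. copy grads to FP32, unscale
    opt.step()                                             # 4. update the FP32 master weights
    opt.zero_grad(); model.zero_grad()
    for master, p in zip(master_params, model_params):
        p.data.copy_(master.data)                          # 5. copy master weights back to FP16
    return loss
```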
For the last problem, the tricks offered by NVIDIA are to leave the batchnorm layers in single precision (they don't have many weights so it's not a big memory challenge) and compute the loss in single precision (which means converting the last output of the model in single precision before passing it to the loss).

Implementing all of this in the new callback system is surprisingly easy, let's dig into this!
## Util functions
Before going in the main `Callback` we will need some helper functions. We will refactor using the [APEX library](https://github.com/NVIDIA/apex) util functions. The python-only build is enough for what we will use here if you don't manage to do the CUDA/C++ installation.
```
# export
import apex.fp16_utils as fp16
```
### Converting the model to FP16
We will need a function to convert all the layers of the model to FP16 precision except the BatchNorm-like layers (since those need to be done in FP32 precision to be stable). We do this in two steps: first we convert the model to FP16, then we loop over all the layers and put them back to FP32 if they are a BatchNorm layer.
```
bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
def bn_to_float(model):
if isinstance(model, bn_types): model.float()
for child in model.children(): bn_to_float(child)
return model
def model_to_half(model):
model = model.half()
return bn_to_float(model)
```
Let's test this:
```
model = nn.Sequential(nn.Linear(10,30), nn.BatchNorm1d(30), nn.Linear(30,2)).cuda()
model = model_to_half(model)
def check_weights(model):
for i,t in enumerate([torch.float16, torch.float32, torch.float16]):
assert model[i].weight.dtype == t
assert model[i].bias.dtype == t
check_weights(model)
```
In Apex, the function that does this for us is `convert_network`. We can use it to put the model in FP16 or back to FP32.
```
model = nn.Sequential(nn.Linear(10,30), nn.BatchNorm1d(30), nn.Linear(30,2)).cuda()
model = fp16.convert_network(model, torch.float16)
check_weights(model)
```
### Creating the master copy of the parameters
From our model parameters (mostly in FP16), we'll want to create a copy in FP32 (master parameters) that we will use for the step in the optimizer. Optionally, we concatenate all the parameters into one big flat tensor, which can make that step a little bit faster.
```
from torch.nn.utils import parameters_to_vector
def get_master(model, flat_master=False):
model_params = [param for param in model.parameters() if param.requires_grad]
if flat_master:
master_param = parameters_to_vector([param.data.float() for param in model_params])
master_param = torch.nn.Parameter(master_param, requires_grad=True)
if master_param.grad is None: master_param.grad = master_param.new(*master_param.size())
return model_params, [master_param]
else:
master_params = [param.clone().float().detach() for param in model_params]
for param in master_params: param.requires_grad_(True)
return model_params, master_params
```
The util function from Apex to do this is `prep_param_lists`.
```
model_p,master_p = get_master(model)
model_p1,master_p1 = fp16.prep_param_lists(model)
def same_lists(ps1, ps2):
assert len(ps1) == len(ps2)
for (p1,p2) in zip(ps1,ps2):
assert p1.requires_grad == p2.requires_grad
assert torch.allclose(p1.data.float(), p2.data.float())
same_lists(model_p,model_p1)
same_lists(model_p,master_p)
same_lists(master_p,master_p1)
same_lists(model_p1,master_p1)
```
We can't use flat_master when there is a mix of FP32 and FP16 parameters (like batchnorm here).
```
model1 = nn.Sequential(nn.Linear(10,30), nn.Linear(30,2)).cuda()
model1 = fp16.convert_network(model1, torch.float16)
model_p,master_p = get_master(model1, flat_master=True)
model_p1,master_p1 = fp16.prep_param_lists(model1, flat_master=True)
same_lists(model_p,model_p1)
same_lists(master_p,master_p1)
assert len(master_p[0]) == 10*30 + 30 + 30*2 + 2
assert len(master_p1[0]) == 10*30 + 30 + 30*2 + 2
```
The thing is that we don't always want all the parameters of our model in the same parameter group, because we might:
- want to do transfer learning and freeze some layers
- apply discriminative learning rates
- not apply weight decay to some layers (like BatchNorm) or the bias terms
So we actually need a function that splits the parameters of an optimizer (and not a model) according to the right parameter groups.
```
def get_master(opt, flat_master=False):
    model_params = [[param for param in pg['params'] if param.requires_grad] for pg in opt.param_groups]
if flat_master:
master_params = []
for pg in model_params:
mp = parameters_to_vector([param.data.float() for param in pg])
mp = torch.nn.Parameter(mp, requires_grad=True)
if mp.grad is None: mp.grad = mp.new(*mp.size())
master_params.append(mp)
else:
master_params = [[param.clone().float().detach() for param in pg] for pg in model_params]
for pg in master_params:
for param in pg: param.requires_grad_(True)
return model_params, master_params
```
### Copy the gradients from model params to master params
After the backward pass, all gradients must be copied to the master params before the optimizer step can be done in FP32. We need a function for that (with a bit of adjustment if we have a flat master).
```
def to_master_grads(model_params, master_params, flat_master:bool=False)->None:
if flat_master:
if master_params[0].grad is None: master_params[0].grad = master_params[0].data.new(*master_params[0].data.size())
master_params[0].grad.data.copy_(parameters_to_vector([p.grad.data.float() for p in model_params]))
else:
for model, master in zip(model_params, master_params):
if model.grad is not None:
if master.grad is None: master.grad = master.data.new(*master.data.size())
master.grad.data.copy_(model.grad.data)
else: master.grad = None
```
The corresponding function in the Apex utils is `model_grads_to_master_grads`.
```
x = torch.randn(20,10).half().cuda()
z = model(x)
loss = F.cross_entropy(z, torch.randint(0, 2, (20,)).cuda())
loss.backward()
to_master_grads(model_p, master_p)
def check_grads(m1, m2):
for p1,p2 in zip(m1,m2):
if p1.grad is None: assert p2.grad is None
else: assert torch.allclose(p1.grad.data, p2.grad.data)
check_grads(model_p, master_p)
fp16.model_grads_to_master_grads(model_p, master_p)
check_grads(model_p, master_p)
```
### Copy the master params to the model params
After the step, we need to copy back the master parameters to the model parameters for the next update.
```
from torch._utils import _unflatten_dense_tensors
def to_model_params(model_params, master_params, flat_master:bool=False)->None:
if flat_master:
for model, master in zip(model_params, _unflatten_dense_tensors(master_params[0].data, model_params)):
model.data.copy_(master)
else:
for model, master in zip(model_params, master_params): model.data.copy_(master.data)
```
The corresponding function in Apex is `master_params_to_model_params`.
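A quick sanity check in the same spirit as the earlier ones (a sketch; it assumes a fresh, non-flat `model_p`/`master_p` pair from `get_master(model)` and an optimizer built on the master params):
```python
opt = torch.optim.SGD(master_p, lr=0.1)
opt.step()                            # the update happens on the FP32 master params
to_model_params(model_p, master_p)    # copy the updated values back into the FP16 model
for p_model, p_master in zip(model_p, master_p):
    assert torch.allclose(p_model.data.float(), p_master.data.float(), atol=1e-3)
```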
### But we need to handle param groups
The thing is that we don't always want all the parameters of our model in the same parameter group, because we might:
- want to do transfer learning and freeze some layers
- apply discriminative learning rates
- not apply weight decay to some layers (like BatchNorm) or the bias terms
So we actually need a function that splits the parameters of an optimizer (and not a model) according to the right parameter groups, and the following functions need to handle lists of lists of parameters (one list for each param group in `model_pgs` and `master_pgs`).
```
# export
def get_master(opt, flat_master=False):
model_pgs = [[param for param in pg['params'] if param.requires_grad] for pg in opt.param_groups]
if flat_master:
master_pgs = []
for pg in model_pgs:
mp = parameters_to_vector([param.data.float() for param in pg])
mp = torch.nn.Parameter(mp, requires_grad=True)
if mp.grad is None: mp.grad = mp.new(*mp.size())
master_pgs.append([mp])
else:
master_pgs = [[param.clone().float().detach() for param in pg] for pg in model_pgs]
for pg in master_pgs:
for param in pg: param.requires_grad_(True)
return model_pgs, master_pgs
# export
def to_master_grads(model_pgs, master_pgs, flat_master:bool=False)->None:
for (model_params,master_params) in zip(model_pgs,master_pgs):
fp16.model_grads_to_master_grads(model_params, master_params, flat_master=flat_master)
# export
def to_model_params(model_pgs, master_pgs, flat_master:bool=False)->None:
for (model_params,master_params) in zip(model_pgs,master_pgs):
fp16.master_params_to_model_params(model_params, master_params, flat_master=flat_master)
```
## The main Callback
```
class MixedPrecision(Callback):
_order = 99
def __init__(self, loss_scale=512, flat_master=False):
assert torch.backends.cudnn.enabled, "Mixed precision training requires cudnn."
self.loss_scale,self.flat_master = loss_scale,flat_master
def begin_fit(self):
self.run.model = fp16.convert_network(self.model, dtype=torch.float16)
self.model_pgs, self.master_pgs = get_master(self.opt, self.flat_master)
#Changes the optimizer so that the optimization step is done in FP32.
param_groups = self.opt.param_groups #Load the old param groups to get the HP values
for (pg,mp) in zip(param_groups,self.master_pgs): pg['params'] = mp #Replace the parameters by the new ones
self.run.opt.param_groups = param_groups #Put those param groups inside our runner.
def begin_batch(self): self.run.xb = self.run.xb.half() #Put the inputs to half precision
def after_pred(self): self.run.pred = self.run.pred.float() #Compute the loss in FP32
def after_loss(self): self.run.loss *= self.loss_scale #Loss scaling to avoid gradient underflow
def after_backward(self):
#Copy the gradients to master and unscale
to_master_grads(self.model_pgs, self.master_pgs, self.flat_master)
for master_params in self.master_pgs:
for param in master_params:
if param.grad is not None: param.grad.div_(self.loss_scale)
def after_step(self):
#Zero the gradients of the model since the optimizer is disconnected.
self.model.zero_grad()
#Update the params from master to model.
to_model_params(self.model_pgs, self.master_pgs, self.flat_master)
```
Now let's test this on Imagenette
```
path = datasets.untar_data(datasets.URLs.IMAGENETTE_160)
tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
bs = 64
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler)
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=4)
nfs = [32,64,128,256,512]
def get_learner(nfs, data, lr, layer, loss_func=F.cross_entropy,
cb_funcs=None, opt_func=optim.SGD, **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model)
return Learner(model, data, loss_func, lr=lr, cb_funcs=cb_funcs, opt_func=opt_func)
```
Training without mixed precision
```
cbfs = [partial(AvgStatsCallback,accuracy),
ProgressCallback,
CudaCallback,
partial(BatchTransformXCallback, norm_imagenette)]
learn = get_learner(nfs, data, 0.4, conv_layer, cb_funcs=cbfs)
learn.fit(1)
```
Training with mixed precision
```
cbfs = [partial(AvgStatsCallback,accuracy),
CudaCallback,
ProgressCallback,
partial(BatchTransformXCallback, norm_imagenette),
MixedPrecision]
learn = get_learner(nfs, data, 0.4, conv_layer, cb_funcs=cbfs)
learn.fit(1)
```
## Dynamic loss scaling
The only annoying thing with the previous implementation of mixed precision training is that it introduces a new hyper-parameter to tune: the value of the loss scaling. Fortunately for us, there is a way around this. We want the loss scaling to be as high as possible so that our gradients can use the whole range of representation, so let's first try a really high value. In all likelihood, this will cause our gradients or our loss to overflow, and we will try again with half that value, and again, until we get to the largest loss scale possible that doesn't make our gradients overflow.
This value will be perfectly fitted to our model and can continue to be dynamically adjusted as the training goes on, if it's still too high, by just halving it each time we overflow. After a while though, training will converge and gradients will start to get smaller, so we also need a mechanism to make this dynamic loss scale larger if it's safe to do so. The strategy used in the Apex library is to multiply the loss scale by 2 each time we had a given number of iterations without overflowing.
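The schedule just described can be summarized in a few lines (a hypothetical helper, written only to make the logic explicit; the real implementation lives in the callback below):
```python
def update_loss_scale(loss_scale, overflow, count, div_factor=2., scale_wait=500):
    "Return the new (loss_scale, count) after one batch."
    if overflow:                 # overflow detected: halve the scale and restart the wait
        return loss_scale / div_factor, 0
    count += 1
    if count == scale_wait:      # long enough without overflow: try a bigger scale
        return loss_scale * div_factor, 0
    return loss_scale, count
```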
To check if the gradients have overflowed, we check their sum (computed in FP32). If one term is nan, the sum will be nan. Interestingly, on the GPU, this is faster than checking `torch.isnan`:
```
# export
def test_overflow(x):
s = float(x.float().sum())
return (s == float('inf') or s == float('-inf') or s != s)
x = torch.randn(512,1024).cuda()
test_overflow(x)
x[123,145] = float('inf')
test_overflow(x)
%timeit test_overflow(x)
%timeit torch.isnan(x).any().item()
```
So we can use it in the following function that checks for gradient overflow:
```
# export
def grad_overflow(param_groups):
for group in param_groups:
for p in group:
if p.grad is not None:
s = float(p.grad.data.float().sum())
if s == float('inf') or s == float('-inf') or s != s: return True
return False
```
And now we can write a new version of the `Callback` that handles dynamic loss scaling.
```
# export
class MixedPrecision(Callback):
_order = 99
def __init__(self, loss_scale=512, flat_master=False, dynamic=True, max_loss_scale=2.**24, div_factor=2.,
scale_wait=500):
assert torch.backends.cudnn.enabled, "Mixed precision training requires cudnn."
self.flat_master,self.dynamic,self.max_loss_scale = flat_master,dynamic,max_loss_scale
self.div_factor,self.scale_wait = div_factor,scale_wait
self.loss_scale = max_loss_scale if dynamic else loss_scale
def begin_fit(self):
self.run.model = fp16.convert_network(self.model, dtype=torch.float16)
self.model_pgs, self.master_pgs = get_master(self.opt, self.flat_master)
#Changes the optimizer so that the optimization step is done in FP32.
param_groups = self.opt.param_groups #Load the old param groups to get the HP values
for (pg,mp) in zip(param_groups,self.master_pgs): pg['params'] = mp #Replace the parameters by the new ones
self.run.opt.param_groups = param_groups #Put those param groups inside our runner.
if self.dynamic: self.count = 0
def begin_batch(self): self.run.xb = self.run.xb.half() #Put the inputs to half precision
def after_pred(self): self.run.pred = self.run.pred.float() #Compute the loss in FP32
def after_loss(self): self.run.loss *= self.loss_scale #Loss scaling to avoid gradient underflow
def after_backward(self):
#First, check for an overflow
if self.dynamic and grad_overflow(self.model_pgs):
#Divide the loss scale by div_factor, zero the grad (after_step will be skipped)
self.loss_scale /= self.div_factor
self.model.zero_grad()
return True #skip step and zero_grad
#Copy the gradients to master and unscale
to_master_grads(self.model_pgs, self.master_pgs, self.flat_master)
for master_params in self.master_pgs:
for param in master_params:
if param.grad is not None: param.grad.div_(self.loss_scale)
#Check if it's been long enough without overflow
if self.dynamic:
self.count += 1
if self.count == self.scale_wait:
self.count = 0
self.loss_scale *= self.div_factor
def after_step(self):
#Zero the gradients of the model since the optimizer is disconnected.
self.model.zero_grad()
#Update the params from master to model.
to_model_params(self.model_pgs, self.master_pgs, self.flat_master)
cbfs = [partial(AvgStatsCallback,accuracy),
CudaCallback,
ProgressCallback,
partial(BatchTransformXCallback, norm_imagenette),
MixedPrecision]
learn = get_learner(nfs, data, 0.4, conv_layer, cb_funcs=cbfs)
learn.fit(1)
```
The loss scale used is way higher than our previous number:
```
learn.cbs[-1].loss_scale
```
## Export
```
!./notebook2script.py 10c_fp16.ipynb
```
# Week 1 Assignment: Data Validation
[Tensorflow Data Validation (TFDV)](https://cloud.google.com/solutions/machine-learning/analyzing-and-validating-data-at-scale-for-ml-using-tfx) is an open-source library that helps to understand, validate, and monitor production machine learning (ML) data at scale. Common use-cases include comparing training, evaluation and serving datasets, as well as checking for training/serving skew. You have seen the core functionalities of this package in the previous ungraded lab and you will get to practice them in this week's assignment.
In this lab, you will use TFDV in order to:
* Generate and visualize statistics from a dataframe
* Infer a dataset schema
* Calculate, visualize and fix anomalies
Let's begin!
## Table of Contents
- [1 - Setup and Imports](#1)
- [2 - Load the Dataset](#2)
- [2.1 - Read and Split the Dataset](#2-1)
- [2.1.1 - Data Splits](#2-1-1)
- [2.1.2 - Label Column](#2-1-2)
- [3 - Generate and Visualize Training Data Statistics](#3)
- [3.1 - Removing Irrelevant Features](#3-1)
- [Exercise 1 - Generate Training Statistics](#ex-1)
- [Exercise 2 - Visualize Training Statistics](#ex-2)
- [4 - Infer a Data Schema](#4)
- [Exercise 3: Infer the training set schema](#ex-3)
- [5 - Calculate, Visualize and Fix Evaluation Anomalies](#5)
- [Exercise 4: Compare Training and Evaluation Statistics](#ex-4)
- [Exercise 5: Detecting Anomalies](#ex-5)
- [Exercise 6: Fix evaluation anomalies in the schema](#ex-6)
- [6 - Schema Environments](#6)
- [Exercise 7: Check anomalies in the serving set](#ex-7)
- [Exercise 8: Modifying the domain](#ex-8)
- [Exercise 9: Detecting anomalies with environments](#ex-9)
- [7 - Check for Data Drift and Skew](#7)
- [8 - Display Stats for Data Slices](#8)
- [9 - Freeze the Schema](#8)
<a name='1'></a>
## 1 - Setup and Imports
```
# Import packages
import os
import pandas as pd
import tensorflow as tf
import tempfile, urllib, zipfile
import tensorflow_data_validation as tfdv
from tensorflow.python.lib.io import file_io
from tensorflow_data_validation.utils import slicing_util
from tensorflow_metadata.proto.v0.statistics_pb2 import DatasetFeatureStatisticsList, DatasetFeatureStatistics
# Set TF's logger to only display errors to avoid internal warnings being shown
tf.get_logger().setLevel('ERROR')
```
<a name='2'></a>
## 2 - Load the Dataset
You will be using the [Diabetes 130-US hospitals for years 1999-2008 Data Set](https://archive.ics.uci.edu/ml/datasets/diabetes+130-us+hospitals+for+years+1999-2008) donated to the University of California, Irvine (UCI) Machine Learning Repository. The dataset represents 10 years (1999-2008) of clinical care at 130 US hospitals and integrated delivery networks. It includes over 50 features representing patient and hospital outcomes.
This dataset has already been included in your Jupyter workspace so you can easily load it.
<a name='2-1'></a>
### 2.1 Read and Split the Dataset
```
# Read CSV data into a dataframe and recognize the missing data that is encoded with '?' string as NaN
df = pd.read_csv('dataset_diabetes/diabetic_data.csv', header=0, na_values = '?')
# Preview the dataset
df.head()
```
<a name='2-1-1'></a>
#### Data splits
In a production ML system, the model performance can be negatively affected by anomalies and divergence between data splits for training, evaluation, and serving. To emulate a production system, you will split the dataset into:
* 70% training set
* 15% evaluation set
* 15% serving set
You will then use TFDV to visualize, analyze, and understand the data. You will create a data schema from the training dataset, then compare the evaluation and serving sets with this schema to detect anomalies and data drift/skew.
<a name='2-1-2'></a>
#### Label Column
This dataset has been prepared to analyze the factors related to readmission outcome. In this notebook, you will treat the `readmitted` column as the *target* or label column.
The target (or label) is important to know while splitting the data into training, evaluation and serving sets. In supervised learning, you need to include the target in the training and evaluation datasets. For the serving set however (i.e. the set that simulates the data coming from your users), the **label column needs to be dropped** since that is the feature that your model will be trying to predict.
The following function returns the training, evaluation and serving partitions of a given dataset:
```
def prepare_data_splits_from_dataframe(df):
'''
Splits a Pandas Dataframe into training, evaluation and serving sets.
Parameters:
df : pandas dataframe to split
Returns:
train_df: Training dataframe(70% of the entire dataset)
eval_df: Evaluation dataframe (15% of the entire dataset)
serving_df: Serving dataframe (15% of the entire dataset, label column dropped)
'''
# 70% of records for generating the training set
train_len = int(len(df) * 0.7)
# Remaining 30% of records for generating the evaluation and serving sets
eval_serv_len = len(df) - train_len
# Half of the 30%, which makes up 15% of total records, for generating the evaluation set
eval_len = eval_serv_len // 2
# Remaining 15% of total records for generating the serving set
serv_len = eval_serv_len - eval_len
# Sample the train, validation and serving sets. We specify a random state for repeatable outcomes.
train_df = df.iloc[:train_len].sample(frac=1, random_state=48).reset_index(drop=True)
eval_df = df.iloc[train_len: train_len + eval_len].sample(frac=1, random_state=48).reset_index(drop=True)
serving_df = df.iloc[train_len + eval_len: train_len + eval_len + serv_len].sample(frac=1, random_state=48).reset_index(drop=True)
# Serving data emulates the data that would be submitted for predictions, so it should not have the label column.
serving_df = serving_df.drop(['readmitted'], axis=1)
return train_df, eval_df, serving_df
# Split the datasets
train_df, eval_df, serving_df = prepare_data_splits_from_dataframe(df)
print('Training dataset has {} records\nValidation dataset has {} records\nServing dataset has {} records'.format(len(train_df),len(eval_df),len(serving_df)))
```
<a name='3'></a>
## 3 - Generate and Visualize Training Data Statistics
In this section, you will be generating descriptive statistics from the dataset. This is usually the first step when dealing with a dataset you are not yet familiar with. It is also known as performing an *exploratory data analysis* and its purpose is to understand the data types, the data itself and any possible issues that need to be addressed.
It is important to mention that **exploratory data analysis should be performed on the training dataset** only. This is because getting information out of the evaluation or serving datasets can be seen as "cheating", since this data is used to emulate data that you have not collected yet and will try to predict using your ML algorithm. **In general, it is a good practice to avoid leaking information from your evaluation and serving data into your model.**
<a name='3-1'></a>
### Removing Irrelevant Features
Before you generate the statistics, you may want to drop irrelevant features from your dataset. You can do that with TFDV with the [tfdv.StatsOptions](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/StatsOptions) class. It is usually **not a good idea** to drop features without knowing what information they contain. However there are times when this can be fairly obvious.
One of the important parameters of the `StatsOptions` class is `feature_whitelist`, which defines the features to include while calculating the data statistics. You can check the [documentation](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/StatsOptions#args) to learn more about the class arguments.
In this case, you will omit the statistics for `encounter_id` and `patient_nbr` since they are part of the internal tracking of patients in the hospital and they don't contain valuable information for the task at hand.
```
# Define features to remove
features_to_remove = {'encounter_id', 'patient_nbr'}
# Collect features to whitelist while computing the statistics
approved_cols = [col for col in df.columns if (col not in features_to_remove)]
# Instantiate a StatsOptions class and define the feature_whitelist property
stats_options = tfdv.StatsOptions(feature_whitelist=approved_cols)
# Review the features to generate the statistics
print(stats_options.feature_whitelist)
```
<a name='ex-1'></a>
### Exercise 1: Generate Training Statistics
TFDV allows you to generate statistics from different data formats such as CSV or a Pandas DataFrame.
Since you already have the data stored in a DataFrame you can use the function [`tfdv.generate_statistics_from_dataframe()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_dataframe) which, given a DataFrame and `stats_options`, generates an object of type `DatasetFeatureStatisticsList`. This object includes the computed statistics of the given dataset.
Complete the cell below to generate the statistics of the training set. Remember to pass the training dataframe and the `stats_options` that you defined above as arguments.
```
### START CODE HERE
train_stats = tfdv.generate_statistics_from_dataframe(train_df, stats_options)
### END CODE HERE
# TEST CODE
# get the number of features used to compute statistics
print(f"Number of features used: {len(train_stats.datasets[0].features)}")
# check the number of examples used
print(f"Number of examples used: {train_stats.datasets[0].num_examples}")
# check the column names of the first and last feature
print(f"First feature: {train_stats.datasets[0].features[0].path.step[0]}")
print(f"Last feature: {train_stats.datasets[0].features[-1].path.step[0]}")
```
**Expected Output:**
```
Number of features used: 48
Number of examples used: 71236
First feature: race
Last feature: readmitted
```
<a name='ex-2'></a>
### Exercise 2: Visualize Training Statistics
Now that you have the computed statistics in the `DatasetFeatureStatisticsList` instance, you will need a way to **visualize** these to get actual insights. TFDV provides this functionality through the method [`tfdv.visualize_statistics()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/visualize_statistics).
Using this function in an interactive Python environment such as this one will output a very nice and convenient way to interact with the descriptive statistics you generated earlier.
**Try it out yourself!** Remember to pass in the generated training statistics in the previous exercise as an argument.
```
### START CODE HERE
tfdv.visualize_statistics(train_stats)
### END CODE HERE
```
<a name='4'></a>
## 4 - Infer a data schema
A schema defines the **properties of the data** and can thus be used to detect errors. Some of these properties include:
- which features are expected to be present
- feature type
- the number of values for a feature in each example
- the presence of each feature across all examples
- the expected domains of features
The schema is expected to be fairly static, whereas statistics can vary per data split. So, you will **infer the data schema from only the training dataset**. Later, you will generate statistics for evaluation and serving datasets and compare their state with the data schema to detect anomalies, drift and skew.
<a name='ex-3'></a>
### Exercise 3: Infer the training set schema
Schema inference is straightforward using [`tfdv.infer_schema()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/infer_schema). This function needs only the **statistics** (an instance of `DatasetFeatureStatisticsList`) of your data as input. The output will be a Schema [protocol buffer](https://developers.google.com/protocol-buffers) containing the results.
A complementary function is [`tfdv.display_schema()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/display_schema) for displaying the schema in a table. This accepts a **Schema** protocol buffer as input.
Fill the code below to infer the schema from the training statistics using TFDV and display the result.
```
### START CODE HERE
# Infer the data schema by using the training statistics that you generated
schema = tfdv.infer_schema(statistics=train_stats)
# Display the data schema
tfdv.display_schema(schema)
### END CODE HERE
# TEST CODE
# Check number of features
print(f"Number of features in schema: {len(schema.feature)}")
# Check domain name of 2nd feature
print(f"Second feature in schema: {list(schema.feature)[1].domain}")
```
**Expected Output:**
```
Number of features in schema: 48
Second feature in schema: gender
```
**Be sure to check the information displayed before moving forward.**
<a name='5'></a>
## 5 - Calculate, Visualize and Fix Evaluation Anomalies
It is important that the schema of the evaluation data is consistent with the training data, since the data that your model is going to receive should be consistent with the data you used to train it.
Moreover, it is also important that the **features of the evaluation data belong roughly to the same range as the training data**. This ensures that the model will be evaluated on a similar loss surface covered during training.
<a name='ex-4'></a>
### Exercise 4: Compare Training and Evaluation Statistics
Now you are going to generate the evaluation statistics and compare it with training statistics. You can use the [`tfdv.generate_statistics_from_dataframe()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_dataframe) function for this. But this time, you'll need to pass the **evaluation data**. For the `stats_options` parameter, the list you used before works here too.
Remember that to visualize the evaluation statistics you can use [`tfdv.visualize_statistics()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/visualize_statistics).
However, it is impractical to visualize both statistics separately and do your comparison from there. Fortunately, TFDV has got this covered. You can use the `visualize_statistics` function and pass additional parameters to overlay the statistics from both datasets (referenced as left-hand side and right-hand side statistics). Let's see what these parameters are:
- `lhs_statistics`: Required parameter. Expects an instance of `DatasetFeatureStatisticsList `.
- `rhs_statistics`: Expects an instance of `DatasetFeatureStatisticsList ` to compare with `lhs_statistics`.
- `lhs_name`: Name of the `lhs_statistics` dataset.
- `rhs_name`: Name of the `rhs_statistics` dataset.
For this case, remember to define the `lhs_statistics` protocol with the `eval_stats`, and the optional `rhs_statistics` protocol with the `train_stats`.
Additionally, check the function for the protocol name declaration, and define the lhs and rhs names as `'EVAL_DATASET'` and `'TRAIN_DATASET'` respectively.
```
### START CODE HERE
# Generate evaluation dataset statistics
# HINT: Remember to use the evaluation dataframe and to pass the stats_options (that you defined before) as an argument
eval_stats = tfdv.generate_statistics_from_dataframe(eval_df, stats_options=stats_options)
# Compare evaluation data with training data
# HINT: Remember to use both the evaluation and training statistics with the lhs_statistics and rhs_statistics arguments
# HINT: Assign the names of 'EVAL_DATASET' and 'TRAIN_DATASET' to the lhs and rhs protocols
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
### END CODE HERE
# TEST CODE
# get the number of features used to compute statistics
print(f"Number of features: {len(eval_stats.datasets[0].features)}")
# check the number of examples used
print(f"Number of examples: {eval_stats.datasets[0].num_examples}")
# check the column names of the first and last feature
print(f"First feature: {eval_stats.datasets[0].features[0].path.step[0]}")
print(f"Last feature: {eval_stats.datasets[0].features[-1].path.step[0]}")
```
**Expected Output:**
```
Number of features: 48
Number of examples: 15265
First feature: race
Last feature: readmitted
```
<a name='ex-5'></a>
### Exercise 5: Detecting Anomalies ###
At this point, you should ask if your evaluation dataset matches the schema from your training dataset. For instance, if you scroll through the output cell in the previous exercise, you can see that the categorical feature **glimepiride-pioglitazone** has 1 unique value in the training set while the evaluation dataset has 2. You can verify with the built-in Pandas `describe()` method as well.
```
train_df["glimepiride-pioglitazone"].describe()
eval_df["glimepiride-pioglitazone"].describe()
```
It is possible but highly inefficient to visually inspect and determine all the anomalies. So, let's instead use TFDV functions to detect and display these.
You can use the function [`tfdv.validate_statistics()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/validate_statistics) for detecting anomalies and [`tfdv.display_anomalies()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/display_anomalies) for displaying them.
The `validate_statistics()` method has two required arguments:
- an instance of `DatasetFeatureStatisticsList`
- an instance of `Schema`
Fill in the following graded function which, given the statistics and schema, displays the anomalies found.
```
def calculate_and_display_anomalies(statistics, schema):
'''
Calculate and display anomalies.
Parameters:
statistics : Data statistics in statistics_pb2.DatasetFeatureStatisticsList format
schema : Data schema in schema_pb2.Schema format
Returns:
display of calculated anomalies
'''
### START CODE HERE
# HINTS: Pass the statistics and schema parameters into the validation function
anomalies = tfdv.validate_statistics(statistics, schema)
# HINTS: Display input anomalies by using the calculated anomalies
tfdv.display_anomalies(anomalies)
### END CODE HERE
```
You should see detected anomalies in the `medical_specialty` and `glimepiride-pioglitazone` features by running the cell below.
```
# Check evaluation data for errors by validating the evaluation data statistics using the previously inferred schema
calculate_and_display_anomalies(eval_stats, schema=schema)
```
<a name='ex-6'></a>
### Exercise 6: Fix evaluation anomalies in the schema
The evaluation data has records with values for the features **glimepiride-pioglitazone** and **medical_specialty** that were not included in the schema generated from the training data. You can fix this by adding the new values that exist in the evaluation dataset to the domain of these features.
To get the `domain` of a particular feature you can use [`tfdv.get_domain()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/get_domain).
You can use the `append()` method to the `value` property of the returned `domain` to add strings to the valid list of values. To be more explicit, given a domain you can do something like:
```python
domain.value.append("feature_value")
```
```
### START CODE HERE
# Get the domain associated with the input feature, glimepiride-pioglitazone, from the schema
glimepiride_pioglitazone_domain = tfdv.get_domain(schema, 'glimepiride-pioglitazone')
# HINT: Append the missing value 'Steady' to the domain
glimepiride_pioglitazone_domain.value.append('Steady')
# Get the domain associated with the input feature, medical_specialty, from the schema
medical_specialty_domain = tfdv.get_domain(schema, 'medical_specialty')
# HINT: Append the missing value 'Neurophysiology' to the domain
medical_specialty_domain.value.append('Neurophysiology')
# HINT: Re-calculate and re-display anomalies with the new schema
calculate_and_display_anomalies(eval_stats, schema=schema)
### END CODE HERE
```
If you did the exercise correctly, you should see *"No anomalies found."* after running the cell above.
<a name='6'></a>
## 6 - Schema Environments
By default, all datasets in a pipeline should use the same schema. However, there are some exceptions.
For example, the **label column is dropped in the serving set** so this will be flagged when comparing with the training set schema.
**In this case, introducing slight schema variations is necessary.**
<a name='ex-7'></a>
### Exercise 7: Check anomalies in the serving set
Now you are going to check for anomalies in the **serving data**. The process is very similar to the one you previously did for the evaluation data with a little change.
Let's create a new `StatsOptions` that is aware of the information provided by the schema and use it when generating statistics from the serving DataFrame.
```
# Define a new statistics options by the tfdv.StatsOptions class for the serving data by passing the previously inferred schema
options = tfdv.StatsOptions(schema=schema,
infer_type_from_schema=True,
feature_whitelist=approved_cols)
### START CODE HERE
# Generate serving dataset statistics
# HINT: Remember to use the serving dataframe and to pass the newly defined statistics options
serving_stats = tfdv.generate_statistics_from_dataframe(serving_df, stats_options=options)
# HINT: Calculate and display anomalies using the generated serving statistics
calculate_and_display_anomalies(serving_stats, schema=schema)
### END CODE HERE
```
You should see that `metformin-rosiglitazone`, `metformin-pioglitazone`, `payer_code` and `medical_specialty` features have an anomaly (i.e. Unexpected string values) which is less than 1%.
Let's **relax the anomaly detection constraints** for the last two of these features by defining the `min_domain_mass` of the feature's distribution constraints.
```
# This relaxes the minimum fraction of values that must come from the domain for the feature.
# Get the feature and relax to match 90% of the domain
payer_code = tfdv.get_feature(schema, 'payer_code')
payer_code.distribution_constraints.min_domain_mass = 0.9
# Get the feature and relax to match 90% of the domain
medical_specialty = tfdv.get_feature(schema, 'medical_specialty')
medical_specialty.distribution_constraints.min_domain_mass = 0.9
# Detect anomalies with the updated constraints
calculate_and_display_anomalies(serving_stats, schema=schema)
```
If the `payer_code` and `medical_specialty` are no longer part of the output cell, then the relaxation worked!
<a name='ex-8'></a>
### Exercise 8: Modifying the Domain
Let's investigate the possible cause of the anomalies for the other features, namely `metformin-pioglitazone` and `metformin-rosiglitazone`. From the output of the previous exercise, you'll see that the `anomaly long description` says: "Examples contain values missing from the schema: Steady (<1%)". You can redisplay the schema and look at the domain of these features to verify this statement.
When you inferred the schema at the start of this lab, it's possible that some values were not detected in the training data, so they were not included in the expected domain values of the feature's schema. In the case of `metformin-rosiglitazone` and `metformin-pioglitazone`, the value "Steady" is indeed missing. You will just see "No" in the domain of these two features after running the code cell below.
```
tfdv.display_schema(schema)
```
Towards the bottom of the Domain-Values pairs of the cell above, you can see that many features (including **'metformin'**) have the same values: `['Down', 'No', 'Steady', 'Up']`. These values are common to many features including the ones with missing values during schema inference.
TFDV allows you to modify the domains of some features to match an existing domain. To address the detected anomaly, you can **set the domain** of these features to the domain of the `metformin` feature.
Complete the function below to set the domain of a feature list to an existing feature domain.
For this, use the [`tfdv.set_domain()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/set_domain) function, which has the following parameters:
- `schema`: The schema
- `feature_path`: The name of the feature whose domain needs to be set.
- `domain`: A domain protocol buffer or the name of a global string domain present in the input schema.
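As a side note (not part of the graded exercise), the `domain` argument can also be a brand-new domain protocol buffer rather than the name of an existing global domain. Below is a minimal sketch on a throwaway copy of the schema; the domain name and values are purely illustrative:
```python
from tensorflow_metadata.proto.v0 import schema_pb2

# Work on a copy so the notebook's schema stays untouched.
schema_copy = schema_pb2.Schema()
schema_copy.CopyFrom(schema)

# Attach a new StringDomain proto directly instead of pointing to an
# existing global domain name (the values here are illustrative only).
tfdv.set_domain(schema_copy, 'payer_code',
                schema_pb2.StringDomain(name='payer_code_domain', value=['MC', 'SP', 'HM']))

print(tfdv.get_domain(schema_copy, 'payer_code').value)
```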
```
def modify_domain_of_features(features_list, schema, to_domain_name):
'''
Modify a list of features' domains.
Parameters:
features_list : Features that need to be modified
schema: Inferred schema
to_domain_name : Target domain to be transferred to the features list
Returns:
schema: new schema
'''
### START CODE HERE
# HINT: Loop over the feature list and use set_domain with the inferred schema, feature name and target domain name
for feature in features_list:
tfdv.set_domain(schema, feature, to_domain_name)
### END CODE HERE
return schema
```
Using this function, set the domain of the features defined in the `domain_change_features` list below to be equal to **metformin's domain** to address the anomalies found.
**Since you are overriding the existing domain of these features, it is normal to get a warning. This is a safeguard so you don't overwrite a domain by accident.**
```
domain_change_features = ['repaglinide', 'nateglinide', 'chlorpropamide', 'glimepiride',
'acetohexamide', 'glipizide', 'glyburide', 'tolbutamide', 'pioglitazone',
'rosiglitazone', 'acarbose', 'miglitol', 'troglitazone', 'tolazamide',
'examide', 'citoglipton', 'insulin', 'glyburide-metformin', 'glipizide-metformin',
'glimepiride-pioglitazone', 'metformin-rosiglitazone', 'metformin-pioglitazone']
# Infer new schema by using your modify_domain_of_features function
# and the defined domain_change_features feature list
schema = modify_domain_of_features(domain_change_features, schema, 'metformin')
# Display new schema
tfdv.display_schema(schema)
# TEST CODE
# check that the domain of some features are now switched to `metformin`
print(f"Domain name of 'chlorpropamide': {tfdv.get_feature(schema, 'chlorpropamide').domain}")
print(f"Domain values of 'chlorpropamide': {tfdv.get_domain(schema, 'chlorpropamide').value}")
print(f"Domain name of 'repaglinide': {tfdv.get_feature(schema, 'repaglinide').domain}")
print(f"Domain values of 'repaglinide': {tfdv.get_domain(schema, 'repaglinide').value}")
print(f"Domain name of 'nateglinide': {tfdv.get_feature(schema, 'nateglinide').domain}")
print(f"Domain values of 'nateglinide': {tfdv.get_domain(schema, 'nateglinide').value}")
```
**Expected Output:**
```
Domain name of 'chlorpropamide': metformin
Domain values of 'chlorpropamide': ['Down', 'No', 'Steady', 'Up']
Domain name of 'repaglinide': metformin
Domain values of 'repaglinide': ['Down', 'No', 'Steady', 'Up']
Domain name of 'nateglinide': metformin
Domain values of 'nateglinide': ['Down', 'No', 'Steady', 'Up']
```
Let's do a final check of anomalies to see if this solved the issue.
```
calculate_and_display_anomalies(serving_stats, schema=schema)
```
You should now see the `metformin-pioglitazone` and `metformin-rosiglitazone` features dropped from the output anomalies.
<a name='ex-9'></a>
### Exercise 9: Detecting anomalies with environments
There is still one thing to address. The `readmitted` feature (which is the label column) showed up as an anomaly ('Column dropped'). Since labels are not expected in the serving data, let's tell TFDV to ignore this detected anomaly.
This requirement of introducing slight schema variations can be expressed by using [environments](https://www.tensorflow.org/tfx/data_validation/get_started#schema_environments). In particular, features in the schema can be associated with a set of environments using `default_environment`, `in_environment` and `not_in_environment`.
```
# All features are by default in both TRAINING and SERVING environments.
schema.default_environment.append('TRAINING')
schema.default_environment.append('SERVING')
```
Complete the code below to exclude the `readmitted` feature from the `SERVING` environment.
To achieve this, you can use the [`tfdv.get_feature()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/get_feature) function to get the `readmitted` feature from the inferred schema and use its `not_in_environment` attribute to specify that `readmitted` should be removed from the `SERVING` environment's schema. This **attribute is a list** so you will have to **append** the name of the environment that you wish to omit this feature for.
To be more explicit, given a feature you can do something like:
```python
feature.not_in_environment.append('NAME_OF_ENVIRONMENT')
```
The function `tfdv.get_feature` receives the following parameters:
- `schema`: The schema.
- `feature_path`: The path of the feature to obtain from the schema. In this case this is equal to the name of the feature.
```
### START CODE HERE
# Specify that 'readmitted' feature is not in SERVING environment.
# HINT: Append the 'SERVING' environment to the not_in_environment attribute of the feature
tfdv.get_feature(schema, 'readmitted').not_in_environment.append('SERVING')
# HINT: Calculate anomalies with the validate_statistics function by using the serving statistics,
# inferred schema and the SERVING environment parameter.
serving_anomalies_with_env = tfdv.validate_statistics(serving_stats, schema, environment='SERVING')
### END CODE HERE
```
You should see "No anomalies found" by running the cell below.
```
# Display anomalies
tfdv.display_anomalies(serving_anomalies_with_env)
```
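As an optional sanity check (not part of the graded exercise), you could also validate the evaluation statistics against the `TRAINING` environment. The label is expected there, so `readmitted` should not be flagged:
```python
# Optional: validate the evaluation statistics in the TRAINING environment,
# where the label column is expected to be present.
training_anomalies_with_env = tfdv.validate_statistics(eval_stats, schema, environment='TRAINING')
tfdv.display_anomalies(training_anomalies_with_env)
```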
Now you have successfully addressed all anomaly-related issues!
<a name='7'></a>
## 7 - Check for Data Drift and Skew
During data validation, you also need to check for data drift and data skew between the training and serving data. You can do this by specifying the [skew_comparator and drift_comparator](https://www.tensorflow.org/tfx/data_validation/get_started#checking_data_skew_and_drift) in the schema.
Drift and skew are expressed in terms of the [L-infinity distance](https://en.wikipedia.org/wiki/Chebyshev_distance), which measures the difference between two vectors as the greatest of the differences along any coordinate dimension.
You can set the threshold distance so that you receive warnings when the drift is higher than is acceptable. Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation.
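To build some intuition for this metric (a toy illustration, not how TFDV computes it internally), the L-infinity distance between two categorical value distributions is just the largest absolute difference between their per-value frequencies:
```python
import numpy as np

# Toy value frequencies of one categorical feature in two datasets
# (the numbers are made up for illustration).
train_freqs   = np.array([0.70, 0.20, 0.10])   # e.g. ['No', 'Steady', 'Up']
serving_freqs = np.array([0.64, 0.25, 0.11])

l_infinity = np.max(np.abs(train_freqs - serving_freqs))
print(l_infinity)  # 0.06 -> would exceed a threshold of 0.03
```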
Let's check for the skew in the **diabetesMed** feature and drift in the **payer_code** feature.
```
# Calculate skew for the diabetesMed feature
diabetes_med = tfdv.get_feature(schema, 'diabetesMed')
diabetes_med.skew_comparator.infinity_norm.threshold = 0.03 # domain knowledge helps to determine this threshold
# Calculate drift for the payer_code feature
payer_code = tfdv.get_feature(schema, 'payer_code')
payer_code.drift_comparator.infinity_norm.threshold = 0.03 # domain knowledge helps to determine this threshold
# Calculate anomalies
skew_drift_anomalies = tfdv.validate_statistics(train_stats, schema,
previous_statistics=eval_stats,
serving_statistics=serving_stats)
# Display anomalies
tfdv.display_anomalies(skew_drift_anomalies)
```
In both of these cases, the detected anomaly distance is only slightly above the threshold value of `0.03`. For this exercise, let's accept this as within bounds (i.e. you could raise the threshold to something like `0.035` instead).
**However, if the anomaly truly indicates a skew and drift, then further investigation is necessary as this could have a direct impact on model performance.**
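If you do decide the observed distances are acceptable, a minimal sketch of relaxing the comparators and re-validating (using the `0.035` value suggested above) could look like the cell below. Keep in mind this modifies the schema, so skip it if you want to keep the original thresholds:
```python
# Relax the thresholds to accept the observed skew/drift as within bounds.
tfdv.get_feature(schema, 'diabetesMed').skew_comparator.infinity_norm.threshold = 0.035
tfdv.get_feature(schema, 'payer_code').drift_comparator.infinity_norm.threshold = 0.035

# Re-validate: the skew/drift anomalies should disappear if the detected
# distances are below the new threshold.
skew_drift_anomalies = tfdv.validate_statistics(train_stats, schema,
                                                previous_statistics=eval_stats,
                                                serving_statistics=serving_stats)
tfdv.display_anomalies(skew_drift_anomalies)
```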
<a name='8'></a>
## 8 - Display Stats for Data Slices
Finally, you can [slice the dataset and calculate the statistics](https://www.tensorflow.org/tfx/data_validation/get_started#computing_statistics_over_slices_of_data) for each unique value of a feature. By default, TFDV computes statistics for the overall dataset in addition to the configured slices. Each slice is identified by a unique name which is set as the dataset name in the [DatasetFeatureStatistics](https://github.com/tensorflow/metadata/blob/master/tensorflow_metadata/proto/v0/statistics.proto#L43) protocol buffer. Generating and displaying statistics over different slices of data can help track model and anomaly metrics.
Let's first define a few helper functions to make the code in the exercise neater.
```
def split_datasets(dataset_list):
'''
split datasets.
Parameters:
dataset_list: List of datasets to split
Returns:
datasets: sliced data
'''
datasets = []
for dataset in dataset_list.datasets:
proto_list = DatasetFeatureStatisticsList()
proto_list.datasets.extend([dataset])
datasets.append(proto_list)
return datasets
def display_stats_at_index(index, datasets):
'''
display statistics at the specified data index
Parameters:
index : index to show the anomalies
datasets: split data
Returns:
display of generated sliced data statistics at the specified index
'''
if index < len(datasets):
print(datasets[index].datasets[0].name)
tfdv.visualize_statistics(datasets[index])
```
The function below returns a list of `DatasetFeatureStatisticsList` protocol buffers. As shown in the ungraded lab, the first one will be for `All Examples` followed by individual slices through the feature you specified.
To configure TFDV to generate statistics for dataset slices, you will use the function `tfdv.StatsOptions()` with the following 4 arguments:
- `schema`
- `slice_functions` passed as a list.
- `infer_type_from_schema` set to True.
- `feature_whitelist` set to the approved features.
Remember that `slice_functions` only works with [`generate_statistics_from_csv()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_csv) so you will need to convert the dataframe to CSV.
```
def sliced_stats_for_slice_fn(slice_fn, approved_cols, dataframe, schema):
'''
generate statistics for the sliced data.
Parameters:
slice_fn : slicing definition
approved_cols: list of features to pass to the statistics options
dataframe: pandas dataframe to slice
schema: the schema
Returns:
slice_info_datasets: statistics for the sliced dataset
'''
# Set the StatsOptions
slice_stats_options = tfdv.StatsOptions(schema=schema,
slice_functions=[slice_fn],
infer_type_from_schema=True,
feature_whitelist=approved_cols)
# Convert Dataframe to CSV since `slice_functions` works only with `tfdv.generate_statistics_from_csv`
CSV_PATH = 'slice_sample.csv'
dataframe.to_csv(CSV_PATH)
# Calculate statistics for the sliced dataset
sliced_stats = tfdv.generate_statistics_from_csv(CSV_PATH, stats_options=slice_stats_options)
# Split the dataset using the previously defined split_datasets function
slice_info_datasets = split_datasets(sliced_stats)
return slice_info_datasets
```
With that, you can now use the helper functions to generate and visualize statistics for the sliced datasets.
```
# Generate slice function for the `medical_specialty` feature
slice_fn = slicing_util.get_feature_value_slicer(features={'medical_specialty': None})
# Generate stats for the sliced dataset
slice_datasets = sliced_stats_for_slice_fn(slice_fn, approved_cols, dataframe=train_df, schema=schema)
# Print name of slices for reference
print(f'Statistics generated for:\n')
print('\n'.join([sliced.datasets[0].name for sliced in slice_datasets]))
# Display at index 10, which corresponds to the slice named `medical_specialty_Gastroenterology`
display_stats_at_index(10, slice_datasets)
```
If you are curious, try different slice indices to extract the group statistics. For instance, `index=5` corresponds to all `medical_specialty_Surgery-General` records. You can also try slicing through multiple features as shown in the ungraded lab.
Another challenge is to implement your own helper functions. For instance, you can make a `display_stats_for_slice_name()` function so you don't have to determine the index of a slice. If done correctly, you can just do `display_stats_for_slice_name('medical_specialty_Gastroenterology', slice_datasets)` and it will generate the same result as `display_stats_at_index(10, slice_datasets)`.
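One possible sketch of such a helper (one way to do it, not a graded solution) is shown below:
```python
def display_stats_for_slice_name(slice_name, datasets):
    '''
    Display statistics for the slice with the given name.
    Parameters:
        slice_name : name of the slice (e.g. 'medical_specialty_Gastroenterology')
        datasets : list of single-slice DatasetFeatureStatisticsList protos
    '''
    for dataset in datasets:
        if dataset.datasets[0].name == slice_name:
            print(slice_name)
            tfdv.visualize_statistics(dataset)
            return
    print(f'Slice {slice_name} not found')

# Should produce the same output as display_stats_at_index(10, slice_datasets)
display_stats_for_slice_name('medical_specialty_Gastroenterology', slice_datasets)
```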
<a name='9'></a>
## 9 - Freeze the schema
Now that the schema has been reviewed, you will store the schema in a file in its "frozen" state. This can be used to validate incoming data once your application goes live to your users.
This is pretty straightforward using Tensorflow's `io` utils and TFDV's [`write_schema_text()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/write_schema_text) function.
```
# Create output directory
OUTPUT_DIR = "output"
file_io.recursive_create_dir(OUTPUT_DIR)
# Use TensorFlow text output format pbtxt to store the schema
schema_file = os.path.join(OUTPUT_DIR, 'schema.pbtxt')
# The write_schema_text function expects the schema and output path as parameters
tfdv.write_schema_text(schema, schema_file)
```
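Later on (for example, in a serving pipeline), the frozen schema can be loaded back with `tfdv.load_schema_text()` and used to validate statistics computed on incoming data. A minimal sketch, reusing the serving statistics from earlier as a stand-in for fresh data:
```python
# Load the frozen schema back from disk.
loaded_schema = tfdv.load_schema_text(schema_file)

# The loaded schema can be used exactly like the in-memory one, e.g. to
# validate statistics computed on newly arriving serving data.
new_anomalies = tfdv.validate_statistics(serving_stats, loaded_schema, environment='SERVING')
tfdv.display_anomalies(new_anomalies)
```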
After submitting this assignment, you can click the Jupyter logo in the left upper corner of the screen to check the Jupyter filesystem. The `schema.pbtxt` file should be inside the `output` directory.
**Congratulations on finishing this week's assignment!** A lot of concepts were introduced, and you should now feel more familiar with using TFDV for schema inference, anomaly detection and other data-related tasks.
**Keep it up!**