# Latent Dirichlet Allocation
- Paper: Latent Dirichlet Allocation
- Authors: David M. Blei, Andrew Y. Ng, Michael I. Jordan
- Teammates: Yizi Lin, Siqi Fu
- Github: https://github.com/lyz1206/lda.git
### Part 1 Abstract
*Latent Dirichlet Allocation* (LDA) is a generative probabilistic model for collections of discrete data such as text corpora. Based on the bag-of-words and exchangeability assumptions, each document in a corpus is modeled as a random mixture over latent topics, and each topic is modeled as a distribution over words, so a document can be represented by its topic probabilities. In this project we focus on text data. We implement *variational inference* and the *EM algorithm* to estimate the parameters, optimize the implementation to make it more efficient, and compare LDA with other *topic models* such as LSI and HDP.
*key words*: Topic Model, Latent Dirichlet Allocation, Variational Inference, EM algorithm
### Part 2 Background
In our project, we use the paper "Latent Dirichlet Allocation" by David M. Blei, Andrew Y. Ng and Michael I. Jordan.
Latent Dirichlet allocation (LDA) is a generative probabilistic model of a corpus that uses a three-level hierarchical Bayesian model to describe the word-generating process. Its basic idea is that each document is represented as a random mixture over latent topics, and each topic is characterized by a distribution over words. In general, LDA assumes the following generative process for each document w in a corpus D (a code sketch is given after the assumptions below):
1. Choose $N \sim Poisson(\xi)$, which represents the document length.
2. Choose $\theta \sim Dir(\alpha)$, where $\theta$ is a column vector representing the topic probabilities.
3. For each of the N words:
- Choose $z_n \sim Multinomial(\theta)$, which represents the current topic.
- Choose $w_n$ based on $p(w_n \mid z_n; \beta)$.
There are three critical assumptions for this model:
- The dimensionality k of the Dirichlet distribution is assumed known and fixed.
- $\beta$ is a V $\times$ k matrix, where $\beta_{ij} = P( w^j = 1 \mid z^i = 1)$, i.e. $\beta$ gives the probability of generating a particular word given a particular topic. $\beta$ is also assumed to be known and fixed.
- Words are generated by topics, and those topics are infinitely exchangeable within a document.
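To make this generative process concrete, here is a minimal simulation sketch (all dimensions and parameter values below are illustrative, not taken from the paper):
```
import numpy as np

rng = np.random.default_rng(0)
k, V = 3, 50                                    # number of topics and vocabulary size (illustrative)
alpha = np.full(k, 0.5)                         # Dirichlet prior over topic proportions (illustrative)
topic_word = rng.dirichlet(np.ones(V), size=k)  # row i is the word distribution of topic i

def generate_document(xi=40):
    N = rng.poisson(xi)                         # 1. document length
    theta = rng.dirichlet(alpha)                # 2. topic proportions of this document
    z = rng.choice(k, size=N, p=theta)          # 3a. a topic for every word position
    w = np.array([rng.choice(V, p=topic_word[zn]) for zn in z])  # 3b. words given topics
    return w

print(generate_document())
```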
The generating process is represented as a probabilistic graphical model below:

Based on the model described above, the joint distribution of a topic mixture $\theta$, a set of N topics z, and a set of N words w is given by:
$$
p(\theta,z,w|\alpha, \beta)=p(\theta|\alpha)\prod_{n=1}^{N}p(z_n|\theta)p(w_n|z_n, \beta)
$$
Integrating over $\theta$ and summing over z, we obtain the marginal distribution of a document:
$$
p(w|\alpha, \beta) = \int p(\theta|\alpha)(\prod_{n=1}^{N}\sum_{z_n}p(z_n|\theta)p(w_n|z_n,\beta))d\theta
$$
Finally, taking the product of the marginal probabilities of single documents, we obtain the probability of a corpus: $$
P(D \mid \alpha, \beta) = \prod_{d = 1}^{M} \int p(\theta_d \mid \alpha)(\prod_{n = 1}^{N_d}\sum_{z_{dn}}p(z_{dn}\mid \theta_d) p(w_{dn} \mid z_{dn},\beta)) d \theta_d
$$
Using Bayesian rule, we can get the formula of the posterior distribution of the hidden variables given a document:
$$
p(\theta,z|w, \alpha, \beta) = \frac{p(\theta,z,w|\alpha,\beta)}{p(w|\alpha,\beta)}
$$
However, this distribution is intractable to compute in general. In this paper, the authors use a variational EM algorithm to approximate it, which we discuss in Part 3.
Generally speaking, the main goal of LDA is to find short descriptions of the members of a collection that enable efficient processing of large collections while preserving the essential statistical relationships that are useful for basic tasks. Common applications include:
- document modeling
- text classification
- collaborative filtering
As a three-level hierarchical Bayesian model, LDA is more elaborate than other latent models such as the mixture of unigrams and pLSA.
- In the mixture of unigrams, the word distributions can be viewed as representations of topics under the assumption that each document exhibits only one topic. LDA, in contrast, allows documents to exhibit multiple topics with different probabilities.
- The pLSA model does solve this problem of the mixture of unigrams, but it has further problems: it is not a well-defined generative model of documents, which means it cannot assign probability to a previously unseen document. Also, since the number of parameters of pLSA grows linearly with the size of the corpus, it is prone to overfitting. LDA suffers from neither of these problems.
From the mixture of unigrams to pLSA to LDA, text modeling improves step by step. LDA introduces a Dirichlet distribution in the document-to-topic layer, which is better than pLSA in that the number of model parameters does not grow with the size of the corpus.
### Part 3 Description of algorithm
In Part 2, we mentioned that the posterior distribution of the hidden variables is intractable to compute in general, so the authors use a variational EM algorithm to approximate it. The algorithm iterates the following two steps:
1. (E-step) For each document, find the optimizing values of the variational parameters $\{\gamma_d^\ast, \phi_d^\ast : d \in D\}$.
2. (M-step) Maximize the resulting lower bound on the log likelihood with respect to the model parameters $\alpha$ and $\beta$. This corresponds to finding maximum likelihood estimates with expected sufficient statistics for each document under the approximate posterior computed in the E-step.
* E-step
The main idea in this step is to find the tightest possible lower bound of the log likelihood and choose variational parameters.
Firstly, we show the procedure of finding the tightest lower bound of the log likelihood.
We begin by applying `Jensen's inequality` to bound the log likelihood of a document:
$$
\begin{split}
log \space p \space (w \mid \alpha, \beta) &= log \int \sum_z p(\theta,z,w \mid \alpha, \beta ) d \theta\\
&=log \int \sum_z \frac{p(\theta,z,w \mid \alpha, \beta)\space q(\theta,z)}{q(\theta,z)} d\theta\\
&\ge \int \sum_z q(\theta,z) \space log \space p(\theta,z,w \mid \alpha,\beta) d\theta - \int \sum_z q(\theta,z) \space log \space q(\theta,z) d\theta\\
&= E_q[log \space p(\theta, z,w \mid \alpha, \beta)] -E_q[log\space q(\theta, z)]
\end{split}
$$
From the above equation, we get a lower bound of the log likelihood for any variational distribution $q(\theta,z \mid \gamma,\phi)$.
The difference between the left-hand side and the right-hand side of the above equation is the `KL` divergence between the variational posterior and the true posterior. Let $L(\gamma,\phi: \alpha, \beta)$ denote the right-hand side; then:
$$log p(w \mid \alpha, \beta) = L (\gamma, \phi :\alpha,\beta) + D(q(\theta, z \mid \gamma, \phi) \mid \mid p(\theta,z \mid w, \alpha,\beta))$$
This shows that maximizing the lower bound $L(\gamma, \phi :\alpha,\beta)$ with respect to $\gamma$ and $\phi$ is equivalent to minimizing the KL divergence.
So we have translated the problem into the following optimization problem:
$$(\gamma^*,\phi^*) = argmin_{\gamma,\phi} D(q(\theta,z \mid \gamma, \phi) \mid \mid p (\theta,z \mid w,\alpha,\beta))$$
Secondly, we obtain a tractable family of distributions $q(\theta,z)$.
In the paper, the authors drop the edges between $\theta$, z and w, as well as the w nodes. This procedure is shown below.

So $q(\theta,z)$ is characterized by the following variational distribution:
$$
q(\theta,z|\gamma, \Phi) = q(\theta|\gamma)\prod_{n=1}^{N}q(z_n|\Phi_n)
$$
Thirdly, we expand the lower bound using the factorizations of p and q:
$$
L(\gamma,\phi: \alpha,\beta) = E_q[log p(\theta \mid \alpha)] +E_q [log p(z \mid \theta )] +E_q [log p(w \mid z,\beta)] -E_q[log q(\theta)] -E_q[log q(z)]
$$
Finally, we use the Lagrange method to maximize the lower bound with respect to the variational parameters $\Phi$ and $\gamma$. The update equations are:
$$\phi_{ni} \propto \beta_{iw_n} exp[E_q (log(\theta_i) \mid \gamma)]$$
$$\gamma_i = \alpha_i +\sum_{n = 1}^N \phi_{ni}$$
The pseudocode of the E-step is as follows:

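In code, these two updates for a single document look roughly as follows (a minimal, self-contained sketch with illustrative variable names; the full implementation is given in Part 4):
```
import numpy as np
from scipy.special import digamma

def e_step_sketch(word_ids, word_counts, alpha, beta, max_iter=50):
    '''Illustrative coordinate ascent for one document; beta is a V x k matrix.'''
    k = len(alpha)
    gamma = alpha + word_counts.sum() / k                 # common initialization
    for _ in range(max_iter):
        phi = beta[word_ids, :] * np.exp(digamma(gamma))  # phi_ni proportional to beta_{i,w_n} exp(E_q[log theta_i])
        phi /= phi.sum(axis=1, keepdims=True)             # normalize each row over topics
        gamma = alpha + word_counts @ phi                 # gamma_i = alpha_i + sum_n phi_ni
    return gamma, phi
```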
- M-step
The main goal of the M-step is to maximize the resulting lower bound on the log likelihood with respect to the model parameters $\alpha$ and $\beta$.
We update $\beta$ through the Lagrange method:
$$
L_{[\beta]} = \sum_{d=1}^{M}\sum_{n=1}^{N_d}\sum_{i=1}^{k}\sum_{j=1}^{V}\Phi_{dni}w_{dn}^jlog\beta_{ij}+\sum_{i=1}^{k}\lambda_i(\sum_{j=1}^{V}\beta_{ij}-1)
$$
So the update equation is:
$$
\beta_{ij} \propto \sum_{d=1}^{M}\sum_{n=1}^{N_d}\Phi_{dni}w^{j}_{dn}
$$
$\alpha$ is updated by Newton-Raphson Method:
$$\alpha_{new} = \alpha_{old} -H (\alpha_{old})^{-1} g(\alpha_{old})$$
where $H(\alpha)$ is the Hessian matrix and $g(\alpha)$ is the gradient at point $\alpha$. This algorithm scales as $O(N^3)$ due to the matrix inversion.
However, if the Hessian matrix has the special structure $H = diag(h) + \textbf{1} z \textbf{1}^T$, we can derive a Newton-Raphson algorithm with linear complexity. This procedure is shown below:
$$
H^{-1} = diag(h)^{-1} - \frac{diag(h)^{-1} \textbf{1} \textbf{1}^T diag(h)^{-1}}{z^{-1} + \sum_{j = 1}^k h_j^{-1}}\\
$$
Multiplying by the gradient, we can get the ith component as:
$$(H^{-1} g)_i = \frac{g_i-c}{h_i}$$
where $c = \frac{\sum_{j=1}^k g_j/h_j}{z^{-1} +\sum_{j = 1}^{k} h_j^{-1}}$
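A minimal sketch of this linear-time update (the gradient g, diagonal h and scalar z are assumed to have been computed from the likelihood as described in the paper):
```
import numpy as np

def newton_step_special_hessian(alpha, g, h, z):
    '''One Newton-Raphson update for a Hessian of the form H = diag(h) + 1 z 1^T.

    g and h are k-dimensional vectors and z is a scalar; the update costs O(k)
    instead of the O(k^3) of a generic matrix inversion.
    '''
    c = np.sum(g / h) / (1.0 / z + np.sum(1.0 / h))
    return alpha - (g - c) / h   # component-wise alpha_new = alpha_old - H^{-1} g
```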
### Part 4 Optimization
In this project, we use the variational EM algorithm for the LDA model to find the values of the parameters $\alpha$ and $\beta$ that maximize the marginal log likelihood. In general, there are two parts to be optimized:
1. E-step: For each document d, calculate the optimizing values of the variational parameters $\gamma_d^\ast$ and $\Phi_d^\ast$.
2. M-step: Based on the results of the E-step, update $\alpha$ and $\beta$.
In the previous part (the plain version), we already optimized the M-step: we update $\alpha$ through the Newton-Raphson method for a Hessian with special structure, which is mentioned in the paper and decreases the time complexity from $O(N^3)$ to linear.
In this part, our main goal is to optimize the E-step. There are two processes here: optimize $\gamma_d^\ast$ and $\Phi_d^\ast$, then calculate the statistics for the M-step.
Methods we use:
- Vectorization: in the `Estep_singedoc()` function, use matrix operations to avoid explicit for loops.
- JIT compilation: for the `accumulate_Phi()` and `Estep()` functions.
We also tried Cython, but unfortunately it did not improve performance here.
```
import numpy as np
import pandas as pd
import gensim
import numba
from nltk.stem import WordNetLemmatizer
from nltk import PorterStemmer
from scipy.special import digamma, polygamma
```
#### Original version
```
def Estep_original(doc, alpha, beta, k, N_d, max_iter = 50):
    '''
    E step for a single document, which calculates the posterior variational parameters.
    beta and alpha come from the previous iteration.
    Returns gamma and Phi of a document.
    '''
    gamma_old = [alpha[i] + N_d/k for i in range(k)]
    row_index = list(doc.keys())
    word_count = np.array(list(doc.values()))
    for it in range(max_iter):
        # Update Phi
        Phi = np.zeros((N_d, k))
        for n in range(N_d):
            for j in range(k):
                Phi[n, j] = beta[row_index[n], j] * np.exp(digamma(gamma_old[j]))
            Phi[n, :] = Phi[n, :] / np.sum(Phi[n, :])
        # Update gamma
        Phi_sum = np.zeros(k)
        for j in range(k):
            z = 0
            for n in range(N_d):
                z += Phi[n, j] * word_count[n]
            Phi_sum[j] = z
        gamma_new = alpha + Phi_sum
        # Converged or not
        if (it > 0) and convergence(gamma_new, gamma_old):
            break
        else:
            gamma_old = gamma_new.copy()
    return gamma_new, Phi

def accumulate_Phi_original(beta, Phi, doc):
    '''
    This function accumulates the effect of Phi_new from all documents after the E step.
    beta is a V*k matrix.
    Phi is an N_d*k matrix.
    Returns the updated beta.
    '''
    row_index = list(doc.keys())
    word_count = list(doc.values())
    for i in range(len(row_index)):
        beta[row_index[i], :] += word_count[i] * Phi[i, :]  # accumulate, matching the optimized version
    return beta
```
#### Optimized version
```
def Estep_singedoc(doc, alpha, beta, k, N_d, max_iter = 50):
    '''
    E step for a single document, which calculates the posterior variational parameters.
    beta and alpha come from the previous iteration.
    Returns gamma and Phi of a document.
    '''
    gamma_old = alpha + np.ones(k) * N_d/k
    row_index = list(doc.keys())
    word_count = np.array(list(doc.values()))
    for i in range(max_iter):
        # Update Phi
        Phi_exp = np.exp(digamma(gamma_old))
        Phi = beta[row_index, :] @ np.diag(Phi_exp)
        Phi_new = normalization_row(Phi)
        # Update gamma
        Phi_sum = Phi_new.T @ word_count[:, None]  # k-dim
        gamma_new = alpha + Phi_sum.T[0]
        # Converged or not
        if (i > 0) and convergence(gamma_new, gamma_old):
            break
        else:
            gamma_old = gamma_new.copy()
    return gamma_new, Phi_new

@numba.jit(cache = True)
def accumulate_Phi(beta, Phi, doc):
    '''
    This function accumulates the effect of Phi_new from all documents after the E step.
    beta is a V*k matrix.
    Phi is an N_d*k matrix.
    Returns the updated beta.
    '''
    beta[list(doc.keys()), :] += np.diag(list(doc.values())) @ Phi
    return beta

@numba.jit(cache = True)
def Estep(doc, alpha_old, beta_old, beta_new, gamma_matrix, k, N_d, M):
    '''
    Calculate gamma and Phi for all documents.
    '''
    for i in range(M):
        gamma, Phi = Estep_singedoc(doc[i], alpha_old, beta_old, k, N_d[i])
        beta_new = accumulate_Phi(beta_new, Phi, doc[i])
        gamma_matrix[i, :] = gamma
    return beta_new, gamma_matrix

# Some helpful functions for comparison
def convergence(new, old, epsilon = 1.0e-3):
    '''
    Check convergence.
    '''
    return np.all(np.abs(new - old) < epsilon)

def normalization_row(x):
    '''
    Normalize a matrix by row.
    '''
    return x / np.sum(x, 1)[:, None]

def initializaiton(k, V):
    '''
    Initialize alpha and beta.
    alpha is a k-dim vector. beta is a V*k matrix.
    '''
    np.random.seed(12345)
    alpha = np.random.uniform(size = k)
    alpha_output = alpha / np.sum(alpha)
    beta_output = np.random.dirichlet(alpha_output, V)
    return alpha_output, beta_output

def lemmatize_stemming(text):
    '''
    Lemmatize and stem the text.
    '''
    return PorterStemmer().stem(WordNetLemmatizer().lemmatize(text, pos='v'))

def preprocess(text):
    '''
    Preprocess the text.
    '''
    result = []
    for token in gensim.utils.simple_preprocess(text):
        if token not in gensim.parsing.preprocessing.STOPWORDS and len(token) > 3:
            result.append(lemmatize_stemming(token))
    return result
```
#### Comparison
```
# Load the data and preprocess it.
data = pd.read_csv('data/articles.csv', error_bad_lines=False)
document = data[['content']].iloc[:50,:].copy()
document['index'] = document.index
processed_docs = document['content'].map(preprocess)
vocabulary = gensim.corpora.Dictionary(processed_docs)
#vocabulary.filter_extremes(no_below=5, no_above=0.1, keep_n=100)
bow_corpus = [vocabulary.doc2bow(doc) for doc in processed_docs]
doc = [dict(bow) for bow in bow_corpus]
N_d = [len(d) for d in doc]
V = len(vocabulary)
M = len(doc)
k = 3
```
- Original Version
```
%%timeit
alpha0, beta0 = initializaiton(k, V)
beta = beta0
gamma_matrix_origin = np.zeros((M, k))
for i in range(M):
    gamma, Phi = Estep_original(doc[i], alpha0, beta0, k, N_d[i])
    beta = accumulate_Phi_original(beta, Phi, doc[i])
    gamma_matrix_origin[i,:] = gamma
```
- Optimized Version
```
%%timeit
alpha0, beta0 = initializaiton(k, V)
beta = beta0
gamma_matrix_opt = np.zeros((M, k))
beta_new, gamma_matrix_opt = Estep(doc, alpha0, beta0, beta, gamma_matrix_opt, k, N_d, M)
```
- Conclusion
We use 50 documents here and set k = 3. After optimization, the running time decreased from 15.2 s to 221 ms, a reduction of nearly 99%.
### Part 5: Applications to simulated data sets
In this part, we apply our model to a simulated data set. We choose $\alpha$, $\beta$, k, M, V and $N_d$ ourselves and generate data following the process described in Part 2. The corresponding code and results are in the file Test-Simulated.ipynb.
The true value of $\alpha$ is [0.15, 0.35, 0.5]. The simulation gives $\hat{\alpha}$ = [0.49, 0.34, 0.16], so the estimate approximates the true value well (up to a permutation of the topic labels).
We also calculate the average difference between $\beta$ and $\hat{\beta}$, which equals 0.0059 and is acceptable.
In conclusion, the LDA method is able to recover the true values of $\alpha$ and $\beta$ if we run the algorithm long enough and start from a "good" point.
### Part 6: Real data set
In this part, we apply our algorithm to two real datasets.
We preprocess the datasets before using them. The operations include: splitting the sentences into words, lemmatizing, removing stop words, creating the vocabulary and building the corpus.
- Dataset in Paper
In the original paper, the authors used 16,000 documents from a subset of the TREC AP corpus (Harman, 1992). It is not easy to get the TREC dataset since we would need to sign an individual agreement and ask for approval from NIST. Instead, we download the sample data from [Blei's webpage](http://www.cs.columbia.edu/~blei/lda-c/). This sample is just a subset of the data that the authors used in the paper, so we cannot reproduce exactly the same results.

The topic words from some of the resulting multinomial distributions $p(w|z)$ are illustrated above. These distributions seem to capture some hidden topics in the corpus.
For example, "million", "market", "stock" and "company" are common words in a topic resembling "economy", while "president", "reagan", "statement" and "troop" are common words in a topic resembling "politics".
- Another Dataset
This dataset is named "All the news" and it comes from [kaggle](https://www.kaggle.com/snapcrack/all-the-news). The dataset contains articles from the New York Times, Breitbart, CNN, Business Insider, the Atlantic, Fox News and others. The original dataset has three CSV files, but we only use the first 1,000 rows of the second file.

Similar to the previous dataset, the LDA model captures some hidden topics in the corpus. For example, words like "space", "planet", "earth" and "universe" are common in an astronomy-related topic.
In conclusion, our package works well. The LDA model is able to capture the hidden topic in the corpus and to provide reasonable insights to us, which is useful for text classification and collaborative filtering.
### Part 7 Comparative Analysis with Competing Algorithms
In this part, we compare the LDA method with two competing algorithms: Latent Semantic Indexing (LSI) and the Hierarchical Dirichlet Process (HDP). We again use the "All the news" dataset to evaluate the performance of the algorithms.
```
df = pd.read_csv('data/articles.csv')
document = df[['content']].copy()
document['index'] = document.index
processed_docs = document['content'].map(preprocess)
vocabulary = gensim.corpora.Dictionary(processed_docs)
bow_corpus = [vocabulary.doc2bow(doc) for doc in processed_docs]
```
#### LDA vs LSI
```
from gensim.models import LdaModel
from gensim.models import LsiModel
```
- Compare speed
```
%timeit LsiModel(bow_corpus, 30, id2word = vocabulary)
%timeit LdaModel(bow_corpus, 30, id2word = vocabulary)
```
- Compare result
```
lsamodel = LsiModel(bow_corpus, 30, id2word = vocabulary)
ldamodel = LdaModel(bow_corpus, 30, id2word = vocabulary)
for idx, topic in ldamodel.print_topics(-1):
    if idx < 5:
        print('Topic: {} \nWords: {}'.format(idx, topic))
for idx, topic in lsamodel.print_topics(-1):
    if idx < 5:
        print('Topic: {} \nWords: {}'.format(idx, topic))
```
The results above show that the LSI algorithm is faster, since it applies an SVD decomposition to reduce the dimension of the input. The LDA method, however, uses a variational EM algorithm or Gibbs sampling, which requires many iterations for the estimates of every document to converge and is therefore time consuming.
As for the results, we find that all the coefficients in LDA are positive, but some of the coefficients in LSI are negative.
In conclusion, the LSI algorithm is significantly faster than the LDA algorithm. In the case where we need cross-validation to choose the number of topics, LSI is more efficient. However, the components of a topic in the LSI method can be arbitrarily positive or negative, which makes them difficult to interpret. Moreover, LSI is unable to capture the multiple meanings of words.
#### LDA vs HDP
One characteristic of the LDA algorithm is that we need to specify the number of topics. However, in most cases we do not know the exact number of topics in advance. Cross-validation is one way to deal with this problem, but as shown in the previous part the LDA algorithm is relatively slow, so cross-validation is time consuming (see the sketch below).
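As a rough illustration of such a selection procedure (a sketch, not part of the original comparison), one could score a few candidate values of k with gensim and pick the best:
```
# Sketch: compare candidate topic numbers by gensim's per-word likelihood bound
# (higher is better); in practice this would be done on held-out folds.
for k in [10, 20, 30, 40]:
    model = LdaModel(bow_corpus, num_topics=k, id2word=vocabulary)
    print(k, model.log_perplexity(bow_corpus))
```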
The HDP algorithm is a natural nonparametric generalization of latent Dirichlet allocation, where the number of topics can be unbounded and learnt from the data, so we do not need to select the number of topics.
```
from gensim.models import HdpModel
```
- Compare speed
```
%timeit HdpModel(bow_corpus, vocabulary)
%timeit LdaModel(bow_corpus, 30, id2word = vocabulary)
```
- Compare results
```
hdp = HdpModel(bow_corpus, vocabulary)
for idx, topic in hdp.print_topics(-1):
    if idx < 5:
        print('Topic: {} \nWords: {}'.format(idx, topic))
```
Although a single run of the LDA algorithm is faster, if we want to use cross-validation to choose the number of topics we have to run the model several times, and the total time becomes longer than for the HDP algorithm. In this case, HDP is better.
However, comparing the results of the two algorithms, they are clearly different, and the result of the HDP algorithm is sometimes difficult to interpret. Moreover, if previous experience tells us what the number of topics is, the LDA model is more efficient.
### Part 8 Discussion
Based on the performance of our algorithm on the real datasets and the dataset in the paper, our algorithm does fulfill the need to divide documents into different topics and to explore the words occurring in each topic with different weights.<br>
LDA can be used for a variety of purposes:
- Clustering: Each topic acts as a cluster center, and a document can be associated with multiple clusters. Clustering is very helpful in organizing and summarizing article collections.
- Feature generation: LDA can generate features for other machine learning algorithms. As mentioned before, LDA produces a topic distribution for each document, and these K topic proportions can be treated as K features for predictive models such as logistic regression or decision trees (a sketch follows this list).
- Dimension reduction: LDA provides a topic distribution which can be seen as a concise summary of an article. Comparing articles in this dimensionally reduced feature space is more meaningful than in the feature space of the original vocabulary.
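As a brief illustration of the feature-generation use case (a sketch using the gensim model trained above and scikit-learn; the labels are hypothetical and not part of this project):
```
# Sketch: turn per-document topic proportions into a dense K-dimensional feature matrix
from gensim.matutils import corpus2dense
from sklearn.linear_model import LogisticRegression

topic_features = corpus2dense(ldamodel[bow_corpus], num_terms=ldamodel.num_topics).T
# labels = ...  # hypothetical document labels, not defined in this report
# clf = LogisticRegression(max_iter=1000).fit(topic_features, labels)
```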
Even though LDA performs well on our datasets, it does have some limitations. For example, the number of topics $k$ is fixed and must be known ahead of time. Also, since the Dirichlet topic distribution cannot capture correlations, LDA can only capture uncorrelated topics. Finally, the LDA algorithm is based on the bag-of-words assumption, which treats words as exchangeable and ignores word order within a sentence. <br>
To overcome these limitations, extensions of LDA that elaborate the distributions on the topic variables have been developed. For example, arranging the topics in a time series relaxes the full exchangeability assumption to one of partial exchangeability.
### Part 9 References/bibliography
[1] Blei, David M., Andrew Y. Ng, and Michael I. Jordan. "Latent Dirichlet Allocation." Journal of Machine Learning Research 3 (2003): 993–1022.
# Exploration of Quora dataset
```
import sys
sys.path.append("..")
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use("dark_background") # comment out if using light Jupyter theme
dtypes = {"qid": str, "question_text": str, "target": int}
train = pd.read_csv("../data/train.csv", dtype=dtypes)
test = pd.read_csv("../data/test.csv", dtype=dtypes)
```
## 1. A first glance
```
train.head()
print("There are {} questions in train and {} in test".format(train.shape[0], test.shape[0]))
print("Target value is binary (values: {})".format(set(train["target"].unique())))
print("Number of toxic questions in training data is {} (proportion: {}).".format(train["target"].sum(), train["target"].mean()))
```
## 2. A closer look at the questions
### 2.1 Question length (characters)
```
train["text_length"] = train["question_text"].str.len()
train["text_length"].describe()
```
Most questions are relatively short, i.e., less than 100 characters. There are some exceptions, however, with a maximum of more than a thousand. Let's see how many characters we should consider.
```
for length in [100, 150, 200, 250, 300, 350, 500]:
    num = np.sum(train["text_length"] > length)
    print("There are {} questions ({}%) with more than {} characters."
          .format(num, np.round(num / len(train) * 100, 2), length))
```
The number of questions with more than 250 characters is already small and with more than 300 negligible. We can cut the questions at 300 or even just remove them. Would there be a difference between the length of toxic and sincere questions?
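For instance, either option is a one-liner (illustrative; not applied in the rest of this notebook):
```
# Option 1: truncate long questions at 300 characters
train["question_trunc"] = train["question_text"].str.slice(0, 300)
# Option 2: drop the (few) questions longer than 300 characters
train_short = train[train["text_length"] <= 300]
```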
```
def split_on_target(data):
    toxic = data[data["target"] == 1]
    sincere = data[data["target"] == 0]
    return sincere, toxic

sincere, toxic = split_on_target(train)

def plot_density_plots(sincere_data, toxic_data, column, xlim=(0, 300), bin_size=5):
    fig, axes = plt.subplots(1, 2, figsize=(12, 5))
    axes[0] = sns.distplot(sincere_data[column], ax=axes[0], bins=np.arange(xlim[0], xlim[1], bin_size))
    axes[0].set_title("Sincere questions")
    axes[1] = sns.distplot(toxic_data[column], ax=axes[1], bins=np.arange(xlim[0], xlim[1], bin_size))
    axes[1].set_title("Toxic questions")
    if xlim is not None:
        for ax in axes:
            ax.set_xlim(xlim[0], xlim[1])
    plt.suptitle("Comparison of {} between sincere and toxic questions".format(column))
    plt.show()
plot_density_plots(sincere, toxic, "text_length")
```
Toxic questions seem to have a higher chance of having somewhat more characters, although the medians seem to be more or less the same. The numbers confirm:
```
pd.concat([sincere["text_length"].describe(), toxic["text_length"].describe()], axis=1)
```
### 2.2 Question length (words)
A similar analysis can be done based on the number of _words_ per question, rather than the number of characters. To do this properly, we should probably first remove symbols and punctuation, but let's take a quick look.
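A quick sketch of what that cleaning step could look like (not applied in the analysis below):
```
# Strip symbols/punctuation before splitting into words (rough approximation)
clean_word_counts = (train["question_text"]
                     .str.replace(r"[^\w\s]", " ", regex=True)
                     .str.split()
                     .str.len())
clean_word_counts.describe()
```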
```
train["words"] = train["question_text"].apply(lambda x: len(x.split(" ")))
sincere, toxic = split_on_target(train)
plot_density_plots(sincere, toxic, "words", xlim=(0, 60), bin_size=2)
```
The same conclusion seems to hold for the number of words. It is, thus, useful to include the question size as a feature in our models. Also, it seems that there are not many questions with more than 50 or 60 words:
```
for n in [50, 55, 60]:
    print("{} questions with more than {} words.".format(np.sum(train["words"] > n), n))
```
# Normalizing Flows Overview
Normalizing Flows is a rich family of distributions. They were described by [Rezende and Mohamed](https://arxiv.org/abs/1505.05770), and their experiments proved the importance of studying them further. Some extensions, like that of [Tomczak and Welling](https://arxiv.org/abs/1611.09630), made partial/full-rank Gaussian approximations for high-dimensional spaces computationally tractable.
This notebook reveals some tips and tricks for using normalizing flows effectively in PyMC3.
```
%matplotlib inline
from collections import Counter
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
import seaborn as sns
import theano
import theano.tensor as tt
pm.set_tt_rng(42)
np.random.seed(42)
```
## Theory
A normalizing flow is a series of invertible transformations applied to an initial distribution.
$$z_K = f_K \circ \dots \circ f_2 \circ f_1(z_0) $$
In this case, we can compute a tractable density for the flow.
$$\ln q_K(z_K) = \ln q_0(z_0) - \sum_{k=1}^{K}\ln \left|\det\frac{\partial f_k}{\partial z_{k-1}}\right|$$
Here, every $f_k$ is a parametric function with a well-defined Jacobian determinant. The transformation used is up to the user; for example, the simplest flow is an affine transform:
$$z = loc(scale(z_0)) = \mu + \sigma * z_0 $$
In this case, we get a mean field approximation if $z_0 \sim \mathcal{N}(0, 1)$
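A tiny numerical check of this bookkeeping for the affine flow, in plain NumPy/SciPy (independent of PyMC3):
```
import numpy as np
from scipy import stats

mu, sigma = 1.5, 2.0
z0 = np.random.randn(5)
zK = mu + sigma * z0
log_q0 = stats.norm.logpdf(z0)             # ln q_0(z_0)
log_qK = log_q0 - np.log(np.abs(sigma))    # ln q_K(z_K) = ln q_0(z_0) - ln|dz_K/dz_0|
print(np.allclose(log_qK, stats.norm.logpdf(zK, loc=mu, scale=sigma)))  # True
```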
## Flow Formulas
In PyMC3 there are flexible ways to define flows with formulas. There are currently 5 types defined:
* Loc (`loc`): $z' = z + \mu$
* Scale (`scale`): $z' = \sigma * z$
* Planar (`planar`): $z' = z + u * \tanh(w^T z + b)$
* Radial (`radial`): $z' = z + \beta (\alpha + ||z-z_r||)^{-1}(z-z_r)$
* Householder (`hh`): $z' = H z$
Formulae can be composed as a string, e.g. `'scale-loc'`, `'scale-hh*4-loc'`, `'planar*10'`. Each step is separated with `'-'`, and repeated flows are defined with `'*'` in the form of `'<flow>*<#repeats>'`.
Flow-based approximations in PyMC3 are based on the `NormalizingFlow` class, with corresponding inference classes named using the `NF` abbreviation (analogous to how `ADVI` and `SVGD` are treated in PyMC3).
Concretely, an approximation is represented by:
```
pm.NormalizingFlow
```
While an inference class is:
```
pm.NFVI
```
## Flow patterns
Composing flows requires some understanding of the target output. Flows that are too complex might not converge, whereas if they are too simple, they may not accurately estimate the posterior.
Let's start simply:
```
with pm.Model() as dummy:
    N = pm.Normal("N", shape=(100,))
```
### Mean Field connectivity
Let's apply the transformation corresponding to the mean-field family to begin with:
```
pm.NormalizingFlow("scale-loc", model=dummy)
```
### Full Rank Normal connectivity
We can get a full rank model with dense covariance matrix using **householder flows** (hh). One `hh` flow adds exactly one rank to the covariance matrix, so for a full rank matrix we need `K=ndim` householder flows. hh flows are volume-preserving, so we need to change the scaling if we want our posterior to have unit variance for the latent variables.
After we specify the covariance with a combination of `'scale-hh*K'`, we then add location shift with the `loc` flow. We now have a full-rank analog:
```
pm.NormalizingFlow("scale-hh*100-loc", model=dummy)
```
A more interesting case is when we do not expect a lot of interactions within the posterior. In this case, where our covariance is expected to be sparse, we can constrain it by defining a *low rank* approximation family.
This has the additional benefit of reducing the computational cost of approximating the model.
```
pm.NormalizingFlow("scale-hh*10-loc", model=dummy)
```
Parameters can be initialized randomly, using the `jitter` argument to specify the scale of the randomness.
```
pm.NormalizingFlow("scale-hh*10-loc", model=dummy, jitter=0.001) # LowRank
```
### Planar and Radial Flows
* Planar (`planar`): $z' = z + u * \tanh(w^T z + b)$
* Radial (`radial`): $z' = z + \beta (\alpha + ||z-z_r||)^{-1}(z-z_r)$
Planar flows are useful for splitting the incoming distribution into two parts, which allows multimodal distributions to be modeled.
Similarly, a radial flow changes density around a specific reference point.
## Simulated data example
There were 4 potential functions illustrated in the [original paper](https://arxiv.org/abs/1505.05770), which we can replicate here. Inference can be unstable in multimodal cases, but there are strategies for dealing with them.
First, let's specify the potential functions:
```
def w1(z):
    return tt.sin(2.0 * np.pi * z[0] / 4.0)

def w2(z):
    return 3.0 * tt.exp(-0.5 * ((z[0] - 1.0) / 0.6) ** 2)

def w3(z):
    return 3.0 * (1 + tt.exp(-(z[0] - 1.0) / 0.3)) ** -1

def pot1(z):
    z = z.T
    return 0.5 * ((z.norm(2, axis=0) - 2.0) / 0.4) ** 2 - tt.log(
        tt.exp(-0.5 * ((z[0] - 2.0) / 0.6) ** 2) + tt.exp(-0.5 * ((z[0] + 2.0) / 0.6) ** 2)
    )

def pot2(z):
    z = z.T
    return 0.5 * ((z[1] - w1(z)) / 0.4) ** 2 + 0.1 * tt.abs_(z[0])

def pot3(z):
    z = z.T
    return -tt.log(
        tt.exp(-0.5 * ((z[1] - w1(z)) / 0.35) ** 2)
        + tt.exp(-0.5 * ((z[1] - w1(z) + w2(z)) / 0.35) ** 2)
    ) + 0.1 * tt.abs_(z[0])

def pot4(z):
    z = z.T
    return -tt.log(
        tt.exp(-0.5 * ((z[1] - w1(z)) / 0.4) ** 2)
        + tt.exp(-0.5 * ((z[1] - w1(z) + w3(z)) / 0.35) ** 2)
    ) + 0.1 * tt.abs_(z[0])

z = tt.matrix("z")
z.tag.test_value = pm.floatX([[0.0, 0.0]])
pot1f = theano.function([z], pot1(z))
pot2f = theano.function([z], pot2(z))
pot3f = theano.function([z], pot3(z))
pot4f = theano.function([z], pot4(z))

def contour_pot(potf, ax=None, title=None, xlim=5, ylim=5):
    grid = pm.floatX(np.mgrid[-xlim:xlim:100j, -ylim:ylim:100j])
    grid_2d = grid.reshape(2, -1).T
    cmap = plt.get_cmap("inferno")
    if ax is None:
        _, ax = plt.subplots(figsize=(12, 9))
    pdf1e = np.exp(-potf(grid_2d))
    contour = ax.contourf(grid[0], grid[1], pdf1e.reshape(100, 100), cmap=cmap)
    if title is not None:
        ax.set_title(title, fontsize=16)
    return ax
fig, ax = plt.subplots(2, 2, figsize=(12, 12))
ax = ax.flatten()
contour_pot(
pot1f,
ax[0],
"pot1",
)
contour_pot(pot2f, ax[1], "pot2")
contour_pot(pot3f, ax[2], "pot3")
contour_pot(pot4f, ax[3], "pot4")
fig.tight_layout()
```
## Reproducing first potential function
```
from pymc3.distributions.dist_math import bound
def cust_logp(z):
    # return bound(-pot1(z), z>-5, z<5)
    return -pot1(z)

with pm.Model() as pot1m:
    pm.DensityDist("pot1", logp=cust_logp, shape=(2,))
```
### NUTS
Let's use NUTS first, just to see how good its approximation is.
> Note that you may need to rerun the model a couple of times, as the sampler/estimator might not fully explore the function due to multimodality.
```
pm.set_tt_rng(42)
np.random.seed(42)
with pot1m:
    trace = pm.sample(
        1000,
        init="auto",
        cores=2,
        start=[dict(pot1=np.array([-2, 0])), dict(pot1=np.array([2, 0]))],
    )
dftrace = pm.trace_to_dataframe(trace)
sns.jointplot(dftrace.iloc[:, 0], dftrace.iloc[:, 1], kind="kde")
```
### Normalizing flows
As a first (naive) try with flows, we will keep things simple: Let's use just 2 planar flows and see what we get:
```
with pot1m:
    inference = pm.NFVI("planar*2", jitter=1)
## Plotting starting distribution
dftrace = pm.trace_to_dataframe(inference.approx.sample(1000))
sns.jointplot(dftrace.iloc[:, 0], dftrace.iloc[:, 1], kind="kde");
```
#### Tracking gradients
It is illustrative to track gradients as well as parameters. In this setup, different sampling points can give different gradients because a single sampled point tends to collapse to a mode.
Here are the parameters of the model:
```
inference.approx.params
```
We also require an objective:
```
inference.objective(nmc=None)
```
Theano can be used to calculate the gradient of the objective with respect to the parameters:
```
with theano.configparser.change_flags(compute_test_value="off"):
    grads = tt.grad(inference.objective(None), inference.approx.params)
grads
```
If we want to keep track of the gradient changes during inference, we wrap them in a PyMC3 callback:
```
from collections import OrderedDict, defaultdict
from itertools import count
@theano.configparser.change_flags(compute_test_value="off")
def get_tracker(inference):
    numbers = defaultdict(count)
    params = inference.approx.params
    grads = tt.grad(inference.objective(None), params)
    names = ["%s_%d" % (v.name, next(numbers[v.name])) for v in inference.approx.params]
    return pm.callbacks.Tracker(
        **OrderedDict(
            [(name, v.eval) for name, v in zip(names, params)]
            + [("grad_" + name, v.eval) for name, v in zip(names, grads)]
        )
    )
tracker = get_tracker(inference)
tracker.whatchdict
inference.fit(30000, obj_optimizer=pm.adagrad_window(learning_rate=0.01), callbacks=[tracker])
dftrace = pm.trace_to_dataframe(inference.approx.sample(1000))
sns.jointplot(dftrace.iloc[:, 0], dftrace.iloc[:, 1], kind="kde")
plt.plot(inference.hist);
```
As you can see, the objective history is not very informative here. This is where the gradient tracker can be more informative.
```
# fmt: off
trackername = ['u_0', 'w_0', 'b_0', 'u_1', 'w_1', 'b_1',
'grad_u_0', 'grad_w_0', 'grad_b_0', 'grad_u_1', 'grad_w_1', 'grad_b_1']
# fmt: on
def plot_tracker_results(tracker):
    fig, ax = plt.subplots(len(tracker.hist) // 2, 2, figsize=(16, len(tracker.hist) // 2 * 2.3))
    ax = ax.flatten()
    # names = list(tracker.hist.keys())
    names = trackername
    gnames = names[len(names) // 2 :]
    names = names[: len(names) // 2]
    pairnames = zip(names, gnames)

    def plot_params_and_grads(name, gname):
        i = names.index(name)
        left = ax[i * 2]
        right = ax[i * 2 + 1]
        grads = np.asarray(tracker[gname])
        if grads.ndim == 1:
            grads = grads[:, None]
        grads = grads.T
        params = np.asarray(tracker[name])
        if params.ndim == 1:
            params = params[:, None]
        params = params.T
        right.set_title("Gradient of %s" % name)
        left.set_title("Param trace of %s" % name)
        s = params.shape[0]
        for j, (v, g) in enumerate(zip(params, grads)):
            left.plot(v, "-")
            right.plot(g, "o", alpha=1 / s / 10)
        left.legend([name + "_%d" % j for j in range(len(names))])
        right.legend([gname + "_%d" % j for j in range(len(names))])

    for vn, gn in pairnames:
        plot_params_and_grads(vn, gn)
    fig.tight_layout()
plot_tracker_results(tracker);
```
Inference **is often unstable**; some parameters are not well fitted because they have little influence on the resulting posterior.
In a multimodal setting, the dominant mode may well change from run to run.
### Going deeper
We can try to improve our approximation by adding flows; in the original paper they used both 8 and 32. Let's try using 8 here.
```
with pot1m:
    inference = pm.NFVI("planar*8", jitter=1.0)
dftrace = pm.trace_to_dataframe(inference.approx.sample(1000))
sns.jointplot(dftrace.iloc[:, 0], dftrace.iloc[:, 1], kind="kde");
```
We can try for a more robust fit by allocating more samples to `obj_n_mc` in `fit`, which controls the number of Monte Carlo samples used to approximate the gradient.
```
inference.fit(
25000,
obj_optimizer=pm.adam(learning_rate=0.01),
obj_n_mc=100,
callbacks=[pm.callbacks.CheckParametersConvergence()],
)
dftrace = pm.trace_to_dataframe(inference.approx.sample(1000))
sns.jointplot(dftrace.iloc[:, 0], dftrace.iloc[:, 1], kind="kde")
```
This is a noticeable improvement. Here, we see that flows are able to characterize the multimodality of a given posterior, but as we have seen, they are hard to fit. The initial point of the optimization matters in general for the multimodal case.
### MCMC vs NFVI
Let's use another potential function, and compare the sampling using NUTS to what we get with NF:
```
def cust_logp(z):
    return -pot4(z)

with pm.Model() as pot_m:
    pm.DensityDist("pot_func", logp=cust_logp, shape=(2,))

with pot_m:
    traceNUTS = pm.sample(3000, tune=1000, target_accept=0.9, cores=2)

formula = "planar*10"
with pot_m:
    inference = pm.NFVI(formula, jitter=0.1)

inference.fit(25000, obj_optimizer=pm.adam(learning_rate=0.01), obj_n_mc=10)
traceNF = inference.approx.sample(5000)
fig, ax = plt.subplots(1, 3, figsize=(18, 6))
contour_pot(pot4f, ax[0], "Target Potential Function")
ax[1].scatter(traceNUTS["pot_func"][:, 0], traceNUTS["pot_func"][:, 1], c="r", alpha=0.05)
ax[1].set_xlim(-5, 5)
ax[1].set_ylim(-5, 5)
ax[1].set_title("NUTS")
ax[2].scatter(traceNF["pot_func"][:, 0], traceNF["pot_func"][:, 1], c="b", alpha=0.05)
ax[2].set_xlim(-5, 5)
ax[2].set_ylim(-5, 5)
ax[2].set_title("NF with " + formula);
%load_ext watermark
%watermark -n -u -v -iv -w
```
# Resample Data
## Pandas Resample
You've learned about bucketing to different periods of time like Months. Let's see how it's done. We'll start with an example series of days.
```
import numpy as np
import pandas as pd
dates = pd.date_range('10/10/2018', periods=11, freq='D')
close_prices = np.arange(len(dates))
close = pd.Series(close_prices, dates)
close
```
Let's say we want to bucket these days into 3 day periods. To do that, we'll use the [DataFrame.resample](https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.resample.html) function. The first parameter in this function is a string called `rule`, which is a representation of how to resample the data. This string representation is made using an offset alias. You can find a list of them [here](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases). To create 3 day periods, we'll set `rule` to "3D".
```
close.resample('3D')
```
This returns a `DatetimeIndexResampler` object. It's an intermediate object similar to the `GroupBy` object. Just like group by, it breaks the original data into groups. That means, we'll have to apply an operation to these groups. Let's make it simple and get the first element from each group.
```
close.resample('3D').first()
```
You might notice that this is the same as `.iloc[::3]`
```
close.iloc[::3]
```
So, why use the `resample` function instead of `.iloc[::3]` or the `groupby` function?
The `resample` function shines when handling time and/or date specific tasks. In fact, you can't use this function if the index isn't a [time-related class](https://pandas.pydata.org/pandas-docs/version/0.21/timeseries.html#overview).
```
try:
    # Attempt resample on a series without a time index
    pd.Series(close_prices).resample('W')
except TypeError:
    print('It threw a TypeError.')
else:
    print('It worked.')
```
One of the resampling tasks it can help with is resampling on periods, like weeks. Let's resample `close` from its daily frequency to a weekly frequency. We'll use the "W" offset alias, which stands for weeks.
```
pd.DataFrame({
'days': close,
'weeks': close.resample('W').first()})
```
The weeks offset considers the start of a week on a Monday. Since 2018-10-10 is a Wednesday, the first group only looks at the first 5 items. There are offsets that handle more complicated problems like filtering for Holidays. For now, we'll only worry about resampling for days, weeks, months, quarters, and years. The frequency you want the data to be in, will depend on how often you'll be trading. If you're making trade decisions based on reports that come out at the end of the year, we might only care about a frequency of years or months.
## OHLC
Now that you've seen how Pandas resamples time series data, we can apply this to Open, High, Low, and Close (OHLC). Pandas provides the [`Resampler.ohlc`](https://pandas.pydata.org/pandas-docs/version/0.21.0/generated/pandas.core.resample.Resampler.ohlc.html#pandas.core.resample.Resampler.ohlc) function, which converts any resampling frequency to OHLC data. Let's get the weekly OHLC.
```
close.resample('W').ohlc()
```
Can you spot a potential problem with that? It has to do with resampling data that has already been resampled.
We're getting the OHLC from close data. If we want OHLC data from already resampled data, we should resample the first price from the open data, resample the highest price from the high data, etc..
To get the weekly closing prices from `close`, you can use the [`Resampler.last`](https://pandas.pydata.org/pandas-docs/version/0.21.0/generated/pandas.core.resample.Resampler.last.html#pandas.core.resample.Resampler.last) function.
```
close.resample('W').last()
```
## Quiz
Implement the `days_to_weeks` function to resample daily OHLC price data to weekly OHLC price data. You can find more Resampler functions [here](https://pandas.pydata.org/pandas-docs/version/0.21.0/api.html#id44) for calculating high and low prices.
```
import quiz_tests
def days_to_weeks(open_prices, high_prices, low_prices, close_prices):
    """Converts daily OHLC prices to weekly OHLC prices.

    Parameters
    ----------
    open_prices : DataFrame
        Daily open prices for each ticker and date
    high_prices : DataFrame
        Daily high prices for each ticker and date
    low_prices : DataFrame
        Daily low prices for each ticker and date
    close_prices : DataFrame
        Daily close prices for each ticker and date

    Returns
    -------
    open_prices_weekly : DataFrame
        Weekly open prices for each ticker and date
    high_prices_weekly : DataFrame
        Weekly high prices for each ticker and date
    low_prices_weekly : DataFrame
        Weekly low prices for each ticker and date
    close_prices_weekly : DataFrame
        Weekly close prices for each ticker and date
    """
    open_prices_weekly = open_prices.resample('W').first()
    high_prices_weekly = high_prices.resample('W').max()
    low_prices_weekly = low_prices.resample('W').min()
    close_prices_weekly = close_prices.resample('W').last()
    return open_prices_weekly, high_prices_weekly, low_prices_weekly, close_prices_weekly
quiz_tests.test_days_to_weeks(days_to_weeks)
```
# Solver - Tutorial
## Non-colliding fiber models
An important property of nerve fibers is that they are 3D objects.
Therefore, they should not overlap each other.
To achieve this, an [algorithm](https://arxiv.org/abs/1901.10284) was developed based on collision checking of conical objects.
A conical object is defined by two neighboring points in the fiber array, i.e. fiber[i] and fiber[i+1].
The class `solver` checks a given fiber model for collisions and resolves these collisions iteratively by small displacements.
To account for the flexibility of fibers, they are continuously divided into segments. These segments are modeled geometrically as cones.
A parallel implementation of an octree is used to run the collision detection algorithm between these cones.
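As a rough illustration of the kind of pairwise test involved, the sketch below treats two fiber segments as capsules (segments with constant radii) and checks them by brute-force sampling; this is only a conceptual stand-in, not fastpli's actual cone-cone test:
```
import numpy as np

def segments_collide(p0, p1, q0, q1, r1, r2, n=50):
    # Sample n points along each segment and check whether the minimum pairwise
    # distance falls below the sum of the radii. Real implementations use
    # closed-form distances plus an octree to avoid the O(n^2) pair checks.
    t = np.linspace(0.0, 1.0, n)[:, None]
    a = p0 + t * (p1 - p0)
    b = q0 + t * (q1 - q0)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min() < r1 + r2

print(segments_collide(np.array([0., 0., 0.]), np.array([1., 0., 0.]),
                       np.array([0.5, 0.1, -0.5]), np.array([0.5, 0.1, 0.5]),
                       0.2, 0.2))  # True: the thick segments overlap
```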
## General imports
First, we import all necessary modules and define a function to equalize all three axes of a 3D plot.
```
import fastpli.model.solver
import fastpli.model.sandbox
import fastpli.io
import os
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
def set_3d_axes_equal(ax):
    x_limits = ax.get_xlim3d()
    y_limits = ax.get_ylim3d()
    z_limits = ax.get_zlim3d()
    x_range = abs(x_limits[1] - x_limits[0])
    x_middle = np.mean(x_limits)
    y_range = abs(y_limits[1] - y_limits[0])
    y_middle = np.mean(y_limits)
    z_range = abs(z_limits[1] - z_limits[0])
    z_middle = np.mean(z_limits)
    plot_radius = 0.5 * max([x_range, y_range, z_range])
    ax.set_xlim3d([x_middle - plot_radius, x_middle + plot_radius])
    ax.set_ylim3d([y_middle - plot_radius, y_middle + plot_radius])
    ax.set_zlim3d([z_middle - plot_radius, z_middle + plot_radius])

def plot_fiber_bundles(fbs, colors):
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1, projection='3d')
    for fb, c in zip(fbs, colors):
        for f in fb:
            plt.plot(f[:,0], f[:,1], f[:,2], c)
    set_3d_axes_equal(ax)
```
## Preparing models and defining boundary conditions
The [fiber bundles](https://github.com/3d-pli/fastpli/wiki/FiberBundles) are prepared as shown in the sandbox examples/tutorials.
Additionally, each fiber gets a random radius.
Two crossing fiber bundles (x and y) are prepared in this manner.
### Note
- Note that matplotlib does not do z-buffering, so each newly plotted line is drawn on top of the lines before it.
That is why the second fiber bundle (red) seems to lie on top of the first one (blue).
- Also, not shown here:
the solver class provides an OpenGL visualization tool, `solver.draw_scene()`, which is not used in this notebook.
The example file `examples/solver.py` and the [wiki](https://github.com/3d-pli/fastpli/wiki/Solver) show its capabilities.
```
solver = fastpli.model.solver.Solver()
fiber_bundle_trj_x = [[-150, 0, 0], [150, 0, 0]]
fiber_bundle_trj_y = [[0, -150, 0], [0, 150, 0]]
population = fastpli.model.sandbox.seeds.triangular_circle(20, 6)
fiber_radii = np.random.uniform(2.0, 10.0, population.shape[0])
fiber_bundle_x = fastpli.model.sandbox.build.bundle(fiber_bundle_trj_x,
population, fiber_radii)
fiber_radii = np.random.uniform(2.0, 10.0, population.shape[0])
fiber_bundle_y = fastpli.model.sandbox.build.bundle(fiber_bundle_trj_y,
population, fiber_radii)
fiber_bundles = [fiber_bundle_x, fiber_bundle_y]
plot_fiber_bundles(fiber_bundles, ['b', 'r'])
plt.show()
```
## Running solver
The solver algorithm splits each fiber into almost equally sized fiber segments, allowing the model to bend more naturally.
The mean length of these segments is controlled via `solver.obj_mean_length`.
Since the fiber segments move in each step of the algorithm, the curvature of the fibers can increase quite quickly.
To limit this, a minimal radius of curvature can be set via `solver.obj_min_radius`.
This means that the radius of the "circle" through the points `p_i-1, p_i` and `p_i+1` is bounded from below.
If this bound is violated, the affected fiber segments are corrected slightly.
If all conditions are fulfilled, the output is marked as solved and the model can be used for further processing.
```
# run solver
solver.fiber_bundles = fiber_bundles
solver.obj_min_radius = 10
solver.obj_mean_length = 30
N = 1000
for i in range(N):
    solved = solver.step()
    if solved:
        break
    print(f'{i/N*100:.2f}%', end='\r')
print(f'solved: {i}, {solver.num_obj}/{solver.num_col_obj}')
plot_fiber_bundles(solver.fiber_bundles, ['b', 'r'])
plt.show()
```
## Saving
The resulting configuration can be saved in a `.dat` file or an `.h5` (HDF5) file, both of which are supported by this toolbox.
```
fastpli.io.fiber_bundles.save('output.dat', solver.fiber_bundles, mode='w')
```
## Additional manipulations
A trick to allow for more randomness is to add more variety to the fiber models at the beginning of the solver algorithm.
However, since the boundary conditions, i.e. curvature and mean fiber segment length, are usually not yet applied when the models are initialized, one can apply them to the currently set models inside the solver object via `apply_boundary_conditions`; the models can then be manipulated afterwards.
```
# run solver
solver.fiber_bundles = fiber_bundles
solver.obj_min_radius = 10
solver.obj_mean_length = 30
solver.apply_boundary_conditions(n_max=10)
print(fiber_bundles[0][0].shape)
print(solver.fiber_bundles[0][0].shape)
fbs = solver.fiber_bundles
for i, fb in enumerate(fbs):
    for j, _ in enumerate(fb):
        fbs[i][j][:,:3] += np.random.uniform(-10, 10, (fbs[i][j].shape[0], 3))
        fbs[i][j][:,3] *= np.random.uniform(0.5, 2, (fbs[i][j].shape[0]))
plot_fiber_bundles(fbs, ['b', 'r'])
plt.show()
N = 1000
solver.fiber_bundles = fbs
for i in range(N):
    solved = solver.step()
    if solved:
        break
    print(f'{i/N*100:.2f}%', end='\r')
print(f'solved: {i}, {solver.num_obj}/{solver.num_col_obj}')
plot_fiber_bundles(solver.fiber_bundles, ['b', 'r'])
plt.show()
```
## Orientation histogram
```
import fastpli.analysis
_, axs = plt.subplots(1,2, subplot_kw=dict(projection='polar'), figsize=(10,5))
pcs=[None, None]
phi, theta = fastpli.analysis.orientation.fiber_bundles(fiber_bundles)
_, _, _, pcs[0] = fastpli.analysis.orientation.histogram(phi,
theta,
ax=axs[0],
n_phi=60,
n_theta=30,
weight_area=False)
phi, theta = fastpli.analysis.orientation.fiber_bundles(solver.fiber_bundles)
_, _, _, pcs[1] = fastpli.analysis.orientation.histogram(phi,
theta,
ax=axs[1],
n_phi=60,
n_theta=30,
weight_area=False)
for ax, pc in zip(axs, pcs):
    cbar = plt.colorbar(pc, ax=ax)
    cbar.ax.set_title('#')
    ax.set_rmax(90)
    ax.set_rticks(range(0, 90, 10))
    ax.set_rlabel_position(22.5)
    ax.set_yticklabels([])
    ax.grid(True)
plt.show()
```
# Demo of the LCS package
```
#preamble
import os, sys
import pandas as pd
import numpy as np
import random
import pickle
```
## Import Package
```
# how to import the packaes
from Rulern.LCSModule import LCS # the core library
from Rulern.RuleModule import Rule # this is only needed if you create your own rules
```
## Load Pre-Trained Models (Back-up)
```
# #how to load the models using pickle
# with open("Eval/LCSvsNN/28072020-bool/"+"0cv_model_LCS.obj", 'rb') as f:
# model = pickle.load(f)
# #show example rules form a trained model
# # print(b.history)
# for rule in model.rules:
# if rule.fitness > 1.0: # filter out all the bad rules
# print(rule,rule.fitness) # print rule and rule fittness
```
## Generating data (swap with your own data)
```
# Generate data: i0-i9 are the input bits and o0-o4 are the outputs.
# Replace this with your own data set and data wrangling operations;
# the LCS package can work with DataFrames, arrays or numpy arrays.
def gen_rand_in_out(arr_len = 10):
    input = []
    for i in range(arr_len):
        input.append(random.choice([1, 0]))
    output = np.array(input[0:int(arr_len/2)]) | np.array(input[int(arr_len/2):arr_len])  # logical OR of the first and last five bits
    return np.append(input, output)
print(gen_rand_in_out())
df = []
np_samples = 1000
for i in range(np_samples):
    df.append(gen_rand_in_out())
df = pd.DataFrame(np.array(df).reshape(np_samples,15),columns = ["i0","i1","i2","i3","i4","i5","i6","i7","i8","i9","o0","o1","o2","o3","o4"])
print(df)
```
## Initialise an LCS model (recommended order of operations)
See Appendix B, Table B.1 for a summary of the model parameters
```
# initialise LCS
# recommended order of parameter initialisation
def init_LCS():
    lcs = LCS((10,1), (5,1), max_pop = 100)  # input and output shapes as well as the max population
    lcs.input_names = ["i0","i1","i2","i3","i4","i5","i6","i7","i8","i9"]  # column names of the inputs
    lcs.output_names = ["o0","o1","o2","o3","o4"]  # column names of the outputs
    lcs.initGA()  # initialise genetic algorithms
    lcs.covering_threshold = 5  # covering threshold - how many rules must match a data instance
    lcs.GA.interval = 0  # the range interval if range antecedents are enabled
    lcs.GA.sigma = 0.0  # sigma of the spread of genetic mutations of the rule values
    lcs.GA.max_ranges = 0  # max number of ranges a rule can have, e.g. i1 > 0.5 and i1 < 1.0
    lcs.GA.max_attribute_comp = 0  # max number of attribute comparisons a rule can have, e.g. i0 >= i1
    lcs.GA.max_comp = 1  # max number of comparisons of an attribute to a constant, e.g. i0 >= 0.5
    lcs.GA.max_output_attributes = 0  # max number of output attributes excl. bias, e.g. i1*0.5 + i2*0.5
    lcs.fitness_weights = [1, 0, 1]  # weights on the fitness function c1, c2 and c3 in the report
    lcs.GA.input_template = df[["i0","i1","i2","i3","i4","i5","i6","i7","i8","i9"]].iloc[[0]]  # template of an input frame
    lcs.purge_threshold = 1.0  # purge threshold
    lcs.type = "Multi-Class"  # defaults to "continous"; can be set to a classifier type for a single-classifier model
    return lcs
lcs = init_LCS() # initialise LCS
X_test = df[lcs.input_names] # get input data
```
## How to add your own rules
```
rules = []
# how to add manual rules for an OR operation
for i in range(5):
    ant_dict = {
        "i"+str(i): [[0], ["=="], [1]]  # antecedent dictionary structure
    }
    con_dict = {  # consequent dictionary structure
        "out_var": "o"+str(i),
        "vars": {},
        "bias": 1}
    rules.append(Rule("USER"+str(i), ant_dict, con_dict, seq_len = 1))  # name, antecedent, consequent, sequence length (def. 1)
for i in range(5):
    ant_dict = {
        "i"+str(i+5): [[0], ["=="], [1]]
    }
    con_dict = {
        "out_var": "o"+str(i),
        "vars": {},
        "bias": 1}
    rules.append(Rule("USER"+str(i+5), ant_dict, con_dict, seq_len = 1))
# initialise each rule's parameters; if a rule does not have stats, it will not contribute to a classification
for rule in rules:
    rule.fitness = 2
    rule.correctness = 100
    rule.matched = 100
    rule.evaluated = 100
    lcs.rules.append(rule)
for rule in lcs.rules:
    if rule.fitness > 1.0:  # filter out all the bad rules
        print(rule)
```
## Evaluate inputs
```
# evaluate input data
results,activations = lcs.evaluate_data(X_test)
```
### Show results
```
print(results[0:10].apply(np.ceil).astype("int"),activations[0:10]) #print the prediction and activations for each row
y_test= df[lcs.output_names]
print(y_test.iloc[0:10]) # print the true value for comparison
```
## How to train your own LCS model
```
#how to train your own LCS
# initialise new LCS instance
lcs = init_LCS()
# train the LCS on the first 100 samples
lcs.LearningClassifierSystem(X_test.iloc[0:100],y_test.iloc[0:100],mutation_frq = 10,verberose = True,eval = [X_test,y_test],epochs = 10)
results,activations = lcs.evaluate_data(X_test)
for rule in lcs.rules:
    #if rule.fitness > 0: # filter out all the bad rules
    print(rule, rule.fitness)
# show system classifications; it is recommended to use ceil for multi-class model outputs
print(results[10:20].apply(np.ceil).astype("int"),activations[0:10]) #print the prediction and activations for each row
y_test= df[lcs.output_names]
print(y_test.iloc[10:20]) # print the true value for comparison
```
# Hyper-parameter Tunning of Machine Learning (ML) Models
### Code for Classification Problems
#### `Dataset Used:`
MNIST dataset
#### `Machine Learning Algorithm Used:`
* Random Forest (RF)
* Support Vector Machine (SVM)
* K-Nearest Neighbor (KNN)
* Artificial Neural Network (ANN)
#### `Hyper-parameter Tuning Algorithms Used:`
* Grid Search
* Random Search
* Bayesian Optimization with Gaussian Processes (BO-GP)
* Bayesian Optimization with Tree-structured Parzen Estimator (BO-TPE)
---
```
# Importing required libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.stats as stats
from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report,confusion_matrix,accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
```
#### Loading MNIST Dataset
The Modified National Institute of Standards and Technology (MNIST) database is a large database of handwritten digits that is commonly used by people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal effort on preprocessing and formatting. The full database has a training set of 60,000 examples and a test set of 10,000 examples, and is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.
Here we use the scikit-learn `load_digits` version, which has 1,797 records and 64 columns (8×8 pixel images).
For more details about the dataset click here: [Details-1](http://yann.lecun.com/exdb/mnist/), [Details-2](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits/)
```
# Loading the dataset
X, y = datasets.load_digits(return_X_y=True)
datasets.load_digits()
```
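A quick sanity check (a minimal sketch added here, not part of the original notebook) confirms the shape quoted above and displays one of the 8×8 digit images:
```
# Inspect the loaded digits data
print(X.shape, y.shape)   # expected: (1797, 64) features and (1797,) labels
print(np.unique(y))       # the ten digit classes 0-9

# Each row is a flattened 8x8 grey-scale image; reshape one to display it
plt.imshow(X[0].reshape(8, 8), cmap='gray')
plt.title("Label: " + str(y[0]))
plt.show()
```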
### Baseline Machine Learning Models: Classifier with default Hyper-parameters
### `Random Forest`
```
# Random Forest (RF) with 3-fold cross validation
RF_clf = RandomForestClassifier()
RF_clf.fit(X,y)
RF_scores = cross_val_score(RF_clf, X, y, cv = 3,scoring = 'accuracy')
print("Accuracy (RF): "+ str(RF_scores.mean()))
```
### `Support Vector Machine`
```
# Support Vector Machine (SVM)
SVM_clf = SVC(gamma='scale')
SVM_clf.fit(X,y)
SVM_scores = cross_val_score(SVM_clf, X, y, cv = 3,scoring = 'accuracy')
print("Accuracy (SVM): "+ str(SVM_scores.mean()))
```
### `K-Nearest Neighbor`
```
# K-Nearest Neighbor (KNN)
KNN_clf = KNeighborsClassifier()
KNN_clf.fit(X,y)
KNN_scores = cross_val_score(KNN_clf, X, y, cv = 3,scoring='accuracy')
print("Accuracy (KNN):"+ str(KNN_scores.mean()))
```
### `Artificial Neural Network`
```
# Artificial Neural Network (ANN)
from keras.models import Sequential, Model
from keras.layers import Dense, Input
from keras.wrappers.scikit_learn import KerasClassifier
from keras.callbacks import EarlyStopping
def ann_model(optimizer = 'sgd',neurons = 32,batch_size = 32,epochs = 50,activation = 'relu',patience = 5,loss = 'categorical_crossentropy'):
model = Sequential()
model.add(Dense(neurons, input_shape = (X.shape[1],), activation = activation))
model.add(Dense(neurons, activation = activation))
model.add(Dense(10,activation='softmax'))
model.compile(optimizer = optimizer, loss = loss)
early_stopping = EarlyStopping(monitor = "loss", patience = patience)
history = model.fit(X, pd.get_dummies(y).values, batch_size = batch_size, epochs=epochs, callbacks = [early_stopping], verbose=0)
return model
ANN_clf = KerasClassifier(build_fn = ann_model, verbose = 0)
ANN_scores = cross_val_score(ANN_clf, X, y, cv = 3,scoring = 'accuracy')
print("Accuracy (ANN):"+ str(ANN_scores.mean()))
```
### Hyper-parameter Tuning Algorithms
### `1] Grid Search`
```
from sklearn.model_selection import GridSearchCV
```
### `Random Forest`
```
# Random Forest (RF)
RF_params = {
'n_estimators': [10, 20, 30],
'max_depth': [15,20,25,30,50],
"criterion":['gini','entropy']
}
RF_clf = RandomForestClassifier(random_state = 1)
RF_grid = GridSearchCV(RF_clf, RF_params, cv = 3, scoring = 'accuracy')
RF_grid.fit(X, y)
print(RF_grid.best_params_)
print("Accuracy (RF): "+ str(RF_grid.best_score_))
```
### `Support Vector Machine`
```
# Support Vector Machine (SVM)
SVM_params = {
'C': [1, 10, 20, 50, 100],
"kernel":['linear','poly','rbf','sigmoid']
}
SVM_clf = SVC(gamma='scale')
SVM_grid = GridSearchCV(SVM_clf, SVM_params, cv = 3, scoring = 'accuracy')
SVM_grid.fit(X, y)
print(SVM_grid.best_params_)
print("Accuracy:"+ str(SVM_grid.best_score_))
```
### `K-Nearest Neighbor`
```
#K-Nearest Neighbor (KNN)
KNN_params = { 'n_neighbors': [2, 4, 6, 8] }
KNN_clf = KNeighborsClassifier()
KNN_grid = GridSearchCV(KNN_clf, KNN_params, cv = 3, scoring = 'accuracy')
KNN_grid.fit(X, y)
print(KNN_grid.best_params_)
print("Accuracy:"+ str(KNN_grid.best_score_))
```
### `Artificial Neural Network`
```
# Artificial Neural Network (ANN)
ANN_params = {
'optimizer': ['adam','sgd'],
'activation': ['relu','tanh'],
'batch_size': [16,32],
'neurons':[16,32],
'epochs':[30,50],
'patience':[3,5]
}
ANN_clf = KerasClassifier(build_fn = ann_model, verbose = 0)
ANN_grid = GridSearchCV(ANN_clf, ANN_params, cv = 3,scoring = 'accuracy')
ANN_grid.fit(X, y)
print(ANN_grid.best_params_)
print("Accuracy (ANN): "+ str(ANN_grid.best_score_))
```
### `2] Random Search`
```
from sklearn.model_selection import RandomizedSearchCV
from random import randrange as sp_randrange
from scipy.stats import randint as sp_randint
```
### `Random Forest`
```
# Random Forest (RF)
RF_params = {
'n_estimators': sp_randint(10,100),
'max_depth': sp_randint(5,50),
"criterion":['gini','entropy']
}
RF_clf = RandomForestClassifier(random_state = 1)
RF_Random = RandomizedSearchCV(RF_clf, param_distributions = RF_params, n_iter = 20,cv = 3,scoring = 'accuracy')
RF_Random.fit(X, y)
print(RF_Random.best_params_)
print("Accuracy (RF):"+ str(RF_Random.best_score_))
```
### `Support Vector Machine`
```
# Support Vector Machine(SVM)
SVM_params = {
'C': stats.uniform(1,50),
"kernel":['poly','rbf']
}
SVM_clf = SVC(gamma='scale')
SVM_Random = RandomizedSearchCV(SVM_clf, param_distributions = SVM_params, n_iter = 20,cv = 3,scoring = 'accuracy')
SVM_Random.fit(X, y)
print(SVM_Random.best_params_)
print("Accuracy (SVM): "+ str(SVM_Random.best_score_))
```
### `K-Nearest Neighbor`
```
# K-Nearest Neighbor (KNN)
KNN_params = {'n_neighbors': range(1,20)}
KNN_clf = KNeighborsClassifier()
KNN_Random = RandomizedSearchCV(KNN_clf, param_distributions = KNN_params,n_iter = 10,cv = 3,scoring = 'accuracy')
KNN_Random.fit(X, y)
print(KNN_Random.best_params_)
print("Accuracy (KNN): "+ str(KNN_Random.best_score_))
```
### `Artificial Neural Network`
```
# Artificial Neural Network (ANN)
ANN_params = {
'optimizer': ['adam','sgd'],
'activation': ['relu','tanh'],
'batch_size': [16,32],
'neurons':sp_randint(10,100),
'epochs':[30,50],
'patience':sp_randint(5,20)
}
ANN_clf = KerasClassifier(build_fn = ann_model, verbose = 0)
ANN_Random = RandomizedSearchCV(ANN_clf, param_distributions = ANN_params, n_iter = 10,cv = 3,scoring = 'accuracy')
ANN_Random.fit(X, y)
print(ANN_Random.best_params_)
print("Accuracy (ANN): "+ str(ANN_Random.best_score_))
```
### `3] Bayesian Optimization with Gaussian Process (BO-GP)`
```
from skopt import Optimizer
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer
```
### `Random Forest`
```
#Random Forest (RF)
RF_params = {
'n_estimators': Integer(10,100),
'max_depth': Integer(5,50),
"criterion":['gini','entropy']
}
RF_clf = RandomForestClassifier(random_state = 1)
RF_Bayes = BayesSearchCV(RF_clf, RF_params,cv = 3,n_iter = 20, n_jobs = -1,scoring = 'accuracy')
RF_Bayes.fit(X, y)
print(RF_Bayes.best_params_)
print("Accuracy (RF): "+ str(RF_Bayes.best_score_))
```
### `Support Vector Machine`
```
# Support Vector Machine (SVM)
SVM_params = {
'C': Real(1,50),
"kernel":['poly','rbf']
}
SVM_clf = SVC(gamma = 'scale')
SVM_Bayes = BayesSearchCV(SVM_clf, SVM_params,cv = 3,n_iter = 20, n_jobs = -1,scoring = 'accuracy')
SVM_Bayes.fit(X, y)
print(SVM_Bayes.best_params_)
print("Accuracy (SVM): "+ str(SVM_Bayes.best_score_))
```
### `K-Nearest Neighbor`
```
# K-Nearest Neighbor (KNN)
KNN_params = {'n_neighbors': Integer(1,20),}
KNN_clf = KNeighborsClassifier()
KNN_Bayes = BayesSearchCV(KNN_clf, KNN_params,cv = 3,n_iter = 10, n_jobs = -1,scoring = 'accuracy')
KNN_Bayes.fit(X, y)
print(KNN_Bayes.best_params_)
print("Accuracy (KNN): "+ str(KNN_Bayes.best_score_))
```
### `Artificial Neural Network`
```
# Artificial Neural Network (ANN)
ANN_params = {
'optimizer': ['adam','sgd'],
'activation': ['relu','tanh'],
'batch_size': [16,32],
'neurons':Integer(10,100),
'epochs':[30,50],
'patience':Integer(5,20)
}
ANN_clf = KerasClassifier(build_fn = ann_model, verbose = 0)
ANN_Bayes = BayesSearchCV(ANN_clf, ANN_params,cv = 3,n_iter = 10, scoring = 'accuracy')
ANN_Bayes.fit(X, y)
print(ANN_Bayes.best_params_)
print("Accuracy (ANN): "+ str(ANN_Bayes.best_score_))
```
### `4] Bayesian Optimization with Tree-structured Parzen Estimator (BO-TPE)`
```
from sklearn.model_selection import StratifiedKFold
from hyperopt import hp, fmin, tpe, STATUS_OK, Trials
```
### `Random Forest`
```
# Random Forest (RF)
def RF_fun(params):
params = {
'n_estimators': int(params['n_estimators']),
'max_features': int(params['max_features']),
"criterion":str(params['criterion'])
}
RF_clf = RandomForestClassifier(**params)
RF_score = cross_val_score(RF_clf, X, y, cv = StratifiedKFold(n_splits = 3),scoring = 'accuracy').mean()
return {'loss':-RF_score, 'status': STATUS_OK }
RF_space = {
'n_estimators': hp.quniform('n_estimators', 10, 100, 1),
"max_features":hp.quniform('max_features', 1, 32, 1),
"criterion":hp.choice('criterion',['gini','entropy'])
}
RF_best = fmin(fn = RF_fun, space = RF_space, algo = tpe.suggest, max_evals = 20)
print("Estimated optimum (RF): " +str(RF_best))
```
### `Support Vector Machine`
```
# Support Vector Machine (SVM)
def SVM_fun(params):
params = {
'C': abs(float(params['C'])),
"kernel":str(params['kernel'])
}
SVM_clf = SVC(gamma ='scale', **params)
SVM_score = cross_val_score(SVM_clf, X, y, cv = StratifiedKFold(n_splits = 3), scoring ='accuracy').mean()
return {'loss':-SVM_score, 'status': STATUS_OK }
SVM_space = {
'C': hp.normal('C', 0, 50),
"kernel":hp.choice('kernel',['poly','rbf'])
}
SVM_best = fmin(fn = SVM_fun, space = SVM_space, algo = tpe.suggest, max_evals = 20)
print("Estimated optimum (SVM): "+str(SVM_best))
```
### `K-Nearest Neighbor`
```
# K-Nearest Neighbor (KNN)
def KNN_fun(params):
params = {'n_neighbors': abs(int(params['n_neighbors'])) }
KNN_clf = KNeighborsClassifier(**params)
KNN_score = cross_val_score(KNN_clf, X, y, cv = StratifiedKFold(n_splits=3), scoring='accuracy').mean()
return {'loss':-KNN_score, 'status': STATUS_OK }
KNN_space = {'n_neighbors': hp.quniform('n_neighbors', 1, 20, 1)}
KNN_best = fmin(fn = KNN_fun, space = KNN_space, algo = tpe.suggest, max_evals = 10)
print("Estimated optimum (KNN): "+str(KNN_best))
```
### `Artificial Neural Network`
```
# Artificial Neural Network (ANN)
def ANN_fun(params):
params = {
"optimizer":str(params['optimizer']),
"activation":str(params['activation']),
'batch_size': abs(int(params['batch_size'])),
'neurons': abs(int(params['neurons'])),
'epochs': abs(int(params['epochs'])),
'patience': abs(int(params['patience']))
}
ANN_clf = KerasClassifier(build_fn = ann_model,**params, verbose = 0)
ANN_score = -np.mean(cross_val_score(ANN_clf, X, y, cv=3, scoring = "accuracy"))
return {'loss':ANN_score, 'status': STATUS_OK }
ANN_space = {
"optimizer":hp.choice('optimizer',['adam','rmsprop','sgd']),
"activation":hp.choice('activation',['relu','tanh']),
'batch_size': hp.quniform('batch_size', 16, 32, 16),
'neurons': hp.quniform('neurons', 10, 100, 10),
'epochs': hp.quniform('epochs', 30, 50, 10),
'patience': hp.quniform('patience', 5, 20, 5),
}
ANN_best = fmin(fn = ANN_fun, space = ANN_space, algo = tpe.suggest, max_evals = 10)
print("Estimated optimum (ANN): "+str(ANN_best))
```
---
# COVID-19 comparison using Pie charts
Created by (c) Shardav Bhatt on 17 June 2020
# 1. Introduction
Jupyter Notebook Created by Shardav Bhatt
Data (as on 16 June 2020)
References:
1. Vadodara: https://vmc.gov.in/coronaRelated/covid19dashboard.aspx
2. Gujarat: https://gujcovid19.gujarat.gov.in/
3. India: https://www.mohfw.gov.in/
4. Other countries and World: https://www.worldometers.info/coronavirus/
In this notebook, I have considered data of COVID-19 cases at the local level and at the global level. The aim is to determine whether there is a significant difference between the global and local scenarios of COVID-19 cases. The comparison is done using pie charts for active cases, recovered cases and deaths.
# 2. Importing necessary modules
```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import datetime
```
# 3. Extracting data from file
```
date = str(np.array(datetime.datetime.now()))
data = pd.read_csv('data_17June.csv')
d = data.values
row = np.zeros((d.shape[0],d.shape[1]-2))
for i in range(d.shape[0]):
row[i] = d[i,1:-1]
```
# 4. Creating a function to print % in Pie chart
```
def func(pct, allvals):
absolute = int(round(pct/100.*np.sum(allvals)))
return "{:.1f}% ({:d})".format(pct, absolute)
```
# 5. Plot pre-processing
```
plt.close('all')
date = str(np.array(datetime.datetime.now()))
labels = 'Infected', 'Recovered', 'Died'
fs = 20
C = ['lightskyblue','lightgreen','orange']
def my_plot(i):
fig, axs = plt.subplots()
axs.pie(row[i], autopct=lambda pct: func(pct, row[i]), explode=(0, 0.1, 0), textprops=dict(color="k", size=fs-2), colors = C, radius=1.5)
axs.legend(labels, fontsize = fs-4, bbox_to_anchor=(1.1,1))
figure_title = str(d[i,0])+': '+str(d[i,-1])+' cases on '+date
plt.text(1, 1.2, figure_title, horizontalalignment='center', fontsize=fs, transform = axs.transAxes)
plt.show()
print('\n')
```
# 6. Local scenario of COVID-19 cases
```
for i in range(4):
my_plot(i)
```
# My Observations:
1. The death rate in Vadodara city is lower than in the state and the nation. The death rate of Gujarat is almost double that of the nation. The death rate of India is lower than the global death rate.
2. The recovery rates of Vadodara and Gujarat are higher than the national and global recovery rates. The recovery rates of India and of the world are similar.
3. The proportion of active cases in Vadodara and Gujarat is lower than the national and global proportions. The proportions of active cases of India and of the world are similar (a sketch for computing these rates directly from the data follows below).
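A minimal sketch (added here, not part of the original notebook) of how the rates mentioned above can be computed directly from the data. It assumes the columns of `row` follow the label order used for the pie charts: Infected, Recovered, Died.
```
# Compute recovery and death rates per region from the raw counts
totals = row.sum(axis=1)
recovery_rate = 100 * row[:, 1] / totals  # assumes column 1 = Recovered
death_rate = 100 * row[:, 2] / totals     # assumes column 2 = Died
for name, rec, dth in zip(d[:, 0], recovery_rate, death_rate):
    print("{}: recovery {:.1f}%, death {:.1f}%".format(name, rec, dth))
```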
# 7. Global scenario of COVID-19 cases
```
for i in range(4,d.shape[0]):
my_plot(i)
```
# Observations:
1. Russia, Chile, Turkey and Peru have comparatively low death rates, i.e. below 3%. Mexico, Italy and France have comparatively high death rates, i.e. above 10%.
2. Germany, Chile, Turkey, Iran, Italy and Mexico have recovery rates above 75%. These countries are coming out of danger.
3. Russia, India, Peru, Brazil, France and the USA have recovery rates below 53%. These countries need to recover faster.
4. The proportion of active cases is lowest in Germany, Italy and Turkey.
## In this notebook we are going to Predict the Growth of Apple Stock using Linear Regression Model and CRISP-DM.
```
#importing the libraries
import numpy as np
import pandas as pd
from sklearn import metrics
%matplotlib inline
import matplotlib.pyplot as plt
import math
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
```
# Data Understanding
The data is already processed to split-adjusted prices, so it is easy to analyse, but we create new tables to optimize our model.
```
#importing Price Split Data
data = pd.read_csv('prices-split-adjusted.csv')
data
#checking data for null values
data.isnull().sum()
```
#### There are no null values in the data table we are going to work with
# Data Preprocessing
Creating Table for a specific Stock
```
#Initializing the dataset for the stock to be analysed
data = data.loc[(data['symbol'] == 'AAPL')]
data = data.drop(columns=['symbol'])
data = data[['date', 'open', 'close', 'low', 'volume', 'high']]
data
#Number of rows and columns we are working with
data.shape
```
Plotting the closing price of the stock
```
plt.scatter(data.date, data.close, color='blue')
plt.xlabel("Time")
plt.ylabel("Close")
plt.show()
```
### Here we can see that the stock is growing in the long term, with multiple medium-sized downturns
So it is good for long-term investing
```
#For plotting against time
data['date'] = pd.to_datetime(data.date)
#Plot for close values on each date
data['close'].plot(figsize=(16, 8))
```
# Linear Regression
Here we are going to use linear regression to make a simple prediction of the stock's closing value. We check the accuracy on a particular stock.
```
x1 = data[['open', 'high', 'low', 'volume']]
y1 = data['close']
#Making test and train datasets
x1_train, x1_test, y1_train, y1_test = train_test_split(x1, y1, random_state = 0)
x1_train.shape
x1_test.shape
#Initializing LinearRegression
regression = LinearRegression()
regression.fit(x1_train, y1_train)
print(regression.coef_)
print(regression.intercept_)
predicted = regression.predict(x1_test)
#Predictions for Stock values
print(x1_test)
predicted.shape
```
# Evaluation of the model
Making table for Actual price and Predicted Price
```
dframe = pd.DataFrame(y1_test, predicted)
dfr = pd.DataFrame({'Actual_Price':y1_test, 'Predicted_Price':predicted})
print(dfr)
#Actual values vs Predicted Values
dfr.head(10)
from sklearn.metrics import confusion_matrix, accuracy_score
#Regression score analysis
regression.score(x1_test, y1_test)
print('Mean Absolute Error:', metrics.mean_absolute_error(y1_test, predicted))
print('Mean Squared Error:', metrics.mean_squared_error(y1_test, predicted))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y1_test, predicted)))
x2 = dfr.Actual_Price.mean()
y2 = dfr.Predicted_Price.mean()
Accuracy1 = x2/y2*100
print("The accuracy of the model is " , Accuracy1)
```
# Deploying the model by visualization
### Plotting actual close values vs predicted values in the LR model
```
plt.scatter(dfr.Actual_Price, dfr.Predicted_Price, color='red')
plt.xlabel("Actual Price")
plt.ylabel("Predicted Price")
plt.show()
```
We can see that using simple linear regression on a scalar, roughly linear quantity such as the stock price over a period of time gives a simple, almost straight line, indicating that the stock is growing over time. So now we are somewhat confident in investing in this stock. To understand the behaviour better, the next step is to use an LSTM model.
# COVID-19 evolution in French departments
#### <br> Visualize evolution of the number of people hospitalized in French departments due to COVID-19 infection
```
%load_ext lab_black
%matplotlib inline
from IPython.display import HTML
import requests
import zipfile
import io
from datetime import timedelta, date
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import pandas as pd
import geopandas as gpd
import contextily as ctx
from PIL import Image
```
#### <br> COVID data are open data from the French open data portal data.gouv.fr: https://www.data.gouv.fr/fr/datasets/donnees-relatives-a-lepidemie-du-covid-19/
```
url_dep = "http://osm13.openstreetmap.fr/~cquest/openfla/export/departements-20140306-5m-shp.zip"
covid_url = (
"https://www.data.gouv.fr/fr/datasets/r/63352e38-d353-4b54-bfd1-f1b3ee1cabd7"
)
filter_dep = ["971", "972", "973", "974", "976"] # only metropolitan France
figsize = (15, 15)
tile_zoom = 7
frame_duration = 500 # in ms
```
#### <br> Load French departements data into a GeoPandas GeoSeries
#### More information on these geographical open data can be found here: https://www.data.gouv.fr/fr/datasets/contours-des-departements-francais-issus-d-openstreetmap/
```
local_path = "tmp/"
r = requests.get(url_dep)
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall(path=local_path)
filenames = [
y
for y in sorted(z.namelist())
for ending in ["dbf", "prj", "shp", "shx"]
if y.endswith(ending)
]
dbf, prj, shp, shx = [filename for filename in filenames]
fr = gpd.read_file(local_path + shp) # + encoding="utf-8" if needed
fr.crs = "epsg:4326" # {'init': 'epsg:4326'}
met = fr.query("code_insee not in @filter_dep")
met.set_index("code_insee", inplace=True)
met = met["geometry"]
```
#### <br> Load the map tile with contextily
```
w, s, e, n = met.total_bounds
bck, ext = ctx.bounds2img(w, s, e, n, zoom=tile_zoom, ll=True)
```
#### <br> Plot function to save image at a given date (title)
```
def save_img(df, title, img_name, vmin, vmax):
gdf = gpd.GeoDataFrame(df, crs={"init": "epsg:4326"})
gdf_3857 = gdf.to_crs(epsg=3857) # web mercator
f, ax = plt.subplots(figsize=figsize)
ax.imshow(
bck, extent=ext, interpolation="sinc", aspect="equal"
) # load background map
divider = make_axes_locatable(ax)
cax = divider.append_axes(
"right", size="5%", pad=0.1
) # GeoPandas trick to adjust the legend bar
gdf_3857.plot(
column="hosp", # Number of people currently hospitalized
ax=ax,
cax=cax,
alpha=0.75,
edgecolor="k",
legend=True,
cmap=matplotlib.cm.get_cmap("magma_r"),
vmin=vmin,
vmax=vmax,
)
ax.set_axis_off()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.set_title(title, fontsize=25)
plt.savefig(img_name, bbox_inches="tight") # pad_inches=-0.1 to remove border
plt.close(f)
```
#### <br> Load COVID data into a pandas DataFrame
```
cov = pd.read_csv(covid_url, sep=";", index_col=2, parse_dates=True,)
cov = cov.query("sexe == 0") # sum of male/female
cov = cov.query("dep not in @filter_dep")
cov.dropna(inplace=True)
cov.head()
```
#### <br> Add geometry data to COVID DataFrame
```
cov["geometry"] = cov["dep"].map(met)
```
#### <br> Parse recorded days and save one image for each day
```
def daterange(date1, date2):
for n in range(int((date2 - date1).days) + 1):
yield date1 + timedelta(n)
```
#### Create the folder img at the root of the notebook
```
vmax = cov.hosp.max()
for i, dt in enumerate(daterange(cov.index.min(), cov.index.max())):
title = dt.strftime("%d-%b-%Y")
df = cov.query("jour == @dt")
df = df.drop_duplicates(subset=["dep"], keep="first")
img_name = "img/" + str(i) + ".png"
save_img(df, title, img_name, 0, vmax)
```
#### <br> Compile images in animated gif
```
frames = []
for i, dt in enumerate(daterange(cov.index.min(), cov.index.max())):
name = "img/" + str(i) + ".png"
frames.append(Image.open(name))
frames[0].save(
"covid.gif",
format="GIF",
append_images=frames[1:],
save_all=True,
duration=frame_duration,
loop=0,
)
from IPython.display import HTML
HTML("<img src='covid.gif'>")
```
# **Demos of MultiResUNet models implemented on the CelebAMaskHQ dataset**
In this notebook, we display demos of the models tested using the mechanisms mentioned in [MultiResUNet.ipynb](https://drive.google.com/file/d/1H26uaN10rU2V7MnX8vRdE3J0ZMoO7wq2/view?usp=sharing).
This demo should work irrespective of any access issues to the author's drive.
If any errors occur while downloading the models, kindly notify me.
We use gdown to download the JSON file (one per model) and its weights for testing, and user-supplied input images are used to produce the output.
Note : Due to Colab's anomalous nature with matplotlib.pyplot, we will store the final images within the machine.
PS : If you would like to see the results for all the demo networks, utilise the 'Run All' feature in the Runtime submenu.
Run the snippet below in order to download all the models and their corresponding weights:
You can download the models by going through the links mentioned in the comments below-
```
import os
import cv2
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
from keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose, concatenate, BatchNormalization, Activation, add
from keras.models import Model, model_from_json
from keras.optimizers import Adam
from keras.layers.advanced_activations import ELU, LeakyReLU
from keras.utils.vis_utils import plot_model
from keras import backend as K
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
!pip install gdown
# The testing model -
# https://drive.google.com/file/d/1-0H74nlLkTnnvSkMG-MPuhuX70LXBOlo/view?usp=sharing
# https://drive.google.com/file/d/1--fVnHrfDpujmdX9OWT6PVmNQjQ1mPX8/view?usp=sharing
# The F10k model -
# https://drive.google.com/file/d/1-GhqkzttGHAkGi0r5XEgowVg8wLcgmY1/view?usp=sharing
# https://drive.google.com/file/d/1-CPthO3qPHE_IeykqyG_bndf9UuQbHAO/view?usp=sharing
# The FD10 model -
# https://drive.google.com/file/d/1yhWML6lThGv_MSGkUOuVhkoD7H-u6gwm/view?usp=sharing
# https://drive.google.com/file/d/12S277zHGFN9YPKcX7M7hEwxkokvT5bNt/view?usp=sharing
# For the test model :
!gdown --id 1-0H74nlLkTnnvSkMG-MPuhuX70LXBOlo --output modelP5.json
!gdown --id 1--fVnHrfDpujmdX9OWT6PVmNQjQ1mPX8 --output modelW.h5
# For the F10k model :
!gdown --id 1-GhqkzttGHAkGi0r5XEgowVg8wLcgmY1 --output modelP5f10.json
!gdown --id 1-CPthO3qPHE_IeykqyG_bndf9UuQbHAO --output modelWf10.h5
# For the FD10 model :
!gdown --id 1yhWML6lThGv_MSGkUOuVhkoD7H-u6gwm --output modelP5FD.json
!gdown --id 12S277zHGFN9YPKcX7M7hEwxkokvT5bNt --output modelWFD.h5
!ls
```
### Image input and pre processing :
Run the cell below in order to
upload the required images to be tested.
Accepts .jpg format images.
```
from google.colab import files
img_arr = []
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
if(fn.split('.')[-1]=='jpg'or fn.split('.')[-1]=='jpeg'):
img = cv2.imread(fn, cv2.IMREAD_COLOR)
img_arr.append(cv2.resize(img,(256, 192), interpolation = cv2.INTER_CUBIC))
else:
print(fn+' is not of the valid format.')
img_loaded = img_arr
img_arr = np.array(img_arr)
img_arr = img_arr / 255
print('Number of images uploaded : '+str(len(img_arr)))
img_names = list(uploaded.keys())
```
## Boilerplate code to run model :
This code provides outputs in the format(image-mask-imagewithmask).
Your results will be stored under the Files Section (on the left side of the website) in the folder specified by the output during runtime.
In order to allow for automatic downloading of the images, just uncomment the
```
#files.download('results_'+model_json.split('.')[0]+'/result_'+str(img_names[i].split('.')[0])+'.png')
```
section of the code below.
(NOTE : This feature works only for **Google Chrome** users)
```
from google.colab import files
from keras.models import model_from_json
def RunModel( model_json, model_weights, image_array, img_names, img_loaded):
try:
os.makedirs('results_'+model_json.split('.')[0])
except:
pass
print('Your results will be stored under the Files Section in the folder '+'results_'+model_json.split('.')[0])
# load json and create model
json_file = open(model_json, 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights(model_weights)
print("Loaded model from disk")
loaded_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Used in order to ease naming
i = 0
yp = loaded_model.predict(x=np.array(image_array), batch_size=1, verbose=1)
yp = np.round(yp,0)
for ip_img in image_array:
# Modification of mask in order to mimic sample output
t = yp[i]
a = np.concatenate((t,t),axis = 2)
b = np.concatenate((a,t),axis = 2)
b = b * 255
plt.figure(figsize=(20,10))
plt.subplot(1,3,1)
plt.imshow(cv2.cvtColor(np.array(img_loaded[i]), cv2.COLOR_BGR2RGB))
plt.title('Input')
plt.subplot(1,3,2)
plt.imshow(yp[i].reshape(yp[i].shape[0],yp[i].shape[1]))
plt.title('Prediction')
plt.subplot(1,3,3)
plt.imshow(cv2.cvtColor(cv2.addWeighted(np.uint8(img_loaded[i]), 0.5, np.uint8(b), 0.5, 0.0),cv2.COLOR_BGR2RGB))
plt.title('Prediction with mask')
plt.savefig('results_'+model_json.split('.')[0]+'/result_'+str(img_names[i].split('.')[0])+'.png',format='png')
plt.close()
# Uncomment the line below to allow automatic downloading
#files.download('results_'+model_json.split('.')[0]+'/result_'+str(img_names[i].split('.')[0])+'.png')
i += 1
```
# **Model F10k:**
This model has been trained on a 80%-20% split amongst the first 10,000 images of the dataset.
* Number of epochs: 20
* Time taken : [1:29:40, 267.74s/it]
* Jaccard Index (final) : 0.8452393049122288
* Dice Coefficient (final) : 0.9139967587791317 (standard definitions of both metrics are sketched after this list)
* Accuracy(final) : 99.80 %
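The sketch below (added here, not the author's exact implementation) shows the standard definitions behind these two scores for binary masks:
```
# Jaccard index and Dice coefficient for binary masks (numpy arrays of 0/1)
def jaccard_index(y_true, y_pred, eps=1e-7):
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return (intersection + eps) / (union + eps)

def dice_coefficient(y_true, y_pred, eps=1e-7):
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)
```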
```
RunModel('modelP5f10.json','modelWf10.h5', img_arr, img_names, img_loaded)
```
# **Model FD10:**
The dataset for this model has been split twice:
1. Three sets of 10,000 images each.
2. Each set trained on a 80%-20% split.
Each split of the data was used to train the model for 10 epochs each. The split was performed in order to compensate for RAM bottlenecks in our system.
* Number of epochs: 10+10+10 = 30
* Time taken : [2:29:04, 2958.94s/it]
* Jaccard Index (final) : 0.8331437322988224
* Dice Coefficient (final) : 0.9071035040844939
* Accuracy(final) : 99.70 %
```
RunModel('modelP5FD.json','modelWFD.h5',img_arr,img_names,img_loaded)
```
# **EXTRA : Base Testing Model**
**This model returns only the left eye segmented as a mask and was made for TESTING PURPOSES ONLY**
This model has been trained on a 80%-20% split amongst the first 250 images of the dataset, and was used in order to test the original model mentioned in the paper.
* Number of epochs: 10
* Time taken : [0:20:40]
* Jaccard Index (final) : 0.39899180087352404
* Dice Coefficient (final) : 0.495551362130639337
* Accuracy(final) : 99.80 %
**NOTE : THIS MODEL IS SIMPLY A PRECURSOR TO OUR ACTUAL MODELS MENTIONED ABOVE AND SHOULD NOT BE CONSIDERED AS FEASIBLE FOR ANY ASPECTS**
```
RunModel('modelP5.json','modelW.h5',img_arr,img_names,img_loaded)
```
WNixalo
2018/2/11 17:51
[Homework No.1](https://github.com/fastai/numerical-linear-algebra/blob/master/nbs/Homework%201.ipynb)
```
%matplotlib inline
import numpy as np
import torch as pt
import matplotlib.pyplot as plt
plt.style.use('seaborn')
```
## 1.
---
1. Consider the polynomial $p(x) = (x-2)^9 = x^9 - 18x^8 + 144x^7 - 672x^6 + 2016x^5 - 4032x^4 + 5376x^3 - 4608x^2 + 2304x - 512$
a. Plot $p(x)$ for $x=1.920,\,1.921,\,1.922,\ldots,2.080$ evaluating $p$ via its coefficients $1,\,-18,\,144,\ldots$
b. Plot the same plot again, now evaluating $p$ via the expression $(x-2)^9$.
c. Explain the difference.
*(The numpy method linspace will be useful for this)*
```
def p(x, mode=0):
if mode == 0:
return x**9 - 18*x**8 + 144*x**7 - 672*x**6 + 2016*x**5 - 4032*x**4 + 5376*x**3 - 4608*x**2 + 2304*x - 512
else:
return (x-2)**9
```
WNx: *wait, what does it mean to evaluate a function by its coefficients? How is that different than just evaluating it?*
--> *does she mean to ignore the exponents? Because that would make **b.** make more sense... I think.*
```
# Signature: np.linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None)
np.linspace(1.92, 2.08, num=161)
# np.arange(1.92, 2.08, 0.001)
start = 1.92
stop = 2.08
num = int((stop-start)/0.001 + 1) # =161
x = np.linspace(start, stop, num)
def p_cœf(x):
return x - 18*x + 144*x - 672*x + 2016*x - 4032*x + 5376*x - 4608*x + 2304*x - 512
def p_cœf_alt(x):
return p(x,0)
def p_ex9(x):
return p(x,1)
```
WNx: *huh.. this is a thing.*
```Init signature: np.vectorize(pyfunc, otypes=None, doc=None, excluded=None, cache=False, signature=None)```
```
vec_pcf = np.vectorize(p_cœf)
vec_pcf_alt = np.vectorize(p_cœf_alt)
vec_px9 = np.vectorize(p_ex9)
y_cf = vec_pcf(x)
y_cf_alt = vec_pcf_alt(x)
y_x9 = vec_px9(x)
y = p(x)
```
**a**, **b**:
```
fig = plt.figure(1, figsize=(12,12))
ax = fig.add_subplot(3,3,1)
ax.set_title('Coefficients')
ax.plot(y_cf)
ax = fig.add_subplot(3,3,2)
ax.set_title('$(x - 2)^9$')
ax.plot(y_x9)
ax = fig.add_subplot(3,3,3)
ax.set_title('$p(x)$')
ax.plot(y)
ax = fig.add_subplot(3,3,4)
ax.set_title('Coefficients (Alternate)')
ax.plot(y_cf_alt)
ax = fig.add_subplot(3,3,5)
ax.set_title('All')
# ax.plot(y_cf)
ax.plot(y_x9)
ax.plot(y_cf_alt)
ax.plot(y);
```
WNx: *I think my original interpretation of what "evaluate p by its coefficients" meant was wrong, so I'm leaving it out of the final "All" plot, it just drowns everything else out.*
**c:**
WNx: $p(x) = (x-2)^9$ is the 'general' version of the Coefficient interpretation of $p$. It captures the overall trend of $p$ without all the detail. Kind of an average -- gives you the overall picture of what's going on. For instance you'd compress signal $p$ to its $(x-2)^9$ form, instead of saving its full coefficient form.
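A quick numerical check near $x = 2$ (a sketch using the `p` defined above) makes the contrast concrete: the coefficient form adds and subtracts large terms that nearly cancel, so floating-point round-off dominates its output, while the factored form stays smooth.
```
# Compare the two evaluations very close to x = 2, where (x-2)^9 is tiny
xs_check = np.linspace(1.998, 2.002, 9)
for xi in xs_check:
    print("x={:.4f}  coefficient form: {: .3e}  factored form: {: .3e}".format(xi, p(xi, 0), p(xi, 1)))
```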
## 2.
---
2\. How many different double-precision numbers are there? Express your answer using powers of 2
WNx: $2^{64} - (2^{53} - 2^0)$ for IEEE 754 64-bit Double. See: [Quora Link](https://www.quora.com/How-many-distinct-numbers-can-be-represented-as-double-precision)
## 3.
---
3\. Using the updated [Numbers Every Programmer Should Know](https://people.eecs.berkeley.edu/~rcs/research/interactive_latency.html), how much longer does a main memory reference take than an L1 cache look-up? How much longer does a disk seek take than a main memory reference?
```
3e-3/1e-7
```
Main memory reference takes **100x** longer than an L1 cache lookup.
Disk seek takes **30,000x** longer than a main memory reference.
L1 cache: `1e-9`s.
MMRef: `1e-7`s.
DS: `3e-3`s
## 4.
---
4\. From the Halide Video, what are 4 ways to traverse a 2d array?
WNx:
**Scanline Order**: Sequentially in Y, within that: Sequentially in X. (row-maj walk)
(or): Transpose X&Y and do a Column-Major Traversal. (walk down cols first)
**Serial Y, Vectorize X by n**: walk down x in increments (vectors)
**Parallel Y, Vectorize X by n**: distribute scanlines into parallel threads
Split X & Y by tiles (**Tile-Traversal**). Split X by n, Y by n. Serial Y_outer, Serial X_outer, Serial Y_inner, Serial X_inner
See: [Halide Video section](https://youtu.be/3uiEyEKji0M?t=318)
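A rough Python sketch (added here, not from the video) of two of these orders for an H×W array, returning the (y, x) visit order:
```
def scanline_order(H, W):
    # Row-major walk: serial in y, then serial in x within each scanline
    return [(y, x) for y in range(H) for x in range(W)]

def tiled_order(H, W, n=4):
    # Split x and y into n-by-n tiles; walk the tiles in scanline order,
    # then walk the pixels inside each tile in scanline order
    order = []
    for y_outer in range(0, H, n):
        for x_outer in range(0, W, n):
            for y in range(y_outer, min(y_outer + n, H)):
                for x in range(x_outer, min(x_outer + n, W)):
                    order.append((y, x))
    return order

print(scanline_order(2, 4))
print(tiled_order(4, 4, n=2))
```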
## 5.
---
5\. Using the animations --- ([source](https://www.youtube.com/watch?v=3uiEyEKji0M)), explain what the benefits and pitfalls of each approach. Green squares indicate that a value is being read; red indicates a value is being written. Your answers should be longer in length (give more detail) than just two words.
WNx:
1) Parallelizable across scanlines. Entire input computed before output computation. \ Poor Locality.
Loading is slow and limited by system memory bandwidth. By the time the `blurred in y` stage goes to read some intermediate data, it's probably been evicted from cache.
2) Parallelizable across scanlines. Locality. \ Redundant Computation.
Each point in `blurred in x` is recomputed 3 times.
3) Locality & No redundant computation. \ Serial Dependence --> Poor Parallelism.
Introduction of a serial dependence in the scanlines of the output. Relying on having to compute scanline `N-1` before computing scanline `N`. We can no longer process the output scanlines in parallel.
## 6.
---
6\. Prove that if $A = Q B Q^T$ for some orthogonal matrix $Q$, the $A$ and $B$ have the same singular values.
Orthogonal Matrix: $Q^TQ = QQ^T = I \iff Q^T = Q^{-1}$
So.. if you put matrix $B$ in between $Q$ and $Q^T$, what you're doing is performing a transformation on $B$ and then performing the inverse of that transformation on it, i.e. returning $B$ to what it was originally. $\Rightarrow$ if $B$ is ultimately unchanged and $A=QBQ^T$ then $A=B$ (or at least the same singular values?) This -- it seems to me -- is an inherent property of the orthogonal matrix $Q$.
**edit**: ahhh, Singluar Values are not just the values of a matrix. Like Eigen Values, they tell something special about it [Mathematics StackEx link](https://math.stackexchange.com/questions/127500/what-is-the-difference-between-singular-value-and-eigenvalue)
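A more direct argument (sketched here, added to the original answer): take an SVD of $B$ and conjugate it by $Q$,
$$
B = U\Sigma V^T \quad\Rightarrow\quad A = QBQ^T = (QU)\,\Sigma\,(QV)^T.
$$
Products of orthogonal matrices are orthogonal, so $QU$ and $QV$ are orthogonal and the right-hand side is a valid SVD of $A$ with the same $\Sigma$. Hence $A$ and $B$ have identical singular values.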
```
### some tests:
# Q is I
Q = np.eye(3)
A = np.random.randint(-10,10,(3,3))
A
Q@A@Q.T
# random orthogonal matrix Q
# ref: https://stackoverflow.com/a/38426572
from scipy.stats import ortho_group
Q = ortho_group.rvs(dim=3)
```
WNx: gonna have to do SVD to find the singular values of $A$. Then make a matrix $B$ st. $A=QBQ^T$. *Then* check that A.σ == B.σ. [C.Mellon U. page on SVD](https://www.cs.cmu.edu/~venkatg/teaching/CStheory-infoage/book-chapter-4.pdf)
From the [Lesson 2 notebook](https://github.com/fastai/numerical-linear-algebra/blob/master/nbs/2.%20Topic%20Modeling%20with%20NMF%20and%20SVD.ipynb), I think I'll start with $B$ and compute $A$ acc. to the eqn, then check σ's of both.
Aha. So `σ` is `s` is `S`. The diagonal matrix of singular values. Everyone's using different names for the same thing. *bastards*.
```
# setting A & B
B = np.random.randint(-100,100,(3,3))
A = Q@B@Q.T
Ua, sa, Va = np.linalg.svd(A, full_matrices=False)
Ub, sb, Vb = np.linalg.svd(B, full_matrices=False)
# sa & sb are the singular values of A and B
np.isclose(sa, sb)
sa, sb
```
Woohoo!
## 7.
---
7\. What is the *stochastic* part of *stochastic gradient descent*?
WNx:
*Stochastic* refers to computing the gradient on random mini-batches of the input data.
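A minimal sketch (added here, not part of the homework) of that idea for a linear model with squared loss:
```
# One epoch of stochastic gradient descent: shuffle, then update on random mini-batches
def sgd_epoch(w, X, y, lr=0.01, batch_size=32):
    idx = np.random.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        Xb, yb = X[batch], y[batch]
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(batch)  # gradient of the mean squared error on the mini-batch
        w = w - lr * grad
    return w

w = sgd_epoch(np.zeros(3), np.random.randn(100, 3), np.random.randn(100))
```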
# Chapter 6: Physiological and Psychological State Detection in IoT
# Use Case 1: Human Activity Recognition (HAR)
# Model: LSTM
# Step 1: Download Dataset
```
import pandas as pd
import numpy as np
import pickle
import matplotlib.pyplot as plt
from scipy import stats
import tensorflow as tf
import seaborn as sns
from pylab import rcParams
from sklearn import metrics
from sklearn.model_selection import train_test_split
%matplotlib inline
sns.set(style='whitegrid', palette='muted', font_scale=1.5)
rcParams['figure.figsize'] = 14, 8
RANDOM_SEED = 42
```
# Have a quick look at the data
```
columns = ['user','activity','timestamp', 'x-axis', 'y-axis', 'z-axis']
df = pd.read_csv('data/WISDM_ar_v1.1_raw.txt', header = None, names = columns)
df = df.dropna()
df.head()
```
# Step 2: Data Exploration
The columns we will be most interested in are activity, x-axis, y-axis and z-axis.
# Activity-wise data distribution of the dataset
```
df['activity'].value_counts().plot(kind='bar', title='Training data by activity type', color='g');
```
# Activity Data Exploration
```
def plot_activity(activity, df):
data = df[df['activity'] == activity][['x-axis', 'y-axis', 'z-axis']][:200]
axis = data.plot(subplots=True, figsize=(16, 12),
title=activity)
for ax in axis:
ax.legend(loc='lower left', bbox_to_anchor=(1.0, 0.5))
plot_activity("Sitting", df)
plot_activity("Standing", df)
plot_activity("Walking", df)
plot_activity("Jogging", df)
```
# Step 3: Data preprocessing
Generally, an LSTM model expects fixed-length sequences as training data. As we have seen above, 200 time steps contain enough information to distinguish the activities, so we use that value to preprocess the dataset.
```
N_TIME_STEPS = 200
N_FEATURES = 3
step = 20
segments = []
labels = []
for i in range(0, len(df) - N_TIME_STEPS, step):
xs = df['x-axis'].values[i: i + N_TIME_STEPS]
ys = df['y-axis'].values[i: i + N_TIME_STEPS]
zs = df['z-axis'].values[i: i + N_TIME_STEPS]
label = stats.mode(df['activity'][i: i + N_TIME_STEPS])[0][0]
segments.append([xs, ys, zs])
labels.append(label)
np.array(segments).shape
```
# Reshape the array/tensor into standard form
Let's transform it into sequences of 200 rows, each containing x, y and z. Also, apply a one-hot encoding to our labels.
```
# Reshaping segments
reshaped_segments = np.asarray(segments, dtype= np.float32).reshape(-1, N_TIME_STEPS, N_FEATURES)
labels = np.asarray(pd.get_dummies(labels), dtype = np.float32)
# Inspect the reshaped_segments
reshaped_segments.shape
labels[0]
# Datasplit for training and test
X_train, X_test, y_train, y_test = train_test_split(
reshaped_segments, labels, test_size=0.2, random_state=RANDOM_SEED)
len(X_train)
len(X_test)
```
# Step 4: Training Model
# Model Building
Our model contains 2 fully-connected and 2 LSTM layers (stacked on each other) with 64 units each.
```
N_CLASSES = 6
N_HIDDEN_UNITS = 64
# Function for model building
def create_LSTM_model(inputs):
W = {
'hidden': tf.Variable(tf.random_normal([N_FEATURES, N_HIDDEN_UNITS])),
'output': tf.Variable(tf.random_normal([N_HIDDEN_UNITS, N_CLASSES]))
}
biases = {
'hidden': tf.Variable(tf.random_normal([N_HIDDEN_UNITS], mean=1.0)),
'output': tf.Variable(tf.random_normal([N_CLASSES]))
}
X = tf.transpose(inputs, [1, 0, 2])
X = tf.reshape(X, [-1, N_FEATURES])
hidden = tf.nn.relu(tf.matmul(X, W['hidden']) + biases['hidden'])
hidden = tf.split(hidden, N_TIME_STEPS, 0)
# Stack 2 LSTM layers
lstm_layers = [tf.contrib.rnn.BasicLSTMCell(N_HIDDEN_UNITS, forget_bias=1.0) for _ in range(2)]
lstm_layers = tf.contrib.rnn.MultiRNNCell(lstm_layers)
outputs, _ = tf.contrib.rnn.static_rnn(lstm_layers, hidden, dtype=tf.float32)
# Get output for the last time step
lstm_last_output = outputs[-1]
return tf.matmul(lstm_last_output, W['output']) + biases['output']
# Create placeholder for the model
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, N_TIME_STEPS, N_FEATURES], name="input")
Y = tf.placeholder(tf.float32, [None, N_CLASSES])
# Call the model function
pred_Y = create_LSTM_model(X)
pred_softmax = tf.nn.softmax(pred_Y, name="y_")
# L2 Regularisation
L2_LOSS = 0.0015
l2 = L2_LOSS * \
sum(tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables())
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = pred_Y, labels = Y)) + l2
# Define optimiser & accuracy
LEARNING_RATE = 0.0025
optimizer = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE).minimize(loss)
correct_pred = tf.equal(tf.argmax(pred_softmax, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, dtype=tf.float32))
```
# Training
Training could take some time, depending on your computing resources.
```
N_EPOCHS = 50
BATCH_SIZE = 1024
saver = tf.train.Saver()
history = dict(train_loss=[],
train_acc=[],
test_loss=[],
test_acc=[])
sess=tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
train_count = len(X_train)
for i in range(1, N_EPOCHS + 1):
for start, end in zip(range(0, train_count, BATCH_SIZE),
range(BATCH_SIZE, train_count + 1,BATCH_SIZE)):
sess.run(optimizer, feed_dict={X: X_train[start:end],
Y: y_train[start:end]})
_, acc_train, loss_train = sess.run([pred_softmax, accuracy, loss], feed_dict={
X: X_train, Y: y_train})
_, acc_test, loss_test = sess.run([pred_softmax, accuracy, loss], feed_dict={
X: X_test, Y: y_test})
history['train_loss'].append(loss_train)
history['train_acc'].append(acc_train)
history['test_loss'].append(loss_test)
history['test_acc'].append(acc_test)
if i != 1 and i % 10 != 0:
continue
print(f'epoch: {i} test accuracy: {acc_test} loss: {loss_test}')
predictions, acc_final, loss_final = sess.run([pred_softmax, accuracy, loss], feed_dict={X: X_test, Y: y_test})
print()
print(f'final results: accuracy: {acc_final} loss: {loss_final}')
#Store the model detail to disk.
pickle.dump(predictions, open("predictions.p", "wb"))
pickle.dump(history, open("history.p", "wb"))
tf.train.write_graph(sess.graph_def, '.', './checkpoint/har_LSTM.pbtxt')
saver.save(sess, save_path = "./checkpoint/har_LSTM.ckpt")
sess.close()
```
# Step 5: Performance Evaluation
```
# Load the saved model detail for evaluation
history = pickle.load(open("history.p", "rb"))
predictions = pickle.load(open("predictions.p", "rb"))
plt.figure(figsize=(12, 8))
plt.plot(np.array(history['train_loss']), "r--", label="Train loss")
plt.plot(np.array(history['train_acc']), "b--", label="Train accuracy")
plt.plot(np.array(history['test_loss']), "r-", label="Test loss")
plt.plot(np.array(history['test_acc']), "b-", label="Test accuracy")
plt.title("Training session's progress over Training Epochs")
plt.legend(loc='upper right', shadow=True)
plt.ylabel('Training Progress (Loss or Accuracy values)')
plt.xlabel('Training Epoch')
plt.ylim(0)
plt.show()
```
# Confusion matrix
```
LABELS = ['Downstairs', 'Jogging', 'Sitting', 'Standing', 'Upstairs', 'Walking']
max_test = np.argmax(y_test, axis=1)
max_predictions = np.argmax(predictions, axis=1)
confusion_matrix = metrics.confusion_matrix(max_test, max_predictions)
plt.figure(figsize=(16, 14))
sns.heatmap(confusion_matrix, xticklabels=LABELS, yticklabels=LABELS, annot=True, fmt="d");
plt.title("Confusion matrix")
plt.ylabel('True Activity')
plt.xlabel('Predicted Activity')
plt.show();
```
# Step 6: Exporting the model
Finally, export the model for IoT devices (Raspberry Pi 3/smartphones).
```
from tensorflow.python.tools import freeze_graph
MODEL_NAME = 'har_LSTM'
input_graph_path = 'checkpoint/' + MODEL_NAME+'.pbtxt'
checkpoint_path = './checkpoint/' +MODEL_NAME+'.ckpt'
restore_op_name = "save/restore_all"
filename_tensor_name = "save/Const:0"
output_frozen_graph_name = 'frozen_'+MODEL_NAME+'.pb'
freeze_graph.freeze_graph(input_graph_path, input_saver="",
input_binary=False, input_checkpoint=checkpoint_path,
output_node_names="y_", restore_op_name="save/restore_all",
filename_tensor_name="save/Const:0",
output_graph=output_frozen_graph_name, clear_devices=True, initializer_nodes="")
```
## Uncertainty estimation for regression
We demonstrate how to estimate the uncertainty for a regression task. In this case we treat uncertainty as a standard deviation for the test data points.
As an example dataset we take the kinematic movement data (kin8nm) from the UCI database and evaluate the uncertainty predictions with the log likelihood metric.
```
%load_ext autoreload
%autoreload 2
import numpy as np
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader
import torch.nn.functional as F
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.metrics import r2_score
from alpaca.ue import MCDUE
from alpaca.utils.datasets.builder import build_dataset
from alpaca.utils.ue_metrics import ndcg, uq_ll
from alpaca.ue.masks import BasicBernoulliMask, DecorrelationMask, LeverageScoreMask
from alpaca.utils import model_builder
import alpaca.nn as ann
```
## Prepare the dataset
The alpaca library provides a few regression datasets (these datasets are often used in related scientific papers).
```
dataset = build_dataset('kin8nm', val_split=1_000)
x_train, y_train = dataset.dataset('train')
x_val, y_val = dataset.dataset('val')
x_train.shape, y_val.shape
train_ds = TensorDataset(torch.FloatTensor(x_train), torch.FloatTensor(y_train))
val_ds = TensorDataset(torch.FloatTensor(x_val), torch.FloatTensor(y_val))
train_loader = DataLoader(train_ds, batch_size=512)
val_loader = DataLoader(val_ds, batch_size=512)
```
## Let's build the simple model
We'll replace the common nn.Dropout layer with ann.Dropout from alpaca.
The alpaca version allows switching dropout on during inference without affecting other "training" layers, like batch norm.
```
class MLP(nn.Module):
def __init__(self, input_size, base_size=64, dropout_rate=0., dropout_mask=None):
super().__init__()
self.net = nn.Sequential(
nn.Linear(input_size, 4*base_size),
nn.CELU(),
nn.Linear(4*base_size, 2*base_size),
ann.Dropout(dropout_rate, dropout_mask),
nn.CELU(),
nn.Linear(2*base_size, 1*base_size),
ann.Dropout(dropout_rate, dropout_mask),
nn.CELU(),
nn.Linear(base_size, 1)
)
def forward(self, x):
return self.net(x)
# Train model
model = MLP(input_size=8, dropout_rate=0.1, dropout_mask=BasicBernoulliMask)
```
## Train the model
```
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
model.train()
for epochs in range(100):
for x_batch, y_batch in train_loader: # Train for one epoch
predictions = model(x_batch)
loss = criterion(predictions, y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print('Train loss on last batch', loss.item())
# Check model effectiveness
model.eval()
x_batch, y_batch = next(iter(val_loader))
predictions = model(x_batch).detach().cpu().numpy()
print('R2:', r2_score(predictions, y_batch))
```
## Estimate uncertainty
We compare the log likelihood for a constant standard deviation baseline and for Monte Carlo dropout uncertainty estimation.
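The score is (roughly) the average Gaussian log likelihood of the observed errors under the predicted standard deviations; a sketch of that idea (not necessarily alpaca's exact `uq_ll` implementation) is:
```
# Gaussian log-likelihood quality score: treat each predicted std as the scale
# of a zero-mean Gaussian and score the observed absolute error under it
def gaussian_log_likelihood(errors, stds, eps=1e-6):
    stds = np.maximum(stds, eps)
    return np.mean(-0.5 * np.log(2 * np.pi * stds**2) - errors**2 / (2 * stds**2))
```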
```
# Calculate uncertainty estimation
estimator = MCDUE(model)
predictions, estimations = estimator(x_batch)
# Baseline
const_std = np.std(y_val)
errors = np.abs(predictions - y_batch.reshape((-1)).numpy())
score = uq_ll(errors, np.ones_like(errors) * const_std)
print("Quality score for const std is ", score)
model.train()
estimator = MCDUE(model, nn_runs=100)
predictions, estimations = estimator(x_batch)
errors = np.abs(predictions - y_batch.reshape((-1)).numpy())
score = uq_ll(errors, estimations)  # score the errors against the MC-dropout std estimates
print("Quality score for monte-carlo dropout is ", score)
```
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
# Supervised Learning Part b - Decision Trees and Forests (optional)
Here we'll explore a class of algorithms based on decision trees. Decision trees are at their root extremely intuitive. They
encode a series of "if" and "else" choices, similar to how a person might make a decision. However, which questions to ask, and how to proceed for each answer is entirely learned from the data.
For example, if you wanted to create a guide to identifying an animal found in nature, you might ask the following series of questions:
- Is the animal bigger or smaller than a meter long?
+ *bigger*: does the animal have horns?
- *yes*: are the horns longer than ten centimeters?
- *no*: is the animal wearing a collar
+ *smaller*: does the animal have two or four legs?
- *two*: does the animal have wings?
- *four*: does the animal have a bushy tail?
and so on. This binary splitting of questions is the essence of a decision tree.
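As a toy illustration (a sketch added here, with made-up attribute names), the guide above is nothing more than nested `if`/`else` statements:
```
def identify(animal):
    # Each question splits the remaining possibilities in two
    if animal["length_m"] > 1:
        if animal["has_horns"]:
            return "long-horned animal" if animal["horn_length_cm"] > 10 else "short-horned animal"
        return "pet" if animal["wears_collar"] else "wild animal"
    else:
        if animal["n_legs"] == 2:
            return "bird" if animal["has_wings"] else "other biped"
        return "squirrel-like" if animal["bushy_tail"] else "other quadruped"

print(identify({"length_m": 0.3, "n_legs": 4, "bushy_tail": True}))
```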
One of the main benefits of tree-based models is that they require little preprocessing of the data. They can work with variables of different types (continuous and discrete) and are invariant to scaling of the features.
Another benefit is that tree-based models are what is called "nonparametric", which means they don't have a fixed set of parameters to learn. Instead, a tree model can become more and more flexible, if given more data. In other words, the number of free parameters grows with the number of samples and is not fixed, as it is for example in linear models.
## Decision Tree Regression
A decision tree is a simple binary classification tree that is
similar to nearest neighbor classification. It can be used as follows:
```
def make_dataset(n_samples=100):
rnd = np.random.RandomState(42)
x = np.linspace(-3, 3, n_samples)
y_no_noise = np.sin(4 * x) + x
y = y_no_noise + rnd.normal(size=len(x))
return x, y
x, y = make_dataset()
X = x.reshape(-1, 1)
plt.xlabel('Feature X')
plt.ylabel('Target y')
plt.scatter(X, y);
from sklearn.tree import DecisionTreeRegressor
reg = DecisionTreeRegressor(max_depth=5)
reg.fit(X, y)
X_fit = np.linspace(-3, 3, 1000).reshape((-1, 1))
y_fit_1 = reg.predict(X_fit)
plt.plot(X_fit.ravel(), y_fit_1, color='blue', label="prediction")
plt.plot(X.ravel(), y, '.k', label="training data")
plt.legend(loc="best");
```
A single decision tree allows us to estimate the signal in a non-parametric way, but clearly has some issues. In some regions, the model shows high bias and under-fits the data (seen in the long flat lines which don't follow the contours of the data), while in other regions the model shows high variance and over-fits the data (reflected in the narrow spikes which are influenced by noise in single points).
## Decision Tree Classification
Decision tree classification work very similarly, by assigning all points within a leaf the majority class in that leaf:
```
def plot_2d_separator(classifier, X, fill=False, ax=None, eps=None):
if eps is None:
eps = X.std() / 2.
x_min, x_max = X[:, 0].min() - eps, X[:, 0].max() + eps
y_min, y_max = X[:, 1].min() - eps, X[:, 1].max() + eps
xx = np.linspace(x_min, x_max, 100)
yy = np.linspace(y_min, y_max, 100)
X1, X2 = np.meshgrid(xx, yy)
X_grid = np.c_[X1.ravel(), X2.ravel()]
try:
decision_values = classifier.decision_function(X_grid)
levels = [0]
fill_levels = [decision_values.min(), 0, decision_values.max()]
except AttributeError:
# no decision_function
decision_values = classifier.predict_proba(X_grid)[:, 1]
levels = [.5]
fill_levels = [0, .5, 1]
if ax is None:
ax = plt.gca()
if fill:
ax.contourf(X1, X2, decision_values.reshape(X1.shape),
levels=fill_levels, colors=['blue', 'red'])
else:
ax.contour(X1, X2, decision_values.reshape(X1.shape), levels=levels,
colors="black")
ax.set_xlim(x_min, x_max)
ax.set_ylim(y_min, y_max)
ax.set_xticks(())
ax.set_yticks(())
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
X, y = make_blobs(centers=[[0, 0], [1, 1]], random_state=61526, n_samples=100)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = DecisionTreeClassifier(max_depth=5)
clf.fit(X_train, y_train)
plot_2d_separator(clf, X, fill=True)
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, s=60, alpha=.7)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=60);
```
There are many parameter that control the complexity of a tree, but the one that might be easiest to understand is the maximum depth. This limits how finely the tree can partition the input space, or how many "if-else" questions can be asked before deciding which class a sample lies in.
This parameter is important to tune for trees and tree-based models. The interactive plot below shows what underfitting and overfitting look like for this model. Having a ``max_depth`` of 1 is clearly an underfit model, while a depth of 7 or 8 clearly overfits. The maximum depth a tree can be grown at for this dataset is 8, at which point each leaf only contains samples from a single class. This is known as all leaves being "pure."
In the interactive plot below, the regions are assigned blue and red colors to indicate the predicted class for that region. The shade of the color indicates the predicted probability for that class (darker = higher probability), while yellow regions indicate an equal predicted probability for either class.
```
from figures import plot_tree_interactive
plot_tree_interactive()
```
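If the interactive widget is not available, a plain sketch of the same experiment (using the blobs data split above) compares training and test accuracy across depths:
```
# Train and test accuracy as max_depth grows: low depth underfits, high depth overfits
for depth in range(1, 9):
    tree = DecisionTreeClassifier(max_depth=depth).fit(X_train, y_train)
    print("max_depth=%d: train acc %.2f, test acc %.2f"
          % (depth, tree.score(X_train, y_train), tree.score(X_test, y_test)))
```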
Decision trees are fast to train, easy to understand, and often lead to interpretable models. However, single trees often tend to overfit the training data. Playing with the slider above you might notice that the model starts to overfit even before it has a good separation between the classes.
Therefore, in practice it is more common to combine multiple trees to produce models that generalize better. The most common methods for combining trees are random forests and gradient boosted trees.
## Random Forests
Random forests are simply many trees, built on different random subsets (drawn with replacement) of the data, and using different random subsets (drawn without replacement) of the features for each split.
This makes the trees different from each other, and makes them overfit to different aspects. Then, their predictions are averaged, leading to a smoother estimate that overfits less.
```
from figures import plot_forest_interactive
plot_forest_interactive()
```
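A minimal sketch on the same blobs data, using scikit-learn's `RandomForestClassifier` (added here alongside the interactive figure):
```
from sklearn.ensemble import RandomForestClassifier

# 100 trees, each grown on a bootstrap sample with a random feature subset per split
forest = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
print("train acc:", forest.score(X_train, y_train))
print("test acc:", forest.score(X_test, y_test))
```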
___
## Exercise
Use a decision tree or random forests to create a classifier for the ``breast_cancer`` dataset.
___
[User Struggles <](10_Struggles.ipynb) | [> Use of Special Features](12_Magic.ipynb)
# What can we learn about API design for data science?
There are a lot of different ways of spelling out functionality in APIs and some of them are painful, while others are highly usable. We may be able to learn things about API design by looking at what APIs people are using and how. We can help to design good APIs by advising on granularity questions (lots of small objects/functions, or a few with lots of arguments).
## Results Summary:
- Code cells
- On average, a code cell is 10.37 lines long (median = 6). The longest cell is 40,759 lines long.
- Variables
- On average, there are 5.15 object definitions in a notebook. Median = 0.0. (Among notebooks with at least one object, Median = 10.0)
- Parameters
- Across all function calls, there are an average of 1.057 arguments per function.
- On average, a call to a user defined function has 1.65 parameters.
- On average, a call to a non user-defined function has 1.017 parameters.
- This is a statistically significant difference. We are 95% confident that the true average number of parameters in user-defined function calls is between 0.62 and 0.64 higher than the average number of parameters in non user-defined function calls.
- Functions
- Across all function calls, there are an average of 1.13 arguments per function.
-----
# Import Packages and Load Data
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime
import math
from collections import deque
import scipy.stats as st
import ast
import astpretty
import pickle
import re
import os
import load_data
```
Load dataframes
```
notebooks_temp = load_data.load_notebooks()
repos_temp = load_data.load_repos()
```
Load aggregated dataframes. Code used to create them is in [aggregate.py](aggregate.py).
```
all_objects_df_temp = load_data.load_objects()
cell_stats_df_temp = load_data.load_cell_stats()
cell_types_df_temp = load_data.load_cell_types()
function_defs_df_temp = load_data.load_function_defs()
function_use_df_temp = load_data.load_function_use()
all_objects_df_temp = load_data.load_objects()
errors_df_temp = load_data.load_errors()
nb_imports_temp = load_data.load_nb_imports()
```
Load list of lines of code per code cell.
```
lines_per_code_cell = load_data.load_lines()
```
Load statuses. For some analysis we will remove files that couldn't be parsed with the python abstract syntax tree.
```
statuses_df_temp = load_data.load_statuses()
```
---
# Tidy Data
### Only looking at Python notebooks
```
notebooks = notebooks_temp.copy()[notebooks_temp.lang_name == 'python'].reset_index(drop=True)
print("{0:,} ({1}%) of notebooks were written in Python. The remaining {2}% have been removed.".format(
len(notebooks),
round(100*len(notebooks)/len(notebooks_temp), 2),
round(100 - 100*len(notebooks)/len(notebooks_temp), 2)
))
```
### Needed for some analysis: limit to notebooks that could be parsed with Python ast
```
statuses_df = statuses_df_temp.copy()[statuses_df_temp.syntax == True].reset_index(drop=True)
notebooks_ast = notebooks.copy()[notebooks.file.isin(statuses_df.file)].reset_index(drop=True)
print("{0}% of python notebooks were able to be parsed by Python AST.".format(
round(100*len(notebooks_ast)/len(notebooks), 2),
round(100 - 100*len(notebooks_ast)/len(notebooks), 2)
))
print("{0}% of python 3 notebooks were able to be parsed by Python AST.".format(
round(100*len(notebooks_ast)/len(notebooks[[str(l).startswith('3') for l in notebooks.lang_version]&(notebooks.lang_name == 'python')]), 2),
round(100 - 100*len(notebooks_ast)/len(notebooks[[str(l).startswith('3') for l in notebooks.lang_version]&(notebooks.lang_name == 'python')]), 2)
))
```
### Update repos and aggregated dataframe to reflect notebooks in question
All python notebooks not in ipynb checkpoints
```
cell_stats_df = cell_stats_df_temp.copy()[cell_stats_df_temp.file.isin(notebooks.file)]
cell_types_df = cell_types_df_temp.copy()[cell_types_df_temp.file.isin(notebooks.file)]
repos = repos_temp.copy()[repos_temp.repo_id.isin(notebooks.repo_id)]
errors_df = errors_df_temp.copy()[errors_df_temp.file.isin(notebooks.file)]
nb_imports = nb_imports_temp.copy()[nb_imports_temp.file.isin(notebooks.file)]
```
Python notebooks (not in ipynb checkpoints) that were able to be parsed with the Python AST
```
function_defs_df = function_defs_df_temp.copy()[function_defs_df_temp.file.isin(notebooks_ast.file)]
function_use_df = function_use_df_temp.copy()[function_use_df_temp.file.isin(notebooks_ast.file)]
all_objects_df = all_objects_df_temp.copy()[all_objects_df_temp.file.isin(notebooks_ast.file)]
```
### Delete temp dataframes to save space
```
del notebooks_temp
del repos_temp
del cell_stats_df_temp
del cell_types_df_temp
del function_defs_df_temp
del function_use_df_temp
del all_objects_df_temp
del errors_df_temp
```
---
# Manipulate Data
Add num_errors to errors dataframe
```
errors_df['num_errors'] = [len(e) for e in errors_df['error_names']]
```
Add num_objects column to objects dataframe
```
all_objects_df['num_objects'] = [len(obj) for obj in all_objects_df['objects']]
```
Group function definitions by notebook
```
function_defs_stats_df = function_defs_df.groupby('file')['function'].count().reset_index().merge(
function_defs_df.groupby('file')['parameters'].sum().reset_index(),
on = 'file'
)
```
---
# Visualizations and Statistics
## How long are code cells?
```
pd.Series(lines_per_code_cell).aggregate(['mean','median','min','max'])
plt.hist(lines_per_code_cell, bins = range(50), color = 'teal')
plt.xlim(0,50)
plt.xlabel('Lines of Code')
plt.ylabel('Number of Cells')
plt.title('Code Cell Length')
plt.show()
```
On average, code cells have 10.30 lines of code. The typical code cell has 6 lines of code (median).
## What is a typical number of objects in a notebook?
Calculate summary statistics for the number of objects in each notebook. Only consider 'name' assignments as objects: setting a value in a list or data frame (subscript) and altering the attributes of an object (attribute) do not count as object assignments.
```
mean_objs = all_objects_df.num_objects.mean()
median_objs = all_objects_df.num_objects.median()
median_objs_with = all_objects_df[all_objects_df.num_objects != 0].num_objects.median()
print('On average, among notebooks that were able to be parsed with Python abstract syntax tree, there are {0} object definitions in a notebook. Median = {1}. (Among notebooks with at least one object, Median = {2})'.format(
round(mean_objs, 2), median_objs, median_objs_with
))
plt.hist(all_objects_df.num_objects, bins = 20, color='teal')
plt.title('What is a typical number of objects in a notebook?')
plt.xlabel('Number of Objects')
plt.ylabel('Number of Notebooks')
plt.yscale('log')
plt.show()
```
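For reference, a minimal sketch of how Python's `ast` module can distinguish the assignment-target kinds described above (the actual counting logic for this analysis lives in [aggregate.py](aggregate.py)):
```
import ast

code = "x = 1\ndf['col'] = 2\nobj.attr = 3"
counts = {"name": 0, "subscript": 0, "attribute": 0}
for node in ast.walk(ast.parse(code)):
    if isinstance(node, ast.Assign):
        for target in node.targets:
            if isinstance(target, ast.Name):
                counts["name"] += 1        # counted as an object definition
            elif isinstance(target, ast.Subscript):
                counts["subscript"] += 1   # not counted
            elif isinstance(target, ast.Attribute):
                counts["attribute"] += 1   # not counted
print(counts)
```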
## How many functions are called?
```
function_use_df['unique_user_def'] = [len(set(user_def)) for user_def in function_use_df.user_def]
function_use_df['unique_not_user_def'] = [len(set(not_user_def)) for not_user_def in function_use_df.not_user_def]
function_use_df['unique'] = function_use_df['unique_user_def'] + function_use_df['unique_not_user_def']
print('There are an average of {0} unique functions called in each notebook (median = {1}).'.format(
round(function_use_df.unique.mean(), 2),
function_use_df.unique.median()
))
print('There are an average of {0} unique user-defined functions called in each notebook (median = {1}).'.format(
round(function_use_df.unique_user_def.mean(), 2),
function_use_df.unique_user_def.median()
))
print('There are an average of {0} unique not user-defined functions called in each notebook (median = {1}).'.format(
round(function_use_df.unique_not_user_def.mean(), 2),
function_use_df.unique_not_user_def.median()
))
fig = plt.figure(figsize = (6, 3))
plt.subplot(1,2,1)
plt.hist(function_use_df[function_use_df.unique_not_user_def < 100].unique_not_user_def, color = 'teal', bins = 20)
plt.ylim(0, 600000)
plt.title('Not user defined functions')
plt.ylabel('Number of notebooks')
plt.xlabel('Number of functions')
plt.subplot(1,2,2)
plt.hist(function_use_df[function_use_df.unique_user_def < 100].unique_user_def, color = 'navy', bins = 20)
plt.ylim(0, 600000)
plt.yticks([],[])
plt.title('User defined functions')
plt.xlabel('Number of functions')
plt.tight_layout()
plt.show()
print("{0} ({1}%) notebooks have no user defined functions.".format(
sum(function_use_df.unique_user_def == 0),
round(100*sum(function_use_df.unique_user_def == 0)/len(function_use_df))
))
```
### Is number of functions used associated with number of errors in a notebook?
```
errors_funcs_df = errors_df[['file','num_errors']].merge(function_use_df[['file','unique','unique_user_def','parameters']], on = 'file')
errors_funcs_df[['num_errors','unique']].corr()
```
The very weak correlation of 0.019 provides no evidence that the number of function calls in a notebook is associated with the number of errors in a notebook.
### Is number of functions defined associated with number of errors in a notebook?
```
errors_funcs_df[['num_errors','unique_user_def']].corr()
```
The very weak correlation of 0.01 provides no evidence that the number of user defined functions in a notebook is associated with the number of errors in a notebook.
### Is the average number of parameters associated wth the number of errors in a notebook?
```
errors_funcs_df['avg_params'] = [sum(p)/len(p) if len(p) > 0 else None for p in errors_funcs_df.parameters]
errors_funcs_df[['num_errors','avg_params']].corr()
```
The very weak correlation of -0.004 provides no evidence that the average number of parameters of function calls in a notebook is associated with the number of errors in a notebook.
## How many arguments are typical to pass into functions?
```
# 35 seconds
all_params = load_data.flatten(function_use_df.parameters)
print("Across all function calls, there are an average of {0} arguments per function.".format(
round(pd.Series(all_params).mean(), 2)
))
param_counts = pd.Series(all_params).value_counts().reset_index().rename(
columns={'index':'Arguments',0:'Count'}
)
plt.hist(pd.Series(all_params), bins = range(25), color = 'teal')
plt.xlabel('Arguments')
plt.xlim(-1, 20)
plt.ylim(0, 35000000)
plt.yticks(range(0, 35000000, 5000000), range(0, 35, 5))
plt.xticks(range(25))
plt.title('How many arguments are passed into functions?')
plt.ylabel('Number of Function Calls\n(millions)')
plt.show()
```
### Parameters of user-defined functions
#### Based on Definitions
```
# # 2 min
# start = datetime.datetime.now()
# function_defs_stats_df['avg_params'] = [
# row.parameters / row.function
# if row.function != 0 else 0
# for _, row in function_defs_stats_df.iterrows()
# ]
# end = datetime.datetime.now()
# print(end - start)
# user_mean_params = function_defs_stats_df.avg_params.mean()
# print("On average, a user defined function has {0} parameters.".format(
# round(user_mean_params, 2)
# ))
plt.hist(
function_defs_df.parameters,
color = 'teal',
bins = range(25)
)
plt.xticks(range(25))
plt.xlim(-1, 20)
plt.ylim(0, 2500000)
plt.yticks(range(0, 2500000, 500000), pd.Series(range(0, 25, 5))/10)
plt.title('How many arguments are in user defined functions?')
plt.xlabel('Arguments')
plt.ylabel('Number of Functions\n(millions)')
plt.show()
```
# Fully Bayesian inference for generalized GP models with HMC
*James Hensman, 2015-16*
Converted to candlegp *Thomas Viehmann*
It's possible to construct very flexible models with Gaussian processes by combining them with different likelihoods (sometimes called 'families' in the GLM literature). This makes inference of the GP intractable since the likelihood is not generally conjugate to the Gaussian process. The general form of the model is
$$\theta \sim p(\theta)\\f \sim \mathcal {GP}(m(x; \theta),\, k(x, x'; \theta))\\y_i \sim p(y | g(f(x_i))\,.$$
To perform inference in this model, we'll run MCMC using Hamiltonian Monte Carlo (HMC) over the function-values and the parameters $\theta$ jointly. Key to an effective scheme is rotation of the field using the Cholesky decomposition. We write
$$\theta \sim p(\theta)\\v \sim \mathcal {N}(0,\, I)\\LL^\top = K\\f = m + Lv\\y_i \sim p(y | g(f(x_i))\,.$$
Joint HMC over v and the function values is not widely adopted in the literature because of the difficulty in differentiating $LL^\top=K$. This derivative is available in modern autodiff frameworks such as PyTorch (used here), so application of HMC is relatively straightforward.
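To make the rotation concrete, here is a minimal numpy sketch (purely illustrative; the kernel and inputs are arbitrary) of drawing a function through the whitened parameterisation $f = m + Lv$:
```
import numpy as np

x = np.linspace(-3, 3, 20)[:, None]
# squared-exponential kernel with a small jitter for numerical stability
K = np.exp(-0.5 * (x - x.T) ** 2) + 1e-8 * np.eye(len(x))
L = np.linalg.cholesky(K)       # K = L L^T
v = np.random.randn(len(x))     # whitened variables, v ~ N(0, I)
f = L @ v                       # a zero-mean draw; add m for a nonzero mean
print(f.shape)
```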
### Exponential Regression example
The first illustration in this notebook is 'Exponential Regression'. The model is
$$\theta \sim p(\theta)\\f \sim \mathcal {GP}(0, k(x, x'; \theta))\\f_i = f(x_i)\\y_i \sim \mathcal {Exp} (e^{f_i})$$
We'll use MCMC to deal with both the kernel parameters $\theta$ and the latent function values $f$. First, generate a data set.
```
import sys, os
sys.path.append(os.path.join(os.getcwd(),'..'))
import candlegp
import candlegp.training.hmc
import numpy
import torch
from torch.autograd import Variable
from matplotlib import pyplot
pyplot.style.use('ggplot')
%matplotlib inline
X = Variable(torch.linspace(-3,3,20,out=torch.DoubleTensor()))
Y = Variable(torch.from_numpy(numpy.random.exponential(((X.data.sin())**2).numpy())))
```
The candlegp model for fully-Bayesian MCMC (as in GPflow) is called `GPMC`. It's constructed like any other model, but contains a parameter `V` which represents the centered values of the function.
```
#build the model
k = candlegp.kernels.Matern32(1,ARD=False).double() + candlegp.kernels.Bias(1).double()
l = candlegp.likelihoods.Exponential()
m = candlegp.models.GPMC(X[:,None], Y[:,None], k, l)
m
```
The `V` parameter already has a prior applied. We'll add priors to the parameters also (these are rather arbitrary, for illustration).
```
m.kern.kern_list[0].lengthscales.prior = candlegp.priors.Gamma(1., 1., ttype=torch.DoubleTensor)
m.kern.kern_list[0].variance.prior = candlegp.priors.Gamma(1.,1., ttype=torch.DoubleTensor)
m.kern.kern_list[1].variance.prior = candlegp.priors.Gamma(1.,1., ttype=torch.DoubleTensor)
m.V.prior = candlegp.priors.Gaussian(0.,1., ttype=torch.DoubleTensor)
m
```
Running HMC is as easy as calling `hmc_sample`. Only HMC sampling is available for the moment, and it's a relatively vanilla implementation (no NUTS, for example). There are two settings to tune: the step size (epsilon) and the maximum number of steps Lmax. Each proposal will take a random number of steps between 1 and Lmax, each of length epsilon.
We'll use the `verbose` setting so that we can see the acceptance rate.
```
# start near MAP
opt = torch.optim.LBFGS(m.parameters(), lr=1e-2, max_iter=40)
def eval_model():
obj = m()
opt.zero_grad()
obj.backward()
return obj
for i in range(50):
obj = m()
opt.zero_grad()
obj.backward()
opt.step(eval_model)
if i%5==0:
print(i,':',obj.data[0])
m
res = candlegp.training.hmc.hmc_sample(m,500,0.2,burn=50, thin=10)
xtest = torch.linspace(-4,4,100).double().unsqueeze(1)
f_samples = []
for i in range(len(res[0])):
for j,mp in enumerate(m.parameters()):
mp.set(res[j+1][i])
f_samples.append(m.predict_f_samples(Variable(xtest), 5).squeeze(0).t())
f_samples = torch.cat(f_samples, dim=0)
rate_samples = torch.exp(f_samples)
pyplot.figure(figsize=(12, 6))
line, = pyplot.plot(xtest.numpy(), rate_samples.data.mean(0).numpy(), lw=2)
pyplot.fill_between(xtest[:,0], numpy.percentile(rate_samples.data.numpy(), 5, axis=0), numpy.percentile(rate_samples.data.numpy(), 95, axis=0), color=line.get_color(), alpha = 0.2)
pyplot.plot(X.data.numpy(), Y.data.numpy(), 'kx', mew=2)
pyplot.ylim(-0.1, numpy.max(numpy.percentile(rate_samples.data.numpy(), 95, axis=0)))
import pandas
df = pandas.DataFrame(res[1:],index=[n for n,p in m.named_parameters()]).transpose()
df[:10]
df["kern.kern_list.1.variance"].apply(lambda x: x[0]).hist(bins=20)
```
# Sparse Version
Do the same with sparse:
```
Z = torch.linspace(-3,3,5).double().unsqueeze(1)
k2 = candlegp.kernels.Matern32(1,ARD=False).double() + candlegp.kernels.Bias(1).double()
l2 = candlegp.likelihoods.Exponential()
m2 = candlegp.models.SGPMC(X[:,None], Y[:,None], k2, l2, Z)
m2.kern.kern_list[0].lengthscales.prior = candlegp.priors.Gamma(1., 1., ttype=torch.DoubleTensor)
m2.kern.kern_list[0].variance.prior = candlegp.priors.Gamma(1.,1., ttype=torch.DoubleTensor)
m2.kern.kern_list[1].variance.prior = candlegp.priors.Gamma(1.,1., ttype=torch.DoubleTensor)
m2.V.prior = candlegp.priors.Gaussian(0.,1., ttype=torch.DoubleTensor)
m2
# start near MAP
opt = torch.optim.LBFGS(m2.parameters(), lr=1e-2, max_iter=40)
def eval_model():
obj = m2()
opt.zero_grad()
obj.backward()
return obj
for i in range(50):
obj = m2()
opt.zero_grad()
obj.backward()
opt.step(eval_model)
if i%5==0:
print(i,':',obj.data[0])
m2
res = candlegp.training.hmc.hmc_sample(m2,500,0.2,burn=50, thin=10)
xtest = torch.linspace(-4,4,100).double().unsqueeze(1)
f_samples = []
for i in range(len(res[0])):
for j,mp in enumerate(m2.parameters()):
mp.set(res[j+1][i])
f_samples.append(m2.predict_f_samples(Variable(xtest), 5).squeeze(0).t())
f_samples = torch.cat(f_samples, dim=0)
rate_samples = torch.exp(f_samples)
pyplot.figure(figsize=(12, 6))
line, = pyplot.plot(xtest.numpy(), rate_samples.data.mean(0).numpy(), lw=2)
pyplot.fill_between(xtest[:,0], numpy.percentile(rate_samples.data.numpy(), 5, axis=0), numpy.percentile(rate_samples.data.numpy(), 95, axis=0), color=line.get_color(), alpha = 0.2)
pyplot.plot(X.data.numpy(), Y.data.numpy(), 'kx', mew=2)
pyplot.plot(m2.Z.get().data.numpy(),numpy.zeros(m2.num_inducing),'o')
pyplot.ylim(-0.1, numpy.max(numpy.percentile(rate_samples.data.numpy(), 95, axis=0)))
```
# Batch Normalization
Training deep models is difficult and getting them
to converge in a reasonable amount of time can be tricky.
In this section, we describe batch normalization,
one popular and effective technique
that has been found to accelerate the convergence of deep nets
and ([together with residual blocks, which we cover next](resnet.md))
has recently enabled practitioners
to routinely train networks with over 100 layers.
## Training Deep Networks
Let's review some of the practical challenges when training deep networks.
1. Data preprocessing often proves to be a crucial consideration for effective statistical modeling. Recall our application of deep networks to [predicting house prices](../chapter_deep-learning-basics/kaggle-house-price.md). In that example, we standardized our input features to each have a mean of *zero* and variance of *one*. Standardizing input data typically makes it easier to train models since parameters are a-priori at a similar scale.
1. For a typical MLP or CNN, as we train the model, the activations in intermediate layers of the network may assume different orders of magnitude (both across nodes in the same layer, and over time due to updating the model's parameters). The authors of the batch normalization technique postulated that this drift in the distribution of activations could hamper the convergence of the network. Intuitively, we might conjecture that if one layer has activation values that are 100x that of another layer, we might need to adjust learning rates adaptively per layer (or even per node within a layer).
1. Deeper networks are complex and easily capable of overfitting. This means that regularization becomes more critical. Empirically, we note that even with dropout, models can overfit badly and we might benefit from other regularization heuristics.
In 2015, [Ioffe and Szegedy introduced Batch Normalization (BN)](https://arxiv.org/abs/1502.03167), a clever heuristic
that has proved immensely useful for improving the reliability
and speed of convergence when training deep models.
In each training iteration, BN normalizes
the activations of each hidden layer node
(on each layer where it is applied)
by subtracting its mean and dividing by its standard deviation,
estimating both based on the current minibatch.
Note that if our batch size was $1$,
we wouldn't be able to learn anything
because during training, every hidden node would take value $0$.
However, with large enough minibatches,
the approach proves effective and stable.
In a nutshell, the idea in Batch Normalization is
to transform the activation at a given layer from $\mathbf{x}$ to
$$\mathrm{BN}(\mathbf{x}) = \mathbf{\gamma} \odot \frac{\mathbf{x} - \hat{\mathbf{\mu}}}{\hat\sigma} + \mathbf{\beta}$$
Here, $\hat{\mathbf{\mu}}$ is the estimate of the mean
and $\hat{\mathbf{\sigma}}$ is the estimate of the standard deviation.
The result is that the activations are approximately rescaled
to zero mean and unit variance.
Since this may not be quite what we want,
we allow for a coordinate-wise scaling coefficient $\mathbf{\gamma}$
and an offset $\mathbf{\beta}$.
Consequently, the activations for intermediate layers
cannot diverge any longer: we are actively rescaling them back
to a given order of magnitude via $\hat{\mathbf{\mu}}$ and $\hat\sigma$.
Intuitively, it is hoped that this normalization allows us
to be more aggressive in picking large learning rates.
To address the fact that in some cases the activations
may actually *need* to differ from standardized data,
BN also introduces scaling coefficients $\mathbf{\gamma}$
and an offset $\mathbf{\beta}$.
In principle, we might want to use all of our training data
to estimate the mean and variance.
However, the activations corresponding to each example
change each time we update our model.
To remedy this problem, BN uses only the current minibatch
for estimating $\hat{\mathbf{\mu}}$ and $\hat\sigma$.
It is precisely due to this fact
that we normalize based only on the *current batch*
that *batch normalization* derives its name.
To indicate which minibatch $\mathcal{B}$ we draw this from,
we denote the quantities with $\hat{\mathbf{\mu}}_\mathcal{B}$
and $\hat\sigma_\mathcal{B}$.
$$\hat{\mathbf{\mu}}_\mathcal{B} \leftarrow \frac{1}{|\mathcal{B}|} \sum_{\mathbf{x} \in \mathcal{B}} \mathbf{x}
\text{ and }
\hat{\mathbf{\sigma}}_\mathcal{B}^2 \leftarrow \frac{1}{|\mathcal{B}|} \sum_{\mathbf{x} \in \mathcal{B}} (\mathbf{x} - \hat{\mathbf{\mu}}_{\mathcal{B}})^2 + \epsilon$$
Note that we add a small constant $\epsilon > 0$ to the variance estimate
to ensure that we never end up dividing by zero,
even in cases where the empirical variance estimate might vanish by accident.
The estimates $\hat{\mathbf{\mu}}_\mathcal{B}$
and $\hat{\mathbf{\sigma}}_\mathcal{B}$ counteract the scaling issue
by using unbiased but noisy estimates of mean and variance.
Normally we would consider this a problem.
After all, each minibatch has different data,
different labels and with it, different activations, predictions and errors. As it turns out, this is actually beneficial.
This natural variation appears to act as a form of regularization,
conferring benefits (as observed empirically) in mitigating overfitting.
In other recent preliminary research, [Teye, Azizpour and Smith, 2018](https://arxiv.org/pdf/1802.06455.pdf) and [Luo et al, 2018](https://arxiv.org/pdf/1809.00846.pdf) relate the properties of BN
to Bayesian Priors and penalties respectively.
In particular, this sheds some light on the puzzle why BN works best
for moderate sizes of minibatches in the range 50-100.
We are now ready to take a look at how batch normalization works in practice.
## Batch Normalization Layers
The batch normalization methods for fully-connected layers
and convolutional layers are slightly different.
This is due to the dimensionality of the data
generated by convolutional layers.
We discuss both cases below.
Note that one of the key differences between BN and other layers
is that BN operates on a full minibatch at a time
(otherwise it cannot compute the mean and variance parameters per batch).
### Fully-Connected Layers
Usually we apply the batch normalization layer
between the affine transformation and the activation function
in a fully-connected layer.
In the following, we denote by $\mathbf{u}$ the input
and by $\mathbf{x} = \mathbf{W}\mathbf{u} + \mathbf{b}$ the output
of the linear transform.
This yields the following variant of BN:
$$\mathbf{y} = \phi(\mathrm{BN}(\mathbf{x})) = \phi(\mathrm{BN}(\mathbf{W}\mathbf{u} + \mathbf{b}))$$
Recall that mean and variance are computed
on the *same* minibatch $\mathcal{B}$
on which the transformation is applied.
Also recall that the scaling coefficient $\mathbf{\gamma}$
and the offset $\mathbf{\beta}$ are parameters that need to be learned.
They ensure that the effect of batch normalization
can be neutralized as needed.
### Convolutional Layers
For convolutional layers, batch normalization occurs
after the convolution computation
and before the application of the activation function.
If the convolution computation outputs multiple channels,
we need to carry out batch normalization
for *each* of the outputs of these channels,
and each channel has an independent scale parameter and shift parameter,
both of which are scalars.
Assume that there are $m$ examples in the mini-batch.
On a single channel, we assume that the height and width
of the convolution computation output are $p$ and $q$, respectively.
We need to carry out batch normalization
for $m \times p \times q$ elements in this channel simultaneously.
While carrying out the standardization computation for these elements,
we use the same mean and variance.
In other words, we use the means and variances of the $m \times p \times q$ elements in this channel rather than one per pixel.
### Batch Normalization During Prediction
At prediction time, we might not have the luxury
of computing offsets per batch—we
might be required to make one prediction at a time.
Second, the uncertainty in $\mathbf{\mu}$ and $\mathbf{\sigma}$
arising from a minibatch is undesirable once we've trained the model.
One way to mitigate this is to compute more stable estimates
on a larger set for once (e.g. via a moving average)
and then fix them at prediction time.
Consequently, BN behaves differently during training and at test time
(recall that dropout also behaves differently at train and test times).
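For PyTorch's built-in layers (used in the concise implementation below), this switch is controlled by the module's training flag; a minimal standalone sketch:
```
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(4)
x = torch.randn(8, 4)

bn.train()   # training mode: uses batch statistics and updates the running estimates
y_train = bn(x)

bn.eval()    # evaluation mode: uses the stored running mean and variance
with torch.no_grad():
    y_eval = bn(x)
print(y_train.shape, y_eval.shape)
```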
## Implementation from Scratch
Next, we will implement the batch normalization layer with `torch.Tensor` from scratch:
```
import sys
sys.path.insert(0, '..')
import d2l
import torch
import torch.nn as nn
def batch_norm(X, gamma, beta, moving_mean, moving_var, eps, momentum):
# Use torch.is_grad_enabled() to determine whether the current mode is training mode or
# prediction mode
if not torch.is_grad_enabled():
# If it is the prediction mode, directly use the mean and variance
# obtained from the incoming moving average
X_hat = (X - moving_mean) / torch.sqrt(moving_var + eps)
else:
assert len(X.shape) in (2, 4)
if len(X.shape) == 2:
# When using a fully connected layer, calculate the mean and
# variance on the feature dimension
mean = X.mean(dim=0)
var = ((X - mean) ** 2).mean(dim=0)
else:
# When using a two-dimensional convolutional layer, calculate the
# mean and variance on the channel dimension (axis=1). Here we
# need to maintain the shape of X, so that the broadcast operation
# can be carried out later
mean = X.mean(dim=(0, 2, 3), keepdim=True)
var = ((X - mean) ** 2).mean(dim=(0, 2, 3), keepdim=True)
# In training mode, the current mean and variance are used for the
# standardization
X_hat = (X - mean) / torch.sqrt(var + eps)
# Update the mean and variance of the moving average
moving_mean = momentum * moving_mean + (1.0 - momentum) * mean
moving_var = momentum * moving_var + (1.0 - momentum) * var
Y = gamma * X_hat + beta # Scale and shift
return Y, moving_mean, moving_var
```
Now, we can customize a `BatchNorm` layer.
This retains the scale parameter `gamma`
and the shift parameter `beta`
involved in gradient finding and iteration,
and it also maintains the mean and variance
obtained from the moving average,
so that they can be used during model prediction.
The `num_features` parameter required by the `BatchNorm` instance
is the number of outputs for a fully-connected layer
and the number of output channels for a convolutional layer.
The `num_dims` parameter also required by this instance
is 2 for a fully-connected layer and 4 for a convolutional layer.
Besides the algorithm per se, also note
the design pattern in implementing layers.
Typically one defines the math in a separate function, say `batch_norm`.
This is then integrated into a custom layer
that mostly focuses on bookkeeping,
such as moving data to the right device context,
ensuring that variables are properly initialized,
keeping track of the running averages for mean and variance, etc.
That way we achieve a clean separation of math and boilerplate code.
We have to specify the number of features throughout.
```
class BatchNorm(nn.Module):
def __init__(self, num_features, num_dims, **kwargs):
super(BatchNorm, self).__init__(**kwargs)
if num_dims == 2:
shape = (1, num_features)
else:
shape = (1, num_features, 1, 1)
# The scale parameter and the shift parameter involved in gradient
# finding and iteration are initialized to 1 and 0 respectively
self.gamma = nn.Parameter(torch.ones(shape))
self.beta = nn.Parameter(torch.zeros(shape))
# All the variables not involved in gradient finding and iteration are
# initialized to 0 on the CPU
self.moving_mean = torch.zeros(shape)
self.moving_var = torch.zeros(shape)
def forward(self, X):
# If X is not on the CPU, copy moving_mean and moving_var to the
# device where X is located
if self.moving_mean.device != X.device:
self.moving_mean = self.moving_mean.to(X.device)
self.moving_var = self.moving_var.to(X.device)
# Save the updated moving_mean and moving_var
Y, self.moving_mean, self.moving_var = batch_norm(
X, self.gamma, self.beta, self.moving_mean,
self.moving_var, eps=1e-5, momentum=0.9)
return Y
```
## Use a Batch Normalization LeNet
Next, we will modify the LeNet model
in order to apply the batch normalization layer.
We add the batch normalization layer
after all the convolutional layers and after all fully-connected layers.
As discussed, we add it before the activation layer.
```
class Flatten(nn.Module):
def forward(self, input):
return input.view(input.size(0), -1)
net = nn.Sequential(nn.Conv2d(1, 6, kernel_size=5),
BatchNorm(6, num_dims=4),
nn.Sigmoid(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(6, 16, kernel_size=5),
BatchNorm(16, num_dims=4),
nn.Sigmoid(),
nn.MaxPool2d(kernel_size=2, stride=2),
Flatten(),
nn.Linear(16*4*4, 120),
BatchNorm(120, num_dims=2),
nn.Sigmoid(),
nn.Linear(120, 84),
BatchNorm(84, num_dims=2),
nn.Sigmoid(),
nn.Linear(84, 10))
```
Next we train the modified model, again on Fashion-MNIST.
The code is virtually identical to that in previous steps.
The main difference is the considerably larger learning rate.
```
lr, num_epochs, batch_size, device = 1, 5, 256, d2l.try_gpu()
#Initialization of Weights
def init_weights(m):
if type(m) == nn.Linear or type(m) == nn.Conv2d:
torch.nn.init.xavier_uniform_(m.weight)
net.apply(init_weights)
criterion = nn.CrossEntropyLoss()
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
d2l.train_ch5(net, train_iter, test_iter, criterion, num_epochs, batch_size, device, lr)
```
Let's have a look at the scale parameter `gamma`
and the shift parameter `beta` learned
from the first batch normalization layer.
```
list(net.children())[1].gamma.reshape((-1,)), list(net.children())[1].beta.reshape((-1,))
```
## Concise Implementation
Compared with the `BatchNorm` class we just defined ourselves,
the batch normalization classes provided by PyTorch's `nn.modules.batchnorm` module are easier to use.
PyTorch provides `nn.BatchNorm1d` for fully-connected inputs (num_dims = 2) and `nn.BatchNorm2d` for convolutional inputs (num_dims = 4);
the number of features is passed as an argument, just as in our custom layer.
Otherwise the code looks virtually identical to the implementation above.
```
net = nn.Sequential(nn.Conv2d(1, 6, kernel_size=5),
nn.BatchNorm2d(6),
nn.Sigmoid(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(6, 16, kernel_size=5),
nn.BatchNorm2d(16),
nn.Sigmoid(),
nn.MaxPool2d(kernel_size=2, stride=2),
Flatten(),
nn.Linear(256, 120),
nn.BatchNorm1d(120),
nn.Sigmoid(),
nn.Linear(120, 84),
nn.BatchNorm1d(84),
nn.Sigmoid(),
nn.Linear(84, 10))
```
Use the same hyper-parameters to carry out the training.
Note that as usual, the Pytorch variant runs much faster
since its code has been compiled to C++/CUDA
vs our custom implementation,
which must be interpreted by Python.
```
lr, num_epochs, batch_size, device = 1, 5, 256, d2l.try_gpu()
#Initialization of Weights
def init_weights(m):
if type(m) == nn.Linear or type(m) == nn.Conv2d:
torch.nn.init.xavier_uniform_(m.weight)
net.apply(init_weights)
criterion = nn.CrossEntropyLoss()
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
d2l.train_ch5(net, train_iter, test_iter, criterion, num_epochs, batch_size, device, lr)
```
## Controversy
Intuitively, batch normalization is thought to somehow
make the optimization landscape smoother.
However, we must be careful to distinguish between
speculative intuitions and true explanations
for the phenomena that we observe when training deep models.
Recall that we do not even know why simpler
deep neural networks (MLPs and conventional CNNs) generalize so well.
Despite dropout and L2 regularization,
they remain too flexible to admit
conventional learning-theoretic generalization guarantees.
In the original paper proposing batch normalization,
the authors, in addition to introducing a powerful and useful tool
offered an explanation for why it works:
by reducing *internal covariate shift*.
Presumably by *internal covariate shift* the authors
meant something like the intuition expressed above---the
notion that the distribution of activations changes
over the course of training.
However there were two problems with this explanation:
(1) This drift is very different from *covariate shift*,
rendering the name a misnomer.
(2) The explanation remains ill-defined (and thus unproven)---rendering *why precisely this technique works* an open question.
Throughout this book we aim to convey the intuitions that practitioners
use to guide their development of deep neural networks.
However, it's important to separate these guiding heuristics
from established scientific fact.
Eventually, when you master this material
and start writing your own research papers
you will want to be clear to delineate
between technical claims and hunches.
Following the success of batch normalization,
its explanation via *internal covariate shift*
became a hot topic that has been revisited several times
both in the technical literature and in the broader discourse
about how machine learning research ought to be presented.
Ali Rahimi popularly raised this issue during a memorable
speech while accepting a Test of Time Award at the NeurIPS conference in 2017
and the issue was revisited in a recent position paper
on troubling trends in machine learning
([Lipton et al, 2018](https://arxiv.org/abs/1807.03341)).
In the technical literature other authors
([Santurkar et al., 2018](https://arxiv.org/abs/1805.11604))
have proposed alternative explanations for the success of BN,
some claiming that BN's success comes despite exhibiting behavior that is in some ways opposite to those claimed in the original paper.
## Summary
* During model training, batch normalization continuously adjusts the intermediate output of the neural network by utilizing the mean and standard deviation of the mini-batch, so that the values of the intermediate output in each layer throughout the neural network are more stable.
* The batch normalization methods for fully connected layers and convolutional layers are slightly different.
* Like a dropout layer, batch normalization layers have different computation results in training mode and prediction mode.
* Batch Normalization has many beneficial side effects, primarily that of regularization. On the other hand, the original motivation of reducing covariate shift seems not to be a valid explanation.
## Exercises
1. Can we remove the fully connected affine transformation before the batch normalization or the bias parameter in convolution computation?
* Find an equivalent transformation that applies prior to the fully connected layer.
* Is this reformulation effective? Why (not)?
1. Compare the learning rates for LeNet with and without batch normalization.
* Plot the decrease in training and test error.
* What about the region of convergence? How large can you make the learning rate?
1. Do we need Batch Normalization in every layer? Experiment with it.
1. Can you replace Dropout by Batch Normalization? How does the behavior change?
1. Fix the coefficients `beta` and `gamma` (for example, set `requires_grad=False` on them so that no gradients are computed), and observe and analyze the results.
1. Review the PyTorch documentation for `_BatchNorm` to see the other applications for Batch Normalization.
1. Research ideas - think of other normalization transforms that you can apply? Can you apply the probability integral transform? How about a full rank covariance estimate?
## References
[1] Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
# Co-refinement of multiple contrast DMPC datasets in *refnx*
This Jupyter notebook demonstrates the utility of the *refnx* package for analysis of neutron reflectometry data. Specifically:
- the co-refinement of three contrast variation datasets of a DMPC (1,2-dimyristoyl-sn-glycero-3-phosphocholine) bilayer measured at the solid-liquid interface with a common model
- the use of the `LipidLeaflet` component to parameterise the model in terms of physically relevant parameters
- the use of Bayesian Markov Chain Monte Carlo (MCMC) to investigate the Posterior distribution of the curvefitting system.
- the intrinsic usefulness of Jupyter notebooks to facilitate reproducible research in scientific data analysis
<img src="DMPC.png">
The images produced in this notebook are used directly in production of the *refnx* paper.
Jupyter notebooks are executable documents that can be distributed, enabling others to reproduce the data analysis contained in the document. The *refnx* documentation at https://refnx.readthedocs.io/en/latest/index.html can be consulted for further details.
The first step in most Python scripts is to import modules and functions that are going to be used
```
# use matplotlib for plotting
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os.path
import refnx, scipy
# the analysis module contains the curvefitting engine
from refnx.analysis import CurveFitter, Objective, Parameter, GlobalObjective, process_chain
# the reflect module contains functionality relevant to reflectometry
from refnx.reflect import SLD, ReflectModel, Structure, LipidLeaflet
# the ReflectDataset object will contain the data
from refnx.dataset import ReflectDataset
```
In order for the analysis to be exactly reproducible the same package versions must be used. The *conda* packaging manager, and *pip*, can be used to ensure this is the case.
```
# version numbers used in this analysis
refnx.version.version, scipy.version.version
```
The `ReflectDataset` class is used to represent a dataset. They can be constructed by supplying a filename
```
data_d2o = ReflectDataset('c_PLP0016596.dat')
data_d2o.name = "d2o"
data_hdmix = ReflectDataset('c_PLP0016601.dat')
data_hdmix.name = "hdmix"
data_h2o = ReflectDataset('c_PLP0016607.dat')
data_h2o.name = "h2o"
```
A `SLD` object is used to represent the Scattering Length Density of a material. It has `real` and `imag` attributes because the SLD is a complex number, with the imaginary part accounting for absorption. The units of SLD are $10^{-6} \mathring{A}^{-2}$
The `real` and `imag` attributes are `Parameter` objects. These `Parameter` objects contain the: parameter value, whether it allowed to vary, any interparameter constraints, and bounds applied to the parameter. The bounds applied to a parameter are probability distributions which encode the log-prior probability of the parameter having a certain value.
```
si = SLD(2.07 + 0j)
sio2 = SLD(3.47 + 0j)
# the following represent the solvent contrasts used in the experiment
d2o = SLD(6.36 + 0j)
h2o = SLD(-0.56 + 0j)
hdmix = SLD(2.07 + 0j)
# We want the `real` attribute parameter to vary in the analysis, and we want to apply
# uniform bounds. The `setp` method of a Parameter is a way of changing many aspects of
# Parameter behaviour at once.
d2o.real.setp(vary=True, bounds=(6.1, 6.36))
d2o.real.name='d2o SLD'
```
The `LipidLeaflet` class is used to describe a single lipid leaflet in our interfacial model. A leaflet consists of a head and tail group region. Since we are studying a bilayer then inner and outer `LipidLeaflet`'s are required.
```
# Parameter for the area per molecule each DMPC molecule occupies at the surface. We
# use the same area per molecule for the inner and outer leaflets.
apm = Parameter(56, 'area per molecule', vary=True, bounds=(52, 65))
# the sum of scattering lengths for the lipid head and tail in Angstrom.
b_heads = Parameter(6.01e-4, 'b_heads')
b_tails = Parameter(-2.92e-4, 'b_tails')
# the volume occupied by the head and tail groups in cubic Angstrom.
v_heads = Parameter(319, 'v_heads')
v_tails = Parameter(782, 'v_tails')
# the head and tail group thicknesses.
inner_head_thickness = Parameter(9, 'inner_head_thickness', vary=True, bounds=(4, 11))
outer_head_thickness = Parameter(9, 'outer_head_thickness', vary=True, bounds=(4, 11))
tail_thickness = Parameter(14, 'tail_thickness', vary=True, bounds=(10, 17))
# finally construct a `LipidLeaflet` object for the inner and outer leaflets.
# Note that here the inner and outer leaflets use the same area per molecule,
# same tail thickness, etc, but this is not necessary if the inner and outer
# leaflets are different.
inner_leaflet = LipidLeaflet(apm,
b_heads, v_heads, inner_head_thickness,
b_tails, v_tails, tail_thickness,
3, 3)
# we reverse the monolayer for the outer leaflet because the tail groups face upwards
outer_leaflet = LipidLeaflet(apm,
b_heads, v_heads, outer_head_thickness,
b_tails, v_tails, tail_thickness,
3, 0, reverse_monolayer=True)
```
The `Slab` Component represents a layer of uniform scattering length density of a given thickness in our interfacial model. Here we make `Slabs` from `SLD` objects, but other approaches are possible.
```
# Slab constructed from SLD object.
sio2_slab = sio2(15, 3)
sio2_slab.thick.setp(vary=True, bounds=(2, 30))
sio2_slab.thick.name = 'sio2 thickness'
sio2_slab.rough.setp(vary=True, bounds=(0, 7))
sio2_slab.rough.name = 'sio2 roughness'
sio2_slab.vfsolv.setp(0.1, vary=True, bounds=(0., 0.5))
sio2_slab.vfsolv.name = 'sio2 solvation'
solv_roughness = Parameter(3, 'bilayer/solvent roughness')
solv_roughness.setp(vary=True, bounds=(0, 5))
```
Once all the `Component`s have been constructed we can chain them together to compose a `Structure` object. The `Structure` object represents the interfacial structure of our system. We create different `Structure`s for each contrast. It is important to note that each of the `Structure`s share many components, such as the `LipidLeaflet` objects. This means that parameters used to construct those components are shared between all the `Structure`s, which enables co-refinement of multiple datasets. An alternate way to carry this out would be to apply constraints to underlying parameters, but this way is clearer. Note that the final component for each structure is a `Slab` created from the solvent `SLD`s, we give those slabs a zero thickness.
```
s_d2o = si | sio2_slab | inner_leaflet | outer_leaflet | d2o(0, solv_roughness)
s_hdmix = si | sio2_slab | inner_leaflet | outer_leaflet | hdmix(0, solv_roughness)
s_h2o = si | sio2_slab | inner_leaflet | outer_leaflet | h2o(0, solv_roughness)
```
The `Structure`s created in the previous step describe the interfacial structure, these structures are used to create `ReflectModel` objects that know how to apply resolution smearing, scaling factors and background.
```
model_d2o = ReflectModel(s_d2o)
model_hdmix = ReflectModel(s_hdmix)
model_h2o = ReflectModel(s_h2o)
model_d2o.scale.setp(vary=True, bounds=(0.9, 1.1))
model_d2o.bkg.setp(vary=True, bounds=(-5e-7, 1e-6))
model_hdmix.bkg.setp(vary=True, bounds=(-5e-7, 1e-6))
model_h2o.bkg.setp(vary=True, bounds=(-5e-7, 1e-6))
```
An `Objective` is constructed from a `ReflectDataset` and `ReflectModel`. Amongst other things `Objective`s can calculate chi-squared, log-likelihood probability, log-prior probability, etc. We then combine all the individual `Objective`s into a `GlobalObjective`.
```
objective_d2o = Objective(model_d2o, data_d2o)
objective_hdmix = Objective(model_hdmix, data_hdmix)
objective_h2o = Objective(model_h2o, data_h2o)
global_objective = GlobalObjective([objective_d2o, objective_hdmix, objective_h2o])
```
A `CurveFitter` object can perform least squares fitting, or MCMC sampling on the `Objective` used to construct it.
```
fitter = CurveFitter(global_objective, nwalkers=200)
```
We initialise the MCMC walkers by jittering around the best fit. Other modes of initialisation are possible: from a supplied covariance matrix, by sampling from the prior distributions, or by supplying known positions from an array.
```
# we seed the numpy random number generator to get reproducible numbers
# during walker initialisation
np.random.seed(1)
fitter.initialise('jitter')
```
In MCMC sampling a burn-in period is used to allow the walkers to become more representative of the distribution they are sampling. Here we do a number of samples, then discard them. The last chain position is kept to provide a starting point for the 'production' run.
```
# set random_state for reproducible pseudo-random number streams
fitter.sample(1000, random_state=321);
```
The shape of the chain containing the samples is `(number_steps, number_walkers, number_parameters)`
```
print(fitter.chain.shape)
```
At the start of the sampling run the walkers in the MCMC ensemble probably won't be distributed according to the distribution they are sampling. We can discard, or burn, the initial steps. Let's have a look at the steps for a parameter (e.g. the area-per-molecule) to see if they've reached equilibrium (i.e. distributed around a mean).
```
for i in range(200):
plt.plot(fitter.chain[:, i, 5].flat)
```
Although it's hard to tell from this graph, it seems that ~500 steps are enough for equilibration, so let's discard these initial steps that acted as the burn-in period.
```
fitter.reset()
```
Now we do a production sampling run.
In this example the total number of samples is the number of walkers (200 by default) multiplied by the number of steps: 8000 * 200 = 1 600 000. The sampling engine automatically makes full use of the total number of processing cores available to it, but this is specifiable. In addition MPI can be used, which makes it useful for sampling on a cluster - MCMC is embarrassingly parallel.
Samples can be saved to file as they are acquired, useful for checkpointing sampling state.
```
fitter.sample(8000, random_state=123);
```
However, successive steps are correlated with previous steps to some degree, and the chain should be thinned to ensure the samples are independent. Let's see how much we should thin by looking at the autocorrelation of a parameter.
```
plt.plot(fitter.acf()[:, 5])
plt.xlim(0, 1000);
```
For the sampling done here thinning by 400 should be sufficient.
```
process_chain(global_objective, fitter.chain, nthin=400);
```
The sampling gives each varying parameter its own MCMC chain, which can be processed to give relevant statistics, or histogrammed, etc. The relationship between chains encodes the covariance of all the parameters. The chains are automatically processed to calculate the median of all the samples, and the half width of the [15.87, 84.13] percentiles. These two values are taken to be the 'fitted' parameter value and its standard deviation. Each Parameter is set to this median value, and given an `stderr` attribute.
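As a minimal illustration of this statistic (using a synthetic chain, not the chain sampled above):
```
import numpy as np

# synthetic 1-D chain for a single parameter, for illustration only
chain = np.random.normal(loc=56.0, scale=1.5, size=4000)
median = np.percentile(chain, 50)
stderr = 0.5 * (np.percentile(chain, 84.13) - np.percentile(chain, 15.87))
print(median, stderr)
```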
We can see those statistics by printing the objective.
```
print(global_objective)
```
Now let's see how the 'fitted' models compare to the data. We could use `global_objective.plot()`, but because we want to do a bit more tweaking for the graphics (such as vertical offsets) we're going to create the graph manually. We're also going to examine the spread in the posterior distribution.
```
hdmix_mult = 0.01
h2o_mult = 0.1
# the data
plt.errorbar(data_d2o.x, data_d2o.y, data_d2o.y_err,
label='$\mathregular{D_2O}$', ms=4, marker='o', lw=0, elinewidth=1)
plt.errorbar(data_h2o.x, data_h2o.y * h2o_mult, data_h2o.y_err * h2o_mult,
label='$\mathregular{H_2O}$', ms=4, marker='^', lw=0, elinewidth=1)
plt.errorbar(data_hdmix.x, data_hdmix.y * hdmix_mult, data_hdmix.y_err * hdmix_mult,
label='$\mathregular{HD_{mix}}$', ms=4, marker='^', lw=0, elinewidth=1)
# the median of the posterior
plt.plot(data_d2o.x, objective_d2o.generative(), color='r', zorder=20)
plt.plot(data_hdmix.x, objective_hdmix.generative() * hdmix_mult, color='r', zorder=20)
plt.plot(data_h2o.x, objective_h2o.generative() * h2o_mult, color='r', zorder=20)
# plot the spread of the fits for the different datasets
gen = global_objective.pgen(500)
save_pars = np.copy(global_objective.parameters)
for i in range(500):
global_objective.setp(next(gen))
plt.plot(data_d2o.x, objective_d2o.generative(),
color='k', alpha=0.02, zorder=10)
plt.plot(data_hdmix.x, objective_hdmix.generative() * hdmix_mult,
color='k', alpha=0.02, zorder=10)
plt.plot(data_h2o.x, objective_h2o.generative() * h2o_mult,
color='k', alpha=0.02, zorder=10)
# put back the saved parameters
global_objective.setp(save_pars)
ax = plt.gca()
ax.text(-0.04, 1e-11, 'a)')
plt.legend()
plt.yscale('log')
plt.ylabel('Reflectivity')
plt.xlabel('Q /$\AA^{-1}$')
plt.ylim(1e-10, 2);
plt.xlim(0.004, 0.3)
plt.savefig('global_fit.pdf')
```
We can investigate the posterior distribution by a corner plot, this reveals interparameter covariances.
```
global_objective.corner();
plt.savefig('corner.pdf')
```
The variation in scattering length density profiles can be visualised by a little bit of processing. This enables one to see what range of SLD profiles are statistically possible.
```
saved_params = np.array(objective_d2o.parameters)
z, median_sld = s_d2o.sld_profile()
for pvec in objective_d2o.pgen(ngen=500):
objective_d2o.setp(pvec)
zs, sld = s_d2o.sld_profile()
plt.plot(zs, sld, color='k', alpha=0.05)
# put back saved_params
objective_d2o.setp(saved_params)
ax = plt.gca()
ax.text(-50, -1.6, 'b)')
plt.plot(z, median_sld, lw=2, color='r');
plt.ylabel('scattering length density / $10^{-6}\AA^{-2}$')
plt.xlabel('distance / $\AA$')
plt.savefig('d2o_sld_spread.pdf')
```
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.lines as mlines
import matplotlib.patches as mpatches
import datetime
from typing import Union
sns.set_theme(style="whitegrid")
```
## Analyze CS Data
```
df = pd.read_csv("data/cs.csv", index_col=0)
def vec_dt_replace(series, year=None, month=None, day=None):
return pd.to_datetime(
{'year': series.dt.year if year is None else year,
'month': series.dt.month if month is None else month,
'day': series.dt.day if day is None else day})
df_1720 = df[df['season'].isin(['F16', 'F17', 'F18', 'F19', 'F20'])].copy()
df.columns
```
### Make it easier to filter through programs using the decision, the institution, gre, gpa, etc.
```
def create_filter(df,
degree: str = None,
decisionfin: Union[str, list] = None,
institution: Union[str, list] = None,
gpa: bool = False,
gre: bool = False):
filt = [True] * len(df)
if degree is not None:
filt = (filt) & (df['degree'] == degree)
if decisionfin is not None:
if isinstance(decisionfin, str):
filt = (filt) & (df['decisionfin'].str.contains(decisionfin, case=False))
elif isinstance(decisionfin, list):
filt = (filt) & (df['decisionfin'].isin(decisionfin))
if institution is not None:
if isinstance(institution, str):
filt = (filt) & (df['institution'].str.contains(institution, case=False))
elif isinstance(institution, list):
filt = (filt) & (df['institution'].isin(institution))
if gpa:
filt = (filt) & (~df['gpafin'].isna()) & (df['gpafin'] <= 4)
if gre:
filt = (filt) & (~df['grev'].isna()) & (~df['grem'].isna()) & (~df['grew'].isna()) & (df['new_gre'])
return filt
```
### Actual function that generates the images
```
def get_uni_stats(u_df, search: str = None, title: str = None, degree: str = 'MS', field: str = 'CS', hue='decisionfin'):
title = title if title is not None else search
if degree not in ['MS', 'PhD', 'MEng', 'MFA', 'MBA', 'Other']:
degree = 'MS'
# Clean up the data a bit, this probably needs a lot more work
# Maybe its own method, too
u_df = u_df.copy()
u_df = u_df[~u_df['decdate'].isna()]
u_df.loc[:,'year'] = u_df['decdate'].str[-4:].astype(int)
u_df = u_df[(u_df['year'] > 2000) & (u_df['year'] < datetime.datetime.now().year)]
# Normalize to 2020. 2020 is a good choice because it's recent AND it's a leap year
u_df.loc[:, 'uniform_dates'] = vec_dt_replace(pd.to_datetime(u_df['decdate']), year=2020)
# Get december dates to be from "2019" so Fall decisions that came in Dec come before the Jan ones.
dec_filter = u_df['uniform_dates'] > datetime.datetime.strptime('2020-11-30', '%Y-%m-%d')
u_df.loc[dec_filter, 'uniform_dates'] = vec_dt_replace(pd.to_datetime(u_df[dec_filter]['uniform_dates']), year=2019)
# Trying to pick red/green colorblind-friendly colors
flatui = ["#2eff71", "#ff0000", "#0000ff"]
sns.set_palette(flatui)
acc_patch = mpatches.Patch(color='#2eff7180')
rej_patch = mpatches.Patch(color='#ff000080')
int_patch = mpatches.Patch(color='#0000ff80')
acc_line = mlines.Line2D([], [], color='#2eff71')
rej_line = mlines.Line2D([], [], color='#ff0000')
int_line = mlines.Line2D([], [], color='#0000ff')
hue_order = ['Accepted', 'Rejected', 'Interview']
if hue == 'status':
hue_order = ['American', 'International', 'International with US Degree', 'Other']
# This generates 4 graphs, so let's make it a 2x2 grid
fig, ax = plt.subplots(2,2)
fig.set_size_inches(20, 20)
# Timeline stats
mscs_filt = create_filter(u_df, degree, ['Accepted', 'Rejected', 'Interview'], institution = search)
mscs_filt = (mscs_filt) & (u_df['uniform_dates'].astype(str) <= '2020-06-00')
sns.histplot(data=u_df[mscs_filt],
x='uniform_dates',
hue=hue,
cumulative=True,
discrete=False,
element='step',
fill=False,
hue_order=hue_order,
ax=ax[0][0])
locator = mdates.AutoDateLocator(minticks=3, maxticks=7)
formatter = mdates.ConciseDateFormatter(locator)
formatter.formats = ['%b', # years
'%b', # months
'%d', # days
'%H:%M', # hrs
'%H:%M', # min
'%S.%f', ] # secs
# Hide the year
formatter.zero_formats = ['%b', # years
'%b', # months
'%d', # days
'%H:%M', # hrs
'%H:%M', # min
'%S.%f', ] # secs
# Hide the year
formatter.offset_formats = ['', # years
'', # months
'%d', # days
'%H:%M', # hrs
'%H:%M', # mins
'%S.%f', ] # secs
ax[0][0].xaxis.set_major_locator(locator)
ax[0][0].xaxis.set_major_formatter(formatter)
h, l = ax[0][0].get_legend_handles_labels()
# Add frequency counts
if h is not None and l is not None:
if hue == 'decisionfin':
counts = u_df[mscs_filt][hue].value_counts().reindex(hue_order)
l = [f'{value} (n={count})' for value, count in counts.iteritems()]
ax[0][0].legend(handles=[acc_line, rej_line, int_line], labels=l, title="Decision")
ax[0][0].set_xlabel("Date")
ax[0][0].set_ylabel("Count")
ax[0][0].set_title("Cumsum of decisions")
# Get GPA stats
mscs_filt = create_filter(u_df, degree, ['Accepted', 'Rejected'], institution = search, gpa = True)
sns.histplot(data=u_df[mscs_filt],
x='gpafin',
hue=hue,
hue_order=hue_order,
bins=20,
ax=ax[0][1])
ax[0][1].set_xlabel("GPA")
ax[0][1].set_ylabel("Count")
ax[0][1].set_title("GPA Distribution")
# Add frequency counts
h, l = ax[0][1].get_legend_handles_labels()
if h is not None and l is not None:
if hue == 'decisionfin':
counts = u_df[mscs_filt][hue].value_counts().reindex(hue_order)
l = [f'{value} (n={count})' for value, count in counts.iteritems()]
ax[0][1].legend(handles=[acc_patch, rej_patch], labels=l, title="Decision")
# Get GRE stats
mscs_filt = create_filter(u_df, degree, ['Accepted', 'Rejected', 'Interview'], institution = search, gre = True)
dfq = u_df[mscs_filt][['grem', hue]]
dfq = dfq.assign(gre_type='Quant')
dfq.columns = ['score', hue, 'gre_type']
dfv = u_df[mscs_filt][['grev', hue]]
dfv = dfv.assign(gre_type='Verbal')
dfv.columns = ['score', hue, 'gre_type']
cdf = pd.concat([dfq, dfv])
sns.boxplot(data=cdf,
x='gre_type',
y='score',
hue=hue,
linewidth=2.5,
hue_order=hue_order,
ax=ax[1][0])
leg = ax[1][0].get_legend()
if leg is not None:
leg.set_title('Decision')
ax[1][0].set_xlabel("GRE Section")
ax[1][0].set_ylabel("Score")
ax[1][0].set_title("GRE Score distribution")
# Get GRE AWA stats
mscs_filt = create_filter(u_df, degree, ['Accepted', 'Rejected', 'Interview'], institution = search, gre = True)
sns.boxplot(data=u_df[mscs_filt],
x=['AWA'] * len(u_df[mscs_filt]),
y='grew',
hue=hue,
linewidth=2.5,
hue_order=hue_order,
ax=ax[1][1])
leg = ax[1][1].get_legend()
if leg is not None:
leg.set_title('Decision')
ax[1][1].set_xlabel("GRE Section")
ax[1][1].set_ylabel("Score")
ax[1][1].set_title("GRE AWA Score distribution")
# Save file to output directory
fig.suptitle(title + ', ' + field + ' ' + degree, size='xx-large')
plt.savefig('output/' + title + '_' + field + ' ' + degree + '.png')
fig
get_uni_stats(df_1720, search='cornell university', title='Cornell University', degree='MS', field='CS')
```
## Other things you could analyze
For instance how many interviews per university, and thus know how likely it is that the interview process is a must if you wanna be accepted.
### Bad interview analysis
```
df_1720['is_int'] = 0
df_1720.loc[df_1720['decisionfin'] == 'Interview', 'is_int'] = 1
df_1720.groupby(by='institution').agg({'is_int': sum}).sort_values(by='is_int', ascending=False).head(10)
```
# Analyze other fields
```
hisdf = pd.read_csv("data/all.csv", index_col=0, low_memory=False)
hisdf.columns
get_uni_stats(hisdf, title='All Universities', degree='PhD', field='All')
```
## Answering Questions
### GPA Inflation
```
# keep decyear as a 4-character string so the range comparisons below behave consistently
hisdf['decyear'] = hisdf['decdate'].str.slice(-4)
hisdf = hisdf[(hisdf['decyear'] >= '2009') & (hisdf['decyear'] <= '2020') & (hisdf['status'].isin(['American', 'International with US Degree'])) & (hisdf['gpafin'] <= 4)]
gpadf = hisdf[~hisdf['decyear'].isnull()].groupby(by=['decyear']).agg({'gpafin': 'mean'})
fig, ax = plt.subplots()
sns.barplot(x = gpadf.index,
y=gpadf['gpafin'],
ax=ax)
ax.set_ylim([0, 4])
ax.set_xlabel("Year of Submission")
ax.set_ylabel("GPA Mean")
ax.set_title("GPA Behaviour over the Years")
plt.show()
fig.savefig("output/gpa_inflation.png")
```
### Do International Students Have Significantly Different Stats?
```
get_uni_stats(hisdf, title='All Universities by Status', degree='PhD', field='All', hue='status')
hisdf['major'].value_counts()
```
# Creating a logistic regression to predict absenteeism
## Import the relevant libraries
```
# import the relevant libraries
import pandas as pd
import numpy as np
```
## Load the data
```
# load the preprocessed CSV data
data_preprocessed = pd.read_csv('Absenteeism_preprocessed.csv')
# eyeball the data
data_preprocessed.head()
```
## Create the targets
```
# find the median of 'Absenteeism Time in Hours'
data_preprocessed['Absenteeism Time in Hours'].median()
# create targets for our logistic regression
# they have to be categories and we must find a way to say if someone is 'being absent too much' or not
# what we've decided to do is to take the median of the dataset as a cut-off line
# in this way the dataset will be balanced (there will be roughly equal number of 0s and 1s for the logistic regression)
# as balancing is a great problem for ML, this will work great for us
# alternatively, if we had more data, we could have found other ways to deal with the issue
# for instance, we could have assigned some arbitrary value as a cut-off line, instead of the median
# note that what this line does is to assign 1 to anyone who has been absent 4 hours or more (more than 3 hours)
# that is the equivalent of taking half a day off
# initial code from the lecture
# targets = np.where(data_preprocessed['Absenteeism Time in Hours'] > 3, 1, 0)
# parameterized code
targets = np.where(data_preprocessed['Absenteeism Time in Hours'] >
data_preprocessed['Absenteeism Time in Hours'].median(), 1, 0)
# eyeball the targets
targets
# create a Series in the original data frame that will contain the targets for the regression
data_preprocessed['Excessive Absenteeism'] = targets
# check what happened
# maybe manually see how the targets were created
data_preprocessed.head()
```
## A comment on the targets
```
# check if dataset is balanced (what % of targets are 1s)
# targets.sum() will give us the number of 1s that there are
# the shape[0] will give us the length of the targets array
targets.sum() / targets.shape[0]
# create a checkpoint by dropping the unnecessary variables
# also drop the variables we 'eliminated' after exploring the weights
data_with_targets = data_preprocessed.drop(['Absenteeism Time in Hours'],axis=1)
# check if the line above is a checkpoint :)
# if data_with_targets is data_preprocessed = True, then the two are pointing to the same object
# if it is False, then the two variables are completely different and this is in fact a checkpoint
data_with_targets is data_preprocessed
# check what's inside
data_with_targets.head()
```
## Select the inputs for the regression
```
data_with_targets.shape
# Selects all rows and all columns until 14 (excluding)
data_with_targets.iloc[:,:14]
# Selects all rows and all columns but the last one (basically the same operation)
data_with_targets.iloc[:,:-1]
# Create a variable that will contain the inputs (everything without the targets)
unscaled_inputs = data_with_targets.iloc[:,:-1]
```
## Standardize the data
```
# standardize the inputs
# standardization is one of the most common preprocessing tools
# since data of different magnitude (scale) can be biased towards high values,
# we want all inputs to be of similar magnitude
# this is a peculiarity of machine learning in general - most (but not all) algorithms do badly with unscaled data
# a very useful module we can use is StandardScaler
# it has much more capabilities than the straightforward 'preprocessing' method
from sklearn.preprocessing import StandardScaler
# we will create a variable that will contain the scaling information for this particular dataset
# here's the full documentation: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
# define scaler as an object
absenteeism_scaler = StandardScaler()
# import the libraries needed to create the Custom Scaler
# note that all of them are a part of the sklearn package
# moreover, one of them is actually the StandardScaler module,
# so you can imagine that the Custom Scaler is built on top of it
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.preprocessing import StandardScaler
# create the Custom Scaler class
class CustomScaler(BaseEstimator,TransformerMixin):
# init or what information we need to declare a CustomScaler object
# and what is calculated/declared as we do
def __init__(self,columns,copy=True,with_mean=True,with_std=True):
# scaler is nothing but a Standard Scaler object
self.scaler = StandardScaler(copy=copy, with_mean=with_mean, with_std=with_std)
# with some columns 'twist'
self.columns = columns
self.mean_ = None
self.var_ = None
# the fit method, which, again, is based on StandardScaler
def fit(self, X, y=None):
self.scaler.fit(X[self.columns], y)
self.mean_ = np.mean(X[self.columns])
self.var_ = np.var(X[self.columns])
return self
# the transform method which does the actual scaling
def transform(self, X, y=None, copy=None):
# record the initial order of the columns
init_col_order = X.columns
# scale all features that you chose when creating the instance of the class
X_scaled = pd.DataFrame(self.scaler.transform(X[self.columns]), columns=self.columns)
# declare a variable containing all information that was not scaled
X_not_scaled = X.loc[:,~X.columns.isin(self.columns)]
# return a data frame which contains all scaled features and all 'not scaled' features
# use the original order (that you recorded in the beginning)
return pd.concat([X_not_scaled, X_scaled], axis=1)[init_col_order]
# check what are all columns that we've got
unscaled_inputs.columns.values
# choose the columns to scale
# we later augmented this code and put it in comments
# columns_to_scale = ['Month Value','Day of the Week', 'Transportation Expense', 'Distance to Work',
#'Age', 'Daily Work Load Average', 'Body Mass Index', 'Children', 'Pet']
# select the columns to omit
columns_to_omit = ['Reason_1', 'Reason_2', 'Reason_3', 'Reason_4','Education']
# create the columns to scale, based on the columns to omit
# use list comprehension to iterate over the list
columns_to_scale = [x for x in unscaled_inputs.columns.values if x not in columns_to_omit]
# declare a scaler object, specifying the columns you want to scale
absenteeism_scaler = CustomScaler(columns_to_scale)
# fit the data (calculate mean and standard deviation); they are automatically stored inside the object
absenteeism_scaler.fit(unscaled_inputs)
# standardizes the data, using the transform method
# in the last line, we fitted the data - in other words
# we found the internal parameters of a model that will be used to transform data.
# transforming applies these parameters to our data
# note that when you get new data, you can just call 'scaler' again and transform it in the same way as now
scaled_inputs = absenteeism_scaler.transform(unscaled_inputs)
# note that our CustomScaler returns a pandas DataFrame (unlike the plain StandardScaler, which returns an ndarray)
scaled_inputs
# check the shape of the inputs
scaled_inputs.shape
```
## Split the data into train & test and shuffle
### Import the relevant module
```
# import train_test_split so we can split our data into train and test
from sklearn.model_selection import train_test_split
```
### Split
```
# check how this method works
train_test_split(scaled_inputs, targets)
# declare 4 variables for the split
x_train, x_test, y_train, y_test = train_test_split(scaled_inputs, targets, #train_size = 0.8,
test_size = 0.2, random_state = 20)
# check the shape of the train inputs and targets
print (x_train.shape, y_train.shape)
# check the shape of the test inputs and targets
print (x_test.shape, y_test.shape)
```
## Logistic regression with sklearn
```
# import the LogReg model from sklearn
from sklearn.linear_model import LogisticRegression
# import the 'metrics' module, which includes important metrics we may want to use
from sklearn import metrics
```
### Training the model
```
# create a logistic regression object
reg = LogisticRegression()
# fit our train inputs
# that is basically the whole training part of the machine learning
reg.fit(x_train,y_train)
# assess the train accuracy of the model
reg.score(x_train,y_train)
```
### Manually check the accuracy
```
# find the model outputs according to our model
model_outputs = reg.predict(x_train)
model_outputs
# compare them with the targets
y_train
# ACTUALLY compare the two variables
model_outputs == y_train
# find out in how many instances we predicted correctly
np.sum((model_outputs==y_train))
# get the total number of instances
model_outputs.shape[0]
# calculate the accuracy of the model
np.sum((model_outputs==y_train)) / model_outputs.shape[0]
```
### Finding the intercept and coefficients
```
# get the intercept (bias) of our model
reg.intercept_
# get the coefficients (weights) of our model
reg.coef_
# check what were the names of our columns
unscaled_inputs.columns.values
# save the names of the columns in an ad-hoc variable
feature_name = unscaled_inputs.columns.values
# use the coefficients from this table (they will be exported later and will be used in Tableau)
# transpose the model coefficients (model.coef_) and throws them into a df (a vertical organization, so that they can be
# multiplied by certain matrices later)
summary_table = pd.DataFrame (columns=['Feature name'], data = feature_name)
# add the coefficient values to the summary table
summary_table['Coefficient'] = np.transpose(reg.coef_)
# display the summary table
summary_table
# do a little Python trick to move the intercept to the top of the summary table
# move all indices by 1
summary_table.index = summary_table.index + 1
# add the intercept at index 0
summary_table.loc[0] = ['Intercept', reg.intercept_[0]]
# sort the df by index
summary_table = summary_table.sort_index()
summary_table
```
## Interpreting the coefficients
```
# create a new column called 'Odds_ratio' which will show the odds ratio of each feature
summary_table['Odds_ratio'] = np.exp(summary_table.Coefficient)
# display the df
summary_table
# sort the table according to odds ratio
# note that by default, the sort_values method sorts values by 'ascending'
summary_table.sort_values('Odds_ratio', ascending=False)
```
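To make the odds ratios concrete, here is a small illustrative example (the coefficient value 0.5 is made up, not taken from the table above): a standardized feature with coefficient 0.5 has an odds ratio of exp(0.5) ≈ 1.65, meaning a one-standard-deviation increase in that feature multiplies the odds of excessive absenteeism by roughly 1.65, holding everything else constant.
```
# illustrative only: 0.5 is a hypothetical coefficient, not one from the summary table
np.exp(0.5)
```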
# Forest Inference Library (FIL)
The forest inference library is used to load saved forest models of xgboost, lightgbm or protobuf and perform inference on them. It can be used to perform both classification and regression. In this notebook, we'll begin by fitting a model with XGBoost and saving it. We'll then load the saved model into FIL and use it to infer on new data.
FIL works in the same way with lightgbm and protobuf models as well.
The model accepts both numpy arrays and cuDF dataframes. In order to convert your dataset to cudf format please read the cudf documentation on https://rapidsai.github.io/projects/cudf/en/latest/.
For additional information on the forest inference library please refer to the documentation on https://rapidsai.github.io/projects/cuml/en/latest/index.html
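As a minimal sketch of such a conversion (assuming cuDF is already installed, e.g. via the RAPIDS install in the next cell; the array below is a dummy placeholder, not part of this workflow):
```
import numpy as np
import pandas as pd
import cudf

# convert a numpy array to a cuDF DataFrame by going through pandas
X_np = np.random.rand(5, 3).astype(np.float32)
X_cudf = cudf.from_pandas(pd.DataFrame(X_np))
```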
```
!conda install -c rapidsai -c nvidia -c conda-forge \
-c defaults rapids=0.13 python=3.6
!conda install -c conda-forge xgboost
import numpy as np
import os
from cuml.test.utils import array_equal
from cuml.utils.import_utils import has_xgboost
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from cuml import ForestInference
```
### Check for xgboost
Checks if xgboost is present, if not then it throws an error.
```
if has_xgboost():
import xgboost as xgb
else:
raise ImportError("Please install xgboost using the conda package,"
" Use conda install -c conda-forge xgboost "
"command to install xgboost")
```
## Train helper function
Defines a simple function that trains the XGBoost model and returns the trained model.
For additional information on the xgboost library please refer to the documentation on :
https://xgboost.readthedocs.io/en/latest/parameter.html
```
def train_xgboost_model(X_train, y_train,
num_rounds, model_path):
# set the xgboost model parameters
params = {'silent': 1, 'eval_metric':'error',
'objective':'binary:logistic',
'max_depth': 25}
dtrain = xgb.DMatrix(X_train, label=y_train)
# train the xgboost model
bst = xgb.train(params, dtrain, num_rounds)
# save the trained xgboost model
bst.save_model(model_path)
return bst
```
## Predict helper function
Uses the trained xgboost model to perform prediction and return the labels.
```
def predict_xgboost_model(X_validation, y_validation, xgb_model):
# predict using the xgboost model
dvalidation = xgb.DMatrix(X_validation, label=y_validation)
xgb_preds = xgb_model.predict(dvalidation)
# convert the predicted values from xgboost into class labels
xgb_preds = np.around(xgb_preds)
return xgb_preds
```
## Define parameters
```
n_rows = 10000
n_columns = 100
n_categories = 2
random_state = np.random.RandomState(43210)
# enter path to the directory where the trained model will be saved
model_path = 'xgb.model'
# num of iterations for which the model is trained
num_rounds = 15
```
## Generate data
```
# create the dataset
X, y = make_classification(n_samples=n_rows,
n_features=n_columns,
n_informative=int(n_columns/5),
n_classes=n_categories,
random_state=random_state)
train_size = 0.8
# convert the dataset to np.float32
X = X.astype(np.float32)
y = y.astype(np.float32)
# split the dataset into training and validation splits
X_train, X_validation, y_train, y_validation = train_test_split(
X, y, train_size=train_size)
```
## Train and Predict the model
Invoke the function to train the model and get predictions so that we can validate them.
```
# train the xgboost model
xgboost_model = train_xgboost_model(X_train, y_train,
num_rounds, model_path)
%%time
# test the xgboost model
trained_model_preds = predict_xgboost_model(X_validation,
y_validation,
xgboost_model)
```
## Load Forest Inference Library (FIL)
The load function of the ForestInference class accepts the following parameters:
filename : str
Path to saved model file in a treelite-compatible format
(See https://treelite.readthedocs.io/en/latest/treelite-api.html)
output_class : bool
If true, return a 1 or 0 depending on whether the raw prediction
exceeds the threshold. If False, just return the raw prediction.
threshold : float
Cutoff value above which a prediction is set to 1.0
Only used if the model is classification and output_class is True
algo : string name of the algo from (from algo_t enum)
'NAIVE' - simple inference using shared memory
'TREE_REORG' - similar to naive but trees rearranged to be more
coalescing-friendly
'BATCH_TREE_REORG' - similar to TREE_REORG but predicting
multiple rows per thread block
model_type : str
Format of saved treelite model to load.
Can be 'xgboost', 'lightgbm', or 'protobuf'
## Load the saved model
Use FIL to load the saved xgboost model
```
fm = ForestInference.load(filename=model_path,
algo='BATCH_TREE_REORG',
output_class=True,
threshold=0.50,
model_type='xgboost')
```
## Predict using FIL
```
%%time
# perform prediction on the model loaded from path
fil_preds = fm.predict(X_validation)
```
## Evaluate results
Verify the predictions for the original and FIL model match.
```
print("The shape of predictions obtained from xgboost : ",(trained_model_preds).shape)
print("The shape of predictions obtained from FIL : ",(fil_preds).shape)
print("Are the predictions for xgboost and FIL the same : " , array_equal(trained_model_preds, fil_preds))
```
# Chapter 1 Tutorial
You can use NetworkX to construct and draw graphs that are undirected or directed, with weighted or unweighted edges. An array of functions to analyze graphs is available. This tutorial takes you through a few basic examples and exercises.
Note that many exercises are followed by a block with some `assert` statements. These assertions may be preceded by some setup code. They are provided to give you feedback that you are on the right path -- receiving an `AssertionError` probably means you've done something wrong.
## Official documentation for version used in this tutorial
https://networkx.github.io/documentation/networkx-2.2/
## Official tutorial for version used in this tutorial
https://networkx.github.io/documentation/networkx-2.2/tutorial.html
# The `import` statement
Recall that `import` statements go at the top of your code, telling Python to load an external module. In this case we want to load NetworkX, but give it a short alias `nx` since we'll have to type it repeatedly, hence the `as` statement.
Lines starting with the `%` character are not Python code, they are "magic" directives for Jupyter notebook. The `%matplotlib inline` magic tells Jupyter Notebook to draw graphics inline i.e. in the notebook. This magic should be used right after the import statement.
```
import networkx as nx
%matplotlib inline
```
Let's check the installed version of NetworkX. Version 2 is incompatible with v1, so we want to make sure we're not using an out of date package.
```
nx.__version__
```
# Creating and drawing undirected graphs
```
# a "plain" graph is undirected
G = nx.Graph()
# give each node a 'name', which is a letter in this case.
G.add_node('a')
# the add_nodes_from method allows adding nodes from a sequence, in this case a list
nodes_to_add = ['b', 'c', 'd']
G.add_nodes_from(nodes_to_add)
# add edge from 'a' to 'b'
# since this graph is undirected, the order doesn't matter here
G.add_edge('a', 'b')
# just like add_nodes_from, we can add edges from a sequence
# edges should be specified as 2-tuples
edges_to_add = [('a', 'c'), ('b', 'c'), ('c', 'd')]
G.add_edges_from(edges_to_add)
# draw the graph
nx.draw(G, with_labels=True)
```
There are many optional arguments to the draw function to customize the appearance.
```
nx.draw(G,
with_labels=True,
node_color='blue',
node_size=1600,
font_color='white',
font_size=16,
)
```
# A note on naming conventions
Usually in Python, variables are named in `snake_case`, i.e. lowercase with underscores separating words. Classes are conventionally named in `CamelCase`, i.e. with the first letter of each word capitalized.
Obviously NetworkX doesn't use this convention, often using single capital letters for the names of graphs. This is an example of convention leaking from the world of discrete mathematics. Since most of the documentation you will find online uses this convention, we will follow it as well.
# Graph methods
The graph object has some properties and methods giving data about the whole graph.
```
# List all of the nodes
G.nodes()
# List all of the edges
G.edges()
```
NodeView and EdgeView objects have iterators, so we can use them in `for` loops:
```
for node in G.nodes:
print(node)
for edge in G.edges:
print(edge)
```
Note that the edges are given as 2-tuples, the same way we entered them.
We can get the number of nodes and edges in a graph using the `number_of_` methods.
```
G.number_of_nodes()
G.number_of_edges()
```
Some graph methods take an edge or node as argument. These provide the graph properties of the given edge or node. For example, the `.neighbors()` method gives the nodes linked to the given node:
```
# list of neighbors of node 'b'
G.neighbors('b')
```
For performance reasons, many graph methods return iterators instead of lists. They are convenient to loop over:
```
for neighbor in G.neighbors('b'):
print(neighbor)
```
and you can always use the `list` constructor to make a list from an iterator:
```
list(G.neighbors('b'))
```
# NetworkX functions vs. Graph methods
The previous data are available via graph *methods*, *i.e.* they are called from the graph object:
G.<method_name>(<arguments>)
While several of the most-used NetworkX functions are provided as methods, many more of them are module functions and are called like this:
nx.<function_name>(G, <arguments>)
that is, with the graph provided as the first, and maybe only, argument. Here are a couple of examples of NetworkX module functions that provide information about a graph:
```
nx.is_tree(G)
nx.is_connected(G)
```
# Node and edge existence
To check if a node is present in a graph, you can use the `has_node()` method:
```
G.has_node('a')
G.has_node('x')
```
Additionally, the loop syntax used above: `for n in G.nodes` suggests another way we can check if a node is in a graph:
```
'd' in G.nodes
```
Likewise we can check if two nodes are connected by an edge:
```
G.has_edge('a', 'b')
G.has_edge('a', 'd')
('c', 'd') in G.edges
```
# Node degree
One of the most important questions we can ask about a node in a graph is how many other nodes it connects to. Using the `.neighbors()` method from above, we could formulate this question as so:
```
len(list(G.neighbors('a')))
```
but this is such a common task that NetworkX provides us a graph method to do this in a much clearer way:
```
G.degree('a')
```
# EXERCISE 1
Often in the context of trees, a node with degree 1 is called a *leaf*. Write a function named `get_leaves` that takes a graph as an argument, loops through the nodes, and returns a list of nodes with degree 1.
```
def get_leaves(G):
    # one possible solution: a leaf is a node with degree 1
    return [node for node in G.nodes if G.degree(node) == 1]
G = nx.Graph()
G.add_edges_from([
('a', 'b'),
('a', 'd'),
('c', 'd'),
])
assert set(get_leaves(G)) == {'c', 'b'}
```
# Aside: comprehensions
Often we have one sequence of values and we want to generate a new sequence by applying an operation to each item in the first. List comprehensions and generator expressions are compact ways to do this.
List comprehensions are specified inside square brackets, and immediately produce a list of the result.
```
items = ['spider', 'y', 'banana']
[item.upper() for item in items]
```
In the context of NetworkX, this is often used to do something with the node or edge lists:
```
print(G.nodes())
print([G.degree(n) for n in G.nodes()])
```
Generator expressions are slightly different as they are evaluated [lazily](https://en.wikipedia.org/wiki/Lazy_evaluation). These are specified using round braces, and if they are being expressed as a function argument, they can be specified without any braces. These are most often used in the context of aggregations like the `max` function:
```
g = (len(item) for item in items)
list(g)
max(len(item) for item in items)
sorted(item.upper() for item in items)
```
# Node names
The node names don't have to be single characters -- they can be strings or integers or any immutable object, and the types can be mixed. The example below uses strings and integers for names.
```
G = nx.Graph()
G.add_nodes_from(['cat','dog','virus',13])
G.add_edge('cat','dog')
nx.draw(G, with_labels=True, font_color='white', node_size=1000)
```
# Adjacency lists
One compact way to represent a graph is an adjacency list. This is most useful for unweighted graphs, directed or undirected. In an adjacency list, each line contains some number of node names. The first node name is the "source" and each other node name on the line is a "target". For instance, given the following adjacency list:
```
a d e
b c
c
d
e
```
the edges are as follows:
```
(a, d)
(a, e)
(b, c)
```
The nodes on their own line exist so that we are sure to include any singleton nodes. Note that if our graph is undirected, we only need to specify one direction for each edge. Importantly, whether the graph is directed or undirected is often not contained in the file itself -- you have to infer it. This is one limitation of the format.
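If you do know the orientation, you can make it explicit. As a small sketch, `nx.parse_adjlist` reads an adjacency list from an in-memory sequence of lines, and its `create_using` argument (also accepted by `nx.read_adjlist`) controls the graph type:
```
# parse the adjacency list shown above as a *directed* graph
lines = ['a d e', 'b c', 'c', 'd', 'e']
DG = nx.parse_adjlist(lines, create_using=nx.DiGraph)
DG.is_directed(), list(DG.edges)
```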
In the `datasets` directory, there is a file called `friends.adjlist`. It's a plain text file, so you can open it on your computer or in GitHub, but here are its contents:
```
print(open('datasets/friends.adjlist').read())
```
NetworkX provides a way to read a graph from an adjacency list: `nx.read_adjlist()`. We will name this graph SG, for social graph.
```
SG = nx.read_adjlist('datasets/friends.adjlist')
```
We know how to draw this graph:
```
nx.draw(SG, node_size=2000, node_color='lightblue', with_labels=True)
```
And we know how to get information such as the number of friends linked from a node:
```
SG.degree('Alice')
```
# EXERCISE 2
Write a function max_degree that takes a graph as its argument, and returns a 2-tuple with the name and degree of the node with highest degree.
```
def max_degree(G):
    # one possible solution: G.degree iterates over (node, degree) pairs
    return max(G.degree, key=lambda node_degree: node_degree[1])
SG = nx.read_adjlist('datasets/friends.adjlist')
assert max_degree(SG) == ('Claire', 4)
```
# EXERCISE 3
Write a function `mutual_friends` that takes a graph and two nodes as arguments, and returns a list (or set) of nodes that are linked to both given nodes. For example, in the graph `SG` drawn above,
mutual_friends(SG, 'Alice', 'Claire') == ['Frank']
an empty list or set should be returned in the case where two nodes have no mutual friends, e.g. George and Bob in `SG` drawn above.
```
def mutual_friends(G, node_1, node_2):
    # one possible solution: intersect the two neighbor sets
    return list(set(G.neighbors(node_1)) & set(G.neighbors(node_2)))
SG = nx.read_adjlist('datasets/friends.adjlist')
assert mutual_friends(SG, 'Alice', 'Claire') == ['Frank']
assert mutual_friends(SG, 'George', 'Bob') == []
assert sorted(mutual_friends(SG, 'Claire', 'George')) == ['Dennis', 'Frank']
```
# Directed graphs
Unless otherwise specified, we assume graph edges are undirected -- they are symmetric and go both ways. But some relationships, e.g. predator-prey relationships, are asymmetric and best represented as directed graphs. NetworkX provides the `DiGraph` class for directed graphs.
```
D = nx.DiGraph()
D.add_edges_from([(1,2),(2,3),(3,2),(3,4),(3,5),(4,5),(4,6),(5,6),(6,4),(4,2)])
nx.draw(D, with_labels=True)
```
Note the asymmetry in graph methods dealing with edges such as `has_edge()`:
```
D.has_edge(1,2)
D.has_edge(2,1)
```
Instead of the symmetric relationship "neighbors", nodes in directed graphs have predecessors ("in-neighbors") and successors ("out-neighbors"):
```
print('Successors of 2:', list(D.successors(2)))
print('Predecessors of 2:', list(D.predecessors(2)))
```
Directed graphs have in-degree and out-degree, giving the number of edges pointing to and from the given node, respectively:
```
D.in_degree(2)
D.out_degree(2)
```
### Caveat
Since NetworkX 2, the `.degree()` method on a directed graph gives the total degree: in-degree plus out-degree. However, in a bit of confusing nomenclature, the `neighbors` method is a synonym for `successors`, giving only the edges originating from the given node. This makes sense if you consider `neighbors` to be all the nodes reachable from the given node by following links, but it's easy to make the mistake of writing `.neighbors()` in your code when you really want both predecessors and successors.
```
D.degree(2)
print('Successors of 2:', list(D.successors(2)))
print('"Neighbors" of 2:', list(D.neighbors(2)))
```
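If you really do want both directions at once, one option (a sketch, not the only way) is the module function `nx.all_neighbors`, which iterates over both the predecessors and the successors of a node in a directed graph:
```
# predecessors and successors combined; wrap in set() to drop duplicates
# when an edge exists in both directions
set(nx.all_neighbors(D, 2))
```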
<p><font size="6"><b>Numpy</b></font></p>
> *DS Python for GIS and Geoscience*
> *October, 2020*
>
> *© 2020, Joris Van den Bossche and Stijn Van Hoey. Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
---
```
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from matplotlib.lines import Line2D
import rasterio
from rasterio.plot import plotting_extent, show
```
## Introduction
One of the most fundamental parts of the scientific python 'ecosystem' is [numpy](https://numpy.org/). A lot of other packages - you already used Pandas and GeoPandas in this course - are built on top of Numpy and the `ndarray` (n-dimensional array) data type it provides.
```
import numpy as np
```
Let's start again by reading in a GeoTIFF data set from file, this time the Sentinel band 4 of the city of Ghent:
```
with rasterio.open("./data/gent/raster/2020-09-17_Sentinel_2_L1C_B04.tiff") as src:
b4_data = src.read()
b4_data_meta = src.meta
show(src)
```
As we learnt in the previous lesson, Rasterio returns a Numpy `ndarray`:
```
type(b4_data)
b4_data
```
Numpy supports different `dtype`s (`float`, `int`,...), but all elements of an array do have the same dtype. Note that NumPy auto-detects the data-type from the input.
```
b4_data.dtype
```
The data type of this specific array `b4_data` is 16bit unsigned integer. More information on the data types Numpy supports is available in the [documentation](https://numpy.org/devdocs/user/basics.types.html#array-types-and-conversions-between-types). Detailed info on data types is out of scope of this course, but remember that using 16bit unsigned integer, it can contain `2**16` different (all positive) integer values:
```
2**16
```
Let's check this by calculating the minimum and maximum value in the array:
```
b4_data.min(), b4_data.max()
```
Converting to another data type is supported by `astype` method. When floats are preferred during calculation:
```
b4_data.astype(float)
b4_data.max()
```
Just as any other object in Python, the `ndarray` has a number of attributes. We already checked the `dtype` attribute. The `shape` and `ndim` of the array are other relevant attributes:
```
b4_data.shape, b4_data.ndim
```
Hence, we have a single band with dimensions (317, 625) and data type `uint16`. Compare this to the metadata stored in the geotiff file:
```
#!gdalinfo ./data/gent/raster/2020-09-17_Sentinel_2_L1C_B04.tiff
```
The metadata on the dimensions and the datatype correspond, but the spatial information is lost when we only store the Numpy array.
Numpy works very well together with the other fundamental scientific Python package [Matplotlib](https://matplotlib.org/). A useful plot function to know when working with raster data is `imshow`:
```
fig, ax = plt.subplots()
ax.imshow(b4_data.squeeze());
```
__Note:__ The Numpy function `squeeze` is used to get rid of the single-value dimension of the numpy array.
As the Numpy array does not contain any spatial information, the x and y axis labels are defined by the indices of the array. Note that the earlier Rasterio plot did include the coordinate information in the axis labels.
With a small trick, the same result can be achieved with Matplotlib:
1. When reading in a data set using Rasterio, use the `plotting_extent` function from rasterio to get the spatial extent:
```
from rasterio.plot import plotting_extent
with rasterio.open("./data/gent/raster/2020-09-17_Sentinel_2_L1C_B04.tiff") as src:
b4_data = src.read()
b4_data_meta = src.meta
b4_data_extent = plotting_extent(src) # NEW
b4_data_extent
```
2. Add the `extent` argument to the `imshow` plot
```
fig, ax = plt.subplots()
ax.imshow(b4_data.squeeze(), extent=b4_data_extent)
```
<div class="alert alert-info" style="font-size:120%">
**REMEMBER**: <br>
The [`numpy` package](https://numpy.org/) is the backbone of the scientific Python ecosystem. The `ndarray` provides an efficient data type to store and manipulate raster data, but it does NOT contain any spatial information.
Use the spatial `extent` trick to add coordinate information to imshow plot axis. Convert to the preferred datatype using `astype()` method.
</div>
## Reshape, slice and index
```
b4_data.shape
```
We already used `squeeze` to remove the single-value dimension. We could also select the data we needed, similar to slicing in lists or Pandas DataFrames:
```
b4_data[0]
b4 = b4_data[0]
b4.shape
```
If you do not like the order of dimensions of the data, you can switch these using `transpose`:
```
b4.transpose(1, 0).shape
```
Getting rid of the dimensions and flattening all values into a single 1-D array can be done using `flatten` method:
```
b4.flatten().shape
```
Flattening an array is useful to create a histogram with Matplotlib:
```
plt.hist(b4.flatten(), bins=100);
# slice, subsample, reverse
# slice + assign
# fancy indexing
# fancy indexing + assign
b4 = b4_data[0]
```
Select a specific row/column:
```
b4[10].shape
b4[:, -2:].shape
```
Select every nth element in a given dimension:
```
b4[100:200:10, :].shape
```
Reversing an array:
```
b4[:, ::-1].shape # Note you can also np.flip an array
b4[0, :4]
b4_rev = b4[:, ::-1]
b4_rev[0, -4:]
```
You can also combine assignment and slicing:
```
b4[0, :3] = 10
b4
```
Use a __condition__ to select data, also called fancy indexing or boolean indexing:
```
b4 < 1000
```
Only keep the data for which the given condition is True:
```
b4[b4 < 1000]
```
Or combine assignment and fancy indexing, e.g. a reclassification of the raster data:
```
b4[b4 < 5000] = 0 # assign the value 0 to all elements with a value lower than 5000
b4
```
A powerful shortcut to handle this kind of reclassification is the `np.where` function:
```
np.where(b4 < 5000, 10, b4)
```
<div class="alert alert-success">
**EXERCISE**:
* Read in the file `./data/gent/raster/2020-09-17_Sentinel_2_L1C_True_color.tiff` with rasterio and assign the data to a new variable `tc_data`.
* Select only the *second* layer of `tc_data` and assign the output to a new variable `tc_g`.
* Assign to each of the elements in the `tc_g` array with a value above 15000 the new value 65535.
<details><summary>Hints</summary>
* You can combine the assignment of new values together with fancy indexing of a numpy array.
* Python (and also Numpy) uses 0 as the first-element index
</details>
</div>
```
# %load _solutions/11-numpy1.py
# %load _solutions/11-numpy2.py
# %load _solutions/11-numpy3.py
```
<div class="alert alert-success">
**EXERCISE**:
Subsample the ndarray `tc_data` by taking only one out of every 5 data points for all layers at the same time (be aware that this is a naive resampling implementation for educational purposes only).
<details><summary>Hints</summary>
* The result should still be a 3-D array with 3 elements in the first dimension.
</details>
</div>
```
# %load _solutions/11-numpy4.py
# %load _solutions/11-numpy5.py
```
<div class="alert alert-success">
**EXERCISE**:
Elements with the value `65535` do represent 'Not a Number' (NaN) values. However, Numpy does not support NaN values for integer data, so we'll convert to float first as data type. After reading in the data set `./data/gent/raster/2020-09-17_Sentinel_2_L1C_B04_(Raw).tiff` (assign data to variable `b4_data`):
* Count the number of elements that are equal to `65535`
* Convert the data type to `float`, assign the result to a new variable `b4_data_f`
* Assign Nan (`np.nan`) value to each of the elements of `b4_data_f` equal to `65535`
* Count the number of Nan values in the `b4_data_f` data
* Make a histogram of both the `b4_data` and `b4_data_f` data. Can you spot the difference?
<details><summary>Hints</summary>
* `np.nan` represents _Not a Number (NaN)_ in Numpy. You can assign an element to it, e.g. `dummy[2] = np.nan`
* `np.sum` will by default sum all of the elements of the input array and can also count boolean values (True = 1 and False = 0), resulting from a conditional expression.
* To test if a value is a nan, Numpy provides `np.isnan(...)` which results in an element-wise check returning boolean values.
* Check the help of the `plt.hist` command to find out more about the `bins` and the `log` arguments.
</details>
</div>
```
# %load _solutions/11-numpy6.py
# %load _solutions/11-numpy7.py
# %load _solutions/11-numpy8.py
# %load _solutions/11-numpy9.py
# %load _solutions/11-numpy10.py
```
## Reductions, element-wise calculations and broadcasting
Up until now, we worked with the 16bit integer values. For specific applications we might want to rescale this data. A (fake) example is the linear transformation to the range 0-1 after log conversion of the data. To do so, we need to calculate _for each element_ in the original $b$ array the following:
$$x_i= \log(b_i)$$
$$z_i=\frac{x_i-\min(x)}{\max(x)-\min(x)}$$
__1. reductions__
As part of it, we need the minimum `min(x)` and the maximum `max(x)` of the array. These __reductions__ (aggregations) are provided by Numpy and can be applied along one or more of the data dimensions, called the __axis__:
```
dummy = np.arange(1, 10).reshape(3, 3)
dummy
np.min(dummy), np.min(dummy, axis=0), np.min(dummy, axis=1)
dummy = np.arange(1, 25).reshape(2, 3, 4)
dummy.shape, dummy
np.min(dummy), np.min(dummy, axis=0), np.min(dummy, axis=(0, 1)), np.min(dummy, axis=(0, 2))
```
In some applications, the usage of the `keepdims=True` is useful to keep the number of dimensions after reduction:
```
np.min(dummy, axis=(0, 2), keepdims=True)
```
When working with Nan values, the result will be Nan as well:
```
np.min(np.array([1., 2., np.nan]))
```
Use the `nanmin`, `nan...` version of the function instead, if available:
```
np.nanmin(np.array([1., 2., np.nan]))
```
__2. Element-wise__
The __for each element__ is crucial for Numpy. The typical answer in programming would be a `for`-loop, but Numpy is optimized to do these calculations __element-wise__ (i.e. for all elements together):
```
dummy = np.arange(1, 10)
dummy
dummy*10
```
Instead of:
```
[el*20 for el in dummy]
```
Numpy provides most of the familiar arithmetic operators to apply on an element-by-element basis:
```
np.exp(dummy), np.sin(dummy), dummy**2, np.log(dummy)
```
For some functions, you can either use the `np.min(my_array)` or the `my_array.min()` approach:
```
dummy.min() == np.min(dummy)
```
__3. Broadcasting__
When we combine arrays with different shapes during arithmetic operations, Numpy applies a set of __broadcasting__ rules and the smaller array is _broadcast_ across the larger array so that they have compatible shapes. An important consequence for our application is:
```
np.array([1, 2, 3]) + 4. , np.array([1, 2, 3]) + np.array([4.]), np.array([1, 2, 3]) + np.array([4., 4., 4.])
```
The smaller array is broadcast to make both shapes compatible. The comparison starts with the trailing (i.e. rightmost) dimensions. Exploring all the rules is out of scope for this lesson; they are well explained in the [broadcasting Numpy documentation](https://numpy.org/devdocs/user/basics.broadcasting.html#general-broadcasting-rules).
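For instance, combining the `keepdims=True` reduction from above with broadcasting lets you subtract a per-row minimum from a 2-D array in one step (a small sketch):
```
dummy = np.arange(1, 13).reshape(3, 4)
row_min = dummy.min(axis=1, keepdims=True)  # shape (3, 1)
dummy - row_min  # (3, 4) - (3, 1): the (3, 1) array is broadcast over the columns
```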
__Back to our function__
By combining these three elements, we know enough to translate our conversion into Numpy code on the example data set:
```
with rasterio.open("./data/gent/raster/2020-09-17_Sentinel_2_L1C_B04.tiff") as src:
b4_data = src.read()
b4_data = b4_data.squeeze().astype(float) # squeeze and convert to float
b4_data[b4_data == 0.0] = 0.00001 # to overcome zero-division error
```
Take the log of all the values __element-wise__:
```
b4_data_log = np.log(b4_data)
```
Get the min and max __reductions__:
```
b4_min, b4_max = b4_data_log.min(), b4_data_log.max()
```
__Broadcast__ our single value `b4_min` and `b4_max` to all elements of `b4_data_log`:
```
b4_rescaled = ((b4_data_log - b4_min)/(b4_max - b4_min))
plt.hist(b4_rescaled.flatten(), bins=100);
```
__Remark 1:__ One-dimensional linear interpolation towards a new value range can be calculated using the `np.interp` function as well. For the range 0 -> 1:
```
np.interp(b4_data, (b4_data.min(), b4_data.max()), (0, 1))
```
__Remark 2: Why not iterate over the values of a list?__
Let's use the rescaling example to compare the calculation with Numpy versus a list comprehension (for-loop in Python):
```
b4_min, b4_max = b4_data.min(), b4_data.max()
```
With Numpy:
```
%%time
rescaled_values_1 = ((b4_data - b4_min)/(b4_max - b4_min))
```
Using a list with a for loop:
```
b4_as_list = b4_data.flatten().tolist()
%%time
rescaled_values_2 = [((data_point - b4_min)/(b4_max - b4_min)) for data_point in b4_as_list]
np.allclose(rescaled_values_1.flatten(), rescaled_values_2) # np.allclose also works element wise
```
<div class="alert alert-info" style="font-size:120%">
**REMEMBER**: <br>
The combination of element-wise calculations, efficient reductions and broadcasting provides Numpy a lot of power. In general, it is a good advice to __avoid for loops__ when working with Numpy arrays.
</div>
### Let's practice!
<div class="alert alert-success">
**EXERCISE**:
The data set `./data/herstappe/raster/2020-09-17_Sentinel_2_L1C_True_color.tiff` (assign to variable `herstappe_data`) contains 3 bands. The `imshow` function of Matplotlib can plot 3-D (RGB) data sets, but when running `plt.imshow(herstappe_data)`, we got the following error:
```
...
TypeError: Invalid shape (3, 227, 447) for image data
```
- Check in the help op `plt.imshow` why the `herstappe_data` can not be plot as such
- Adjust the data to fix the behavior of `plt.imshow(herstappe_data)`
Next, plot a greyscale version of the data as well. Instead of using a custom function just rely on the sum of the 3 bands as a proxy.
<details><summary>Hints</summary>
* In a Jupyter Notebook, use the SHIFT-TAB combination when the cursor is on the `imshow` function or type in a new cell `?plt.imshow` to see the documentation of a function.
* The `imshow` function requires the different color bands as last dimension, so we will need to transpose the image array.
* Add the extent to see the coordinates in the axis labels.
* A greyscale image requires a greyscale `cmap`, check the available names in [the documentation online](https://matplotlib.org/tutorials/colors/colormaps.html)
</details>
</div>
```
# %load _solutions/11-numpy11.py
# %load _solutions/11-numpy12.py
# %load _solutions/11-numpy13.py
```
<div class="alert alert-success">
**EXERCISE**:
The data set `./data/herstappe/raster/2020-09-17_Sentinel_2_L1C_True_color.tiff` (assign to variable `herstappe_data`) has values ranging in between 0.11325, 0.8575. To improve the quality of the visualization, stretch __each of the layers individually__ to the values to the range 0. to 1. with a linear transformation:
$$z_i=\frac{x_i-\min(x)}{\max(x)-\min(x)}$$
Make a plot of the end result and compare with the plots of the previous exercise.
<details><summary>Hints</summary>
* Take into account that the data set is 3-dimensional. Have a look at the optional arguments for the reduction/aggregation functions in terms of `axis` and `keepdims`.
* You need the minimal/maximal value over 2 axis to end up with a min/max for each of the layers.
* Broadcasting starts comparison of the alignment on the last dimension.
</details>
</div>
```
with rasterio.open("./data/herstappe/raster/2020-09-17_Sentinel_2_L1C_True_color.tiff") as src:
herstappe_data = src.read()
herstappe_extent = plotting_extent(src)
# %load _solutions/11-numpy14.py
# %load _solutions/11-numpy15.py
# %load _solutions/11-numpy16.py
```
<div class="alert alert-success">
**EXERCISE**:
You want to reclassify the values of the 4th band data to a fixed set of classes:
* x < 0.05 need to be 10
* 0.05 < x < 0.1 need to be 20
* x > 0.1 need to be 30
Use the data set `./data/gent/raster/2020-09-17_Sentinel_2_L1C_B04_(Raw).tiff` (assign data to variable `b4_data`):
* Read the data set and exclude the single-value dimension to end up with a 2D array.
* Convert to float data type and normalize the values to the range [0., 1.].
* Create a new variable `b4_data_classified` with the same shape as `b4_data` but datatype int.
* Assign the new values (10, 20, 30) to the elements for which each of the conditions apply.
* Make a image plot of the reclassified variable `b4_data_classified`.
</div>
```
with rasterio.open("./data/gent/raster/2020-09-17_Sentinel_2_L1C_B04.tiff") as src:
b4_data = src.read()
b4_data_extent = plotting_extent(src)
# %load _solutions/11-numpy17.py
# %load _solutions/11-numpy18.py
# %load _solutions/11-numpy19.py
# %load _solutions/11-numpy20.py
# %load _solutions/11-numpy21.py
```
<div class="alert alert-success">
**EXERCISE**:
The data sets `./data/gent/raster/2020-09-17_Sentinel_2_L1C_B04.tiff` and `./data/gent/raster/2020-09-17_Sentinel_2_L1C_B08.tiff` contain respectively the 4th and the 8th band of a sentinel satellite image. To derive the [Normalized Difference Vegetation Index) (NDVI)](https://nl.wikipedia.org/wiki/Normalized_Difference_Vegetation_Index), the two bands need to be combined as follows:
$$\frac{band_8 - band_4}{band_8 + band_4} $$
Process the images and create a plot of the NDVI:
- Read both data sets using Rasterio and store them in resp. `b4_data` and `b8_data`.
- Combine both data sets using the `np.vstack` function and assign it to the variable `b48_bands`
- Transform the data range of each of the layers to the range .0 - 1.
- For the values equal to zero in the `b48_bands` data set, assign a new (very small) value 1e-6
- Calculate the NDVI
- Plot the NDVI and select an appropriate colormap.
<details><summary>Hints</summary>
* For more specific adjustments to the colormap, have a check on the [Matplotlib documentation on colormap normalization](https://matplotlib.org/3.3.2/tutorials/colors/colormapnorms.html)
</details>
</div>
```
# %load _solutions/11-numpy22.py
# %load _solutions/11-numpy23.py
b48_bands.shape
# %load _solutions/11-numpy24.py
# %load _solutions/11-numpy25.py
# %load _solutions/11-numpy26.py
```
Using a Matplotlib norm to adjust colormap influence on image https://matplotlib.org/api/_as_gen/matplotlib.colors.TwoSlopeNorm.html
```
# %load _solutions/11-numpy27.py
# %load _solutions/11-numpy28.py
```
---
## For the curious: Some more building blocks
Numpy provides lower-level building blocks used by other packages, and once in a while you will also need to rely on these functions for a custom implementation. Some other useful building blocks with respect to reclassification could potentially help you:
- Remember the `np.where` function?
```
dummy = np.arange(1, 10).reshape(3, 3)
dummy
np.where(dummy > 4, 0, dummy)
```
- Clipping the values in your array to defined limits can be done using `np.clip`
```
dummy = np.arange(1, 10).reshape(3, 3)
dummy
np.clip(dummy, 2, 6)
```
- Numpy provides also a `np.histogram` function, which is really useful to get the bincounts over a custom bin-set:
```
np.histogram(b4_data_classified, bins=[5, 15, 25, 35])
np.histogram(b4_data, bins=[0.001, 0.1, 0.2, 0.5])
```
- The `np.digitize` function returns the indices of the bins to which each value in the input array belongs. As such, it can be used to select and manipulate values belonging to a specific bin:
```
dummy = np.arange(9).reshape(3, 3)
np.random.shuffle(dummy)
dummy
```
Define the bin to which each of the values belong to, using the bins x<2, 2<=x<4, x>=4:
```
id_mask = np.digitize(dummy, bins=[2, 4])
id_mask
dummy[id_mask == 1] = 20
dummy
```
Besides, it is also a practical method to create discrete classified maps:
1. Apply digitize to create classes:
```
ndvi_class_bins = [-np.inf, 0, 0.3, np.inf] # These limits are for demo purposes only
ndvi_landsat_class = np.digitize(ndvi, ndvi_class_bins)
```
2. Define custom colors and names:
```
nbr_colors = ["gray", "yellowgreen", "g"]
ndvi_names = ["No Vegetation", "Bare Area", "Vegetation"]
```
3. Prepare Matplotlib elements:
```
nbr_cmap = ListedColormap(nbr_colors)
# fake entries required for each class to create the legend
dummy_data = [Line2D([0], [0], color=color, lw=4) for color in nbr_colors]
```
4. Make the plot and add a legend:
```
fig, ax = plt.subplots(figsize=(12, 12))
im = ax.imshow(ndvi_landsat_class, cmap=nbr_cmap, extent=b4_data_extent)
ax.legend(dummy_data, ndvi_names, loc='upper left', framealpha=1)
```
- Find the modal (most common) value in an array is not provided by Numpy itself, but is available in the Scipy package:
```
from scipy.stats import mode
mode(b4_data.flatten()), mode(b4_data_classified.flatten())
```
### Side-note on convolution
In case you need custom convolutions for your 2D array, check the `scipy.signal.convolve` function as the Numpy function only works for 1-D arrays.
```
from scipy import signal
with rasterio.open("./data/gent/raster/2020-09-17_Sentinel_2_L1C_B04.tiff") as src:
b4_data = src.read()
b4_data_extent
b4_data = b4_data.squeeze().astype(float)
```
As an example, apply a low pass filter example as window, smoothing the image:
```
window = np.ones((5, 5), dtype=int)
window[1:-1, 1:-1] = 4
window[2, 2] = 12
window
grad = signal.convolve(b4_data, window, mode='same')
fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(16, 6))
ax0.imshow(b4_data, extent=b4_data_extent)
ax1.imshow(grad, extent=b4_data_extent)
```
```
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir('drive/My Drive/Colab Notebooks/ML_and_NN_course/module 1')
cwd=os.getcwd()
print(cwd)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from utils import *
data = pd.read_csv('train_data.csv', sep = ',')
data.head()
data.describe()
data.plot(kind='scatter', x='size', y='price', figsize=(10,5))
```
# Feature scaling
We can speed up gradient descent by having each of our input values in roughly the same range. This is because θ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven.
>
The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same. Ideally:
>
−1 ≤ x ≤ 1
>
These aren’t exact requirements; we are only trying to speed things up. The goal is to get all input variables into roughly one of these ranges, give or take a few.
>
Two techniques to help with this are **feature scaling and mean normalization**. Feature scaling involves dividing the input values by the range (i.e. the maximum value minus the minimum value) of the input variable, resulting in a new range of just 1. Mean normalization involves subtracting the average value for an input variable from the values for that input variable resulting in a new average value for the input variable of just zero. To implement both of these techniques, adjust your input values as shown in this formula:

```
data = (data - np.mean(data))/np.std(data)
data.describe()
data.plot(kind='scatter', x='size', y='price', figsize=(10,5))
#theta = np.matrix(np.array([0,0]))
theta=np.random.randn(1,2)
```
### Inserting a Column of ones
Since theta (the parameter vector) has 2 elements, we can obtain the hypothesis function simply by a dot product between the input and theta, provided we insert a column of ones so that theta[0] * 1 = theta[0] (this matrix is used inside the computeCost function). The dot product then computes theta[0] * 1 + theta[1] * data['size'], which is the required hypothesis function.
```
data.insert(0, 'Ones', 1)
data.head()
X=data.iloc[:,0:2]
X.head()
y=data['price']
y.head(),y.shape
x = np.matrix(X)
y = np.matrix(y)
y=y.T
x.shape, theta.shape, y.shape
theta
def computeCost(x, y, theta):
"""
Compute cost for linear regression. Computes the cost of using theta as the
parameter for linear regression to fit the data points in X and y.
Parameters
----------
X : array_like
The input dataset of shape (m , n+1) <Here n is 1 and we added one more column of ones>, where m is the number of examples,
and n is the number of features. <Hence the dimension is (46,2)
y : array_like
The values of the function at each data point. This is a vector of
shape (m, 1).
theta : array_like
The parameters for the regression function. This is a vector of
shape (1,n+1 ).
Returns
-------
J : float
The value of the regression cost function.
Instructions
------------
Compute the cost of a particular choice of theta.
You should set J to the cost.
"""
# initialize some useful values
m = y.shape[0] # number of training examples (46 in this dataset)
# You need to return the following variables correctly
J = 0
h = np.matmul(x, theta.T)
J = (1/(2 * m)) * np.sum(np.square(h - y))
return J
computeCost(x,y,theta)
num_iters=250
new_theta, cost = gradientDescent(x, y, theta,num_iters, lr=0.1)
print(new_theta, cost)
x_size = np.array(data['size'])
Model_price = new_theta[0, 0] + (new_theta[0, 1] * x_size)
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(x_size, Model_price, 'r', label='Prediction')
ax.scatter(data['size'],data.price, label='Training Data')
ax.legend(loc=2)
ax.set_xlabel('Size')
ax.set_ylabel('Price')
ax.set_title('Predicted Price vs. Size')
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(np.arange(num_iters), cost, 'r')
ax.set_xlabel('Iterations')
ax.set_ylabel('Cost')
ax.set_title('MSE vs. Iterations')
```
## Problem of Overshooting
With a learning rate that is too large (here lr=2.1 instead of 0.1), each gradient descent step overshoots the minimum, so the cost grows instead of decreasing.
```
num_iters=250
new_theta, cost = gradientDescent(x, y, theta,num_iters, lr=2.1)
print(new_theta, cost)
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(np.arange(num_iters), cost, 'r')
ax.set_xlabel('Iterations')
ax.set_ylabel('Cost')
ax.set_title('MSE vs. Iterations')
theta,new_theta
```
# Hate speech classification by k-fold cross validation on movies dataset
The class labels depict the following:
- 0: Normal speech
- 1: Offensive speech
- 2: Hate speech
#### To work with this, the following folder paths need to be created in the directory of this notebook:
- `classification_reports/`: will contain all the classification reports generated by the model
- `movies/`: contains the `all_movies.csv` file
- `movies/for_training/`: contains the 6 movies used for cross-validation training and testing
```
! pip install transformers==2.6.0
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
import re
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import os
import glob
from transformers import BertTokenizer, TFBertForSequenceClassification
from transformers import InputExample, InputFeatures
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
```
---
### Cross validation
#### 6-fold cross validation on movies
Methods to convert the data into the data required by the model for training and testing
```
def convert_data_to_examples_cv(train, DATA_COLUMN, LABEL_COLUMN):
train_InputExamples = train.apply(
lambda x: InputExample(guid=None, # Globally unique ID for bookkeeping, unused in this case
text_a=x[DATA_COLUMN],
text_b=None,
label=x[LABEL_COLUMN]), axis=1)
return train_InputExamples
def convert_examples_to_tf_dataset_cv(examples, tokenizer, max_length=128):
features = [] # -> will hold InputFeatures to be converted later
for e in examples:
# Documentation is really strong for this method, so please take a look at it
input_dict = tokenizer.encode_plus(
e.text_a,
add_special_tokens=True,
max_length=max_length, # truncates if len(s) > max_length
return_token_type_ids=True,
return_attention_mask=True,
pad_to_max_length=True, # pads to the right by default # CHECK THIS for pad_to_max_length
truncation=True
)
input_ids, token_type_ids, attention_mask = (input_dict["input_ids"],
input_dict["token_type_ids"], input_dict['attention_mask'])
features.append(
InputFeatures(
input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, label=e.label
)
)
def gen():
for f in features:
yield (
{
"input_ids": f.input_ids,
"attention_mask": f.attention_mask,
"token_type_ids": f.token_type_ids,
},
f.label,
)
return tf.data.Dataset.from_generator(
gen,
({"input_ids": tf.int32, "attention_mask": tf.int32, "token_type_ids": tf.int32}, tf.int64),
(
{
"input_ids": tf.TensorShape([None]),
"attention_mask": tf.TensorShape([None]),
"token_type_ids": tf.TensorShape([None]),
},
tf.TensorShape([]),
),
)
def train_bert(df_train, df_test):
# initialize model with 3 labels, for hate, offensive and normal class classification
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased",
trainable=True,
num_labels=3)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
train = df_train[['text', 'majority_answer']]
train.columns = ['DATA_COLUMN', 'LABEL_COLUMN']
test = df_test[['text', 'majority_answer']]
test.columns = ['DATA_COLUMN', 'LABEL_COLUMN']
DATA_COLUMN = 'DATA_COLUMN'
LABEL_COLUMN = 'LABEL_COLUMN'
train_InputExamples = convert_data_to_examples_cv(train, DATA_COLUMN, LABEL_COLUMN)
test_InputExamples = convert_data_to_examples_cv(test, DATA_COLUMN, LABEL_COLUMN)
train_data = convert_examples_to_tf_dataset_cv(list(train_InputExamples), tokenizer)
train_data = train_data.batch(32)
valid_data = convert_examples_to_tf_dataset_cv(list(test_InputExamples), tokenizer)
valid_data = valid_data.batch(32)
# compile and fit
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-6, epsilon=1e-08, clipnorm=1.0),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy('accuracy')])
print('train data type',type(train_data))
model.fit(train_data, epochs=6, validation_data=valid_data)
test_data = convert_examples_to_tf_dataset_cv(list(test_InputExamples), tokenizer)
test_data = test_data.batch(32)
print('predicting')
preds = model.predict(test_data)
# classification
return classification_report(pd.DataFrame(test['LABEL_COLUMN']), np.argmax(preds[0], axis=1), output_dict=True)
def load_movies_to_df(path):
df_movies = []
for filename in glob.glob(path + '*.csv'):
df_movies.append(pd.read_csv(filename))
return df_movies
df_movies = load_movies_to_df('movies/for_training/')
classification_reports = []
df_main = pd.DataFrame()
# perform cross folding
for i in range(len(df_movies)):
df_train = pd.concat(df_movies[0:i] + df_movies[i + 1:])
df_test = df_movies[i]
train_movies = df_train['movie_name'].unique()
test_movie = df_test['movie_name'].unique()
print(','.join(train_movies))
print(test_movie[0])
report = train_bert(df_train, df_test)
classification_reports.append(report)
print('Train movies: ', str(','.join(train_movies)))
print('Test movie: ', str(test_movie[0]))
print('Classification report: \n', classification_reports[i])
print('------------------------------------------------')
df_cr = pd.DataFrame(classification_reports[i]).transpose()
df_cr['movie_train'] = str(','.join(train_movies))
df_cr['movie_test'] = str(test_movie[0])
df_cr.to_csv('classification_reports/'+'bert_cv_testmovie_'+str(test_movie[0])+'.csv')
df_main = df_main.append(df_cr)
df_main.to_csv('classification_reports/bert_crossvalid_movies.csv')
print(df_main)
len(classification_reports[0])
df_main.head()
def get_precision_recall_f1(category, result_df):
precision = result_df[result_df.label==category].precision.mean()
recall = result_df[result_df.label==category].recall.mean()
f1 = result_df[result_df.label==category]['f1-score'].mean()
return {'label': category, 'precision': precision, 'recall': recall, 'f1': f1}
df_cv= pd.read_csv('classification_reports/bert_crossvalid_movies.csv')
df_cv = df_cv.rename(columns={'Unnamed: 0': 'label', 'b': 'Y'})
df_cv.head()
normal_dict = get_precision_recall_f1('0', df_cv)
offensive_dict = get_precision_recall_f1('1',df_cv)
hate_dict = get_precision_recall_f1('2',df_cv)
```
#### Aggregated classification results for all 6 folds
```
df_result = pd.DataFrame([normal_dict, offensive_dict, hate_dict])
df_result
for cr in classification_reports:
print(cr)
```
# Feature Exploration for Proxy Model
- have many different feature models (by prefix)
- do boxplot and PCA for features
```
!pip install git+https://github.com/IBM/ibm-security-notebooks.git
# Default settings, constants
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
pd.set_option('display.max_columns', None)
pd.set_option('display.max_colwidth', -1)
pd.set_option('mode.chained_assignment', None)
FIGSIZE=(15,8)
matplotlib.rcParams['figure.figsize'] = FIGSIZE
# Data is from AQL.proxy_model query
from pyclient.qradar import QRadar, AQL
qi = QRadar(console='YOUR-CONSOLE-IP-ADDRESS', username='admin', token='YOUR-SERVICE-TOKEN')
_df = pd.DataFrame.from_records(qi.search(AQL.proxy_model))
_df.fillna(0, inplace=True)
print(_df.shape)
_df.head(10)
_df.describe()
# Different Feature groups
ALL = 'All Columns'
PREFIX = [
'General',
'Network',
'Time',
'Proxy',
ALL
]
from sklearn import preprocessing
import matplotlib.pyplot as plt
def boxplot(df, prefix):
# drop text columns
df = df.drop('user',axis=1).drop('timeslice',axis=1)
min_max_scaler = preprocessing.MinMaxScaler() # StandardScaler, MinMaxScaler, RobustScaler
scaled = pd.DataFrame(min_max_scaler.fit_transform(df.values), columns=df.columns)
scaled.boxplot(figsize=FIGSIZE, rot=90)
plt.title(f'Boxplot for {prefix}')
plt.show()
for prefix in PREFIX:
df = _df
if prefix != ALL:
cols = ['user', 'timeslice']
cols.extend([col for col in _df if col.startswith(prefix.lower()+'_')])
df = _df[cols]
boxplot(df, prefix)
from sklearn.decomposition import PCA
from sklearn import preprocessing
X = 'PC 1'
Y = 'PC 2'
def pca(df, prefix):
# drop text columns
df = df.drop('user',axis=1).drop('timeslice',axis=1)
# scale data or else some columns dominate
scaler = preprocessing.StandardScaler() # StandardScaler, MinMaxScaler, RobustScaler
df = pd.DataFrame(scaler.fit_transform(df.values), columns=df.columns)
pca = PCA(n_components=2)
components = pca.fit_transform(df)
components_df = pd.DataFrame(components, columns = [X, Y])
df[X] = components_df[X]
df[Y] = components_df[Y]
ax1 = df.plot(kind='scatter', x=X, y=Y, color='grey', s=1, title=f'PCA for {prefix}')
plt.show()
for prefix in PREFIX:
df = _df
if prefix != ALL:
cols = ['user', 'timeslice']
cols.extend([col for col in _df if col.startswith(prefix.lower()+'_')])
df = _df[cols]
pca(df, prefix)
# users vs population, look for all outlier points and graph on PCA
# specific user vs self, plot own PCA
```
```
import keras
keras.__version__
```
# 5.1 - Introduction to convnets
This notebook contains the code sample found in Chapter 5, Section 1 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
----
First, let's take a practical look at a very simple convnet example. We will use our convnet to classify MNIST digits, a task that you've already been
through in Chapter 2, using a densely-connected network (our test accuracy then was 97.8%). Even though our convnet will be very basic, its
accuracy will still blow out of the water that of the densely-connected model from Chapter 2.
The 6 lines of code below show you what a basic convnet looks like. It's a stack of `Conv2D` and `MaxPooling2D` layers. We'll see in a
minute what they do concretely.
Importantly, a convnet takes as input tensors of shape `(image_height, image_width, image_channels)` (not including the batch dimension).
In our case, we will configure our convnet to process inputs of size `(28, 28, 1)`, which is the format of MNIST images. We do this via
passing the argument `input_shape=(28, 28, 1)` to our first layer.
```
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
```
Let's display the architecture of our convnet so far:
```
model.summary()
```
You can see above that the output of every `Conv2D` and `MaxPooling2D` layer is a 3D tensor of shape `(height, width, channels)`. The width
and height dimensions tend to shrink as we go deeper in the network. The number of channels is controlled by the first argument passed to
the `Conv2D` layers (e.g. 32 or 64).
The next step would be to feed our last output tensor (of shape `(3, 3, 64)`) into a densely-connected classifier network like those you are
already familiar with: a stack of `Dense` layers. These classifiers process vectors, which are 1D, whereas our current output is a 3D tensor.
So first, we will have to flatten our 3D outputs to 1D, and then add a few `Dense` layers on top:
```
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
```
We are going to do 10-way classification, so we use a final layer with 10 outputs and a softmax activation. Now here's what our network
looks like:
```
model.summary()
```
As you can see, our `(3, 3, 64)` outputs were flattened into vectors of shape `(576,)`, before going through two `Dense` layers.
Now, let's train our convnet on the MNIST digits. We will reuse a lot of the code we have already covered in the MNIST example from Chapter
2.
```
from keras.datasets import mnist
from keras.utils import to_categorical
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5, batch_size=64)
```
Let's evaluate the model on the test data:
```
test_loss, test_acc = model.evaluate(test_images, test_labels)
test_acc
```
While our densely-connected network from Chapter 2 had a test accuracy of 97.8%, our basic convnet has a test accuracy of 99.3%: we
decreased our error rate by 68% (relative). Not bad!
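As a quick check of that relative figure, using the two accuracies quoted above:
$$
\frac{(1 - 0.978) - (1 - 0.993)}{1 - 0.978} = \frac{0.022 - 0.007}{0.022} \approx 0.68
$$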
__Author: Manu Jayadharan, University of Pittsburgh, 2020__
# Solving the diffusion equation using its mixed form
We have a system of equations to solve: $p_t + \nabla\cdot u - f = 0$ and $-\nabla p = u$,
over a domain $\Omega$ from time T_initial to T_final.
The variables $p$ and $u$ have the physical meaning of pressure and velocity, respectively.
For demonstration purposes we take $f=\sin(x_1 + x_2) + 2t\sin(x_1 + x_2)$, $\Omega = [-2,2]\times [0,1]$ and the time interval $[0,1]$, so we can compare the results with the actual solution $u=t\sin(x_1 + x_2)$.
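A quick consistency check of that right-hand side, assuming the manufactured scalar solution $t\sin(x_1 + x_2)$ plays the role of the pressure $p$ in the system above:
$$
p_t = \sin(x_1 + x_2), \qquad u = -\nabla p = -t\begin{pmatrix}\cos(x_1 + x_2)\\ \cos(x_1 + x_2)\end{pmatrix}, \qquad \nabla\cdot u = 2t\sin(x_1 + x_2),
$$
so $f = p_t + \nabla\cdot u = \sin(x_1 + x_2) + 2t\sin(x_1 + x_2)$, which is the right-hand side implemented in `rhs_function` below.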
```
#Import fluidlearn package and classes
import fluidlearn
from fluidlearn import dataprocess
```
### Defining the domain and time interval for which the PDE needs to be solved.
This matters only for generating collocation points; if the user is feeding their own collocation points,
they can skip this step.
```
#domain range
X_1_domain = [-2, 2]
X_2_domain = [0, 1]
#time range
T_initial = 0
T_final = 1
T_domain = [T_initial, T_final]
#domain of the problem
domain_bounds = [X_1_domain, X_2_domain, T_domain]
```
### Loading data from a csv file
- We use the manufactured data with $u=t\sin(x_1 + x_2)$ saved in a csv file.
- Data is saved in the format: ($x_1 , x_2, t, u(x_1, x_2, t)$) as four columns.
- You could either preprocess your data to be in this format or load it
from a csv file with a similar format.
```
path_to_data = "data_manufactured/t_sin_x_plus_y.csv"
X_data, Y_data = dataprocess.imp_from_csv(path_to_csv_file=path_to_data,
x_y_combined=True, y_dim=1)
```
### Defining the rhs function $f=\sin(x_1 + x_2) + 2t\sin(x_1 + x_2)$ of the PDE.
We use the tensorflow.sin function here; we could have used numpy.sin as well.
```
def rhs_function (args, time_dep=True):
import tensorflow as tf
if time_dep:
space_inputs = args[:-1]
time_inputs = args[-1]
else:
space_inputs = args
return tf.sin(space_inputs[0]+space_inputs[1]) + 2*time_inputs*tf.sin(space_inputs[0]+space_inputs[1])
```
### Defining the model architecture
```
model_type = 'forward'
space_dim = 2 #dimension of Omega
time_depedent_problem = True
n_hid_lay=3 #number of hidden layers in the neural network
n_hid_nrn=20 #number of neurons in each hidden layer
act_func='tanh' #activation function used for hidden layers: could be elu, relu, sigmoid
loss_list='mse' #type of error function used for the cost function, we use mean squared error
optimizer='adam' #type of optimizer for cost function minimization
dom_bounds=domain_bounds #domain bounds where collocation points have to be generated
distribution = 'uniform' #type of distribution used for generating the pde collocation points
number_of_collocation_points = 5000
batch_size = 32 #batch size for stochastic batch gradient type optimization
num_epochs = 10 #number of epochs used for training
```
### Defining the fluidlearn solver
```
diffusion_model = fluidlearn.Solver()
diffusion_model(model_type=model_type,
space_dim=space_dim,
time_dep=time_depedent_problem,
output_dim=1,
n_hid_lay=n_hid_lay,
n_hid_nrn=n_hid_nrn,
act_func=act_func,
rhs_func=rhs_function,
loss_list=loss_list,
optimizer=optimizer,
dom_bounds=dom_bounds,
load_model=False,
model_path=None,)
```
### Fitting the model
```
diffusion_model.fit(
x=X_data,
y=Y_data,
colloc_points=number_of_collocation_points,
dist=distribution,
batch_size=batch_size,
epochs=num_epochs,
)
```
### Resuming Training the model again for 50 more epochs
```
diffusion_model.fit(
x=X_data,
y=Y_data,
colloc_points=number_of_collocation_points,
dist=distribution,
batch_size=batch_size,
epochs=50,
)
```
### Demo: Using the trained model for prediction
```
#taking two points from the domain for time t=0.3 and t=0.76 respectively
x_test_points = [[-0.5,0.1,0.3],
[0.66,0.6,0.76]]
#Predicting the value
y_predicted = diffusion_model.predict(x_test_points)
#finding the true y value for comparing
import numpy as np
x_test_points = np.array(x_test_points)
y_true = np.sin(x_test_points[:,0:1] + x_test_points[:,1:2]) * x_test_points[:,2:3]
#looking at predicted and true solution side by side.
np.concatenate([y_predicted, y_true], axis=1)
```
Note that we would need more training to further improve the accuracy.
### Saving the model to a specified location.
```
path_to_save_model = "saved_model/model_name"
diffusion_model.save_model(path_to_save_model)
```
### Loading the saved model
```
path_to_load_model = "saved_model/model_name"
loaded_diffusion_model = fluidlearn.Solver()
loaded_diffusion_model(space_dim=2,
time_dep=True,
load_model=True,
model_path=path_to_load_model)
```
### Predicting using loaded model
```
y_predicted = loaded_diffusion_model.predict(X_data)
y_predicted
```
# k-NN: finding optimal weight function ('distance' or 'uniform')
```
"""k-NN: finding optimal weight function ('distance' or 'uniform')
"""
# import libraries
import pandas as pd
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing
from sklearn.model_selection import TimeSeriesSplit
# import data
df = pd.read_csv('data/SCADA_downtime_merged.csv', skip_blank_lines=True)
# list of turbines to plot
list1 = list(df['turbine_id'].unique())
# sort turbines in ascending order
list1 = sorted(list1, key=int)
# list of categories
list2 = list(df['TurbineCategory_id'].unique())
# remove NaN from list
list2 = [g for g in list2 if g >= 0]
# sort categories in ascending order
list2 = sorted(list2, key=int)
# categories to remove
list2 = [m for m in list2 if m not in (1, 12, 13, 14, 15, 17, 21, 22)]
# empty list to hold optimal n values for all turbines
num = []
# empty list to hold minimum error readings for all turbines
err = []
# filter only data for turbine x
for x in list1:
dfx = df[(df['turbine_id'] == x)].copy()
# copying fault to new column (mins) (fault when turbine category id is y)
for y in list2:
def f(c):
if c['TurbineCategory_id'] == y:
return 0
else:
return 1
dfx['mins'] = dfx.apply(f, axis=1)
# sort values by timestamp in descending order
dfx = dfx.sort_values(by='timestamp', ascending=False)
# reset index
dfx.reset_index(drop=True, inplace=True)
# assigning value to first cell if it's not 0
if dfx.loc[0, 'mins'] == 0:
dfx.at[0, 'mins'] = 0
else:
dfx.at[0, 'mins'] = 999999999
# using previous value's row to evaluate time
for i, e in enumerate(dfx['mins']):
if e == 1:
dfx.at[i, 'mins'] = dfx.at[i - 1, 'mins'] + 10
# sort in ascending order
dfx = dfx.sort_values(by='timestamp')
# reset index
dfx.reset_index(drop=True, inplace=True)
# convert to hours, then round to nearest hour
dfx['hours'] = dfx['mins'].astype(np.int64)
dfx['hours'] = dfx['hours']/60
dfx['hours'] = round(dfx['hours']).astype(np.int64)
# > 48 hours - label as normal (999)
def f1(c):
if c['hours'] > 48:
return 999
else:
return c['hours']
dfx['hours'] = dfx.apply(f1, axis=1)
# filter out curtailment - curtailed when turbine is pitching outside
# 0deg <= normal <= 3.5deg
def f2(c):
if 0 <= c['pitch'] <= 3.5 or c['hours'] != 999 or (
(c['pitch'] > 3.5 or c['pitch'] < 0) and (
c['ap_av'] <= (.1 * dfx['ap_av'].max()) or
c['ap_av'] >= (.9 * dfx['ap_av'].max()))):
return 'normal'
else:
return 'curtailed'
dfx['curtailment'] = dfx.apply(f2, axis=1)
# filter unusual readings, i.e., for normal operation, power <= 0 in
# operating wind speeds, power > 100 before cut-in, runtime < 600 and
# other downtime categories
def f3(c):
if c['hours'] == 999 and ((
3 < c['ws_av'] < 25 and (
c['ap_av'] <= 0 or c['runtime'] < 600 or
c['EnvironmentalCategory_id'] > 1 or
c['GridCategory_id'] > 1 or
c['InfrastructureCategory_id'] > 1 or
c['AvailabilityCategory_id'] == 2 or
12 <= c['TurbineCategory_id'] <= 15 or
21 <= c['TurbineCategory_id'] <= 22)) or
(c['ws_av'] < 3 and c['ap_av'] > 100)):
return 'unusual'
else:
return 'normal'
dfx['unusual'] = dfx.apply(f3, axis=1)
# round to 6 hour intervals
def f4(c):
if 1 <= c['hours'] <= 6:
return 6
elif 7 <= c['hours'] <= 12:
return 12
elif 13 <= c['hours'] <= 18:
return 18
elif 19 <= c['hours'] <= 24:
return 24
elif 25 <= c['hours'] <= 30:
return 30
elif 31 <= c['hours'] <= 36:
return 36
elif 37 <= c['hours'] <= 42:
return 42
elif 43 <= c['hours'] <= 48:
return 48
else:
return c['hours']
dfx['hours6'] = dfx.apply(f4, axis=1)
# change label for unusual and curtailed data (9999)
def f5(c):
if c['unusual'] == 'unusual' or c['curtailment'] == 'curtailed':
return 9999
else:
return c['hours6']
dfx['hours_%s' % y] = dfx.apply(f5, axis=1)
# drop unnecessary columns
dfx = dfx.drop('hours6', axis=1)
dfx = dfx.drop('hours', axis=1)
dfx = dfx.drop('mins', axis=1)
dfx = dfx.drop('curtailment', axis=1)
dfx = dfx.drop('unusual', axis=1)
# separate features from classes for classification
features = [
'ap_av', 'ws_av', 'wd_av', 'pitch', 'ap_max', 'ap_dev',
'reactive_power', 'rs_av', 'gen_sp', 'nac_pos']
classes = [col for col in dfx.columns if 'hours' in col]
# list of columns to copy into new df
list3 = features + classes + ['timestamp']
df2 = dfx[list3].copy()
# drop NaNs
df2 = df2.dropna()
X = df2[features]
# normalise features to values b/w 0 and 1
X = preprocessing.normalize(X)
Y = df2[classes]
# convert from pd dataframe to np array
Y = Y.to_numpy()
# weight functions to evaluate
weights = ['uniform', 'distance']
# empty list that will hold average cross validation scores for each n
scores = []
# cross validation using time series split
tscv = TimeSeriesSplit(n_splits=5)
# looping for each value of w and defining classifier
for w in weights:
knn = KNeighborsClassifier(weights=w, n_jobs=-1)
# empty list to hold score for each cross validation fold
p1 = []
# looping for each cross validation fold
for train_index, test_index in tscv.split(X):
# split train and test sets
X_train, X_test = X[train_index], X[test_index]
Y_train, Y_test = Y[train_index], Y[test_index]
# fit to classifier and predict
knn1 = knn.fit(X_train, Y_train)
pred = knn1.predict(X_test)
# accuracy score
p2 = np.sum(np.equal(Y_test, pred))/Y_test.size
# add to list
p1.append(p2)
# average score across all cross validation folds
p = sum(p1)/len(p1)
scores.append(p)
# changing to misclassification error
MSE = [1 - x for x in scores]
# determining the best weight function
optimal = weights[MSE.index(min(MSE))]
num.append(optimal)
err.append(min(MSE))
d = pd.DataFrame(num, columns=['weights'])
d['error'] = err
d['turbine'] = list1
d
```
# Think Bayes: Chapter 11
This notebook presents code and exercises from Think Bayes, second edition.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
```
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot
```
## The Euro problem
The Euro problem: when a Belgian one-euro coin was spun on edge 250 times, it came up heads 140 times and tails 110. Do these data give evidence that the coin is biased rather than fair?
Here's a more efficient version of the Euro class that takes the dataset in a more compact form and uses the binomial distribution (ignoring the binomial coefficient because it does not depend on `x`).
```
class Euro(Suite):
"""Represents hypotheses about the probability of heads."""
def Likelihood(self, data, hypo):
"""Computes the likelihood of the data under the hypothesis.
hypo: integer value of x, the probability of heads (0-100)
data: tuple of (number of heads, number of tails)
"""
x = hypo / 100.0
heads, tails = data
like = x**heads * (1-x)**tails
return like
```
If we know the coin is fair, we can evaluate the likelihood of the data directly.
```
data = 140, 110
suite = Euro()
like_f = suite.Likelihood(data, 50)
print('p(D|F)', like_f)
```
If we cheat and pretend that the alternative hypothesis is exactly the observed proportion, we can compute the likelihood of the data and the likelihood ratio, relative to the fair coin.
```
actual_percent = 100.0 * 140 / 250
likelihood = suite.Likelihood(data, actual_percent)
print('p(D|B_cheat)', likelihood)
print('p(D|B_cheat) / p(D|F)', likelihood / like_f)
```
Under this interpretation, the data are in favor of "biased", with K=6. But that's a total cheat.
Suppose we think "biased" means either 0.4 or 0.6, but we're not sure which. The total likelihood of the data is the weighted average of the two likelihoods.
```
like40 = suite.Likelihood(data, 40)
like60 = suite.Likelihood(data, 60)
likelihood = 0.5 * like40 + 0.5 * like60
print('p(D|B_two)', likelihood)
print('p(D|B_two) / p(D|F)', likelihood / like_f)
```
Under this interpretation, the data are in favor of "biased", but only weakly.
More generally, if "biased" refers to a range of possibilities with different probabilities, the total likelihood of the data is the weighted sum:
```
def SuiteLikelihood(suite, data):
"""Computes the weighted average of likelihoods for sub-hypotheses.
suite: Suite that maps sub-hypotheses to probability
data: some representation of the data
returns: float likelihood
"""
total = 0
for hypo, prob in suite.Items():
like = suite.Likelihood(data, hypo)
total += prob * like
return total
```
Here's what it looks like if "biased" means "equally likely to be any value between 0 and 1".
```
b_uniform = Euro(range(0, 101))
b_uniform.Remove(50)
b_uniform.Normalize()
likelihood = SuiteLikelihood(b_uniform, data)
print('p(D|B_uniform)', likelihood)
print('p(D|B_uniform) / p(D|F)', likelihood / like_f)
```
By that definition, the data are evidence against the biased hypothesis, with K=2.
But maybe a triangle prior is a better model of what "biased" means.
```
def TrianglePrior():
"""Makes a Suite with a triangular prior."""
suite = Euro()
for x in range(0, 51):
suite.Set(x, x)
for x in range(51, 101):
suite.Set(x, 100-x)
suite.Normalize()
return suite
```
Here's what it looks like:
```
b_tri = TrianglePrior()
b_tri.Remove(50)
b_tri.Normalize()
likelihood = b_tri.Update(data)
print('p(D|B_tri)', likelihood)
print('p(D|B_tri) / p(D|F)', likelihood / like_f)
```
By the triangle definition of "biased", the data are very weakly in favor of "fair".
## Normalizing constant
We don't really need the SuiteLikelihood function, because `Suite.Update` already computes the total probability of the data, which is the normalizing constant.
```
likelihood = SuiteLikelihood(b_uniform, data)
likelihood
euro = Euro(b_uniform)
euro.Update(data)
likelihood = SuiteLikelihood(b_tri, data)
likelihood
euro = Euro(b_tri)
euro.Update(data)
```
This observation is the basis of hierarchical Bayesian models, of which this solution to the Euro problem is a simple example.
# Predicting Time Series Data
> If you want to predict patterns from data over time, there are special considerations to take in how you choose and construct your model. This chapter covers how to gain insights into the data before fitting your model, as well as best-practices in using predictive modeling for time series data. This is the Summary of lecture "Machine Learning for Time Series Data in Python", via datacamp.
- toc: true
- badges: true
- comments: true
- author: Chanseok Kang
- categories: [Python, Datacamp, Time_Series_Analysis, Machine_Learning]
- image: images/price_percentile.png
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['figure.figsize'] = (10, 5)
plt.style.use('fivethirtyeight')
```
## Predicting data over time
- Correlation and regression
- Regression is similar to calculating correlation, with some key differences
- Regression: A process that results in a formal model of the data
- Correlation: A statistic that describes the data. Less information than regression model
- Correlation between variables often changes over time
- Time series often have patterns that change over time
- Two timeseries that seem correlated at one moment may not remain so over time.
- Scoring regression models
- Two most common methods:
- Correlation ($r$)
- Coefficient of Determination ($R^2$)
- The value of $R^2$ is bounded on the top by 1, and can be infinitely low
- Values closer to 1 mean the model does a better job of predicting outputs (a short sketch contrasting $r$ and $R^2$ follows this list) \
$1 - \frac{\text{error}(model)}{\text{variance}(testdata)}$
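A minimal sketch of that distinction, using made-up `y_true`/`y_pred` series (hypothetical names, not the stock data used below): the two series can be highly correlated while the $R^2$ score is still very poor, because $R^2$ also penalizes errors in scale and offset.
```
import numpy as np
import pandas as pd
from sklearn.metrics import r2_score

# Hypothetical data: a "true" series and a correlated but badly scaled prediction
rng = np.random.default_rng(0)
y_true = pd.Series(np.linspace(10, 20, 100) + rng.normal(0, 0.5, 100))
y_pred = y_true * 0.5 + 2 + rng.normal(0, 0.5, 100)

# Correlation: only describes how the two series move together
r = y_true.corr(y_pred)

# Coefficient of determination: 1 - error(model) / variance(test data)
r2_manual = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)

print(f"r = {r:.2f}, R^2 (manual) = {r2_manual:.2f}, R^2 (sklearn) = {r2_score(y_true, y_pred):.2f}")
```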
```
prices = pd.read_csv('./dataset/tsa_prices.csv', index_col='date', parse_dates=True)
prices.head()
# Plot the raw values over time
prices.plot();
# Scatterplot with one company per axis
prices.plot.scatter('EBAY', 'YHOO');
# Scatterplot with color relating to time
prices.plot.scatter('EBAY', 'YHOO', c=prices.index, cmap=plt.cm.viridis, colorbar=False);
```
### Fitting a simple regression model
Now we'll look at a larger number of companies. Recall that we have historical price values for many companies. Let's use data from several companies to predict the value of a test company. You'll attempt to predict the value of the Apple stock price using the values of NVidia, Ebay, and Yahoo. Each of these is stored as a column in the all_prices DataFrame. Below is a mapping from company name to column name:
```
ebay: "EBAY"
nvidia: "NVDA"
yahoo: "YHOO"
apple: "AAPL"
```
We'll use these columns to define the input/output arrays in our model.
```
all_prices = pd.read_csv('./dataset/all_prices.csv', index_col=0, parse_dates=True)
all_prices.head()
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
# Use stock symbols to extract training data
X = all_prices[['EBAY', 'NVDA', 'YHOO']]
y = all_prices[['AAPL']]
# Fit and score the model with cross-validation
scores = cross_val_score(Ridge(), X, y, cv=3)
print(scores)
```
### Visualizing predicted values
When dealing with time series data, it's useful to visualize model predictions on top of the "actual" values that are used to test the model.
In this exercise, after splitting the data (stored in the variables ```X``` and ```y```) into training and test sets, you'll build a model and then visualize the model's predictions on top of the testing data in order to estimate the model's performance.
```
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
# Split our data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8,
shuffle=False, random_state=1)
# Fit our model and generate predictions
model = Ridge()
model.fit(X_train, y_train)
predictions = model.predict(X_test)
score = r2_score(y_test, predictions)
print(score)
# Visualize our predictions along with the "true" values, and print the score
fig, ax = plt.subplots(figsize=(15, 5))
ax.plot(range(len(y_test)), y_test, color='k', lw=3);
ax.plot(range(len(predictions)), predictions, color='r', lw=2);
```
## Advanced time series prediction
- Data is messy
- Real-world data is often messy
- The two most common problems are missing data and outliers
- This often happens because of human error, machine malfunction, database failure, etc.
- Visualizing your raw data makes it easier to spot these problems
- Interpolation: using time to fill in missing data
- A common way to deal with missing data is to interpolate missing values
- With timeseries data, you can use time to assist in interpolation.
- In this case, interpolation means using the known values on either side of a gap in the data to make assumptions about what's missing (a toy sketch follows this list)
- Using a rolling window to transform data
- Another common use of rolling windows is to transform the data
- Finding outliers in your data
- Outliers are datapoints that are significantly statistically different from the dataset.
- They can have negative effects on the predictive power of your model, biasing it away from its "true" value
- One solution is to remove or replace outliers with a more representative value
> Note: Be very careful about doing this - often it is difficult to determine what is a legitimately extreme value vs an aberration.
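A minimal sketch of time-aware interpolation on a toy datetime-indexed Series (made-up values, not the `prices` data used below): `method='time'` weights the filled values by how far apart the surrounding timestamps actually are, unlike `'linear'`, which treats the samples as evenly spaced.
```
import numpy as np
import pandas as pd

# Toy series with an uneven gap: two missing values between Jan 1 and Jan 6
idx = pd.to_datetime(['2020-01-01', '2020-01-02', '2020-01-05', '2020-01-06'])
s = pd.Series([1.0, np.nan, np.nan, 4.0], index=idx)

# 'time' accounts for the irregular spacing of the index; 'linear' does not
print(s.interpolate(method='time'))
print(s.interpolate(method='linear'))
```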
### Visualizing messy data
Let's take a look at a new dataset - this one is a bit less-clean than what you've seen before.
As always, you'll first start by visualizing the raw data. Take a close look and try to find datapoints that could be problematic for fitting models.
```
prices = pd.read_csv('./dataset/prices_null.csv', index_col=0, parse_dates=True)
# Visualize the dataset
prices.plot(legend=False);
plt.tight_layout();
# Count the missing values of each time series
missing_values = prices.isnull().sum()
print(missing_values)
```
### Imputing missing values
When you have missing data points, how can you fill them in?
In this exercise, you'll practice using different interpolation methods to fill in some missing values, visualizing the result each time. But first, you will create the function (```interpolate_and_plot()```) you'll use to interpolate missing data points and plot them.
```
# Create a function we'll use to interpolate and plot
def interpolate_and_plot(prices, interpolation):
# Create a boolean mask for missing values
missing_values = prices.isna()
# Interpolate the missing values
prices_interp = prices.interpolate(interpolation)
# Plot the results, highlighting the interpolated values in black
fig, ax = plt.subplots(figsize=(10, 5))
prices_interp.plot(color='k', alpha=0.6, ax=ax, legend=False);
# Now plot the interpolated values on top in red
prices_interp[missing_values].plot(ax=ax, color='r', lw=3, legend=False);
# Interpolate using the latest non-missing value
interpolation_type = 'zero'
interpolate_and_plot(prices, interpolation_type)
# Interpolate linearly between the non-missing values
interpolation_type = 'linear'
interpolate_and_plot(prices, interpolation_type)
# Interpolate with a quadratic function
interpolation_type = 'quadratic'
interpolate_and_plot(prices, interpolation_type)
```
### Transforming raw data
In the last chapter, you calculated the rolling mean. In this exercise, you will define a function that calculates the percent change of the latest data point from the mean of a window of previous data points. This function will help you calculate the percent change over a rolling window.
This is a more stable kind of time series that is often useful in machine learning.
```
# Your custom function
def percent_change(series):
# Collect all *but* the last value of this window, then the final value
previous_values = series[:-1]
last_value = series[-1]
# Calculate the % difference between the last value and the mean of earlier values
percent_change = (last_value - np.mean(previous_values)) / np.mean(previous_values)
return percent_change
# Apply your custom function and plot
prices_perc = prices.rolling(20).apply(percent_change)
prices_perc.loc["2014":"2015"].plot();
```
### Handling outliers
In this exercise, you'll handle outliers - data points that are so different from the rest of your data, that you treat them differently from other "normal-looking" data points. You'll use the output from the previous exercise (percent change over time) to detect the outliers. First you will write a function that replaces outlier data points with the median value from the entire time series.
```
def replace_outliers(series):
# Calculate the absolute difference of each timepoint from the series mean
absolute_differences_from_mean = np.abs(series - np.mean(series))
# Calculate a mask for the difference that are > 3 standard deviations from zero
this_mask = absolute_differences_from_mean > (np.std(series) * 3)
# Replace these values with the median across the data
series[this_mask] = np.nanmedian(series)
return series
# Apply your preprocessing function to the timeseries and plot the results
prices_perc = prices_perc.apply(replace_outliers)
prices_perc.loc["2014":"2015"].plot();
```
## Creating features over time
- Calculating "date-based" features
- Thus far we've focused on calculating "statistical" features - these are features that correspond to statistical properties of the data, like "mean", "standard deviation", etc.
- However, don't forget that timeseries data often has more "human" features associated with it, like days of the week, holidays, etc. (a toy sketch follows this list)
- These features are often useful when dealing with timeseries data that spans multiple years (such as stock value over time)
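A minimal sketch of such date-based features on a toy `DatetimeIndex` (made-up dates; the exercise below applies the same idea to `prices_perc`). The holiday flag uses pandas' built-in `USFederalHolidayCalendar`, which is just one possible calendar choice.
```
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar

# Toy date range spanning Christmas and New Year's Day
dates = pd.date_range('2014-12-24', '2015-01-02', freq='D')
features = pd.DataFrame(index=dates)
features['day_of_week'] = dates.dayofweek
features['is_weekend'] = dates.dayofweek >= 5
holidays = USFederalHolidayCalendar().holidays(start=dates.min(), end=dates.max())
features['is_holiday'] = dates.isin(holidays)
print(features)
```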
### Engineering multiple rolling features at once
Now that you've practiced some simple feature engineering, let's move on to something more complex. You'll calculate a collection of features for your time series data and visualize what they look like over time. This process resembles how many other time series models operate.
```
# Define a rolling window with Pandas, excluding the right-most datapoint of the window
prices_perc_rolling = prices_perc['EBAY'].rolling(20, min_periods=5, closed='right')
# Define the features you'll calculate for each window
features_to_calculate = [np.min, np.max, np.mean, np.std]
# Calculate these features for your rolling window object
features = prices_perc_rolling.aggregate(features_to_calculate)
# Plot the results
ax = features.loc[:"2011-01"].plot();
prices_perc['EBAY'].loc[:"2011-01"].plot(ax=ax, color='k', alpha=0.2, lw=3);
ax.legend(loc=(1.01, 0.6));
```
### Percentiles and partial functions
In this exercise, you'll practice how to pre-choose arguments of a function so that you can pre-configure how it runs. You'll use this to calculate several percentiles of your data using the same ```percentile()``` function in numpy.
```
from functools import partial
percentiles = [1, 10, 25, 50, 75, 90, 99]
# Use a list comprehension to create a partial function for each quantile
percentile_functions = [partial(np.percentile, q=percentile) for percentile in percentiles]
# Calculate each of these quantiles on the data using a rolling window
prices_perc_rolling = prices_perc['EBAY'].rolling(20, min_periods=5, closed='right')
features_percentiles = prices_perc_rolling.aggregate(percentile_functions)
# Plot a subset of the result
ax = features_percentiles.loc[:"2011-01"].plot(cmap=plt.cm.viridis);
ax.legend(percentiles, loc=(1.01, 0.5));
plt.savefig('../images/price_percentile.png')
```
### Using "date" information
It's easy to think of timestamps as pure numbers, but don't forget they generally correspond to things that happen in the real world. That means there's often extra information encoded in the data such as "is it a weekday?" or "is it a holiday?". This information is often useful in predicting timeseries data.
```
# Extract date features from the data, add them as columns
prices_perc['day_of_week'] = prices_perc.index.dayofweek
prices_perc['week_of_year'] = prices_perc.index.weekofyear
prices_perc['month_of_year'] = prices_perc.index.month
# Print prices_perc
print(prices_perc)
```
# Imports
```
# Import pandas
import pandas as pd
# Import matplotlib
import matplotlib.pyplot as plt
# Import numpy
import numpy as np
# Import Network X
import networkx as nx
```
# Getting the data
## Paths for in.out files
```
# Path of IN-labels
mesh_path = '../../data/final/mesh.pkl'
# Path for IN-tags
geo_path = '../../data/final/geo.pkl'
# Path for IN-tags-restful
rest_path = '../../data/final/geo_restful_chem.pkl'
```
## Read geo_df and mesh_df
```
# Read MeSH
mesh_df = pd.read_pickle(mesh_path)
# Read GEO
geo_df = pd.read_pickle(geo_path)
# Read Restful API
rest_df = pd.read_pickle(rest_path)
# Separate Diseases
geo_C = geo_df[geo_df['category']=='C']
geo_D = geo_df[geo_df['category']=='D']
# Find new tags for drugs
geo_D_rest = pd.merge(
geo_D,
rest_df['mesh_id disease_tag_from_tagger'.split()].drop_duplicates(),
how='inner',
on='mesh_id')
geo_D_rest.drop(columns='mesh_heading', inplace=True)
geo_D_rest = geo_D_rest['geo_id nsamples date mesh_id disease_tag_from_tagger category method'.split()]
geo_D_rest.rename(columns={'disease_tag_from_tagger':'mesh_heading'}, inplace=True)
# Concatenate them into new geo_df
geo_df = pd.concat([geo_C, geo_D_rest])
# Echo
geo_df.head()
```
## Compute category-depth
```
# Construct grand AstraZeneca dataframe
az_df = pd.merge(geo_df, mesh_df, on='mesh_id')
# Drop extra columns from merge
az_df.drop(columns='mesh_heading_y category_y method'.split(), inplace=True)
# Rename merge column
az_df.rename(columns={'mesh_heading_x':'mesh_heading'}, inplace=True)
# Calculate category - Again
az_df['category']=az_df['mesh_treenumbers'].str.split('.').str[0].str[0]
# Report on properly classified MeSH-ids category-wise
Propper_Tags = list(az_df['category_x']==az_df['category']).count(True)
Total_Tags = az_df['category_x'].shape[0]
print('Correctly categorized MeSH ids: {:4.1f}%'.format(100*Propper_Tags/Total_Tags))
# Calculate category depth
az_df['depth']=az_df['mesh_treenumbers'].str.split('.').str.len()
# Drop old-category column
az_df.drop(columns='category_x'.split(), inplace=True)
# Echo
az_df.head()
```
## Filter and Clean geo DataFrame
```
# Construct date filter
mask_date = az_df['date']==az_df['date'] # Take all studies
# Construct category filter
mask_category = ((az_df['category']=='C') | (az_df['category']=='D')) # Drugs and Diseases
# Construct mask to filter high-general categories
mask_depth = True #((az_df['depth']>=2) & (az_df['depth']>=2))
# Construct mask to avoid specific categories
mask_c23 = ~az_df['mesh_treenumbers'].str.startswith('C23', na=False)
mask_avoid_cats = mask_c23
# Apply filters
filtered_geo_df = pd.DataFrame(az_df[mask_date & mask_category & mask_depth & mask_avoid_cats])
# Eliminate filterning columns
filtered_geo_df.drop(columns='date mesh_treenumbers depth'.split(), inplace=True)
# Drop NaNs
filtered_geo_df.dropna(axis=0, inplace=True)
# Drop duplicates
filtered_geo_df.drop_duplicates(inplace=True)
# Only select summaries with +1 tag
tags_by_summary = filtered_geo_df['geo_id mesh_id'.split()].groupby('geo_id').count().reset_index() # Count tags per summary
good_summaries = tags_by_summary[tags_by_summary['mesh_id']>1] # Select abstracts with more than one tag
clean_geo = pd.merge(filtered_geo_df, good_summaries, on='geo_id') # Inner Join
clean_geo = clean_geo.drop(columns='mesh_id_y') # Drop column from inner join
clean_geo = clean_geo.rename(columns={'mesh_id_x':'mesh_id'}) # Rename key column
# Write info
print('Number of Records: ',clean_geo.shape[0])
# Echo
clean_geo.head()
```
# Constructing the Disease-Drug Graph
## Construct Nodes
```
# Select only relevant columns
nodes = pd.DataFrame(clean_geo['mesh_id category mesh_heading'.split()])
# Drop duplicates
nodes.drop_duplicates(inplace=True, keep='first')
# Echo
nodes.head()
```
## Construct Edges
```
# Construct all-with-all links inside same geoid-nsample-date record
links = pd.merge(clean_geo, clean_geo, on='geo_id nsamples'.split())
# Rename to Source-Target
links.rename(columns={'mesh_id_x':'source', 'mesh_id_y':'target'}, inplace=True)
# Delete self-linkage
links.drop(links[links['source']==links['target']].index, inplace=True)
# Collapse repetitions while calculating weights
edges = links.groupby('source target'.split()).sum().reset_index()
# Rename sum(nsamples) to 'weight'
edges.rename(columns={'nsamples':'weight'}, inplace=True)
# Account for mirror-duplicates
edges['weight']/=2
# Normalize weights
edges['weight']/=edges['weight'].max()
# Head
edges.head()
```
## Construct Graph
```
# Construct Directed Graph
dd = nx.from_pandas_edgelist(edges,
source='source',
target='target',
edge_attr='weight',
create_using=nx.DiGraph()
)
# Transform to undirected graph
dd = nx.to_undirected(dd)
# Add nodes attributes - Category
nx.set_node_attributes(dd, nodes['mesh_id category'.split()].set_index('mesh_id').to_dict()['category'], 'category')
# Add nodes attributes - Mesh Heading
nx.set_node_attributes(dd, nodes['mesh_id mesh_heading'.split()].set_index('mesh_id').to_dict()['mesh_heading'], 'mesh_heading')
# Save as pickle
nx.write_gpickle(dd,'Gephi_DD.pkl')
# Save to gephi
nx.write_gexf(dd,'Gephi_DD.gexf')
# Echo info
print('    Order (Nodes):  ', dd.order())
print('     Size (Edges):  ', dd.size())
print(' Graph Density: ', nx.density(dd))
```
## Define some useful functions over the tree
```
def get_categories(graph):
"""
Get a dictionary with the categories of all the nodes
"""
return nx.get_node_attributes(graph, 'category')
def get_mesh_headings(graph):
"""
Get a dictionary with the mesh-headings of all the nodes
"""
return nx.get_node_attributes(graph, 'mesh_heading')
def get_neighbors(graph, node, cats):
"""
Get the neighbors of the node such that they have the same/opposite category
"""
# Define empty lists
same = list()
oppo = list()
# Select only those with same category
for neigh in nx.neighbors(dd, node):
# Check for same neighbors
if cats[neigh]==cats[node]:
same.append(neigh)
else:
oppo.append(neigh)
# Return the tuples same and oppo
return same, oppo
def get_top(dictionary_metric, top):
"""
Find the top-n nodes according to some metric
"""
# Get the items in the metric dictionary
items = list(dictionary_metric.items())
# Sort them out
items.sort(reverse=True, key=lambda x: x[1])
# Return the keys
return list(map(lambda x:x[0], items[:top]))
def get_only(graph, cats, specific_category):
"""
Select the nodes of the graph where category==category and returns a subgraph
"""
# Define empty list
only_nodes = list()
# Cycle through the nodes
for node in graph.nodes():
if cats[node]==specific_category:
only_nodes.append(node)
# Return the subgraph
return nx.subgraph(graph, only_nodes)
```
# Recommend drugs for top diseases ['C']
## Select diseases
```
# Read full graph
ee = nx.read_gpickle('Gephi_DD.pkl')
# Read categories and labels
cats = get_categories(graph=ee)
labs = get_mesh_headings(graph=ee)
# Choose only disease-nodes
diseases = get_only(graph=ee, cats=cats, specific_category='C')
```
## Runs stats on diseases
```
# Disease eigenvector centrality
diseases_eig = nx.eigenvector_centrality(diseases, max_iter=500, weight='weight')
# Disease PageRank
diseases_pgn = nx.pagerank(diseases, alpha=0.9, weight='weight')
# Disease Degree
diseases_deg = nx.degree_centrality(diseases)
```
## Choose n-top disease nodes
```
# Find top-diseases
top = 100
top_eig = get_top(dictionary_metric=diseases_eig, top=top)
top_pgn = get_top(dictionary_metric=diseases_pgn, top=top)
top_deg = get_top(dictionary_metric=diseases_deg, top=top)
top_diseases = top_eig
```
## Measure recommendation strength (rs)
```
# Define containers of important recommendations
rs = list()
# Choose a node
for disease in top_diseases:
# Get neighbors diseases and neighboring drugs
nei_dis, nei_dru = get_neighbors(graph=dd, node=disease, cats=cats)
# Get max possible weight
ww_max = sum([dd.get_edge_data(disease, nei, 'weight')['weight'] for nei in nei_dis])
# For every neighboring disease
for n_disease in nei_dis:
# Find all the neighboring drugs
_ , nei_nei_dru = get_neighbors(graph=dd, node=n_disease, cats=cats)
# Choose drugs not in nei_dru
not_in_nei_dru = list(set(nei_nei_dru) - set(nei_dru))
# Add them to rs with weight
c1 = [disease]*len(not_in_nei_dru)
c2 = not_in_nei_dru
ww = dd.get_edge_data(disease, n_disease, 'weight')['weight']
c3 = [ww/ww_max]*len(not_in_nei_dru)
rs.extend(zip(c1, c2, c3))
# Get into a DF
rs = pd.DataFrame(data=rs, columns='Disease Drug Recommendation_Strenght'.split())
# Group by disease-drug pairs and add the weights
rs = pd.DataFrame(rs.groupby('Disease Drug'.split()).sum().reset_index())
# Clean duplicates
rs = rs.drop_duplicates().reset_index(drop=True)
# Add names to mesh_ids
rs['Disease_Name'] = [labs[node] for node in rs.Disease]
rs['Drug_Name'] = [labs[node] for node in rs.Drug]
# Rearrange
rs = rs['Disease Disease_Name Drug Drug_Name Recommendation_Strenght'.split()]
# Sort by recommendation strength
rs.sort_values(by='Recommendation_Strenght Disease Drug'.split(), inplace=True, ascending=False)
# Reset index
rs.reset_index(inplace=True, drop=True)
# Echo
print('Size of rs: ', rs.shape)
rs.head(25)
```
## Visualization of rs
```
# Choose input
cardinality = 1
# Get nodes
dis_node = rs['Disease'].iloc[cardinality]
dru_node = rs['Drug'].iloc[cardinality]
dis_neighs, _ = get_neighbors(graph=ee, node=dis_node, cats=cats)
# Gather nodes
my_nodes = [dis_node, dru_node]
my_nodes.extend(dis_neighs)
# Gather categories
my_cats={node:cats[node] for node in my_nodes}
# Gather labels
my_labs={node:labs[node] for node in my_nodes}
# Gather positions
eps = 3
angle = np.linspace(0, 2*np.pi, len(my_nodes)-2)
radius = np.ones(len(my_nodes)-2)
x_pos, y_pos = radius*np.cos(angle), radius*np.sin(angle)
my_poss=dict()
my_poss[dis_node]=(0, +eps)
my_poss[dru_node]=(0, -eps)
for i in range(len(my_nodes)-2):
my_poss[dis_neighs[i]]=(x_pos[i], y_pos[i])
# Construct subgraph
ee_sub = ee.subgraph(my_nodes)
# Modify original node
ee_sub.nodes[dis_node]['category']='X'
# Export subgraph to gephi
nx.write_gexf(ee_sub, 'drug_recommendation_{:07d}.gexf'.format(cardinality))
# Plot
fig = plt.figure()
axes = fig.add_axes([0.1,0.1,0.8,0.8])
nx.draw_networkx_labels(ee_sub, pos=my_poss, labels=my_labs, font_size=10)
nx.draw_networkx(ee_sub, pos=my_poss, node_size=200, node_shape='^', with_labels=False)
titulo='Drug recommendation (rank=#{:}, rs={:3.3f})'.format(
cardinality,
rs['Recommendation_Strenght'].iloc[cardinality])
axes.set_title(titulo)
axes.set_xlim(-1.5,1.5)
axes.set_ylim(-3.5,3.5)
plt.axis('off')
plt.savefig('drug_recommendation_{:07d}.png'.format(cardinality), dpi=500)
plt.show()
```
# Recommend diseases for top drugs ['D']
## Select drugs
```
# Read full graph
ee = nx.read_gpickle('Gephi_DD.pkl')
# Read categories and labels
cats = get_categories(graph=ee)
labs = get_mesh_headings(graph=ee)
# Choose only drug-nodes
drugs = get_only(graph=ee, cats=cats, specific_category='D')
```
## Runs stats on drugs
```
# Drugs eigenvector centrality
drugs_eig = nx.eigenvector_centrality(drugs, max_iter=500, weight='weight')
# Drugs PageRank
drugs_pgn = nx.pagerank(drugs, alpha=0.9, weight='weight')
# Drugs Degree
drugs_deg = nx.degree_centrality(drugs)
```
## Select n-top drugs
```
# Find top-drugs
top = 100
top_eig = get_top(dictionary_metric=drugs_eig, top=top)
top_pgn = get_top(dictionary_metric=drugs_pgn, top=top)
top_deg = get_top(dictionary_metric=drugs_deg, top=top)
top_drugs = top_eig
```
## Compute recommendation strength (rs)
```
# Define containers of important recommendations
rs = list()
# Choose a node
for drug in top_drugs:
# Get neighbors diseases and neighboring drugs
nei_dru, nei_dis = get_neighbors(graph=dd, node=drug, cats=cats)
# Get max possible weight
ww_max = sum([dd.get_edge_data(drug, nei, 'weight')['weight'] for nei in nei_dru])
# For every neighboring drug
for n_drug in nei_dru:
# Find all the neighboring diseases
_, nei_nei_dis = get_neighbors(graph=dd, node=n_drug, cats=cats)
# Choose diseases not in nei_dis
not_in_nei_dis = list(set(nei_nei_dis) - set(nei_dis))
# Add them to rs with weight
c1 = [drug]*len(not_in_nei_dis)
c2 = not_in_nei_dis
ww = dd.get_edge_data(drug, n_drug, 'weight')['weight']
c3 = [ww/ww_max]*len(not_in_nei_dis)
rs.extend(zip(c1, c2, c3))
# Get into a DF
rs = pd.DataFrame(data=rs, columns='Drug Disease Recommendation_Strenght'.split())
# Group by disease-drug pairs and add the weights
rs = pd.DataFrame(rs.groupby('Drug Disease'.split()).sum().reset_index())
# Clean duplicates
rs = rs.drop_duplicates().reset_index(drop=True)
# Add names to mesh_ids
rs['Drug_Name'] = [labs[node] for node in rs.Drug]
rs['Disease_Name'] = [labs[node] for node in rs.Disease]
# Rearrange
rs = rs['Drug Drug_Name Disease Disease_Name Recommendation_Strenght'.split()]
# Sort by recommendation strength
rs.sort_values(by='Recommendation_Strenght Drug Disease'.split(), inplace=True, ascending=False)
# Reset index
rs.reset_index(inplace=True, drop=True)
# Echo
print('Size of rs: ', rs.shape)
rs.head(25)
```
## Visualization of rs
```
# Choose input
cardinality = 250
# Get nodes
dru_node = rs['Drug'].iloc[cardinality]
dis_node = rs['Disease'].iloc[cardinality]
dru_neighs, _ = get_neighbors(graph=ee, node=dru_node, cats=cats)
# Gather nodes
my_nodes = [dru_node, dis_node]
my_nodes.extend(dru_neighs)
# Gather categories
my_cats={node:cats[node] for node in my_nodes}
# Gather labels
my_labs={node:labs[node] for node in my_nodes}
# Gather positions
eps = 3
angle = np.linspace(0, 2*np.pi, len(my_nodes)-2)
radius = np.ones(len(my_nodes)-2)
x_pos, y_pos = radius*np.cos(angle), radius*np.sin(angle)
my_poss=dict()
my_poss[dru_node]=(0, +eps)
my_poss[dis_node]=(0, -eps)
for i in range(len(my_nodes)-2):
my_poss[dru_neighs[i]]=(x_pos[i], y_pos[i])
# Construct subgraph
ee_sub = ee.subgraph(my_nodes)
# Modify original node
ee_sub.nodes[dru_node]['category']='X'
# Export subgraph to gephi
nx.write_gexf(ee_sub, 'second_use_recommendation_{:07d}.gexf'.format(cardinality))
# Plot
fig = plt.figure()
axes = fig.add_axes([0.1,0.1,0.8,0.8])
nx.draw_networkx_labels(ee_sub, pos=my_poss, labels=my_labs, font_size=10)
nx.draw_networkx(ee_sub, pos=my_poss, node_size=200, node_shape='^', with_labels=False)
titulo='Drug recommendation (rank=#{:}, rs={:3.3f})'.format(
cardinality,
rs['Recommendation_Strenght'].iloc[cardinality])
axes.set_title(titulo)
axes.set_xlim(-1.5,1.5)
axes.set_ylim(-3.5,3.5)
plt.axis('off')
plt.savefig('second_use_recommendation_{:07d}.png'.format(cardinality))
plt.show()
```
# End
# Fine-Tuning a BERT Model and Create a Text Classifier
We have already performed the Feature Engineering to create BERT embeddings from the `reviews_body` text using the pre-trained BERT model, and split the dataset into train, validation and test files. To optimize for Tensorflow training, we saved the files in TFRecord format.
Now, let’s fine-tune the BERT model to our Customer Reviews Dataset and add a new classification layer to predict the `star_rating` for a given `review_body`.

As mentioned earlier, BERT’s attention mechanism is called a Transformer. This is, not coincidentally, the name of the popular BERT Python library, “Transformers,” maintained by a company called [HuggingFace](https://github.com/huggingface/transformers). We will use a variant of BERT called [DistilBert](https://arxiv.org/pdf/1910.01108.pdf) which requires less memory and compute, but maintains very good accuracy on our dataset.
# DEMO 1:
# Develop Model Training Code In Noteboook
```
!pip install -q tensorflow==2.1.0
!pip install -q transformers==2.8.0
!pip install -q scikit-learn==0.23.1
train_data = "./input/data/train"
validation_data = "./input/data/validation"
test_data = "./input/data/test"
local_model_dir = "./model/"
num_gpus = 0
input_data_config = "File"
epochs = 1
learning_rate = 0.00001
epsilon = 0.00000001
train_batch_size = 8
validation_batch_size = 8
test_batch_size = 8
train_steps_per_epoch = 1
validation_steps = 1
test_steps = 1
use_xla = True
use_amp = False
max_seq_length = 64
freeze_bert_layer = True
run_validation = True
run_test = True
run_sample_predictions = True
import time
import random
import pandas as pd
import easydict
from glob import glob
import pprint
import argparse
import json
import subprocess
import sys
import os
import tensorflow as tf
from transformers import DistilBertTokenizer
from transformers import TFDistilBertForSequenceClassification
from transformers import TextClassificationPipeline
from transformers.configuration_distilbert import DistilBertConfig
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.models import load_model
CLASSES = [1, 2, 3, 4, 5]
def select_data_and_label_from_record(record):
x = {"input_ids": record["input_ids"], "input_mask": record["input_mask"], "segment_ids": record["segment_ids"]}
y = record["label_ids"]
return (x, y)
def file_based_input_dataset_builder(
channel,
input_filenames,
pipe_mode,
is_training,
drop_remainder,
batch_size,
epochs,
steps_per_epoch,
max_seq_length,
):
# For training, we want a lot of parallel reading and shuffling.
# For eval, we want no shuffling and parallel reading doesn't matter.
if pipe_mode:
print("***** Using pipe_mode with channel {}".format(channel))
from sagemaker_tensorflow import PipeModeDataset
dataset = PipeModeDataset(channel=channel, record_format="TFRecord")
else:
print("***** Using input_filenames {}".format(input_filenames))
dataset = tf.data.TFRecordDataset(input_filenames)
dataset = dataset.repeat(epochs * steps_per_epoch * 100)
name_to_features = {
"input_ids": tf.io.FixedLenFeature([max_seq_length], tf.int64),
"input_mask": tf.io.FixedLenFeature([max_seq_length], tf.int64),
"segment_ids": tf.io.FixedLenFeature([max_seq_length], tf.int64),
"label_ids": tf.io.FixedLenFeature([], tf.int64),
}
def _decode_record(record, name_to_features):
"""Decodes a record to a TensorFlow example."""
record = tf.io.parse_single_example(record, name_to_features)
return record
dataset = dataset.apply(
tf.data.experimental.map_and_batch(
lambda record: _decode_record(record, name_to_features),
batch_size=batch_size,
drop_remainder=drop_remainder,
num_parallel_calls=tf.data.experimental.AUTOTUNE,
)
)
dataset = dataset.shuffle(buffer_size=1000, reshuffle_each_iteration=True)
row_count = 0
print("**************** {} *****************".format(channel))
for row in dataset.as_numpy_iterator():
if row_count == 1:
break
print(row)
row_count = row_count + 1
return dataset
if __name__ == "__main__":
args = easydict.EasyDict(
{
"train_data": train_data,
"validation_data": validation_data,
"test_data": test_data,
"local_model_dir": local_model_dir,
"num_gpus": num_gpus,
"use_xla": use_xla,
"use_amp": use_amp,
"max_seq_length": max_seq_length,
"train_batch_size": train_batch_size,
"validation_batch_size": validation_batch_size,
"test_batch_size": test_batch_size,
"epochs": epochs,
"learning_rate": learning_rate,
"epsilon": epsilon,
"train_steps_per_epoch": train_steps_per_epoch,
"validation_steps": validation_steps,
"test_steps": test_steps,
"freeze_bert_layer": freeze_bert_layer,
"run_validation": run_validation,
"run_test": run_test,
"run_sample_predictions": run_sample_predictions,
"input_data_config": input_data_config,
}
)
env_var = os.environ
print("Environment Variables:")
pprint.pprint(dict(env_var), width=1)
train_data = args.train_data
print("train_data {}".format(train_data))
validation_data = args.validation_data
print("validation_data {}".format(validation_data))
test_data = args.test_data
print("test_data {}".format(test_data))
local_model_dir = args.local_model_dir
print("local_model_dir {}".format(local_model_dir))
num_gpus = args.num_gpus
print("num_gpus {}".format(num_gpus))
use_xla = args.use_xla
print("use_xla {}".format(use_xla))
use_amp = args.use_amp
print("use_amp {}".format(use_amp))
max_seq_length = args.max_seq_length
print("max_seq_length {}".format(max_seq_length))
train_batch_size = args.train_batch_size
print("train_batch_size {}".format(train_batch_size))
validation_batch_size = args.validation_batch_size
print("validation_batch_size {}".format(validation_batch_size))
test_batch_size = args.test_batch_size
print("test_batch_size {}".format(test_batch_size))
epochs = args.epochs
print("epochs {}".format(epochs))
learning_rate = args.learning_rate
print("learning_rate {}".format(learning_rate))
epsilon = args.epsilon
print("epsilon {}".format(epsilon))
train_steps_per_epoch = args.train_steps_per_epoch
print("train_steps_per_epoch {}".format(train_steps_per_epoch))
validation_steps = args.validation_steps
print("validation_steps {}".format(validation_steps))
test_steps = args.test_steps
print("test_steps {}".format(test_steps))
freeze_bert_layer = args.freeze_bert_layer
print("freeze_bert_layer {}".format(freeze_bert_layer))
run_validation = args.run_validation
print("run_validation {}".format(run_validation))
run_test = args.run_test
print("run_test {}".format(run_test))
run_sample_predictions = args.run_sample_predictions
print("run_sample_predictions {}".format(run_sample_predictions))
input_data_config = args.input_data_config
print("input_data_config {}".format(input_data_config))
# Determine if PipeMode is enabled
pipe_mode = input_data_config.find("Pipe") >= 0
print("Using pipe_mode: {}".format(pipe_mode))
# Model Output
transformer_fine_tuned_model_path = os.path.join(local_model_dir, "transformers/fine-tuned/")
os.makedirs(transformer_fine_tuned_model_path, exist_ok=True)
# SavedModel Output
tensorflow_saved_model_path = os.path.join(local_model_dir, "tensorflow/saved_model/0")
os.makedirs(tensorflow_saved_model_path, exist_ok=True)
distributed_strategy = tf.distribute.MirroredStrategy()
with distributed_strategy.scope():
tf.config.optimizer.set_jit(use_xla)
tf.config.optimizer.set_experimental_options({"auto_mixed_precision": use_amp})
train_data_filenames = glob(os.path.join(train_data, "*.tfrecord"))
print("train_data_filenames {}".format(train_data_filenames))
train_dataset = file_based_input_dataset_builder(
channel="train",
input_filenames=train_data_filenames,
pipe_mode=pipe_mode,
is_training=True,
drop_remainder=False,
batch_size=train_batch_size,
epochs=epochs,
steps_per_epoch=train_steps_per_epoch,
max_seq_length=max_seq_length,
).map(select_data_and_label_from_record)
tokenizer = None
config = None
model = None
successful_download = False
retries = 0
while retries < 5 and not successful_download:
try:
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
config = DistilBertConfig.from_pretrained("distilbert-base-uncased", num_labels=len(CLASSES))
model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", config=config)
successful_download = True
print("Sucessfully downloaded after {} retries.".format(retries))
except:
retries = retries + 1
random_sleep = random.randint(1, 30)
print("Retry #{}. Sleeping for {} seconds".format(retries, random_sleep))
time.sleep(random_sleep)
callbacks = []
initial_epoch_number = 0
if not tokenizer or not model or not config:
print("Not properly initialized...")
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate, epsilon=epsilon)
print("** use_amp {}".format(use_amp))
if use_amp:
# loss scaling is currently required when using mixed precision
optimizer = tf.keras.mixed_precision.experimental.LossScaleOptimizer(optimizer, "dynamic")
print("*** OPTIMIZER {} ***".format(optimizer))
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy("accuracy")
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
print("Compiled model {}".format(model))
model.layers[0].trainable = not freeze_bert_layer
print(model.summary())
if run_validation:
validation_data_filenames = glob(os.path.join(validation_data, "*.tfrecord"))
print("validation_data_filenames {}".format(validation_data_filenames))
validation_dataset = file_based_input_dataset_builder(
channel="validation",
input_filenames=validation_data_filenames,
pipe_mode=pipe_mode,
is_training=False,
drop_remainder=False,
batch_size=validation_batch_size,
epochs=epochs,
steps_per_epoch=validation_steps,
max_seq_length=max_seq_length,
).map(select_data_and_label_from_record)
print("Starting Training and Validation...")
validation_dataset = validation_dataset.take(validation_steps)
train_and_validation_history = model.fit(
train_dataset,
shuffle=True,
epochs=epochs,
initial_epoch=initial_epoch_number,
steps_per_epoch=train_steps_per_epoch,
validation_data=validation_dataset,
validation_steps=validation_steps,
callbacks=callbacks,
)
print(train_and_validation_history)
else: # Not running validation
print("Starting Training (Without Validation)...")
train_history = model.fit(
train_dataset,
shuffle=True,
epochs=epochs,
initial_epoch=initial_epoch_number,
steps_per_epoch=train_steps_per_epoch,
callbacks=callbacks,
)
print(train_history)
if run_test:
test_data_filenames = glob(os.path.join(test_data, "*.tfrecord"))
print("test_data_filenames {}".format(test_data_filenames))
test_dataset = file_based_input_dataset_builder(
channel="test",
input_filenames=test_data_filenames,
pipe_mode=pipe_mode,
is_training=False,
drop_remainder=False,
batch_size=test_batch_size,
epochs=epochs,
steps_per_epoch=test_steps,
max_seq_length=max_seq_length,
).map(select_data_and_label_from_record)
print("Starting test...")
test_history = model.evaluate(test_dataset, steps=test_steps, callbacks=callbacks)
print("Test history {}".format(test_history))
# Save the Fine-Tuned Transformers Model as a New "Pre-Trained" Model
print("transformer_fine_tuned_model_path {}".format(transformer_fine_tuned_model_path))
model.save_pretrained(transformer_fine_tuned_model_path)
# Save the TensorFlow SavedModel for Serving Predictions
print("tensorflow_saved_model_path {}".format(tensorflow_saved_model_path))
model.save(tensorflow_saved_model_path, save_format="tf")
if run_sample_predictions:
loaded_model = TFDistilBertForSequenceClassification.from_pretrained(
transformer_fine_tuned_model_path,
id2label={0: 1, 1: 2, 2: 3, 3: 4, 4: 5},
label2id={1: 0, 2: 1, 3: 2, 4: 3, 5: 4},
)
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
if num_gpus >= 1:
inference_device = 0 # GPU 0
else:
inference_device = -1 # CPU
print("inference_device {}".format(inference_device))
inference_pipeline = TextClassificationPipeline(
model=loaded_model, tokenizer=tokenizer, framework="tf", device=inference_device
)
print(
"""I loved it! I will recommend this to everyone.""",
inference_pipeline("""I loved it! I will recommend this to everyone."""),
)
print("""It's OK.""", inference_pipeline("""It's OK."""))
print(
"""Really bad. I hope they don't make this anymore.""",
inference_pipeline("""Really bad. I hope they don't make this anymore."""),
)
```
##### Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# JAX Quickstart
Dougal Maclaurin, Peter Hawkins, Matthew Johnson, Roy Frostig, Alex Wiltschko, Chris Leary

#### [JAX](https://github.com/google/jax) is NumPy on the CPU, GPU, and TPU, with great automatic differentiation for high-performance machine learning research.
With its updated version of [Autograd](https://github.com/hips/autograd), JAX
can automatically differentiate native Python and NumPy code. It can
differentiate through a large subset of Python’s features, including loops, ifs,
recursion, and closures, and it can even take derivatives of derivatives of
derivatives. It supports reverse-mode as well as forward-mode differentiation, and the two can be composed arbitrarily
to any order.
What’s new is that JAX uses
[XLA](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/overview.md)
to compile and run your NumPy code on accelerators, like GPUs and TPUs.
Compilation happens under the hood by default, with library calls getting
just-in-time compiled and executed. But JAX even lets you just-in-time compile
your own Python functions into XLA-optimized kernels using a one-function API.
Compilation and automatic differentiation can be composed arbitrarily, so you
can express sophisticated algorithms and get maximal performance without having
to leave Python.
```
!pip install --upgrade -q https://storage.googleapis.com/jax-releases/cuda$(echo $CUDA_VERSION | sed -e 's/\.//' -e 's/\..*//')/jaxlib-0.1.23-cp36-none-linux_x86_64.whl
!pip install --upgrade -q jax
from __future__ import print_function, division
import jax.numpy as np
from jax import grad, jit, vmap
from jax import random
```
### Multiplying Matrices
We'll be generating random data in the following examples. One big difference between NumPy and JAX is how you generate random numbers. For more details, see the readme.
```
key = random.PRNGKey(0)
x = random.normal(key, (10,))
print(x)
```
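As a small aside (my addition, not part of the original notebook): JAX random functions consume an explicit key, so to draw fresh values you typically split the key rather than reuse it. A minimal sketch:
```
# Split the key to obtain an independent subkey for the next draw.
key, subkey = random.split(key)
y = random.normal(subkey, (10,))
print(y)  # different values from x above, because a fresh subkey was used
```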
Let's dive right in and multiply two big matrices.
```
size = 3000
x = random.normal(key, (size, size), dtype=np.float32)
%timeit np.dot(x, x.T).block_until_ready() # runs on the GPU
```
JAX NumPy functions work on regular NumPy arrays.
```
import numpy as onp # original CPU-backed NumPy
x = onp.random.normal(size=(size, size)).astype(onp.float32)
%timeit np.dot(x, x.T).block_until_ready()
```
That's slower because it has to transfer data to the GPU every time. You can ensure that an NDArray is backed by device memory using `device_put`.
```
from jax import device_put
x = onp.random.normal(size=(size, size)).astype(onp.float32)
x = device_put(x)
%timeit np.dot(x, x.T).block_until_ready()
```
The output of `device_put` still acts like an NDArray.
If you have a GPU (or TPU!) these calls run on the accelerator and have the potential to be much faster than on CPU.
```
x = onp.random.normal(size=(size, size)).astype(onp.float32)
%timeit onp.dot(x, x.T)
```
JAX is much more than just a GPU-backed NumPy. It also comes with a few program transformations that are useful when writing numerical code. For now, there are three main ones:
- `jit`, for speeding up your code
- `grad`, for taking derivatives
- `vmap`, for automatic vectorization or batching.
Let's go over these, one-by-one. We'll also end up composing these in interesting ways.
### Using `jit` to speed up functions
JAX runs transparently on the GPU (or CPU, if you don't have one, and TPU coming soon!). However, in the above example, JAX is dispatching kernels to the GPU one operation at a time. If we have a sequence of operations, we can use the `@jit` decorator to compile multiple operations together using [XLA](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/overview.md). Let's try that.
```
def selu(x, alpha=1.67, lmbda=1.05):
return lmbda * np.where(x > 0, x, alpha * np.exp(x) - alpha)
x = random.normal(key, (1000000,))
%timeit selu(x).block_until_ready()
```
We can speed it up with `@jit`, which will jit-compile the first time `selu` is called and will be cached thereafter.
```
selu_jit = jit(selu)
%timeit selu_jit(x).block_until_ready()
```
### Taking derivatives with `grad`
In addition to evaluating numerical functions, we also want to transform them. One transformation is [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation). In JAX, just like in [Autograd](https://github.com/HIPS/autograd), you can compute gradients with the `grad` function.
```
def sum_logistic(x):
return np.sum(1.0 / (1.0 + np.exp(-x)))
x_small = np.arange(3.)
derivative_fn = grad(sum_logistic)
print(derivative_fn(x_small))
```
Let's verify with finite differences that our result is correct.
```
def first_finite_differences(f, x):
eps = 1e-3
return np.array([(f(x + eps * v) - f(x - eps * v)) / (2 * eps)
for v in onp.eye(len(x))])
print(first_finite_differences(sum_logistic, x_small))
```
Taking derivatives is as easy as calling `grad`. `grad` and `jit` compose and can be mixed arbitrarily. In the above example we jitted `sum_logistic` and then took its derivative. We can go further:
```
print(grad(jit(grad(jit(grad(sum_logistic)))))(1.0))
```
For more advanced autodiff, you can use `jax.vjp` for reverse-mode vector-Jacobian products and `jax.jvp` for forward-mode Jacobian-vector products. The two can be composed arbitrarily with one another, and with other JAX transformations. Here's one way to compose them to make a function that efficiently computes full Hessian matrices:
```
from jax import jacfwd, jacrev
def hessian(fun):
return jit(jacfwd(jacrev(fun)))
```
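As a usage sketch (my addition, reusing `sum_logistic` and `x_small` defined earlier), the composed transformation can be called like any other function:
```
# The Hessian of sum_logistic at x_small; since the function is a sum of
# element-wise sigmoids, the result is a diagonal 3x3 matrix.
print(hessian(sum_logistic)(x_small))
```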
### Auto-vectorization with `vmap`
JAX has one more transformation in its API that you might find useful: `vmap`, the vectorizing map. It has the familiar semantics of mapping a function along array axes, but instead of keeping the loop on the outside, it pushes the loop down into a function’s primitive operations for better performance. When composed with `jit`, it can be just as fast as adding the batch dimensions by hand.
We're going to work with a simple example, and promote matrix-vector products into matrix-matrix products using `vmap`. Although this is easy to do by hand in this specific case, the same technique can apply to more complicated functions.
```
mat = random.normal(key, (150, 100))
batched_x = random.normal(key, (10, 100))
def apply_matrix(v):
return np.dot(mat, v)
```
Given a function such as `apply_matrix`, we can loop over a batch dimension in Python, but usually the performance of doing so is poor.
```
def naively_batched_apply_matrix(v_batched):
return np.stack([apply_matrix(v) for v in v_batched])
print('Naively batched')
%timeit naively_batched_apply_matrix(batched_x).block_until_ready()
```
We know how to batch this operation manually. In this case, `np.dot` handles extra batch dimensions transparently.
```
@jit
def batched_apply_matrix(v_batched):
return np.dot(v_batched, mat.T)
print('Manually batched')
%timeit batched_apply_matrix(batched_x).block_until_ready()
```
However, suppose we had a more complicated function without batching support. We can use `vmap` to add batching support automatically.
```
@jit
def vmap_batched_apply_matrix(v_batched):
return vmap(apply_matrix)(v_batched)
print('Auto-vectorized with vmap')
%timeit vmap_batched_apply_matrix(batched_x).block_until_ready()
```
Of course, `vmap` can be arbitrarily composed with `jit`, `grad`, and any other JAX transformation.
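For instance - a sketch I'm adding here, not from the original text - per-example gradients over a batch can be written by stacking the transformations:
```
# grad differentiates the scalar-valued sum_logistic, vmap maps it over the
# leading batch axis, and jit compiles the composition.
per_example_grads = jit(vmap(grad(sum_logistic)))
batch = random.normal(key, (8, 3))
print(per_example_grads(batch).shape)  # (8, 3): one gradient per example
```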
This is just a taste of what JAX can do. We're really excited to see what you do with it!
# Tensor functions with PyTorch
### Torch up your tensor game
The following 5 functions might empower you to navigate through your Deep Learning endeavours with PyTorch:
- torch.diag()
- torch.inverse()
- torch.randn()
- torch.zeros_like()
- torch.arange()
```
# Import torch and other required modules
import torch
```
## Function 1 - torch.diag()
Given some tensor, the above method returns its diagonal(s).
The arguments to the function are as follows:
### torch.diag(input, diagonal=0, output=None)
diagonal and output are optional parameters.
1. If diagonal = 0, it is the main diagonal. (Principal diagonal)
2. If diagonal > 0, it is above the main diagonal.
3. If diagonal < 0, it is below the main diagonal.
```
# Example 1 - working
scalar = torch.diag(torch.tensor([99]))
print('diagonal of scalar: ', scalar, '\n') # returns scalar as is
vector = torch.diag(torch.tensor([1,2,3,4])) # 1D tensor(Vector)
print('diagonal of vector: \n',vector) # return square matrix with upper and lower
# triangular sections imputed with 0's
```
The above returns diagonals for scalars (the value itself) and vectors (in the form of a square matrix)
```
# Example 2 - working
matrix = torch.diag(torch.tensor([[1,2,3],[4,5,6], [7,8,9]])) # 3x3 matrix
print('diagonal of matrix: ', matrix, '\n') # returns the diagonal as is
mat1 = torch.diag(torch.tensor([[1,2,3],[4,5,6], [7,8,9]]), diagonal=1)
print("mat1: ", mat1) # returns diagonal above the principal diagonal of the matrix
```
The first example simply returns the principal diagonal (PD) of the matrix.
The second example returns the diagonal above the PD of the matrix, found in the upper triangular region.
```
# Example 3 - breaking (to illustrate when it breaks)
tensor = torch.diag(torch.randn(2,3,32,32))
print("tensor: ", tensor) # tries to return diagonal of dim > 2 tensor but it isn't possible
```
Higher-order tensors, i.e. tensors with more than 2 dimensions, aren't processed to return a diagonal, as there can be numerous combinations of choosing one.
Matrix decompositions are invaluable in DL, and this method can be used when computing with diagonals. An example of this use case is SVD (Singular Value Decomposition), as sketched below.
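As an illustrative sketch (my addition), `torch.diag()` can turn the 1D tensor of singular values returned by an SVD back into a diagonal matrix when reconstructing the original matrix:
```
# Reconstruct a matrix from its SVD; torch.diag(S) builds the diagonal matrix
# of singular values.
A = torch.randn(4, 4)
U, S, V = torch.svd(A)                      # S is a 1D tensor of singular values
A_reconstructed = U @ torch.diag(S) @ V.t()
print(torch.allclose(A, A_reconstructed, atol=1e-4))  # True
```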
## Function 2 - torch.inverse()
torch.inverse(input, output=None)
Takes the inverse of the square matrix input. Passing a batch of 2D tensors to this function returns a tensor composed of the individual inverses.
```
# Example 1 - working
tensor = torch.randn(3,3)
torch.inverse(tensor)
```
Input should be in the form of a square matrix.
```
# Example 2 - working
tensor = torch.randn(3,3,3)
torch.inverse(tensor)
```
The output tensor is a composition of the individual inverses of the square matrices within the input tensor.
```
# Example 3 - breaking (to illustrate when it breaks)
tensor = torch.randn(2,1,3)
torch.inverse(tensor)
```
Since the input tensor does not contain batches of square matrices, an error is raised (in this case, the matrices cannot be inverted).
$\theta = (X^{T}X)^{-1}(X^{T}y)$ is the normal equation for finding the parameter values of the Linear Regression algorithm.
The torch.inverse() method can be used in its computation, as sketched below.
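A minimal sketch of that computation (my addition, with made-up data):
```
# Normal equation: theta = (X^T X)^(-1) X^T y
X = torch.randn(100, 3)                          # 100 samples, 3 features
true_theta = torch.tensor([[1.0], [2.0], [3.0]])
y = X @ true_theta + 0.01 * torch.randn(100, 1)  # targets with a little noise
theta = torch.inverse(X.t() @ X) @ X.t() @ y
print(theta)                                     # close to [1, 2, 3]
```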
## Function 3 - torch.randn()
One of the most sought-after functions, used to initialize a tensor with values drawn from the normal distribution.
```
# Example 1 - working
scalar = torch.randn(1)
print('scalar: ', scalar, '\n')
vector = torch.randn(4)
print('vector: ', vector)
```
Generates random values from the normal distribution and assigns them to the above 0- and 1-dimensional tensors.
```
# Example 2 - working
matrix = torch.randn(3,4)
print('matrix: ', matrix, '\n')
tensor = torch.randn(2,3,4)
print('tensor: ', tensor)
```
Generates random values from the normal distribution and assigns them to the above 2- and 3-dimensional tensors.
```
# Example 3 - breaking (to illustrate when it breaks)
tensor = torch.randn(-1,0,0)
print('tensor: ', tensor)
```
The dimensions given to the torch.randn() method have to be natural numbers.
This method is handy whenever a tensor has to be initialized before performing any operations. An example would be randomly initializing the weight matrix of every layer of a neural network before performing backpropagation, as sketched below.
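A small sketch of that use case (my addition):
```
# Randomly initialize the weights of a tiny two-layer network.
W1 = torch.randn(784, 128) * 0.01   # input -> hidden
W2 = torch.randn(128, 10) * 0.01    # hidden -> output
x = torch.randn(1, 784)             # one fake input sample
logits = torch.relu(x @ W1) @ W2
print(logits.shape)                 # torch.Size([1, 10])
```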
## Function 4 - torch.zeros_like()
The above method takes in a tensor and, given that input's dimensionality, creates a corresponding tensor of the same shape, but filled with zeros.
```
# Example 1 - working
scal = torch.randn(1)
scal_zero = torch.zeros_like(scal)
print("zero_like for scalar: ", scal_zero, '\n')
vec = torch.randn(4)
vec_z = torch.zeros_like(vec)
print("zero_like for vector: ", vec_z)
```
Corresponding zero tensors are produced with the given input tensor.
```
# Example 2 - working
mat = torch.randn(2,3)
mat_z = torch.zeros_like(mat)
print("zero_like for matrix: ", mat_z, '\n')
ten = torch.randn(2,3,4)
ten_z = torch.zeros_like(ten)
print("zero_like for 3D tensor: ", ten_z, '\n')
```
Corresponding zero tensors are produced for the 2D and 3D input tensors.
```
# Example 3 - breaking (to illustrate when it breaks)
ten_Z = torch.zeros_like(mat @ vec)
ten_z
```
torch.zeros_like() is a fairly foolproof method; only a nonsensical input (here, an invalid matrix-vector product) will give rise to an error.
A handy way to initialize a tensor to zeros with the dimensions of another tensor.
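For example (a sketch I'm adding), a gradient buffer that matches a weight tensor:
```
# A zero-filled buffer with the same shape, dtype and device as W.
W = torch.randn(128, 10)
grad_buffer = torch.zeros_like(W)
print(grad_buffer.shape, grad_buffer.sum())  # torch.Size([128, 10]) tensor(0.)
```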
## Function 5 - torch.arange()
Returns a 1D tensor of the interval of [start, end) with the given step size
```
# Example 1 - working
t = torch.arange(0, 5, 0.1)
t
```
Returns a tensor with all the values within the range, given the step size.
```
# Example 2 - working
tt = torch.arange(0, 5, 0.01)
tt
```
Returns a 1D tensor with all the values in the range with a smaller step size.
```
# Example 3 - breaking (to illustrate when it breaks)
t = torch.arange(0, 5, 0.0000000000000000000000000000000000001)
t
```
It cannot accommodate extremely small step sizes like the one in the example above.
This function can be used to initialize a 1D tensor with all values within an interval.
## Conclusion
These 5 functions are by no means the keystones of your Deep Learning endeavours, but with at least this much knowledge and the ability to perform the other in-between manipulations, we're ready to start looking at linear models for Machine Learning.
## Reference Links
* Official documentation for `torch.Tensor`: https://pytorch.org/docs/stable/tensors.html
* Check out this blog for more on Matrices: https://jhui.github.io/2017/01/05/Deep-learning-linear-algebra/
```
!pip install jovian --upgrade --quiet
import jovian
jovian.commit(project='01-tensor-operations-4abc9', environment=None)
```
<!--NOTEBOOK_HEADER-->
*This notebook contains material from [cbe61622](https://jckantor.github.io/cbe61622);
content is available [on Github](https://github.com/jckantor/cbe61622.git).*
<!--NAVIGATION-->
< [A.2 Downloading Python source files from github](https://jckantor.github.io/cbe61622/A.02-Downloading_Python_source_files_from_github.html) | [Contents](toc.html) | [A.4 Scheduling Real-Time Events with Simpy](https://jckantor.github.io/cbe61622/A.04-Scheduling-Real-Time-Events-with-Simpy.html) ><p><a href="https://colab.research.google.com/github/jckantor/cbe61622/blob/master/docs/A.03-Getting-Started-with-Pymata4.ipynb"> <img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a><p><a href="https://jckantor.github.io/cbe61622/A.03-Getting-Started-with-Pymata4.ipynb"> <img align="left" src="https://img.shields.io/badge/Github-Download-blue.svg" alt="Download" title="Download Notebook"></a>
# A.3 Getting Started with Pymata4
[Pymata4](https://github.com/MrYsLab/pymata4) is a Python library that allows you to monitor and control Arduino hardware from a host computer. The library uses the Firmata protocol for communicating with the Arduino hardware. Pymata4 supports the StandardFirmata server included with the Arduino IDE, and also StandardFirmataWiFi, and an enhanced server FirmataExpress distributed with Pymata4.
Pymata4 uses [concurrent Python threads](https://mryslab.github.io/pymata4/concurrency/) to manage interaction with the Arduino. The concurrency model enables development of performant and interactive Arduino applications using Python on a host computer. Changes in the status of an Arduino pin can be processed with callbacks. It's sibling, [pymata-express](https://github.com/MrYsLab/pymata-express), is available using the [Python asyncio package](https://docs.python.org/3/library/asyncio.html).
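As a rough sketch of that callback style (my addition, not from the original notebook; the exact callback signature and the layout of the data it receives should be checked against the Pymata4 API documentation linked below, and pin 12 is just an arbitrary choice), a digital input pin can be monitored like this:
```
from pymata4 import pymata4
import time

def on_pin_change(data):
    # data is a list describing the event (pin type, pin number, value, timestamp)
    print("pin event:", data)

board = pymata4.Pymata4()
board.set_pin_mode_digital_input(12, callback=on_pin_change)
time.sleep(5)        # give callbacks a few seconds to arrive
board.shutdown()
```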
Support for common $I^2C$ devices, including stepper motors, is included in FirmataExpress. Applications using unsupported $I^2C$ devices may require [modifications to the Firmata server sketch](https://www.instructables.com/Going-Beyond-StandardFirmata-Adding-New-Device-Sup/).
Useful links:
* [Pymata4 API documentation](http://htmlpreview.github.io/?https://raw.githubusercontent.com/MrYsLab/pymata4/master/html/pymata4/index.html)
## A.3.1 Hardware Setup and Software Installations
The Arduino must be attached to the host by USB with either the StandardFirmata or Firmata-express sketch installed using the Arduino IDE. For use with WiFi, install StandardFirmataWiFi.
The Python pymata4 package can be installed with pip.
```
!pip install pymata4
```
## A.3.2 Basic Usage
pymata4.Pymata4()
board.shutdown()
```
from pymata4 import pymata4
# create a board instance
board = pymata4.Pymata4()
# remember to shutdown
board.shutdown()
```
## A.3.3 Blinker
board.digital_write(pin, value)
Pymata4 has two methods for writing a 1 or a 0 to a digital output. `digital_write(pin, value)` hides details of the Firmata protocol from the user. The user can refer to digital pins just as they would in standard Arduino coding. A second method, `digital_pin_write(pin, value)`, allows writing to multiple pins at the same time, but requires the user to understand further details of the Firmata protocol.
```
from pymata4 import pymata4
import time
LED_PIN = 13
board = pymata4.Pymata4()
# set the pin mode
board.set_pin_mode_digital_output(LED_PIN)
for n in range(5):
print("LED ON")
board.digital_write(LED_PIN, 1)
time.sleep(1)
print("LED OFF")
board.digital_write(LED_PIN, 0)
time.sleep(1)
board.shutdown()
```
## A.3.4 Handling a Keyboard Interrupt
Pymata4 sets up multiple concurrent processes upon opening a connection to the Arduino hardware. If Python execution is interrupted, it is important to catch the interrupt and shut down the board before exiting the code. Otherwise the Arduino may continue to stream data, requiring the Arduino to be reset.
```
from pymata4 import pymata4
import time
def blink(board, pin, N=20):
board.set_pin_mode_digital_output(LED_PIN)
for n in range(N):
board.digital_write(LED_PIN, 1)
time.sleep(0.5)
board.digital_write(LED_PIN, 0)
time.sleep(0.5)
board.shutdown()
LED_PIN = 13
board = pymata4.Pymata4()
try:
blink(board, LED_PIN)
except KeyboardInterrupt:
print("Operation interrupted. Shutting down board.")
board.shutdown()
```
## A.3.5 Getting Information about the Arduino
[Firmata protocol](https://github.com/firmata/protocol/blob/master/protocol.md)
```
from pymata4 import pymata4
import time
board = pymata4.Pymata4()
print("Board Report")
print(f"Firmware version: {board.get_firmware_version()}")
print(f"Protocol version: {board.get_protocol_version()}")
print(f"Pymata version: {board.get_pymata_version()}")
def print_analog_map(board):
analog_map = board.get_analog_map()
for pin, apin in enumerate(analog_map):
if apin < 127:
print(f"Pin {pin:2d}: analog channel = {apin}")
def print_pin_state_report(board):
pin_modes = {
0x00: "INPUT",
0x01: "OUTPUT",
0x02: "ANALOG INPUT",
0x03: "PWM OUTPUT",
0x04: "SERVO OUTPUT",
0x06: "I2C",
0x08: "STEPPER",
0x0b: "PULLUP",
0x0c: "SONAR",
0x0d: "TONE",
}
analog_map = board.get_analog_map()
for pin in range(len(analog_map)):
state = board.get_pin_state(pin)
print(f"Pin {pin:2d}: {pin_modes[state[1]]:>15s} = {state[2]}")
print_pin_state_report(board)
board.digital_write(13, 1)
print_pin_state_report(board)
print_analog_map(board)
capability_report = board.get_capability_report()
board.shutdown()
# get capability report
print("\nCapability Report")
modes = {
0x00: "DIN", # digital input
0x01: "DO", # digital output
0x02: "AIN", # analog input
0x03: "PWM", # pwm output
0x04: "SRV", # servo output
0x05: "SFT", # shift
0x06: "I2C", # I2C
0x07: "WIR", # ONEWIRE
0x08: "STP", # STEPPER
0x09: "ENC", # ENCODER
0x0A: "SRL", # SERIAL
0x0B: "INP", # INPUT_PULLUP
}
pin_report = {}
pin = 0
k = 0
while k < len(capability_report):
pin_report[pin] = {}
while capability_report[k] < 127:
pin_report[pin][modes[capability_report[k]]] = capability_report[k+1]
k += 2
k += 1
pin += 1
mode_set = set([mode for pin in pin_report.keys() for mode in pin_report[pin].keys()])
print(" " + "".join([f" {mode:>3s} " for mode in sorted(mode_set)]))
for pin in pin_report.keys():
s = f"Pin {pin:2d}:"
for mode in sorted(mode_set):
s += f" {pin_report[pin][mode]:>3d} " if mode in pin_report[pin].keys() else " "*5
print(s)
```
## A.3.6 Temperature Control Lab Shield
```
from pymata4 import pymata4
import time
class tclab():
def __init__(self):
self.board = pymata4.Pymata4()
self.LED_PIN = 9
self.Q1_PIN = 3
self.Q2_PIN = 5
self.T1_PIN = 0
self.T2_PIN = 2
self.board.set_pin_mode_pwm_output(self.LED_PIN)
self.board.set_pin_mode_pwm_output(self.Q1_PIN)
self.board.set_pin_mode_pwm_output(self.Q2_PIN)
self.board.set_pin_mode_analog_input(self.T1_PIN)
self.board.set_pin_mode_analog_input(self.T2_PIN)
self._Q1 = 0
self._Q2 = 0
time.sleep(0.1)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
return
def close(self):
self.Q1(0)
self.Q2(0)
self.board.shutdown()
def read_temperature(self, pin):
# firmata doesn't provide a means to use the 3.3 volt reference
adc, ts = self.board.analog_read(pin)
return round(adc*513/1024 - 50.0, 1)
def Q1(self, val):
val = int(255*max(0, min(100, val))/100)
self.board.pwm_write(self.Q1_PIN, val)
def Q2(self, val):
val = int(255*max(0, min(100, val))/100)
self.board.pwm_write(self.Q2_PIN, val)
def T1(self):
return self.read_temperature(self.T1_PIN)
def T2(self):
return self.read_temperature(self.T2_PIN)
def LED(self, val):
val = max(0, min(255, int(255*val/100)))
self.board.pwm_write(self.LED_PIN, val)
with tclab() as lab:
lab.Q1(100)
lab.Q2(100)
for n in range(30):
print(lab.T1(), lab.T2())
lab.LED(100)
time.sleep(0.5)
lab.LED(0)
time.sleep(0.5)
lab.Q1(0)
lab.Q2(0)
```
<!--NAVIGATION-->
< [A.2 Downloading Python source files from github](https://jckantor.github.io/cbe61622/A.02-Downloading_Python_source_files_from_github.html) | [Contents](toc.html) | [A.4 Scheduling Real-Time Events with Simpy](https://jckantor.github.io/cbe61622/A.04-Scheduling-Real-Time-Events-with-Simpy.html) ><p><a href="https://colab.research.google.com/github/jckantor/cbe61622/blob/master/docs/A.03-Getting-Started-with-Pymata4.ipynb"> <img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a><p><a href="https://jckantor.github.io/cbe61622/A.03-Getting-Started-with-Pymata4.ipynb"> <img align="left" src="https://img.shields.io/badge/Github-Download-blue.svg" alt="Download" title="Download Notebook"></a>
# Predicting Marketing Efforts: SEO Advertising, Brand Advertising, and Retailer Support
Let's look at predicting the average Brand Advertising Efforts and Search Engine Optimization Efforts
This helps us make more accurate decisions in BSG and identify if we'll hit the shareholder expectations for the period.
```
#let's grab a few packages for stats
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#Let's set some variables that we'll change each round
#Change this year to the year being predicted (i.e. if you're predicting year 16, enter '16')
predictionYear = 17
#Load the dataset from our bsg_prices_actual - Sheet1.csv
df = pd.read_csv('bsg_marketing_actual - Sheet1.csv')
df
```
## Functions
1. Slope Intercept
2. Print Slope as Formula
3. Hypothetical Slope and Intercept from our data
4. Print the Predicted Year using Hypothetical Slope and Intercept
```
#1. Slope Intercept Function
#Function to find the slope intercept of a first degree polynomial
def getSlope(x,y): #pass in the x value, y value, and a string for printing
slope, intercept = np.polyfit(x,y,1).round(decimals = 4) #compute the slope
return slope, intercept
#2. Print Slope as Formulas
#Function to print the slope
def printSlope(x,y,string):
slope, intercept = np.polyfit(x,y,1).round(decimals = 4)
printed_string = string + '= ' + str(slope) + 'x + ' + str(intercept)
return printed_string
#3. Hypothetical Slope and Intercept from our data
x_theor = np.array([10,predictionYear]) #set x_theor as it will be used in all our Linear Models
def getYTheor(slope, x_theor, intercept): #pass in the slope, x_theor, and intercept
y_theor = slope * x_theor + intercept
return y_theor
#4. Print Predicted Year using Hypothetical Slope and Intercept
def printPrediction(slope, intercept, string):
prediction = 'Year ' + str(predictionYear) + ' ' + string +' predicted price: ' + str(slope * predictionYear + intercept)
return prediction
```
### Find Slope Intercept for each segment
```
# variable assignments
x = np.array(df['YEAR'])
y_na_seo = np.array(df['NA_SEO'])
y_na_advertising = np.array(df['NA_ADVERTISING'])
y_na_retailsup = np.array(df['NA_RETAIL_SUPPORT'])
y_eu_seo = np.array(df['EU_SEO'])
y_eu_advertising = np.array(df['EU_ADVERTISING'])
y_eu_retailsup = np.array(df['EU_RETAIL_SUPPORT'])
y_ap_seo = np.array(df['AP_SEO'])
y_ap_advertising = np.array(df['AP_ADVERTISING'])
y_ap_retailsup = np.array(df['AP_RETAIL_SUPPORT'])
y_la_seo = np.array(df['LA_SEO'])
y_la_advertising = np.array(df['LA_ADVERTISING'])
y_la_retailsup = np.array(df['LA_RETAIL_SUPPORT'])
#print the slope in y=mx+b form
print(printSlope(x,y_na_seo,'NA SEO'))
print(printSlope(x,y_na_advertising,'NA Advertising'))
print(printSlope(x,y_na_retailsup,'NA Retailer Support'))
print(printSlope(x,y_eu_seo,'EU SEO'))
print(printSlope(x,y_eu_advertising,'EU Advertising'))
print(printSlope(x,y_eu_retailsup,'EU Retailer Support'))
print(printSlope(x,y_ap_seo,'AP SEO'))
print(printSlope(x,y_ap_advertising,'AP Advertising'))
print(printSlope(x,y_ap_retailsup,'AP Retailer Support'))
print(printSlope(x,y_la_seo,'LA SEO'))
print(printSlope(x,y_la_advertising,'LA Advertising'))
print(printSlope(x,y_la_retailsup,'LA Retailer Support'))
```
### North America SEO, Advertising, and Retailer Support Predictions
* SEO: Search Engine Optimization Advertising 000s dollars
* Advertising: Wholesale Brand Advertising 000s dollars
* Retailer Support: dollars per outlet
```
#grab the slope and intercepts for NA
na_seo_slope, na_seo_intercept = getSlope(x,y_na_seo)
na_advertising_slope,na_advertising_intercept = getSlope(x,y_na_advertising)
na_retailsup_slope, na_retailsup_intercept = getSlope(x,y_na_retailsup)
#set the y theoretical for NA
seo_y_theor = getYTheor(na_seo_slope, x_theor, na_seo_intercept)
advertising_y_theor = getYTheor(na_advertising_slope,x_theor,na_advertising_intercept)
retailsup_y_theor = getYTheor(na_retailsup_slope, x_theor, na_retailsup_intercept)
#print the predicted price
print(printPrediction(na_seo_slope, na_seo_intercept, 'SEO'))
print(printPrediction(na_advertising_slope, na_advertising_intercept, 'Brand Advertising'))
print(printPrediction(na_retailsup_slope, na_retailsup_intercept, 'Retailer Support'))
#plot the data and theoretical lines
_ = plt.plot(x,y_na_seo,marker='.', linestyle='none')
_ = plt.plot(x_theor,seo_y_theor)
_ = plt.plot(x,y_na_advertising,marker='.', linestyle='none')
_ = plt.plot(x_theor,advertising_y_theor)
_ = plt.plot(x,y_na_retailsup,marker='.', linestyle='none')
_ = plt.plot(x_theor,retailsup_y_theor)
#label the axes
plt.xlabel('Year')
plt.ylabel('Advertising Dollars')
plt.title('North America')
plt.show()
```
### Europe Africa SEO, Advertising, and Retailer Support Predictions
* SEO: Search Engine Optimization Advertising 000s dollars
* Advertising: Wholesale Brand Advertising 000s dollars
* Retailer Support: dollars per outlet
```
#grab the slope and intercepts for EU
eu_seo_slope, eu_seo_intercept = getSlope(x,y_eu_seo)
eu_advertising_slope, eu_advertising_intercept = getSlope(x,y_eu_advertising)
eu_retailsup_slope, eu_retailsup_intercept = getSlope(x,y_eu_retailsup)
#set the y theoretical for EU
seo_y_theor = getYTheor(eu_seo_slope, x_theor, eu_seo_intercept)
advertising_y_theor = getYTheor(eu_advertising_slope,x_theor,eu_advertising_intercept)
retailsup_y_theor = getYTheor(eu_retailsup_slope, x_theor, eu_retailsup_intercept)
#print the predicted price
print(printPrediction(eu_seo_slope, eu_seo_intercept, 'SEO'))
print(printPrediction(eu_advertising_slope, eu_advertising_intercept, 'Brand Advertising'))
print(printPrediction(eu_retailsup_slope, eu_retailsup_intercept, 'Retailer Support'))
#plot the data and theoretical lines
_ = plt.plot(x,y_eu_seo,marker='.', linestyle='none')
_ = plt.plot(x_theor,seo_y_theor)
_ = plt.plot(x,y_eu_advertising,marker='.', linestyle='none')
_ = plt.plot(x_theor,advertising_y_theor)
_ = plt.plot(x,y_eu_retailsup,marker='.', linestyle='none')
_ = plt.plot(x_theor,retailsup_y_theor)
#label the axes
plt.xlabel('Year')
plt.ylabel('Advertising Dollars')
plt.title('Europe Africa')
plt.show()
```
### Asia Pacific SEO, Advertising, and Retailer Support Predictions
* SEO: Search Engine Optimization Advertising 000s dollars
* Advertising: Wholesale Brand Advertising 000s dollars
* Retailer Support: dollars per outlet
```
#grab the slope and intercepts for AP
ap_seo_slope, ap_seo_intercept = getSlope(x,y_ap_seo)
ap_advertising_slope, ap_advertising_intercept = getSlope(x,y_ap_advertising)
ap_retailsup_slope, ap_retailsup_intercept = getSlope(x,y_ap_retailsup)
#set the y theoretical for AP
seo_y_theor = getYTheor(ap_seo_slope, x_theor, ap_seo_intercept)
advertising_y_theor = getYTheor(ap_advertising_slope,x_theor,ap_advertising_intercept)
retailsup_y_theor = getYTheor(ap_retailsup_slope, x_theor, ap_retailsup_intercept)
#print the predicted price
print(printPrediction(ap_seo_slope, ap_seo_intercept, 'SEO'))
print(printPrediction(ap_advertising_slope, ap_advertising_intercept, 'Brand Advertising'))
print(printPrediction(ap_retailsup_slope, ap_retailsup_intercept, 'Retailer Support'))
#plot the data and theoretical lines
_ = plt.plot(x,y_ap_seo,marker='.', linestyle='none')
_ = plt.plot(x_theor,seo_y_theor)
_ = plt.plot(x,y_ap_advertising,marker='.', linestyle='none')
_ = plt.plot(x_theor,advertising_y_theor)
_ = plt.plot(x,y_ap_retailsup,marker='.', linestyle='none')
_ = plt.plot(x_theor,retailsup_y_theor)
#label the axes
plt.xlabel('Year')
plt.ylabel('Advertising Dollars')
plt.title('Asia Pacific')
plt.show()
```
### Latin America SEO, Advertising, and Retailer Support Predictions
* SEO: Search Engine Optimization Advertising 000s dollars
* Advertising: Wholesale Brand Advertising 000s dollars
* Retailer Support: dollars per outlet
```
#grab the slope and intercepts for LA
la_seo_slope, la_seo_intercept = getSlope(x,y_la_seo)
la_advertising_slope, la_advertising_intercept = getSlope(x,y_la_advertising)
la_retailsup_slope, la_retailsup_intercept = getSlope(x,y_la_retailsup)
#set the y theoretical for LA
seo_y_theor = getYTheor(la_seo_slope, x_theor, la_seo_intercept)
advertising_y_theor = getYTheor(la_advertising_slope,x_theor,la_advertising_intercept)
retailsup_y_theor = getYTheor(la_retailsup_slope, x_theor, la_retailsup_intercept)
#print the predicted price
print(printPrediction(la_seo_slope, la_seo_intercept, 'SEO'))
print(printPrediction(la_advertising_slope, la_advertising_intercept, 'Brand Advertising'))
print(printPrediction(la_retailsup_slope, la_retailsup_intercept, 'Retailer Support'))
#plot the data and theoretical lines
_ = plt.plot(x,y_la_seo,marker='.', linestyle='none')
_ = plt.plot(x_theor,seo_y_theor)
_ = plt.plot(x,y_la_advertising,marker='.', linestyle='none')
_ = plt.plot(x_theor,advertising_y_theor)
_ = plt.plot(x,y_la_retailsup,marker='.', linestyle='none')
_ = plt.plot(x_theor,retailsup_y_theor)
#label the axes
plt.xlabel('Year')
plt.ylabel('Advertising Dollars')
plt.title('Latin America')
plt.show()
```
### A parser generator for the word grammar
The ETCBC data uses a morphological analysis annotation, which can be defined per project in a `word_grammar` definition file. For each project an annotation parser must first be generated from the `word-grammar` file. This is done in the `WordGrammar` class in `wrdgrm.py`, which depends on the parser generator in the `wgr.py` and `yapps-runtime.py` modules. The parser generator was produced with Yapps2 (see the website: http://theory.stanford.edu/~amitp/yapps/ and https://github.com/smurfix/yapps).
To create a `WordGrammar` object, a `word_grammar` file and a `lexicon` file are required. Words can then be analyzed with the method `WordGrammar.analyze(word)`.
```
# first, import the required modules
import os
from wrdgrm import WordGrammar
```
Helper function
```
def filepath(rel_path):
return os.path.realpath(os.path.join(os.getcwd(), rel_path))
# file locations
lexicon_file = filepath("../../data/blc/syrlex")
word_grammar_file = filepath("../../data/blc/syrwgr")
an_file = filepath("../../data/blc/Laws.an")
# now the word grammar can be initialized
wg = WordGrammar(word_grammar_file, lexicon_file)
```
The method `analyze()` returns a `Word` object containing the analysis.
```
# wrdgrm.Word object
wg.analyze(">TR/&WT=~>")
# example
word = wg.analyze(">TR/&WT=~>")
print(
"{:15}".format("Morphemes:"),
tuple((m.mt.ident, (m.p, m.s, m.a)) for m in word.morphemes),
)
print("{:15}".format("Functions:"), word.functions)
print("{:15}".format("Lexicon:"), word.lex)
print("{:15}".format("Lexeme:"), word.lexeme)
print("{:15}".format("Annotated word:"), word.word)
print("{:15}".format("Meta form:"), word.meta_form)
print("{:15}".format("Surface form:"), word.surface_form)
print("{:15}".format("Paradigmatic form:"), word.paradigmatic_form)
```
Besides several `string` representations of the analyzed word, the `Word` object contains three `tuples`: `morphemes`, `functions` and `lex`, which hold the most important analyses.
The first, `morphemes`, contains a tuple with all morphemes found, each as a ~~tuple of three strings~~ `Morpheme` object with four attributes: `mt`, a namedtuple with information about the morpheme type; `p`, the paradigmatic form (as it appears in the lexicon); `s`, the surface form (as it appears in the text); and `a`, the annotated form with meta-characters.
The second, `functions`, contains the grammatical functions of the word as defined in the `wordgrammar`: `ps: "person"`, `nu: "number"`, `gn: "gender"`, `ls: "lexical set"`, `sp: "part of speech"`, `st: "state"`, `vo: "voice"`, `vs: "verbal stem"`, `vt: "verbal tense"`. A field with the value `False` indicates that this function does not apply to the word; a field with the value `None` indicates that the value has not been determined.
The third, `lex`, contains the lemma as it appears in the lexicon, starting with the word id, followed by the annotations. Besides default values for the grammatical functions, the lexicon contains a `gl` field (gloss) for every word, and sometimes a `de` field (derived form). (In one and two cases respectively, the fields `cs` and `ln` also occur, whose meaning is not clear to me.)
```
# The method `dmp_str` generates a string that matches the format used in .dmp files.
# Below is an example of how it can be used to generate a .dmp file.
# For a simpler approach, see the AnParser notebook.
def dump_anfile(name, an_file):
with open(an_file) as f:
for line in f:
verse, s, a = line.split() # verse, surface form, analyzed form
for an_word in a.split("-"):
word = wg.analyze(an_word)
yield word.dmp_str(name, verse)
for i, line in zip(range(20), dump_anfile("Laws", an_file)):
# for line in dump_anfile('Laws', an_file):
print(line)
print("...")
```
To check that the output is correct, I compared the output above with the existing .dmp files. Because the order of the values appears to be arbitrary - or at least not the same in all cases - all values have to be sorted before they can be compared; a simple diff is not enough. The script below ~~confirms that the output above, apart from the ordering, is an exact reproduction of the existing .dmp files~~ shows that both the an-file and the word_grammar have been modified since the .dmp files were generated:
(differences: words with vpm=dp are now correctly analyzed as vo=pas, and the annotations of `]>](NKJ[` in 15,12 and `]M]SKN[/JN` in 19.12 have been changed)
```
dmp_file = filepath("../../data/blc/Laws.dmp")
dmp_gen = dump_anfile("BLC", an_file)
with open(dmp_file) as f_dmp:
for line1, line2 in zip(f_dmp, dmp_gen):
for f1, f2 in zip(line1.strip().split("\t"), line2.split("\t")):
f1s, f2s = (",".join(sorted(f.split(","))) for f in (f1, f2))
if f1s != f2s:
print(f"{line1}!=\n{line2}")
```
First, load the required libraries.
```
from sympy import *
from sympy.physics.quantum import *
from sympy.physics.quantum.qubit import Qubit, QubitBra, measure_all, measure_all_oneshot
from sympy.physics.quantum.gate import H,X,Y,Z,S,T,CPHASE,CNOT,SWAP,UGate,CGateS,gate_simp
from sympy.physics.quantum.gate import IdentityGate as _I
from sympy.physics.quantum.qft import *
from sympy.printing.dot import dotprint
init_printing()
%matplotlib inline
import matplotlib.pyplot as plt
from sympy.physics.quantum.circuitplot import CircuitPlot,labeller, Mz,CreateOneQubitGate
```
## Quantum programming procedure (in the narrow sense)
1. Prepare the qubits (quantum registers) needed for the computation and initialize their values
2. Describe the quantum computation with unitary matrices (gate operators)
3. Apply the unitary matrices to the qubits
4. Measure
#### (Example of step 1) Prepare the qubits (quantum registers) needed for the computation and initialize their values
```
# Prepare 3 qubits, all initialized to 0
Qubit('000')
```
#### (Example of step 2) Describe the quantum computation with unitary matrices (gate operators)
```
# Basic unitary operators
pprint(represent(X(0),nqubits=1))
pprint(represent(Y(0),nqubits=1))
pprint(represent(Z(0),nqubits=1))
pprint(represent(H(0),nqubits=1))
pprint(represent(S(0),nqubits=1))
pprint(represent(S(0)**(-1),nqubits=1))
pprint(represent(T(0),nqubits=1))
pprint(represent(T(0)**(-1),nqubits=1))
pprint(represent(CNOT(1,0),nqubits=2))
```
#### (Example of step 3) Apply the unitary matrices to the qubits
```
# To apply a unitary matrix to qubits, use qapply().
hadamard3 = H(2)*H(1)*H(0)
qapply(hadamard3*Qubit('000'))
```
#### (Example of step 4) Measure
```
# For measurement, call measure_all_oneshot() on the qapply()'ed quantum state to obtain a probabilistic result.
for i in range(10):
pprint(measure_all_oneshot(qapply(hadamard3*Qubit('000'))))
# SymPy's quantum simulator computes the quantum state exactly and keeps every state internally.
# Therefore measure_all() can return the probability of every quantum state.
measure_all(qapply(hadamard3*Qubit('000')))
```
## [Exercise] Let's compute the quantum circuit from the usual explanatory material, following the programming procedure.

```
### 1. Prepare the qubits (quantum registers) needed for the computation and initialize their values
## Initialize 2 qubits to 0.
Qubit('00')
### 2. Describe the quantum computation with unitary matrices (gate operators)
## Display the matrix representation of the tensor product of Hadamard gates.
represent(H(1)*H(0),nqubits=2)
## Display the matrix representation of the gate operation that sandwiches a CNOT between Hadamards.
represent(H(0)*CNOT(1,0)*H(0),nqubits=2)
### 3. Apply the unitary matrices to the qubits
## Apply the tensor product of Hadamards to `Qubit('00')`.
qapply(H(1)*H(0)*Qubit('00'))
## Next, apply the gate operation that sandwiches a CNOT between Hadamards to the previous state.
qapply(H(0)*CNOT(1,0)*H(0)*H(1)*H(0)*Qubit('00'))
### 4. Measure
## Use measure_all() to display the probability of measuring each state.
measure_all(qapply(H(0)*CNOT(1,0)*H(0)*H(1)*H(0)*Qubit('00')))
```
## [Assignment 1] Grover's algorithm
<strong>
Problem 1)
1. Taking the following "initial state for problem 1" quest_state as input, use Grover's algorithm to check
whether this quantum state contains $\lvert 111 \rangle $.
2. Under the same conditions, we consider using Grover's algorithm to check whether this quantum state
contains $\lvert 101 \rangle $. (This is an example of where things go wrong.)
- Write the program and confirm that $\lvert 101 \rangle $ is in fact detected with high probability.
- Think about why a state that is not contained in the initial state gets detected. (An oral answer is fine.)
Problem 2)
1. Taking the "initial state for problem 2" quest2_state below as input, apply Grover's algorithm to detect
the states $\lvert 111 \rangle $ and $\lvert 101 \rangle $ as in problem 1, and
discuss what happens.
</strong>
**Answer section for [Assignment 1] Problem 1-1):**
```
# Initial state for problem 1
quest_state = CNOT(1,0)*CNOT(2,1)*H(2)*H(0)*Qubit('000')
CircuitPlot(quest_state,nqubits=3)
# Store the computed initial state as init_state
init_state = qapply(quest_state)
init_state
# Define some functions that will be useful below.
def CCX(c1,c2,t): return CGateS((c1,c2),X(t))
def hadamard(s,n):
h = H(s)
for i in range(s+1,n+s):
h = H(i)*h
return h
def CCZ(c1,c2,t): return (H(t)*CCX(c1,c2,t)*H(t)) # define the CCZ operator
def DOp(n): return (Qubit('0'*n)*QubitBra('0'*n)*2-_I(0)) # computed with gate operations, this becomes the expression above
h_3 = hadamard(0,3)
d_3 = h_3 * DOp(3) * h_3 # inversion about the mean
# represent(d_3,nqubits=3)
# Build the quantum circuit that searches for | 111 >.
mark_7 = CCZ(1,2,0)
grover_7 = gate_simp(d_3*mark_7*d_3*mark_7)
state1_7 = qapply(d_3*mark_7*init_state)
pprint(state1_7)
qapply(d_3*mark_7*state1_7)
# Apply the circuit built above to the initial state, run measure_all_oneshot() several times, and look at the results.
for i in range(10):
pprint(measure_all_oneshot(qapply(grover_7*init_state)))
```
**Answer section for [Assignment 1] Problem 1-2):**
```
# Build the quantum circuit that searches for | 101 >.
mark_5 = X(1)*CCZ(1,2,0)*X(1)
grover_5 = gate_simp(d_3*mark_5*d_3*mark_5)
state1_5 = qapply(d_3*mark_5*init_state)
pprint(state1_5)
qapply(d_3*mark_5*state1_5)
# Apply the circuit built above to the initial state, inspect the probability of each state with measure_all(), and discuss.
measure_all(qapply(grover_5*init_state))
```
**Answer section for [Assignment 1] Problem 2-1):**
```
# Initial state for problem 2
quest2_state = CNOT(2,1)*H(2)*X(2)*CNOT(2,1)*CNOT(2,0)*H(2)*X(2)*Qubit('000')
CircuitPlot(quest2_state,nqubits=3)
# Answer for problem 2 (1)
init2_state = qapply(quest2_state)
init2_state
# Answer for problem 2 (2)
for i in range(10):
pprint(measure_all_oneshot(qapply(grover_7*init2_state)))
# Answer for problem 2 (3)
measure_all(qapply(grover_5*init2_state))
```
## [Assignment 2] Quantum Fourier transform
<strong>
Problem 1)
1. Perform the quantum Fourier transform on 3 qubits.
Output the QFT result for each of the states |000>, |001>, ..., |110>, |111>.
Hint) Use the QFT function from sympy.physics.quantum.qft.
2. Draw the quantum circuit diagram of QFT(0,3) with CircuitPlot().
Problem 2)
1. Express the quantum Fourier transform on 3 qubits using only basic quantum gates.
You may use Rk(n,4), the $\sqrt{T}$ gate.
- Express the operation as a tensor product.
- (The quantum circuit diagram cannot be drawn nicely in this case.)
</strong>
**Answer section for [Assignment 2] Problem 1-1):**
```
## Display the matrix representation of QFT(0,3).
qft3=QFT(0,3)
represent(qft3,nqubits=3)
# Apply the quantum Fourier transform to |000>.
qapply(qft3*Qubit('000'))
# Apply the quantum Fourier transform to |001>.
qapply(qft3*Qubit('001'))
# Apply the quantum Fourier transform to |010>.
qapply(qft3*Qubit('010'))
# Apply the quantum Fourier transform to |011>.
qapply(qft3*Qubit('011'))
# Apply the quantum Fourier transform to |100>.
qapply(qft3*Qubit('100'))
# Apply the quantum Fourier transform to |101>.
qapply(qft3*Qubit('101'))
# Apply the quantum Fourier transform to |110>.
qapply(qft3*Qubit('110'))
# Apply the quantum Fourier transform to |111>.
qapply(qft3*Qubit('111'))
```
**Answer section for [Assignment 2] Problem 1-2):**
```
### In SymPy, QFT(0,3) is defined as a single aggregated operator.
### To see the underlying basic gates, use decompose().
QFT(0,3).decompose()
# Draw the quantum circuit diagram of QFT(0,3) with CircuitPlot().
CircuitPlot(QFT(0,3).decompose(), nqubits=3)
# Redefine the circuit obtained from decompose() above.
qft3_decomp = SWAP(0,2)*H(0)*CGateS((0,),S(1))*H(1)*CGateS((0,),T(2))*CGateS((1,),S(2))*H(2)
qft3_decomp
# Draw the circuit diagram of the QFT redefined above with CircuitPlot().
# Compare it with the circuit diagram of QFT(0,3).decompose().
CircuitPlot(qft3_decomp,nqubits=3)
```
**Answer section for [Assignment 2] Problem 2-1):**
(Hint) Use the fact that, with $c_{g}$ a global phase, a Z-axis rotation can be written as
$ R_{z\theta} = c_{g} X \cdot R_{z\theta/2}^{\dagger} \cdot X \cdot R_{z\theta/2} $
```
# Show that S = c·X·T†·X·T.
pprint(represent(S(0),nqubits=1))
represent(exp(I*pi/4)*X(0)*T(0)**(-1)*X(0)*T(0),nqubits=1)
# Show that T = c·X·sqrt(T)†·X·sqrt(T).
pprint(represent(T(0),nqubits=1))
represent(exp(I*pi/8)*X(0)*Rk(0,4)**(-1)*X(0)*Rk(0,4),nqubits=1)
# qft3_decomp = SWAP(0,2)*H(0)*CGateS((0,),S(1))*H(1)*CGateS((0,),T(2))*CGateS((1,),S(2))*H(2)
# Looking at qft3_decomp, replace the controlled-S gates and assign the result to qft3_decomp2.
qft3_decomp2 = SWAP(0,2)*H(0)*CNOT(0,1)*T(1)**(-1)*CNOT(0,1)*T(1)*H(1)*CGateS((0,),T(2))*CNOT(1,2)*T(2)**(-1)*CNOT(1,2)*T(2)*H(2)
qft3_decomp2
# qft3_decomp2 = SWAP(0,2)*H(0)*CNOT(0,1)*T(1)**(-1)*CNOT(0,1)*T(1)*H(1)*CGateS((0,),T(2))*CNOT(1,2)*T(2)**(-1)*CNOT(1,2)*T(2)*H(2)
# Looking at qft3_decomp, replace the controlled-T gates and assign the result to qft3_decomp3.
qft3_decomp3 = SWAP(0,2)*H(0)*CNOT(0,1)*T(1)**(-1)*CNOT(0,1)*T(1)*H(1)*CNOT(0,2)*Rk(2,4)**(-1)*CNOT(0,2)*Rk(2,4)*CNOT(1,2)*T(2)**(-1)*CNOT(1,2)*T(2)*H(2)
qft3_decomp3
# Look at the result of the quantum Fourier transform of |000>.
### Because the gate operations get a bit complicated, SymPy cannot simplify them well here.
### We therefore compute with represent(). In the sample answer, transpose() is used to avoid a long column vector.
# (sample answer) transpose(represent(qft3_decomp2*Qubit('000'), nqubits=3))
transpose(represent(qft3_decomp2*Qubit('000'), nqubits=3))
# Look at the result of the quantum Fourier transform of |001>.
### Multiplying by the global phase exp(I*pi/4) gives the same result.
exp(I*pi/4)*transpose(represent(qft3_decomp2*Qubit('001'), nqubits=3))
# Look at the result of the quantum Fourier transform of |010>.
### Multiplying by the global phase exp(I*pi/4) gives the same result.
exp(I*pi/4)*transpose(represent(qft3_decomp2*Qubit('010'), nqubits=3))
# Look at the result of the quantum Fourier transform of |011>.
### Multiplying by the global phase exp(I*pi/2) gives the same result.
exp(I*pi/2)*transpose(represent(qft3_decomp2*Qubit('011'), nqubits=3))
# Look at the result of the quantum Fourier transform of |100>.
transpose(represent(qft3_decomp2*Qubit('100'), nqubits=3))
# Look at the result of the quantum Fourier transform of |101>.
### Multiplying by the global phase exp(I*pi/4) gives the same result.
exp(I*pi/4)*transpose(represent(qft3_decomp2*Qubit('101'), nqubits=3))
# Look at the result of the quantum Fourier transform of |110>.
### Multiplying by the global phase exp(I*pi/4) gives the same result.
exp(I*pi/4)*transpose(represent(qft3_decomp2*Qubit('110'), nqubits=3))
# Look at the result of the quantum Fourier transform of |111>.
### Multiplying by the global phase exp(I*pi/2) gives the same result.
exp(I*pi/2)*transpose(represent(qft3_decomp2*Qubit('111'), nqubits=3))
```
# **BentoML Example: Image Segmentation with PaddleHub**
**BentoML makes moving trained ML models to production easy:**
* Package models trained with any ML framework and reproduce them for model serving in production
* **Deploy anywhere** for online API serving or offline batch serving
* High-Performance API model server with adaptive micro-batching support
* Central hub for managing models and deployment process via Web UI and APIs
* Modular and flexible design making it adaptable to your infrastructure
BentoML is a framework for serving, managing, and deploying machine learning models. It aims to bridge the gap between Data Science and DevOps, and enables teams to deliver prediction services in a fast, repeatable, and scalable way.
Before reading this example project, be sure to check out the [Getting started guide](https://github.com/bentoml/BentoML/blob/master/guides/quick-start/bentoml-quick-start-guide.ipynb) to learn about the basic concepts in BentoML.
This notebook demonstrates how to use BentoML to turn a PaddleHub module into a docker image containing a REST API server serving this model, how to use your ML service built with BentoML as a CLI tool, and how to distribute it as a PyPI package.
This example notebook is based on the [Python quick guide from PaddleHub](https://github.com/PaddlePaddle/PaddleHub/blob/release/v2.0/docs/docs_en/quick_experience/python_use_hub_en.md).
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
!pip3 install -q bentoml paddlepaddle paddlehub
!hub install deeplabv3p_xception65_humanseg
```
## Prepare Input Data
```
!wget https://paddlehub.bj.bcebos.com/resources/test_image.jpg
```
## Create BentoService with PaddleHub Module Instantiation
```
%%writefile paddlehub_service.py
import paddlehub as hub
import bentoml
from bentoml import env, artifacts, api, BentoService
import imageio
from bentoml.adapters import ImageInput
@env(infer_pip_packages=True)
class PaddleHubService(bentoml.BentoService):
def __init__(self):
super(PaddleHubService, self).__init__()
self.module = hub.Module(name="deeplabv3p_xception65_humanseg")
@api(input=ImageInput(), batch=True)
def predict(self, images):
results = self.module.segmentation(images=images, visualization=True)
return [result['data'] for result in results]
# Import the custom BentoService defined above
from paddlehub_service import PaddleHubService
import numpy as np
import cv2
# Pack it with required artifacts
bento_svc = PaddleHubService()
# Predict with the initialized module
image = cv2.imread("test_image.jpg")
images = [image]
segmentation_results = bento_svc.predict(images)
```
### Visualizing the result
```
# View the segmentation mask layer
from matplotlib import pyplot as plt
for result in segmentation_results:
plt.imshow(cv2.cvtColor(result, cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()
# Get the segmented image of the original image
for result, original in zip(segmentation_results, images):
result = cv2.cvtColor(result, cv2.COLOR_GRAY2RGB)
original_mod = cv2.cvtColor(original, cv2.COLOR_RGB2RGBA)
mask = result / 255
*_, alpha = cv2.split(mask)
mask = cv2.merge((mask, alpha))
segmented_image = (original_mod * mask).clip(0, 255).astype(np.uint8)
plt.imshow(cv2.cvtColor(segmented_image, cv2.COLOR_BGRA2RGBA))
plt.axis('off')
plt.show()
```
### Start dev server for testing
```
# Start a dev model server
bento_svc.start_dev_server()
!curl -i \
-F image=@test_image.jpg \
localhost:5000/predict
# Stop the dev model server
bento_svc.stop_dev_server()
```
### Save the BentoService for deployment
```
saved_path = bento_svc.save()
```
## REST API Model Serving
```
!bentoml serve PaddleHubService:latest
```
If you are running this notebook from Google Colab, you can start the dev server with the --run-with-ngrok option to gain access to the API endpoint via a public endpoint managed by ngrok:
```
!bentoml serve PaddleHubService:latest --run-with-ngrok
```
## Make request to the REST server
*After navigating to the location of this notebook, copy and paste the following code to your terminal and run it to make request*
```
curl -i \
--header "Content-Type: image/jpeg" \
--request POST \
--data-binary @test_image.jpg \
localhost:5000/predict
```
## Launch inference job from CLI
```
!bentoml run PaddleHubService:latest predict --input-file test_image.jpg
```
## Containerize model server with Docker
One common way of distributing this model API server for production deployment, is via Docker containers. And BentoML provides a convenient way to do that.
Note that docker is **not available in Google Colab**. You will need to download and run this notebook locally to try out this containerization with docker feature.
If you already have docker configured, simply run the following command to produce a docker container serving the PaddleHub prediction service created above:
```
!bentoml containerize PaddleHubService:latest
!docker run --rm -p 5000:5000 PaddleHubService:latest
```
# **Deployment Options**
If you are at a small team with limited engineering or DevOps resources, try out automated deployment with BentoML CLI, currently supporting AWS Lambda, AWS SageMaker, and Azure Functions:
* [AWS Lambda Deployment Guide](https://docs.bentoml.org/en/latest/deployment/aws_lambda.html)
* [AWS SageMaker Deployment Guide](https://docs.bentoml.org/en/latest/deployment/aws_sagemaker.html)
* [Azure Functions Deployment Guide](https://docs.bentoml.org/en/latest/deployment/azure_functions.html)
If the cloud platform you are working with is not on the list above, try out these step-by-step guide on manually deploying BentoML packaged model to cloud platforms:
* [AWS ECS Deployment](https://docs.bentoml.org/en/latest/deployment/aws_ecs.html)
* [Google Cloud Run Deployment](https://docs.bentoml.org/en/latest/deployment/google_cloud_run.html)
* [Azure container instance Deployment](https://docs.bentoml.org/en/latest/deployment/azure_container_instance.html)
* [Heroku Deployment](https://docs.bentoml.org/en/latest/deployment/heroku.html)
Lastly, if you have a DevOps or ML Engineering team who is operating a Kubernetes or OpenShift cluster, use the following guides as references for implementing your deployment strategy:
* [Kubernetes Deployment](https://docs.bentoml.org/en/latest/deployment/kubernetes.html)
* [Knative Deployment](https://docs.bentoml.org/en/latest/deployment/knative.html)
* [Kubeflow Deployment](https://docs.bentoml.org/en/latest/deployment/kubeflow.html)
* [KFServing Deployment](https://docs.bentoml.org/en/latest/deployment/kfserving.html)
* [Clipper.ai Deployment Guide](https://docs.bentoml.org/en/latest/deployment/clipper.html)
# Ax Service API with RayTune on PyTorch CNN
Ax integrates easily with different scheduling frameworks and distributed training frameworks. In this example, Ax-driven optimization is executed in a distributed fashion using [RayTune](https://ray.readthedocs.io/en/latest/tune.html).
RayTune is a scalable framework for hyperparameter tuning that provides many state-of-the-art hyperparameter tuning algorithms and seamlessly scales from laptop to distributed cluster with fault tolerance. RayTune leverages [Ray](https://ray.readthedocs.io/)'s Actor API to provide asynchronous parallel and distributed execution.
Ray 'Actors' are a simple and clean abstraction for replicating your Python classes across multiple workers and nodes. Each hyperparameter evaluation is asynchronously executed on a separate Ray actor and reports intermediate training progress back to RayTune. Upon reporting, RayTune then uses this information to perform actions such as early termination, re-prioritization, or checkpointing.
```
import logging
from ray import tune
from ray.tune import track
from ray.tune.suggest.ax import AxSearch
logger = logging.getLogger(tune.__name__)
logger.setLevel(level=logging.CRITICAL) # Reduce the number of Ray warnings that are not relevant here.
import torch
import numpy as np
from ax.plot.contour import plot_contour
from ax.plot.trace import optimization_trace_single_method
from ax.service.ax_client import AxClient
from ax.utils.notebook.plotting import render, init_notebook_plotting
from ax.utils.tutorials.cnn_utils import load_mnist, train, evaluate
init_notebook_plotting()
```
## 1. Initialize client
We specify `enforce_sequential_optimization` as False, because Ray runs many trials in parallel. With the sequential optimization enforcement, `AxClient` would expect the first few trials to be completed with data before generating more trials.
When high parallelism is not required, it is best to enforce sequential optimization, as it allows for achieving optimal results in fewer (but sequential) trials. In cases where parallelism is important, such as with distributed training using Ray, we choose to forego minimizing resource utilization and run more trials in parallel.
```
ax = AxClient(enforce_sequential_optimization=False)
```
## 2. Set up experiment
Here we set up the search space and specify the objective; refer to the Ax API tutorials for more detail.
```
ax.create_experiment(
name="mnist_experiment",
parameters=[
{"name": "lr", "type": "range", "bounds": [1e-6, 0.4], "log_scale": True},
{"name": "momentum", "type": "range", "bounds": [0.0, 1.0]},
],
objective_name="mean_accuracy",
)
```
## 3. Define how to evaluate trials
Since we use the Ax Service API here, we evaluate the parameterizations that Ax suggests, using RayTune. The evaluation function follows its usual pattern, taking in a parameterization and outputting an objective value. For detail on evaluation functions, see [Trial Evaluation](https://ax.dev/docs/runner.html).
```
def train_evaluate(parameterization):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_loader, valid_loader, test_loader = load_mnist(data_path='~/.data')
net = train(train_loader=train_loader, parameters=parameterization, dtype=torch.float, device=device)
track.log(
mean_accuracy=evaluate(
net=net,
data_loader=valid_loader,
dtype=torch.float,
device=device,
)
)
```
## 4. Run optimization
Execute the Ax optimization and trial evaluation in RayTune using [AxSearch algorithm](https://ray.readthedocs.io/en/latest/tune-searchalg.html#ax-search):
```
tune.run(
train_evaluate,
num_samples=30,
search_alg=AxSearch(ax), # Note that the argument here is the `AxClient`.
verbose=0, # Set this level to 1 to see status updates and to 2 to also see trial results.
# To use GPU, specify: resources_per_trial={"gpu": 1}.
)
```
## 5. Retrieve the optimization results
```
best_parameters, values = ax.get_best_parameters()
best_parameters
means, covariances = values
means
```
## 6. Plot the response surface and optimization trace
```
render(
plot_contour(
model=ax.generation_strategy.model, param_x='lr', param_y='momentum', metric_name='mean_accuracy'
)
)
# `plot_single_method` expects a 2-d array of means, because it expects to average means from multiple
# optimization runs, so we wrap our best objectives array in another array.
best_objectives = np.array([[trial.objective_mean * 100 for trial in ax.experiment.trials.values()]])
best_objective_plot = optimization_trace_single_method(
y=np.maximum.accumulate(best_objectives, axis=1),
title="Model performance vs. # of iterations",
ylabel="Accuracy",
)
render(best_objective_plot)
```
# 1: Palindrome 1
```
palindrome_answer = "abcdefghijklmnopqrstuvwxyzyxwvutsrqponmlkjihgfedcba"
def basic_palindrome():
letter = 'a'
output_string = ""
# chr() converts a numeric value to a character and ord() converts a character to a numeric value
# This allows us to arithmetically change the value of our letter
while letter != 'z':
output_string += letter
letter = chr(ord(letter) + 1)
# The top loop adds 'a' -> 'y' to the output_string, the bottom loop adds 'z' -> 'b' to the output_string
while letter != 'a':
output_string += letter
letter = chr(ord(letter) - 1)
# We add the final 'a' here and return the answer
output_string += letter
return output_string
print(palindrome_answer == basic_palindrome())
# We could also do:
import string
def string_lib_palindrome():
    # string.ascii_lowercase will get us the whole alphabet in lowercase from Python
# then we use list slicing to add the same list, but in reverse and removing the last 'z'
return string.ascii_lowercase + string.ascii_lowercase[-2::-1]
print(palindrome_answer == string_lib_palindrome())
```
# 2: Palindrome 2
```
palindrome_f_answer = "fghijklmnopqrstuvwxyzyxwvutsrqponmlkjihgf"
# Note that there is no error checking done here - the user could enter a whole set of letters, punctuation etc
def input_palindrome():
start_letter = input("Enter a starting letter for your palindrome alphabet: ")
letter = start_letter
output_string = ""
while letter != 'z':
output_string += letter
letter = chr(ord(letter) + 1)
while letter != start_letter:
output_string += letter
letter = chr(ord(letter) - 1)
output_string += letter
return output_string
print(palindrome_f_answer == input_palindrome())
# We could also do:
import string
def input_string_lib_palindrome():
start_letter = ord(input("Enter a starting letter for your palindrome alphabet: ")) - ord('a')
return string.ascii_lowercase[start_letter::] + string.ascii_lowercase[-2:start_letter - 1:-1]
print(palindrome_f_answer == input_string_lib_palindrome())
```
# 3: Pyramid
```
def pyramid_printer():
pyramid_height = 15
pyramid_output = ""
# This loop walks from the tip of the pyramid to the base
for level in range(pyramid_height):
level_string = ""
# This loop adds an appropriate amount of space-padding to the left of the letters
# We subtract 1 to take the central letter into account (the width of the letters in each line is: (level * 2) + 1)
for space in range(pyramid_height - 1 - level):
level_string += " "
        # This loop adds as many letters as the level we are on (so no letters on 0, 'a' on 1, 'ab' on 2 etc)
# Note that this loop prints nothing at the very tip, level 0
for letter_offset in range(level):
level_string += chr(ord('a') + letter_offset)
        # This loop adds 1 more letter than the current level number
        # so 1 letter is printed on level 0: the letter will be 'a' + 0 - 0 ('a'). This is the pyramid's tip
# on level 1, 2 letters are printed: 'a' + 1 - 0 ('b') and 'a' + 1 - 1 ('a')
for letter_offset in range(level + 1):
level_string += chr(ord('a') + level - letter_offset)
pyramid_output += level_string + "\n"
return pyramid_output
print(pyramid_printer())
```
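We could also build each level with string slicing and `str.center`. The sketch below is an alternative to the answer above (the function name `pyramid_printer_center` is ours) and simply checks that it reproduces the same output:
```
# Alternative sketch using string slicing and str.center (not part of the original answer)
import string
def pyramid_printer_center():
    pyramid_height = 15
    lines = []
    for level in range(pyramid_height):
        # first `level` letters ascending, then `level + 1` letters descending back to 'a'
        ascending = string.ascii_lowercase[:level]
        descending = string.ascii_lowercase[level::-1]
        # the base of the pyramid is 2 * pyramid_height - 1 characters wide
        lines.append((ascending + descending).center(2*pyramid_height - 1).rstrip())
    return "\n".join(lines) + "\n"
print(pyramid_printer_center() == pyramid_printer())
```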
# 4: Collatz Conjecture
```
collatz_12_answer = [12, 6, 3, 10, 5, 16, 8, 4, 2, 1]
# First we use the modulo operator (%) to check for evenness
def collatz_sequence(starting_number):
# Make sure to add the first number to the list
sequence = [starting_number]
# We change the name of the variable here to make it more readable in this loop
current_number = starting_number
while current_number != 1:
if current_number % 2 == 0:
# We don't have to cast down to an int here, as the value will always be exact
# but it makes the output look consistent
current_number = int(current_number / 2)
sequence.append(current_number)
else:
# Again, we don't need to cast to int, but it looks nicer
current_number = int((3 * current_number) + 1)
sequence.append(current_number)
return sequence
print(collatz_12_answer == collatz_sequence(12))
# And now we use bitwise AND to check for evenness
def collatz_sequence(starting_number):
sequence = [starting_number]
current_number = starting_number
while current_number != 1:
# Remember that binary reads from right to left, from indices 0 to n
# Each position represents 2^position_index
# To find the value of a binary number, therefore, we calculate the power of 2 for any position with a 1 and add the results together
# e.g. 100110101 has 1s for values 2^0 = 1, 2^2 = 4, 2^4 = 16, 2^5 = 32, and 2^8 = 256 meaning this is 309 in binary
# Notice that if the rightmost bit is set to 1, then we add 1 to the number, and for every other index we add an even number
# This means that all odd binary numbers end with 1
        # Therefore, if we take the bitwise AND of any number and binary 1, the result will always be 0 for even numbers, and 1 for odd numbers
# e.g. 100110101 & 000000001 = 000000001 whilst 100110100 & 000000001 == 000000000 (as 1 & 1 = 1 and 1 & 0 = 0)
if current_number & 1 == 0:
current_number = int(current_number / 2)
sequence.append(current_number)
else:
current_number = int((3 * current_number) + 1)
sequence.append(current_number)
return sequence
print(collatz_12_answer == collatz_sequence(12))
```
# 5: Run-Length Encoding
```
rle_input_01 = "aaeeeeae"
rle_input_01_answer = "2a4e1a1e"
rle_input_02 = "rr44errre"
rle_input_02_answer = "invalid input"
rle_input_03 = "eeeeeeeeeeeeeeeeeeeee"
rle_input_03_answer = "21e"
import string
def run_length_encoder(input_to_encode):
encoding = ""
# This value always starts at 1 as we are always looking at some letter, so the minimum encoding length is 1
current_length = 1
# This loop walks through all indices except the final index.
# We need to skip the last index as we check "index + 1" within the loop,
# if we did this with the final index we would get an out of bounds error
for i in range(len(input_to_encode) - 1):
# First we check that the letter at the current index is a valid lowercase letter
if input_to_encode[i] not in string.ascii_lowercase:
return "invalid input"
# Next we see if this letter is the same as the next letter, if so increase the current encoding length
elif input_to_encode[i] == input_to_encode[i + 1]:
current_length += 1
# Otherwise, we add the current encoding length and the relevant letter to the encoding output string
# We also need to make sure we reset the encoding length back to the starting value
else:
encoding += (str(current_length) + input_to_encode[i])
current_length = 1
# Since we don't look at the final index directly in the loop, we must look at it here
# If the letter is new, then current_length is 1 already, and we just need to add the last letter
# If current_length isn't 1, then it's already at the correct value as the loop incremented the encoding length when it saw the second last letter
encoding += (str(current_length) + input_to_encode[i + 1])
return encoding
print(run_length_encoder(rle_input_01) == rle_input_01_answer)
print(run_length_encoder(rle_input_02) == rle_input_02_answer)
print(run_length_encoder(rle_input_03) == rle_input_03_answer)
```
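We could also lean on `itertools.groupby`, which collects consecutive runs of equal characters. The sketch below is an alternative to the answer above (the function name is ours) and is checked against the same test cases:
```
# Alternative sketch using itertools.groupby (not part of the original answer)
import string
from itertools import groupby
def run_length_encoder_groupby(input_to_encode):
    # Reject anything that is not a lowercase ASCII letter, matching the original behaviour
    if any(ch not in string.ascii_lowercase for ch in input_to_encode):
        return "invalid input"
    # groupby yields (letter, run) pairs for each run of consecutive equal letters
    return "".join(str(len(list(run))) + letter for letter, run in groupby(input_to_encode))
print(run_length_encoder_groupby(rle_input_01) == rle_input_01_answer)
print(run_length_encoder_groupby(rle_input_02) == rle_input_02_answer)
print(run_length_encoder_groupby(rle_input_03) == rle_input_03_answer)
```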
# Least Squares Regression for Impedance Analysis

## Introduction
This is a tutorial for how to set up the functions and calls for curve fitting
an experimental impedance spectrum with Python using a least squares
regression. Four different models are used as examples for how to set up the
curve fit and estimate parameters: the first two are generic circuit element
combinations that illustrate the input-output flow of the functions; the third is a
Randles circuit, which can be used for parameter estimation; and the fourth is the
Macrohomogeneous Porous Electrode (MHPE) model.
## Contents
* [Nomenclature](#Nomenclature)
* [Modules](#Modules)
* [Functions](#Functions)
* [Impedance Models](#Impedance-Models)
* [z_a](#z_a) - Equivalent circuit example 1
* [Inputs](#z_a-Inputs)
* [Outputs](#z_a-Outputs)
* [Example function call](#Example-Usage-of-z_a)
* [z_b](#z_b) - Equivalent circuit example 2
* [Inputs](#z_b-Inputs)
* [Outputs](#z_b-Outputs)
* [Example function call](#Example-Usage-of-z_b)
* [Randles Circuit](#Randles-Circuit)
* [warburg](#warburg)
* [Inputs](#warburg-Inptus)
* [Outputs](#warburg-Outputs)
* [z_randles](#z_randles)
* [Inputs](#z_randles-Inptus)
* [Outputs](#z_randles-Outputs)
* [z_mhpe (Macrohomogeneous Porous Electrode](#z_mhpe)
* [Inputs](#z_mhpe-Inputs)
* [Outputs](#z_mhpe-Outputs)
* [cell_response](#cell_response)
* [Inputs](#cell_response-Inptus)
* [Outputs](#cell_response-Outputs)
* [Example function call](#Example-usage-of-cell_response)
* [Least Squares](#Least-Squares)
* [residual](#residual)
* [Inputs](#residual-Inptus)
* [Outputs](#residual-Outputs)
* [Example function call](#Example-usage-of-residual)
* [z_fit](#z_fit)
* [z_plot](#z_plot)
* [Example Function Calls](#Examples) - examples of the input - output syntax of functions
* [Experimental Data](#Experimental-Data) - these are the data used in the curve fitting examples
* [Curve Fitting Examples](#Curve-Fitting-Examples)
* [Comparison of z_a and z_b](#Example-fitting-a-spectrum-(z_a-and-z_b))
* [Randles Circuit](#Example-fitting-a-spectrum-(z_randles))
* [Macrohomogeneous Porous Electrode](#Example-fitting-a-spectrum-(z_mhpe))
* [Appendix](#Appendix)
## Nomenclature
|Parameter | Description |Unit |
|---- |---- |---- |
|$A$ | Geometric surface area| $cm^2$|
|$ASR_x$ | Area Specific Resistance of x | $\Omega\ cm^2$|
|$A_t$ | Wetted surface area | $cm^2$|
|$CPE$ | Constant Phase Element| $F^P$|
|$C_x$ | Concentration of species x| $mol\ cm^{-3}$|
|$D_x$ | Diffusivity of species x| $cm^{2}\ s^{-1}$|
|$F$ | Faraday's constant| $C\ mol^{-1}$|
|$L$ | Inductance| $H$|
|$P$ | CPE exponent| $-$|
|$Q$ | CPE parameter| $F$|
|$R$ | Universal gas constant| $J\ mol^{-1}\ K^{-1}$|
|$T$ | Temperature| $K$|
|$W$ | Warburg impedance| $\Omega$|
|$Z_{x}$ | Impedance of x| $\Omega$|
|$a$ | Nernstian diffusion layer thickness| $cm$|
|$b$ | electrode thickness| $cm$|
|$f$ | Scale factor| $-$|
|$g_{ct}$ | charge transfer conductance (per unit length) | $S cm^{-1}$|
|$i_0$ | Exchange current density| $A\ cm^{-2}$|
|$j$ | Imaginary unit ($\sqrt{-1}$)| $-$|
|$n$ | Number of electrons transferred| $-$|
|$\lambda$ | complex decay length| cm |
|$\omega$ | Angular frequency| $rad\ s^{-1}$|
|$\rho_1$ | Electrolyte resistivity| $\Omega\ cm$|
|$\rho_2$ | Solid Phase resistivity| $\Omega\ cm$|
### Modules
The three modules used in this tutorial are [numpy](https://numpy.org/),
[scipy](https://www.scipy.org/), and [matplotlib](https://matplotlib.org/).
These can be installed from the shell (not iPython) with the following command:
pip install numpy scipy matplotlib
They are imported into the scope of the program with the following commands:
[Link to Contents](#Contents)
```
from numpy import real,imag,pi,inf,array,concatenate,log,logspace,tanh,sqrt,exp,sinh
from scipy.optimize import least_squares
import matplotlib.pyplot as plt
import matplotlib
#------------------------------------------------------------------------------
# Constants
F = 96485.33289
Ru = 8.3144598
#------------------------------------------------------------------------------
try:
plt.style.use("jupyter_style")
except:
plt.rcParams['axes.labelsize'] = 24
plt.rcParams['font.size'] = 20
fs = (12,6) # figure size
coth = lambda x: (exp(2*x)+1)/(exp(2*x)-1)
colors = 'rgbcmyk'
markers = 'x+v^os'
```
# Functions
## Impedance Models
Each of the impedance models adheres to the following function template:
* Inputs:
* input_dict - *dictionary* containing the model parameters. There is no distinction from the perspective of the model (function) between fitted and fixed parameters. This is handled by the least_squares functions and allows the user to define which parameters should float and which should be fixed.
* frequency - a frequency **or** an (numpy) *array* of frequencies at which to evaluate the model
* Output:
* Z($\omega$) - (complex number)
[Link to Contents](#Contents)
---
### z_a
*z_a* is a function which **returns** the complex impedance response of the following equivalent circuit model.

The equation for this circuit is:
$$z_a = R_0 + (R_1^{-1} + j \omega C_1)^{-1} + (R_2^{-1} + j\omega C_2)^{-1} $$
#### z_a Inputs
* a dictionary containing the parameters (indexed by keys that match the variable names) in the model:
0. 'R0' - Resistance 0 (ohmic resistance)
1. 'R1' - Resistance 1
2. 'R2' - Resistance 2
3. 'C1' - capacitance 1 (in parallel with R1)
4. 'C2' - capacitance 2 (in parallel with R2)
* frequency - frequency (or array of frequencies) to calculate impedance (rad / sec)
#### z_a Outputs
* Z($\omega$) - (complex number)
[Link to Contents](#Contents)
```
def z_a(input_dict,
frequency):
R0 = input_dict['R0']
R1 = input_dict['R1']
R2 = input_dict['R2']
C1 = input_dict['C1']
C2 = input_dict['C2']
return(R0+(R1**-1+1j*frequency*C1)**-1+(R2**-1+1j*frequency*C2)**-1)
```
### z_b
*z_b* is a function which **returns** the complex impedance response of the
following equivalent circuit model.

The equation for this circuit is:
$$z_b = R_0 + ((R_1 + (R_2^{-1} + j\omega C_2)^{-1})^{-1} + j \omega C_1)^{-1} $$
#### z_b Inputs
* a dictionary containing the parameters (indexed by keys that match the variable names) in the model:
0. 'R0' - Resistance 0 (ohmic resistance)
1. 'R1' - Resistance 1
2. 'R2' - Resistance 2
3. 'C1' - capacitance 1 (in parallel with R1 and the parallel combination of R2 and C2)
4. 'C2' - capacitance 2 (in parallel with R2)
* frequency - frequency (or array of frequencies) to calculate impedance (rad / sec)
#### z_b Outputs
* Z($\omega$) - (complex number)
[Link to Contents](#Contents)
```
def z_b(input_dict,
frequency):
R0 = input_dict['R0']
R1 = input_dict['R1']
R2 = input_dict['R2']
C1 = input_dict['C1']
C2 = input_dict['C2']
return(R0+1/(1/(R1+1/(1/R2+(1j*frequency)*C2))+(1j*frequency)*C1))
```
### Randles Circuit
The following functions set up the calculations for a Randles circuit response.
This is broken up into two functions:
* [warburg](#warburg)
* [z_randles](#z_randles)
Breaking the solution process up in this way helps the readability of the
functions and the process of decomposing circuit elements. Note that each of
these functions take the same arguments and follow the template described
[here](#Impedance-Models). Using dictionaries (instead of lists/arrays) keeps the
arguments to these functions generic: each function extracts the values it needs
without requiring the inputs to be ordered, which allows for greater flexibility.
[z_randles](#z_randles) is called by the function
[cell_response](#cell_response) to determine the response of the symmetric cell.
[Link to Contents](#Contents)
### warburg
The Warburg element is modeled by:
$$\frac{W}{A} = \frac{RT}{A_tn^2F^2C_RfD_R}\frac{tanh\bigg(a\sqrt{\frac{j\omega}{D_R}}\bigg)}{\sqrt{\frac{j\omega}{D_R}} }+\frac{RT}{A_tn^2F^2C_OfD_O}\frac{tanh\bigg(a\sqrt{\frac{j\omega}{D_O}}\bigg)}{\sqrt{\frac{j\omega}{D_O} }}$$
#### warburg Inputs
* *input_dict*: a dictionary containing all the parameters for the full cell response, the Warburg element only needs the entries for:
* 'T'
* 'C_R'
* 'C_O'
* 'D_R'
* 'D_O'
* 'n'
* 'A_t'
* 'a'
* 'f'
* frequency - frequency to calculate impedance (rad / sec)
#### warburg Outputs
* W($\omega$) - (complex number)
[Link to Contents](#Contents)
```
def warburg(input_dict,
frequency):
T = input_dict['T']
A_t = input_dict['A_t']
C_R = input_dict['C_R']
C_O = input_dict['C_O']
D_R = input_dict['D_R']
D_O = input_dict['D_O']
n = input_dict['n']
a = input_dict['a']
f = input_dict['f']
c1 = (Ru*T)/(A_t*n**2*F**2*f) # both terms multiply c1
term1 = c1*tanh(a*sqrt(1j*frequency/D_R))/(C_R*sqrt(D_R*1j*frequency))
term2 = c1*tanh(a*sqrt(1j*frequency/D_O))/(C_O*sqrt(D_O*1j*frequency))
return(term1+term2)
```
### z_randles
z_randles calculates the response of a randles circuit element with no ohmic
resistance (complex number). It calls [warburg](#warburg) and returns the
serial combination of the charge transfer resistance and the warburg impedance
in parallel with a constant phase element (CPE).

This is modeled by the equation:
$$Z_{Randles} = A \times (z_{ct}^{-1}+j\omega^{P}Q)^{-1}$$
The charge transfer impedance is modeled by the equation:
$$z_{ct} = R_{ct} + W $$
Where the charge transfer resistance is calculated by the equation:
$$R_{ct} = \frac{RT}{A_tnFi_0}$$
The term $\frac{1}{j\omega^PQ}$ represents the CPE.
#### z_randles Inputs
* *input_dict*: a dictionary containing all the parameters for the full cell response, z_randles only needs the entries for:
    * 'T'
    * 'n'
    * 'A_t'
    * 'i0'
    * 'P'
    * 'C_dl'
    * 'A'
    * plus the entries required by [warburg](#warburg)
* frequency - frequency to calculate impedance (rad / sec)
#### z_randles Outputs
* Z_ct($\omega$) / ($\Omega$ cm$^2$) - (complex number)
[Link to Contents](#Contents)
```
def z_randles( input_dict,
frequency):
T = input_dict['T']
n = input_dict['n']
A_t = input_dict['A_t']
i0 = input_dict['i0']
P = input_dict['P']
C_dl = input_dict['C_dl']
A = input_dict['A']
# Calculate Warburg Impedance
w = warburg(input_dict,frequency)
# Calculate charge transfer resistance
R_ct = Ru*T/(n*F*i0*A_t)
serial = R_ct+w
z_ct = 1/(1/serial+(1j*frequency)**P*C_dl*A_t)
return(A*z_ct)
```
### z_mhpe
*z_mhpe* calculates the response of the macrohomogeneous porous electrode model
to the specified inputs. Note that the Sun *et al.* formulation of this function calls [z_randles](#z_randles)
to determine the Randles circuit response (that path is commented out in the code below, which uses the Nguyen *et al.* formulation). The derivation of this circuit
element is equivalent to a branched transmission line with Randles circuit
elements defining the interfacial interaction; these elements interact with
different amounts of electrolyte resistance determined by the electrode thickness.

This model was derived by Paasch *et al.* and has several equivalent
formulations:
Equation 22 from [Paasch *et al.*](https://doi.org/10.1016/0013-4686(93)85083-B)
$$ AZ_{mhpe} = \frac{(\rho_1^2+\rho_2^2)}{(\rho_1+\rho_2)}\frac{ coth(b \beta)}{\beta}
+ \frac{2\rho_1 \rho_2 }{(\rho_1+\rho_2)}\frac{1}{\beta sinh(b \beta)}
+ \frac{\rho_1 \rho_2 b}{(\rho_1+\rho_2)}$$
Equation 10 from [Nguyen *et al.*](https://doi.org/10.1016/S0022-0728(98)00343-X)
$$ Z_{mhpe} \frac{A}{b} = \frac{(\rho_1^2+\rho_2^2)}{(\rho_1+\rho_2)}\frac{\lambda}{b} coth\bigg(\frac{b}{\lambda}\bigg)
+ \frac{2\rho_1 \rho_2 }{(\rho_1+\rho_2)}\frac{\lambda}{b}\bigg(sinh\bigg(\frac{b}{\lambda}\bigg)\bigg)^{-1}
+ \frac{\rho_1 \rho_2 b}{(\rho_1+\rho_2)}$$
Equation 1 from [Sun *et al.*](https://www.osti.gov/biblio/1133556-resolving-losses-negative-electrode-all-vanadium-redox-flow-batteries-using-electrochemical-impedance-spectroscopy)
$$ A Z_{mhpe} = \frac{(\rho_1^2+\rho_2^2) b}{(\rho_1+\rho_2)}\frac{coth(Q_2)}{Q_2}
+ \frac{2\rho_1 \rho_2 b}{(\rho_1+\rho_2)Q_2sinh(Q_2)}
+ \frac{\rho_1 \rho_2 b}{(\rho_1+\rho_2)}$$
Clearly if these are equivalent then:
$$ b\beta = \frac{b}{\lambda} = Q_2$$
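As a quick numerical sanity check (an addition to the tutorial, not part of the original derivation), the snippet below compares $Q_2$ from the Sun *et al.* formulation with $b/\lambda$ from the Nguyen *et al.* formulation at a single frequency, reusing the `warburg` and `z_randles` functions defined above; the parameter values are illustrative and match the later `cell_response` example:
```
# Sanity check (illustrative/assumed values): Q_2 from Sun et al. should equal
# b/lambda from Nguyen et al. when both are built from the same inputs.
check_dict = {
    'T':303.15, 'A':5, 'C_R':0.00025, 'C_O':0.00025,
    'rho1':1.6, 'rho2':.012, 'b':.3,
    'D_R':1.1e-6, 'D_O':0.57e-6, 'C_dl':20e-6,
    'n':1, 'A_t':100, 'i0':3e-4, 'P':.95,
    'a':.002, 'f':.05,
}
w_check = 2*pi*10.0                      # an arbitrary angular frequency (rad/s)
rho_sum = check_dict['rho1'] + check_dict['rho2']
# Sun et al. (their equation 2): Q2 from the Randles response
Q2 = sqrt(rho_sum*check_dict['b']/z_randles(check_dict, w_check))
# Nguyen et al. (their equations 4-6): b/lambda from the complex decay length
S_c = check_dict['A_t']/(check_dict['b']*check_dict['A'])
g_ct = 1/(Ru*check_dict['T']/(check_dict['n']*F*check_dict['i0']*S_c*check_dict['A'])
          + check_dict['b']*warburg(check_dict, w_check))
omega_0 = g_ct/(check_dict['C_dl']*S_c*check_dict['A'])
g = check_dict['C_dl']*S_c*((1j*w_check)**check_dict['P'] + omega_0)
b_over_lambda = check_dict['b']*sqrt(g*rho_sum)
print(Q2, b_over_lambda)                 # the two values should agree to round-off
```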
#### z_mhpe Inputs
* *input_dict*: a dictionary containing all the parameters for the MHPE response:
    * 'rho1'
    * 'rho2' - $\frac{\rho_{ 2}^*f_p}{\nu_p}$ (page 2654) where $\rho_2$ = bulk electrolyte resistivity, $f_p$ is the tortuosity factor and $\nu_p$ is the relative pore volume
    * 'b'
    * plus the entries required by [z_randles](#z_randles) and [warburg](#warburg) ('C_dl', 'A_t', 'A', 'T', 'n', 'i0', 'P', ...)
* frequency - frequency to calculate impedance (rad / sec)
#### z_mhpe Outputs
* AZ_mhpe ($\omega$) / ($\Omega$ cm$^2$) - (complex number)
[Link to Contents](#Contents)
```
def z_mhpe( input_dict,
frequency):
rho1 = input_dict['rho1']
rho2 = input_dict['rho2']
b = input_dict['b']
C_dl = input_dict['C_dl']
A_t = input_dict['A_t']
A = input_dict['A']
T = input_dict['T']
n = input_dict['n']
i0 = input_dict['i0']
P = input_dict['P']
# these are the same for each formulation of the MHPE model
coeff_1 = (rho1**2+rho2**2)/(rho1+rho2)
coeff_2 = 2*rho1*rho2/(rho1+rho2)
term_3 = rho1*rho2*b/(rho1+rho2)
##=========================================================================
## Sun et al. 2014
##-------------------------------------------------------------------------
## note: z_r multiplies the response by A/At so this operation is already
## finished for equation 2
##-------------------------------------------------------------------------
#z_r = z_randles(input_dict,frequency) # equation 6
#Q2 = sqrt((rho1+rho2)*b/z_r) # equation 2
#term_1 = coeff_1*(b/Q2)*coth(Q2)
#term_2 = coeff_2*(b/Q2)*sinh(Q2)**(-1)
##=========================================================================
## Paasch et al. '93
##--------------------------------------------------------------------------
## note: modification of g_ct (below equation 9) to include mass transfer
## conductance (on unit length basis) - recall that the warburg element
## already has A_t in its denominator so it is only multiplied by b to
## account for the fact that it has A*S_c in its denominator
##--------------------------------------------------------------------------
#S_c = A_t/(b*A) # below equation 2
#C_1 = A*S_c*C_dl # equation 3
#g_ct = 1/(Ru*T/(n*F*i0*S_c*A) + b*warburg(input_dict,frequency))
#k = g_ct/C_1 # equation 12
#K = 1/(C_dl*S_c*(rho1+rho2)) # equation 12
#omega_1 = K/b**2 # equation 23
#beta = (1/b)*((k+(1j*frequency)**P)/omega_1)**(1/2) # equation 23
#term_1 = coeff_1*(coth(b*beta)/beta)
#term_2 = coeff_2*(1/beta)*sinh(b*beta)**(-1)
##=========================================================================
# Nguyen et al.
#--------------------------------------------------------------------------
# note: Paasch was a co-author and it uses much of the same notation as '93
# They use a mass transfer hindrance term instead of warburg, so it is
# replaced again here like it was for the Paasch solution. Also notice that
# omega_0 is equivalent to k in Paasch et al.
# the warburg element function returns W with A_t already in the
# denominator, so it is multiplied by b to account for the fact that it has
# S_c*A in its denominator
#--------------------------------------------------------------------------
S_c = A_t/(b*A) # below equation 4
g_ct = 1/(Ru*T/(n*F*i0*S_c*A) + b*warburg(input_dict,frequency))
omega_0 = g_ct/(C_dl*S_c*A) # equation 6
g = C_dl*S_c*((1j*frequency)**P+omega_0) # equation 5
lamb = 1/sqrt(g*(rho1+rho2)) # equation 4
term_1 = coeff_1*lamb*coth(b/lamb)
term_2 = coeff_2*lamb*(sinh(b/lamb))**(-1)
#--------------------------------------------------------------------------
mhpe = term_1+term_2+term_3
return(mhpe)
```
### cell_response
*cell_response* calculates the full impedance response of a symmetric cell, which is illustrated graphically in the following circuit model.

Both half-cells of the symmetric cell are assumed to contribute equally, which
necessitates the $\times 2$. The following equation models this response in area-specific units (matching the code below):
$$A\,Z_{cell} = ASR_{mem} + j \omega L A + 2 \times z_{electrode}$$
Where
$$z_{electrode} \in [z_{randles},z_{mhpe}]$$
#### cell_response Inputs
* *input_dict*: a dictionary containing all the parameters for the full cell response, cell_response itself only needs the entries for:
    * 'A'
    * 'ASR_mem'
    * 'L'
    * 'z_model' (either z_randles or z_mhpe)
    * plus whatever entries the chosen z_model requires
* frequency - frequency to calculate impedance (rad / sec)
#### cell_response Outputs
* $A\,Z_{cell}(\omega)$ / ($\Omega$ cm$^2$) - (complex number)
[Link to Contents](#Contents)
```
def cell_response( input_dict,
frequency):
A = input_dict['A']
ASR_mem = input_dict['ASR_mem']
L = input_dict['L']
z_model = input_dict['z_model'] # z_model can be z_randles or z_mhpe
z_elec = z_model(input_dict,frequency)
return(ASR_mem+2*z_elec+1j*L*frequency*A)
```
## Least Squares
This script uses
[scipy.optimize.least_squares](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html)
to optimize the parameters. The two functions defined here are the *residual*
calculation and the fitting function *z_fit*, which handles the I/O for
least_squares with the different models.
[Link to Contents](#Contents)
### *residual*
*residual* is a function which is an input to scipy.optimize.least_squares.
It calculates the impedance response at a given frequency (or range of
frequencies) and returns the difference between the generated value(s) and the
experimental data.
#### *residual* Inputs
* x0 - a list which stores the current values of the floating parameters, in the same order as *floating_params*. The remaining arguments are:
* frequency - frequency argument to *Z* (can be single value or array)
* data - experimental data (complex number)
* input_dict - a dictionary which contains at the minimum the values that the selected model expects
* floating_params - *list* which contains the keys (which are entries in input_dict) for the floating parameters
* model - each of the impedance models described in this tutorial follows the same
two-argument template (input_dict, frequency). For this reason, they can be treated as
arguments to *residual* and used interchangeably.
#### *residual* Outputs
* 1D array (real numbers) with difference (at each frequency) between experimental and simulated spectrum
[Link to Contents](#Contents)
```
def residual( x0,
frequency,
data,
input_dict,
floating_params,
model):
# the dictionary (temp_dict) is the interface between the residual calculation and the
# models, by having the residual seed temp_dict with the floating
# parameters, the level of abstraction given to ability of parameters to
# float is increased. (not all of the models will need the same format and
# they can parse dictionaries for values they are expecting)
temp_dict = {}
for i,key in enumerate(floating_params):
temp_dict[key] = x0[i]
for key in input_dict:
if key not in floating_params:
temp_dict[key] = input_dict[key]
# generate a spectrum with the inputs
Z_ = model( temp_dict,frequency)
# Calculate the difference between the newly generated spectrum and the
# experimental data
Real = real(Z_)-real(data)
Imag = imag(Z_)-imag(data)
return(concatenate([Real,Imag]))
```
### *z_fit*
*z_fit* sets up the call to
[scipy.optimize.least_squares](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html)
and returns the fitted parameters
#### *z_fit* Inputs
* input_dict - a dictionary which contains the initial guess for the parameters.
* ReZ - real component of complex impedance ($\Omega$)
* ImZ - imaginary component of complex impedance ($\Omega$)
* frequency - array with same dimensions as ReZ and ImZ (rad s$^{-1}$)
* floating_params - list of keys in input_dict that are allowed to float (all other entries are treated as fixed constants)
* area - geometric surface area (cm$^2$)
* model - any of the impedance models (e.g. z_a, z_b, cell_response) is a valid argument for z_fit to use.
* residual_function - residual to be minimized by least_squares
* one_max_keys - *optional* - a list containing any parameters that have a maximum value of 1 (i.e. CPE exponent P)
*z_fit* converts the input data from individual ReZ and ImZ arrays of
resistances (in $\Omega$) to complex impedance values in ASR ($\Omega$ cm$^2$).
The bounds_lower and bounds_upper variables set the bounds for the parameters
(i.e. non-negative for this case). **Note**: The order of the arguments in
bounds_lower and bounds_upper is the same as the order of x0, and entries correspond to
parameters which share the same index.
The default method for least_squares is *Trust Region Reflective* ('trf'),
which is well suited for this case. The variable *out* contains the output of
the call to least_squares.
[Link to Contents](#Contents)
```
def z_fit( input_dict,
ReZ,
ImZ,
frequency,
floating_params,
area,
model,
residual_function,
one_max_keys = []):
    # parameter order follows the order of floating_params
data = (ReZ-1j*ImZ)*area
# Set the bounds for the parameters
x0 = [input_dict[key] for key in floating_params]
bounds_lower = [0 for key in floating_params]
bounds_upper = [inf if param not in one_max_keys else 1 for param in floating_params]
out = least_squares( residual_function,
x0,
bounds = (bounds_lower,bounds_upper),
args = (frequency,data,input_dict,floating_params,model))
output_dict = {}
j = 0
print("-"*80)
print("model = {}".format(input_dict['z_model'].__name__))
for key in floating_params:
output_dict[key] = out.x[j]
print("\t{} = {}".format(key,out.x[j]))
j+=1
for key in input_dict:
if key not in floating_params:
output_dict[key] = input_dict[key]
# the fitted parameters are extracted from the 'output' variable
return(output_dict)
```
### z_plot
*z_plot* is a function used to generate plots of the spectra (both
experiment and model). It creates a subplot with two columns: the first column
is an Argand diagram (Nyquist plot) comparing the experiment and model; the
second column compares the spectra (real and imaginary components) as a function
of frequency.
#### *z_plot* Inputs
* ReZ - Real(Z) of a spectrum
* ImZ - Imaginary(Z) of a spectrum
* frequency - array of angular frequencies (rad s$^{-1}$)
* ax_1 - handle for axis with Nyquist plot
* ax_2 - handle for axis with frequency plot
* label - label the curves on each axis
* color - *optional* - list of colors to be used
* marker - *optional* - list of markers to be used
* linestyle - *optional* - linestyle specifier
* resistances - (list) contains the intercepts with the real axis
[Link to Contents](#Contents)
```
def z_plot(ReZ,
ImZ,
frequency,
ax_1,
ax_2,
label,
color = ['k','k'],
marker = ['x','+'],
linestyle = '',
resistances = []):
# Nyquist plot for the spectrum
ax_1.plot(ReZ,-ImZ,color = color[0], marker = marker[0],
linestyle = linestyle, label = label)
# plotting the real axis intercepts
if resistances != []:
R0,R1,R2 = resistances
ax_1.plot(R0,0,'b+')
ax_1.plot(R0+R1,0,'b+')
ax_1.plot(R0+R1+R2,0,'b+',label = 'model ASR estimation')
ax_1.axis('equal')
ax_1.set_xlabel('Re(Z) / $\Omega$ cm$^2$')
ax_1.set_ylabel('-Im(Z) / $\Omega$ cm$^2$')
# |Z| as a function of frequency
ax_2.plot(log(frequency),-ImZ,color = color[0], marker = marker[0],
label = "-Im({})".format(label), linestyle = linestyle)
ax_2.plot(log(frequency),ReZ,color = color[1], marker = marker[1],
label = "Re({})".format(label), linestyle = linestyle)
ax_2.set_xlabel('log(frequency) / rad s$^{-1}$')
ax_2.set_ylabel('|Z| / $\Omega$ cm$^2$')
ax_2.yaxis.set_label_position('right')
[ax.legend(loc = 'best') for ax in [ax_1,ax_2]]
```
### high_frequency_detail
*high_frequency_detail* plots a zoomed-in view of the high-frequency region of a Nyquist plot, which is useful for inspecting the small distributed ionic resistance near the real-axis intercept.
#### *high_frequency_detail* Inputs
* ReZ - Real(Z) of a spectrum
* ImZ - Imaginary(Z) of a spectrum
* ax - handle for axis for nyquist plot
* label - label the curves on each axis
* color - *optional* - string with color
* marker - *optional* - string with marker to be used
* linestyle - *optional* - linestyle specifier
* spacing - *optional* - how wide and tall the window will be
* x_min - *optional* - lower x limit for the plot window
* y_min - *optional* - lower y limit for the plot window
[Link to Contents](#Contents)
```
def high_frequency_detail( ReZ,
ImZ,
ax,
label,
color = 'k',
marker = 'x',
linestyle = '',
spacing = 2,
x_min = 0,
y_min = 0):
ax.plot( ReZ,ImZ,
color = color,
marker = marker,
label = label,
linestyle = linestyle)
ax.set_ylim(y_min,y_min+spacing)
ax.set_xlim(x_min,x_min+spacing)
ax.set_xlabel('Re(Z) / $\Omega$ cm$^2$')
ax.set_ylabel('-Im(Z) / $\Omega$ cm$^2$')
```
## Examples
The following are examples demonstrating the I/O for the functions. These are
the *forward* calculations of spectra (i.e. the parameters are knowns).
[Here](#Curve-Fitting-Examples) is a link to the examples for curve fitting the
experimental data.
[Link to Contents](#Contents)
### Example Usage of z_a
[Link to Contents](#Contents)
```
input_dict = {
'R0':1,
'R1':2,
'R2':2,
'C1':10e-6,
'C2':10e-3,
'z_model':z_a
}
fre = 100
# uncomment the following line to use a numpy array as an input to z_a
#fre = logspace(-1,6,60)
z_ = z_a(input_dict,fre)
print(z_)
fig,ax = plt.subplots(nrows = 1, ncols = 2, figsize = fs, num = 0)
z_plot(real(z_),imag(z_),fre,ax[0],ax[1],'z_a')
```
### Example Usage of *z_b*
[Link to Contents](#Contents)
```
input_dict = {
'R0':1,
'R1':2,
'R2':2,
'C1':10e-6,
'C2':10e-3,
'z_model':z_b
}
fre = 100
# uncomment the following line to use a numpy array as an input to z_b
#fre = logspace(-1,6,60)
z_ = z_b(input_dict,fre)
print(z_)
fig,ax = plt.subplots(nrows = 1, ncols = 2, figsize = fs, num = 1)
z_plot(real(z_),imag(z_),fre,ax[0],ax[1],'z_b')
```
### Example usage of cell_response
[Link to Contents](#Contents)
```
# Example Usage of cell_response
input_dict = {
'T':303.15, # Temperature (K)
'A':5, # geometric surface area (cm^2)
'C_R':0.00025, # concentration of reduced species (mol / L)
'C_O':0.00025, # concentration of oxidized species (mol / L)
'rho1':1.6, # electrolyte resistivity (ohm cm^-1)
'rho2':.012, # solid phase resistivity (ohm cm^-1)
'b':.3, # electrode thickness (cm)
'D_R':1.1e-6, # Diffusion coefficient for reduced species (cm^2 / s)
'D_O':0.57e-6, # Diffusion coefficient for oxidized species (cm^2 / s)
'C_dl':20e-6, # Double layer capacitance per unit surface area
'n':1, # number of electrons transferred
'A_t':100, # Wetted surface area (cm^2)
'i0':3e-4, # exchange current density (A cm^2)
'P':.95, # CPE exponent (-)
'ASR_mem':.5, # membrane ASR (Ohm cm^2)
'a':.002, # Nernstian diffusion layer thickness (cm)
'f':.05, # scale factor (-)
'L':1e-7, # inductance (H)
}
fre = 100 # frequency to evaluate (rad / s)
#---------------------------------------------------------------------------
# uncomment the following line to use a numpy array as an input to Z_Randles
#---------------------------------------------------------------------------
#fre = logspace(-1,6,60)
fig,ax = plt.subplots(nrows = 1, ncols = 2, figsize = fs, num = 2)
for j,model in enumerate([z_randles,z_mhpe]):
input_dict['z_model'] = model
print("-"*80)
print("Model name = ",model.__name__)
z_ = cell_response(input_dict,fre)
print(z_)
z_plot(real(z_),imag(z_), fre,
ax[0],
ax[1],
model.__name__,
color = [colors[j]]*2)
```
### Example usage of *residual*
**Note**: *residual* is only used in the scope
of [*z_fit*](#z_fit) as an argument for *scipy.optimize.least_squares*. This is
simply to illustrate the way it is called.
```
input_dict = {
'R0':1,
'R1':2,
'R2':2,
'C1':10e-6,
'C2':10e-6,
'z_model':'z_a'
}
floating_params = list(input_dict.keys())
x0 = [input_dict[key] for key in input_dict]
area = 5
# artificial data point and frequency
Zexperimental = (2-1j*.3)*area
freExperimental = 10
# using model z_a for this example
model = z_a
print(residual( x0,
array([freExperimental]),
array([Zexperimental]),
input_dict,
floating_params,
model)
)
```
## Experimental Data
* 0.5 M V
* 50% SoC (Anolyte)
* Untreated GFD3 Electrode
* 5 cm$^2$ flow through (4 cm $\times$ 1.25 cm)
* Symmetric Cell
* 30$^o$ C
* 25 mL min$^{-1}$
* 5 mV amplitude
* 50 kHz - 60 mHz (10 points per decade)
* recorded with a Bio-Logic VSP potentiostat
[Link to Contents](#Contents)
```
ReZ = array([ 0.08284266, 0.08796988, 0.09247773, 0.09680536, 0.1008338, 0.1046377,
0.10762515, 0.11097036, 0.11456104, 0.11730141, 0.12102044, 0.12483983,
0.12752137, 0.13284937, 0.13733491, 0.14415036, 0.15262745, 0.16596687,
0.18005887, 0.20618707, 0.23595014, 0.28682289, 0.35581028, 0.4261204,
0.56032306, 0.66784477, 0.80271685, 0.94683707, 1.0815266 , 1.1962512,
1.3139805, 1.4123734, 1.4371094, 1.4075497, 1.451781 , 1.4999349,
1.4951819, 1.520785, 1.5343723, 1.5442845, 1.5559914, 1.5715505,
1.581386, 1.6037546, 1.6127067, 1.6335728, 1.6475986 , 1.6718234,
1.6902045, 1.7113601, 1.7273785, 1.7500663, 1.7663705 , 1.7867819,
1.8013573, 1.8191988, 1.83577, 1.8508064, 1.8635432 , 1.8764733 ])
ImZ = array([
0.01176712, 0.01655927, 0.02027502, 0.02348918, 0.02688588, 0.02939095,
0.03261841, 0.03675437, 0.04145328, 0.04685605, 0.05258239, 0.06055644,
0.07127699, 0.08661526, 0.10406214, 0.12678802, 0.15203825, 0.18704301,
0.2319441, 0.27572113, 0.33624849, 0.39666826, 0.47322401, 0.53811002,
0.58483958, 0.63875258, 0.66788822, 0.64877927, 0.58104825, 0.54410547,
0.46816054, 0.39988503, 0.36872765, 0.30746305, 0.26421374, 0.22142638,
0.18307872, 0.15832621, 0.14938028, 0.14581558, 0.13111286, 0.12468486,
0.12108799, 0.12532991, 0.12312792, 0.12196486, 0.12405467, 0.12505658,
0.12438187, 0.12210829, 0.11554052, 0.11543264, 0.11017759, 0.10541131,
0.0994625, 0.09146828, 0.0852076, 0.0776009, 0.06408801, 0.05852762])
frequency = 2*pi*array([
5.0019516e+04, 3.9687492e+04, 3.1494135e+04, 2.5019525e+04, 1.9843742e+04,
1.5751949e+04, 1.2519524e+04, 9.9218750e+03, 7.8710889e+03, 6.2695298e+03,
5.0195298e+03, 4.0571001e+03, 3.2362456e+03, 2.4807932e+03, 1.9681099e+03,
1.5624999e+03, 1.2403966e+03, 9.8405493e+02, 7.8124994e+02, 6.2019836e+02,
4.9204675e+02, 3.9062488e+02, 3.0970264e+02, 2.4602338e+02, 1.9531244e+02,
1.5485132e+02, 1.2299269e+02, 9.7656219e+01, 7.7504944e+01, 6.1515743e+01,
4.8828110e+01, 3.8771706e+01, 3.0757868e+01, 2.4414059e+01, 1.9385853e+01,
1.5358775e+01, 1.2187985e+01, 9.6689367e+00, 7.6894670e+00, 6.0939932e+00,
4.8344669e+00, 3.8447335e+00, 3.0493746e+00, 2.4172332e+00, 1.9195327e+00,
1.5246876e+00, 1.2086166e+00, 9.5976633e-01, 7.6234382e-01, 6.0430837e-01,
4.7988322e-01, 3.8109747e-01, 3.0215418e-01, 2.3994161e-01, 1.9053015e-01,
1.5107709e-01, 1.1996347e-01, 9.5265076e-02, 7.5513750e-02, 5.9981719e-02])
```
## Curve Fitting Examples
### Example fitting a spectrum (z_a and z_b)
[Link to Experimental data](#Experimental-Data)
A *for* loop is used to iterate over the impedance models ([z_a](#z_a) and [z_b](#z_b)).
Each iteration calculates new values for the resistances and capacitances and
prints their values. A new spectrum (Z_) is generated with a call to each
respective model with the newly fitted parameters. Z_ is then plotted against
the experimental data. The model is shown in both an Argand diagram (Nyquist
plot) and as a function of the frequency.
[Link to Contents](#Contents)
```
input_dict = {
'R0':1,
'R1':2,
'R2':2,
'C1':10e-6,
'C2':10e-6,
}
floating_params = [key for key in input_dict]
# calculating the fitted parameters with z_fit and printing their values
for i,model in enumerate([z_a,z_b]):
input_dict['z_model'] = model
fit_spectrum = z_fit( input_dict,
ReZ,
ImZ,
frequency,
floating_params,
area,
model,
residual)
# generating a new spectrum with fitted parameters
Z_ = model(fit_spectrum,frequency)
fig,ax = plt.subplots(nrows = 1, ncols = 2, figsize = fs, num = i+3)
z_plot(real(Z_),imag(Z_),frequency,ax[0],ax[1],model.__name__,color = ['r','b'],marker = ['.','.'])
z_plot(ReZ*5,-ImZ*5,frequency,ax[0],ax[1],'Experiment',color = ['k','k'],marker = ['x','+'])
plt.show()
```
### Comments for z_a and z_b
The models used to fit this spectrum were
chosen arbitrarily as circuits that can model spectra with two semi-circles.
Equivalent circuit models should always be chosen (or formulated) based on the
physics of an electrochemical interface and not on the manifestation of the
interface at the macro level. Clearly these models have a few limitations:
* HFR - This experimental spectrum has a small distributed ionic resistance that the model is incapable of capturing. This can be observed from zooming in the high frequency content of the Nyquist plot, as well as the deviation of Re(z_a) and Re(z_b) from Re(Experiment) in the frequency plot. The result of this is that the membrane resistance will be overestimated.
* Ambiguity of circuit elements in z_a - Because R1 and R2 are both resistors in parallel with capacitors, there is no guarantee that R1 will correspond to the charge transfer resistance and R2 to the diffusion resistance. The starting guesses determine this, not the interface physics.
* Constant Phase Element (CPE) - The discrepancy between Im(Z_) and Im(Experiment) in the Nyquist plot can be improved by using a CPE in place of the ideal capacitors (a sketch of such a variant follows below)
* Capacitances in z_a - There is no reason to expect that C1 and C2 would be fully independent
The model does, however, give a decent estimate of the chord of the charge transfer resistance and the diffusion resistance with some interpretation of the data.
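To illustrate the CPE point above, the sketch below shows a hypothetical variant of [z_a](#z_a) (not part of the model set used in this tutorial) in which the two ideal capacitors are replaced by constant phase elements; setting the exponents to 1 recovers z_a:
```
# Hypothetical CPE variant of z_a, for illustration only.
# The keys 'Q1', 'Q2', 'P1', 'P2' are assumptions introduced here.
def z_a_cpe(input_dict, frequency):
    R0 = input_dict['R0']
    R1 = input_dict['R1']
    R2 = input_dict['R2']
    Q1 = input_dict['Q1']
    Q2 = input_dict['Q2']
    P1 = input_dict['P1']
    P2 = input_dict['P2']
    # each (R, CPE) pair in parallel; a CPE has admittance (j*omega)^P * Q
    return(R0 + (R1**-1 + (1j*frequency)**P1*Q1)**-1
              + (R2**-1 + (1j*frequency)**P2*Q2)**-1)
```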
[Link to Contents](#Contents)
### Example fitting a spectrum (z_randles)
[link to z_randles](#z_randles)
[Link to Contents](#Contents)
```
input_dict = {
'T':303.15, # Temperature (K)
'A':5, # geometric surface area (cm^2)
'C_R':0.00025, # concentration of reduced species (mol / L)
'C_O':0.00025, # concentration of oxidized species (mol / L)
'D_R':1.1e-6, # Diffusion coefficient for reduced species (cm^2 / s)
'D_O':0.57e-6, # Diffusion coefficient for oxidized species (cm^2 / s)
'C_dl':20e-6, # Double layer capacitance per unit surface area
'n':1, # number of electrons transferred
'A_t':100, # Wetted surface area (cm^2)
'i0':1e-4, # exchange current density (A cm^2)
'P':.95, # CPE exponent (-)
'ASR_mem':.5, # membrane ASR (Ohm cm^2)
'a':.001, # Nernstian diffusion layer thickness (cm)
'f':.01, # scale factor (-)
'L':1e-7, # inductance (H)
'z_model':z_randles # tells cell_response to use z_randles for the electrode
}
floating_params = ['A_t','i0','P','ASR_mem','a','f','L']
# calculating the fitted parameters with z_fit and printing their values
model = cell_response
fit_spectrum = z_fit( input_dict,
ReZ,
ImZ,
frequency,
floating_params,
area,
model,
residual,
one_max_keys = ['P'])
# generating a new spectrum with fitted parameters
Z_ = model(fit_spectrum,frequency)
fig,ax = plt.subplots(nrows = 1, ncols = 2, figsize = fs, num = 5)
z_plot(real(Z_),imag(Z_),
frequency,
ax[0],
ax[1],
input_dict['z_model'].__name__,
color = ['r','b'],
marker = ['.','.'])
z_plot(ReZ*5,-ImZ*5,frequency,ax[0],ax[1],'Experiment',color = ['k','k'],marker = ['x','+'])
plt.savefig('spectrumTest.png',dpi=300)
plt.show()
plt.figure(6, figsize = (10,10))
ax = plt.gca()
spacing = 2
y_min = -.25
x_min = 0
high_frequency_detail(ReZ*5,ImZ*5, ax, 'experiment',y_min = y_min, x_min = x_min, spacing = spacing)
high_frequency_detail(real(Z_),-imag(Z_), ax, 'z_randles', color = 'r',
marker = '.', x_min = x_min, y_min = y_min,
spacing = spacing)
```
### Example fitting a spectrum (z_mhpe)
[link to z_mhpe](#z_mhpe)
[Link to Contents](#Contents)
```
input_dict = {
'T':303.15, # Temperature (K)
'A':5, # geometric surface area (cm^2)
'C_R':0.00025, # concentration of reduced species (mol / L)
'C_O':0.00025, # concentration of oxidized species (mol / L)
'rho1':1.6, # electrolyte resistivity (ohm cm^-1)
'rho2':.012, # solid phase resistivity (ohm cm^-1)
'b':.3, # electrode thickness (cm)
'D_R':1.1e-6, # Diffusion coefficient for reduced species (cm^2 / s)
'D_O':0.57e-6, # Diffusion coefficient for oxidized species (cm^2 / s)
'C_dl':20e-6, # Double layer capacitance per unit surface area
'n':1, # number of electrons transferred
'A_t':100, # Wetted surface area (cm^2)
'i0':3e-4, # exchange current density (A cm^2)
'P':.95, # CPE exponent (-)
'ASR_mem':.5, # membrane ASR (Ohm cm^2)
'a':.002, # Nernstian diffusion layer thickness (cm)
'f':.05, # scale factor (-)
'L':1e-7, # inductance (H)
'z_model':z_mhpe
}
floating_params = ['A_t','i0','P','ASR_mem','a','f','L']
# calculating the fitted parameters with z_fit and printing their values
model = cell_response
fit_spectrum = z_fit( input_dict,
ReZ,
ImZ,
frequency,
floating_params,
area,
model,
residual,
one_max_keys = ['P'])
# generating a new spectrum with fitted parameters
Z_ = model(fit_spectrum,frequency)
fig,ax = plt.subplots(nrows = 1, ncols = 2, figsize = fs, num = 7)
z_plot( real(Z_),imag(Z_),
frequency,
ax[0],
ax[1],
input_dict['z_model'].__name__,
color = ['r','b'],
marker = ['.','.'])
z_plot( ReZ*5,-ImZ*5,
frequency,
ax[0],
ax[1],
'Experiment',
color = ['k','k'],
marker = ['x','+'])
plt.savefig('spectrumTest.png',dpi=300)
plt.show()
plt.figure(8, figsize = (10,10))
ax = plt.gca()
spacing = 2
y_min = -.25
x_min = 0
high_frequency_detail( ReZ*5,ImZ*5, ax, 'experiment',y_min = y_min,
x_min = x_min, spacing = spacing)
high_frequency_detail( real(Z_),-imag(Z_), ax, 'z_mhpe', color = 'r',
marker = '.', x_min = x_min, y_min = y_min,
spacing = spacing)
```
## Appendix
[Link to Contents](#Contents)
### Notes on coding style:
* I have tried to stick to PEP 8 style for function and variable naming as much as possible.
* It should be noted that for Python 3.7+ dictionary
insertion order is preserved, so that when the keys are queried they return in
the same order as they were inserted (not sorted alphabetically, etc.). For
this reason, [z_fit](#z_fit) can use a list comprehension to extract the values
of input_dict into x0 in the same order they were inserted (see the short sketch
after this list). Without this, there is no guarantee that x0 will correspond to
the expected argument that the respective model is receiving.
* When a list is unpacked into parameters, passing an underscore is used only to highlight that this parameter is not used in that particular scope.
* Text folding: the triple curly bracket sets mark text folding regions. Since they are behind the comment symbol ("#"), they are not part of the code and exist only for the text editor to fold.
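A short sketch of the insertion-order point above (an addition to these notes, using made-up values):
```
# Python 3.7+ dictionaries preserve insertion order, so a list comprehension over the
# dictionary (or over floating_params) yields values in a predictable order.
example_dict = {'R0': 1, 'R1': 2, 'C1': 10e-6}
print(list(example_dict.keys()))                      # ['R0', 'R1', 'C1']
print([example_dict[key] for key in example_dict])    # [1, 2, 1e-05]
```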
```
%matplotlib inline
import ipywidgets as widgets
from ipywidgets import interact
import numpy as np
import matplotlib.pyplot as pl
from scipy.spatial.distance import cdist
from numpy.linalg import inv
import george
```
# Gaussian process regression
## Lecture 1
### Suzanne Aigrain, University of Oxford
#### LSST DSFP Session 4, Seattle, Sept 2017
- Lecture 1: Introduction and basics
- Tutorial 1: Write your own GP code
- Lecture 2: Examples and practical considerations
- Tutorial 3: Useful GP modules
- Lecture 3: Advanced applications
## Why GPs?
- flexible, robust probabilistic regression and classification tools.
- applied across a wide range of fields, from finance to zoology.
- useful for data containing non-trivial stochastic signals or noise.
- time-series data: causation implies correlation, so noise always correlated.
- increasingly popular in astronomy [mainly time-domain, but not just].
#### Spitzer exoplanet transits and eclipses (Evans et al. 2015)
<img src="images/Evans_Spitzer.png" width="800">
#### GPz photometric redshifts (Almosallam, Jarvis & Roberts 2016)
<img src="images/Almosallam_GPz.png" width="600">
## What is a GP?
A Gaussian process is a collection of random variables, any
finite number of which have a joint Gaussian distribution.
Consider a scalar variable $y$, drawn from a Gaussian distribution with mean $\mu$ and variance $\sigma^2$:
$$
p(y) = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left[ - \frac{(y-\mu)^2}{2 \sigma^2} \right].
$$
As a short hand, we write: $y \sim \mathcal{N}(\mu,\sigma^2)$.
```
def gauss1d(x,mu,sig):
    return np.exp(-(x-mu)**2/(2*sig**2))/(np.sqrt(2*np.pi)*sig)
def pltgauss1d(sig=1):
mu=0
x = np.r_[-4:4:101j]
pl.figure(figsize=(10,7))
pl.plot(x, gauss1d(x,mu,sig),'k-');
pl.axvline(mu,c='k',ls='-');
pl.axvline(mu+sig,c='k',ls='--');
pl.axvline(mu-sig,c='k',ls='--');
pl.axvline(mu+2*sig,c='k',ls=':');
pl.axvline(mu-2*sig,c='k',ls=':');
pl.xlim(x.min(),x.max());
pl.ylim(0,1);
pl.xlabel(r'$y$');
pl.ylabel(r'$p(y)$');
return
interact(pltgauss1d,
sig=widgets.FloatSlider(value=1.0,
min=0.5,
max=2.0,
step=0.25,
description=r'$\sigma$',
readout_format='.2f'));
```
Now let us consider a pair of variables $y_1$ and $y_2$, drawn from a *bivariate Gaussian distribution*. The *joint probability density* for $y_1$ and $y_2$ is:
$$
\left[ \begin{array}{l} y_1 \\ y_2 \end{array} \right] \sim \mathcal{N} \left(
\left[ \begin{array}{l} \mu_1 \\ \mu_2 \end{array} \right] ,
\left[ \begin{array}{ll}
\sigma_1^2 & C \\
C & \sigma_2^2
\end{array} \right]
\right),
$$
where $C = {\rm cov}(y_1,y_2)$ is the *covariance* between $y_1$ and $y_2$.
The second term on the right hand side is the *covariance matrix*, $K$.
We now use two powerful *identities* of Gaussian distributions to elucidate the relationship between $y_1$ and $y_2$.
The *marginal distribution* of $y_1$ describes what we know about $y_1$ in the absence of any other information about $y_2$, and is simply:
$$
p(y_1)= \mathcal{N} (\mu_1,\sigma_1^2).
$$
If we know the value of $y_2$, the probability density for $y_1$ collapses to the the *conditional distribution* of $y_1$ given $y_2$:
$$
p(y_1 \mid y_2) = \mathcal{N} \left( \mu_1 + C (y_2-\mu_2)/\sigma_2^2, \sigma_1^2-C^2/\sigma_2^2 \right).
$$
If $K$ is diagonal, i.e. if $C=0$, $p(y_1 \mid y_2) = p(y_1)$. Measuring $y_2$ doesn't teach us anything about $y_1$. The two variables are *uncorrelated*.
If the variables are *correlated* ($C \neq 0$), measuring $y_2$ does alter our knowledge of $y_1$: it modifies the mean and reduces the variance.
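As a small numerical check of these identities (an addition to the original notebook; the numbers below are illustrative), we can sample a bivariate Gaussian and keep only the samples whose $y_2$ is close to an observed value:
```
# Empirical check of the conditional distribution p(y1 | y2).
# mu, sigma, C, y2_obs and the selection tolerance are illustrative assumptions.
np.random.seed(42)
mu1, mu2, sig1, sig2, C = 0.0, 0.0, 1.0, 1.0, 0.6
cov = np.array([[sig1**2, C], [C, sig2**2]])
samples = np.random.multivariate_normal([mu1, mu2], cov, size=200000)
y2_obs = -1.0
sel = np.abs(samples[:, 1] - y2_obs) < 0.02           # keep samples with y2 near y2_obs
print("empirical mean, var:", samples[sel, 0].mean(), samples[sel, 0].var())
print("predicted mean, var:", mu1 + C*(y2_obs - mu2)/sig2**2, sig1**2 - C**2/sig2**2)
```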
```
def gauss2d(x1,x2,mu1,mu2,sig1,sig2,rho):
z = (x1-mu1)**2/sig1**2 + (x2-mu2)**2/sig2**2 - 2*rho*(x1-mu1)*(x2-mu2)/sig1/sig2
e = np.exp(-z/2/(1-rho**2))
return e/(2*np.pi*sig1*sig2*np.sqrt(1-rho**2))
def pltgauss2d(rho=0,show_cond=0):
mu1, sig1 = 0,1
mu2, sig2 = 0,1
y2o = -1
x1 = np.r_[-4:4:101j]
x2 = np.r_[-4:4:101j]
x22d,x12d = np.mgrid[-4:4:101j,-4:4:101j]
y = gauss2d(x12d,x22d,mu1,mu2,sig1,sig2,rho)
y1 = gauss1d(x1,mu1,sig1)
y2 = gauss1d(x2,mu2,sig2)
mu12 = mu1+rho*(y2o-mu2)/sig2**2
sig12 = np.sqrt(sig1**2-rho**2*sig2**2)
y12 = gauss1d(x1,mu12,sig12)
pl.figure(figsize=(10,10))
ax1 = pl.subplot2grid((3,3),(1,0),colspan=2,rowspan=2,aspect='equal')
v = np.array([0.02,0.1,0.3,0.6]) * y.max()
CS = pl.contour(x1,x2,y,v,colors='k')
if show_cond: pl.axhline(y2o,c='r')
pl.xlabel(r'$y_1$');
pl.ylabel(r'$y_2$');
pl.xlim(x1.min(),x1.max())
ax1.xaxis.set_major_locator(pl.MaxNLocator(5, prune = 'both'))
ax1.yaxis.set_major_locator(pl.MaxNLocator(5, prune = 'both'))
ax2 = pl.subplot2grid((3,3),(0,0),colspan=2,sharex=ax1)
pl.plot(x1,y1,'k-')
if show_cond: pl.plot(x1,y12,'r-')
pl.ylim(0,0.8)
pl.ylabel(r'$p(y_1)$')
pl.setp(ax2.get_xticklabels(), visible=False)
ax2.xaxis.set_major_locator(pl.MaxNLocator(5, prune = 'both'))
ax2.yaxis.set_major_locator(pl.MaxNLocator(4, prune = 'upper'))
pl.xlim(x1.min(),x1.max())
ax3 = pl.subplot2grid((3,3),(1,2),rowspan=2,sharey=ax1)
pl.plot(y2,x2,'k-')
if show_cond: pl.axhline(y2o,c='r')
pl.ylim(x2.min(),x2.max());
pl.xlim(0,0.8);
pl.xlabel(r'$p(y_2)$')
pl.setp(ax3.get_yticklabels(), visible=False)
ax3.xaxis.set_major_locator(pl.MaxNLocator(4, prune = 'upper'))
ax3.yaxis.set_major_locator(pl.MaxNLocator(5, prune = 'both'))
pl.subplots_adjust(hspace=0,wspace=0)
return
interact(pltgauss2d,
rho=widgets.FloatSlider(min=-0.8,max=0.8,step=0.4,description=r'$\rho$',value=0),
show_cond=widgets.Checkbox(value=True,description='show conditional distribution'));
```
To make the relation to time-series data a bit more obvious, let's plot the two variables side by side, then see what happens to one variable when we observe (fix) the other.
```
def SEKernel(par, x1, x2):
A, Gamma = par
D2 = cdist(x1.reshape(len(x1),1), x2.reshape(len(x2),1),
metric = 'sqeuclidean')
return A * np.exp(-Gamma*D2)
A = 1.0
Gamma = 0.01
x = np.array([-1,1])
K = SEKernel([A,Gamma],x,x)
m = np.zeros(len(x))
sig = np.sqrt(np.diag(K))
pl.figure(figsize=(15,7))
pl.subplot(121)
for i in range(len(x)):
pl.plot([x[i]-0.1,x[i]+0.1],[m[i],m[i]],'k-')
pl.fill_between([x[i]-0.1,x[i]+0.1],
[m[i]+sig[i],m[i]+sig[i]],
[m[i]-sig[i],m[i]-sig[i]],color='k',alpha=0.2)
pl.xlim(-2,2)
pl.ylim(-2,2)
pl.xlabel(r'$x$')
pl.ylabel(r'$y$');
def Pred_GP(CovFunc, CovPar, xobs, yobs, eobs, xtest):
# evaluate the covariance matrix for pairs of observed inputs
K = CovFunc(CovPar, xobs, xobs)
# add white noise
K += np.identity(xobs.shape[0]) * eobs**2
# evaluate the covariance matrix for pairs of test inputs
Kss = CovFunc(CovPar, xtest, xtest)
# evaluate the cross-term
Ks = CovFunc(CovPar, xtest, xobs)
# invert K
Ki = inv(K)
# evaluate the predictive mean
m = np.dot(Ks, np.dot(Ki, yobs))
# evaluate the covariance
cov = Kss - np.dot(Ks, np.dot(Ki, Ks.T))
return m, cov
xobs = np.array([-1])
yobs = np.array([1.0])
eobs = 0.0001
pl.subplot(122)
pl.errorbar(xobs,yobs,yerr=eobs,capsize=0,fmt='k.')
x = np.array([1])
m,C=Pred_GP(SEKernel,[A,Gamma],xobs,yobs,eobs,x)
sig = np.sqrt(np.diag(C))
for i in range(len(x)):
pl.plot([x[i]-0.1,x[i]+0.1],[m[i],m[i]],'k-')
pl.fill_between([x[i]-0.1,x[i]+0.1],
[m[i]+sig[i],m[i]+sig[i]],
[m[i]-sig[i],m[i]-sig[i]],color='k',alpha=0.2)
pl.xlim(-2,2)
pl.ylim(-2,2)
pl.xlabel(r'$x$')
pl.ylabel(r'$y$');
```
Now consider $N$ variables drawn from a multivariate Gaussian distribution:
$$
\boldsymbol{y} \sim \mathcal{N} (\boldsymbol{m},K)
$$
where $y = (y_1,y_2,\ldots,y_N)^T$, $\boldsymbol{m} = (m_1,m_2,\ldots,m_N)^T$ is the *mean vector*, and $K$ is an $N \times N$ positive semi-definite *covariance matrix*, with elements $K_{ij}={\rm cov}(y_i,y_j)$.
A Gaussian process is an extension of this concept to infinite $N$, giving rise to a probability distribution over functions.
This last generalisation may not be obvious conceptually, but in practice we only ever deal with finite samples.
```
xobs = np.array([-1,1,2])
yobs = np.array([1,-1,0])
eobs = np.array([0.0001,0.1,0.1])
pl.figure(figsize=(15,7))
pl.subplot(121)
pl.errorbar(xobs,yobs,yerr=eobs,capsize=0,fmt='k.')
Gamma = 0.5
x = np.array([-2.5,-2,-1.5,-0.5, 0.0, 0.5,1.5,2.5])
m,C=Pred_GP(SEKernel,[A,Gamma],xobs,yobs,eobs,x)
sig = np.sqrt(np.diag(C))
for i in range(len(x)):
pl.plot([x[i]-0.1,x[i]+0.1],[m[i],m[i]],'k-')
pl.fill_between([x[i]-0.1,x[i]+0.1],
[m[i]+sig[i],m[i]+sig[i]],
[m[i]-sig[i],m[i]-sig[i]],color='k',alpha=0.2)
pl.xlim(-3,3)
pl.ylim(-3,3)
pl.xlabel(r'$x$')
pl.ylabel(r'$y$');
pl.subplot(122)
pl.errorbar(xobs,yobs,yerr=eobs,capsize=0,fmt='k.')
x = np.linspace(-3,3,100)
m,C=Pred_GP(SEKernel,[A,Gamma],xobs,yobs,eobs,x)
sig = np.sqrt(np.diag(C))
pl.plot(x,m,'k-')
pl.fill_between(x,m+sig,m-sig,color='k',alpha=0.2)
pl.xlim(-3,3)
pl.ylim(-3,3)
pl.xlabel(r'$x$')
pl.ylabel(r'$y$');
```
## Textbooks
A good, detailed reference is [*Gaussian Processes for Machine Learning*](http://www.gaussianprocess.org/gpml/) by C. E. Rasmussen & C. Williams, MIT Press, 2006.
The examples in the book are generated using the `Matlab` package `GPML`.
## A more formal definition
A Gaussian process is completely specified by its *mean function* and *covariance function*.
We define the mean function $m(x)$ and the covariance function $k(x,x')$ of a real process $y(x)$ as
$$
\begin{array}{rcl}
m(x) & = & \mathbb{E}[y(x)], \\
k(x,x') & = & \mathrm{cov}(y(x),y(x'))=\mathbb{E}[(y(x) - m(x))(y(x') - m(x'))].
\end{array}
$$
A very common covariance function is the squared exponential, or radial basis function (RBF) kernel
$$
K_{ij}=k(x_i,x_j)=A \exp\left[ - \Gamma (x_i-x_j)^2 \right],
$$
which has 2 parameters: $A$ and $\Gamma$.
We then write the Gaussian process as
$$
y(x) \sim \mathcal{GP}(m(x), k(x,x'))
$$
Here we are implicitly assuming the inputs $x$ are one-dimensional, e.g. $x$ might represent time. However, the input space can have more than one dimension. We will see an example of a GP with multi-dimensional inputs later.
## The prior
Now consider a finite set of inputs $\boldsymbol{x}$, with corresponding outputs $\boldsymbol{y}$.
The *joint distribution* of $\boldsymbol{y}$ given $\boldsymbol{x}$, $m$ and $k$ is
$$
\mathrm{p}(\boldsymbol{y} \mid \boldsymbol{x},m,k) = \mathcal{N}( \boldsymbol{m},K),
$$
where $\boldsymbol{m}=m(\boldsymbol{x})$ is the *mean vector*,
and $K$ is the *covariance matrix*, with elements $K_{ij} = k(x_i,x_j)$.
## Test and training sets
Suppose we have an (observed) *training set* $(\boldsymbol{x},\boldsymbol{y})$.
We are interested in some other *test set* of inputs $\boldsymbol{x}_*$.
The joint distribution over the training and test sets is
$$
\mathrm{p} \left( \left[ \begin{array}{l} \boldsymbol{y} \\ \boldsymbol{y}_* \end{array} \right] \right)
= \mathcal{N} \left( \left[ \begin{array}{l} \boldsymbol{m} \\ \boldsymbol{m}_* \end{array} \right],
\left[ \begin{array}{ll} K & K_* \\ K_*^T & K_{**} \end{array} \right] \right),
$$
where $\boldsymbol{m}_* = m(\boldsymbol{x}_*)$, $K_{**,ij} = k(x_{*,i},x_{*,j})$ and $K_{*,ij} = k(x_i,x_{*,j})$.
For simplicity, assume the mean function is zero everywhere: $\boldsymbol{m}=\boldsymbol{0}$. We will consider non-trivial mean functions later.
## The conditional distribution
The *conditional distribution* for the test set given the training set is:
$$
\mathrm{p} ( \boldsymbol{y}_* \mid \boldsymbol{y},k) = \mathcal{N} (
K_*^T K^{-1} \boldsymbol{y}, K_{**} - K_*^T K^{-1} K_* ).
$$
This is also known as the *predictive distribution*, because it can be used to predict future (or past) observations.
More generally, it can be used for *interpolating* the observations to any desired set of inputs.
This is one of the most widespread applications of GPs in some fields (e.g. kriging in geology, economic forecasting, ...)
## Adding white noise
Real observations always contain a component of *white noise*, which we need to account for, but don't necessarily want to include in the predictions.
If the white noise variance $\sigma^2$ is constant, we can write
$$
\mathrm{cov}(y_i,y_j)=k(x_i,x_j)+\delta_{ij} \sigma^2,
$$
and the conditional distribution becomes
$$
\mathrm{p} ( \boldsymbol{y}_* \mid \boldsymbol{y},k) = \mathcal{N} (
K_*^T (K + \sigma^2 \mathbb{I})^{-1} \boldsymbol{y}, K_{**} - K_*^T (K + \sigma^2 \mathbb{I})^{-1} K_* ).
$$
In real life, we may need to learn $\sigma$ from the data, alongside the other contributions to the covariance matrix.
We assumed constant white noise above, but it is trivial to allow a different $\sigma$ for each data point.
## Single-point prediction
Let us look more closely at the predictive distribution for a single test point $x_*$.
It is a Gaussian with mean
$$
\overline{y}_* = \boldsymbol{k}_*^T (K + \sigma^2 \mathbb{I})^{-1} \boldsymbol{y}
$$
and variance
$$
\mathbb{V}[y_*] = k(x_*,x_*) - \boldsymbol{k}_*^T (K + \sigma^2 \mathbb{I})^{-1} \boldsymbol{k}_*,
$$
where $\boldsymbol{k}_*$ is the vector of covariances between the test point and the training points.
Notice the mean is a linear combination of the observations: the GP is a *linear predictor*.
It is also a linear combination of covariance functions, each centred on a training point:
$$
\overline{y}_* = \sum_{i=1}^N \alpha_i k(x_i,x_*),
$$
where $\alpha_i = \left[ (K + \sigma^2 \mathbb{I})^{-1} \boldsymbol{y} \right]_i$.
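As a quick sanity check of this linear-predictor property, the short sketch below (not part of the original notebook; it assumes the `SEKernel`, `Pred_GP`, `A`, `Gamma`, `xobs`, `yobs` and `eobs` objects defined in the earlier cells are still in scope) compares the predictive mean at one test point with the explicit weighted sum of kernels centred on the training points.
```
import numpy as np
# Predictive mean at a single test input, using the function defined above.
xs = np.array([0.3])
m_single, _ = Pred_GP(SEKernel, [A, Gamma], xobs, yobs, eobs, xs)
# The same quantity written as sum_i alpha_i k(x_i, x_*), with alpha = (K + sigma^2 I)^-1 y.
K_obs = SEKernel([A, Gamma], xobs, xobs) + np.identity(len(xobs)) * eobs**2
alpha = np.linalg.solve(K_obs, yobs)
k_star = SEKernel([A, Gamma], xs, xobs).ravel()
print(m_single[0], np.dot(k_star, alpha))  # the two values should agree
```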
## The likelihood
The *likelihood* of the data under the GP model is simply:
$$
\mathrm{p}(\boldsymbol{y} \,|\, \boldsymbol{x}) = \mathcal{N}(\boldsymbol{y} \, | \, \boldsymbol{0},K + \sigma^2 \mathbb{I}).
$$
This is a measure of how well the model explains, or predicts, the training set.
In some textbooks this is referred to as the *marginal likelihood*.
This arises if one considers the observed $\boldsymbol{y}$ as noisy realisations of a latent (unobserved) Gaussian process $\boldsymbol{f}$.
The term *marginal* refers to marginalisation over the function values $\boldsymbol{f}$:
$$
\mathrm{p}(\boldsymbol{y} \,|\, \boldsymbol{x}) = \int \mathrm{p}(\boldsymbol{y} \,|\, \boldsymbol{f},\boldsymbol{x}) \, \mathrm{p}(\boldsymbol{f} \,|\, \boldsymbol{x}) \, \mathrm{d}\boldsymbol{f},
$$
where
$$
\mathrm{p}(\boldsymbol{f} \,|\, \boldsymbol{x}) = \mathcal{N}(\boldsymbol{f} \, | \, \boldsymbol{0},K)
$$
is the *prior*, and
$$
\mathrm{p}(\boldsymbol{y} \,|\, \boldsymbol{f},\boldsymbol{x}) = \mathcal{N}(\boldsymbol{y} \, | \, \boldsymbol{0},\sigma^2 \mathbb{I})
$$
is the *likelihood*.
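As an illustration, here is a minimal sketch (not from the original notebook) of how the log likelihood above could be evaluated numerically, reusing the `SEKernel` function and the `xobs`, `yobs`, `eobs` arrays defined in the earlier cells:
```
import numpy as np
def neg_log_likelihood(par, xobs, yobs, eobs, CovFunc=SEKernel):
    # Covariance matrix of the observations, including the white-noise term.
    K = CovFunc(par, xobs, xobs) + np.identity(len(xobs)) * eobs**2
    # Gaussian log likelihood: -0.5 y^T K^-1 y - 0.5 log|K| - (N/2) log(2 pi)
    sign, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, yobs)
    log_like = -0.5 * np.dot(yobs, alpha) - 0.5 * logdet \
        - 0.5 * len(xobs) * np.log(2 * np.pi)
    return -log_like  # return the negative, ready for minimisation
print(neg_log_likelihood([A, Gamma], xobs, yobs, eobs))
```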
## Parameters and hyper-parameters
The parameters of the covariance and mean functions are known as the *hyper-parameters* of the GP.
This is because the actual *parameters* of the model are the function values, $\boldsymbol{f}$, but we never explicitly deal with them: they are always marginalised over.
## *Conditioning* the GP...
...means evaluating the conditional (or predictive) distribution for a given covariance matrix (i.e. covariance function and hyper-parameters), and training set.
## *Training* the GP...
...means maximising the *likelihood* of the model with respect to the hyper-parameters.
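For instance, reusing the `neg_log_likelihood` helper sketched in the likelihood section above, training could look like the following (an illustrative sketch only; working in log parameters keeps $A$ and $\Gamma$ positive during the search):
```
import numpy as np
from scipy.optimize import minimize
def nll_logpar(logpar, xobs, yobs, eobs):
    # Optimise in log space so the hyper-parameters stay positive.
    return neg_log_likelihood(np.exp(logpar), xobs, yobs, eobs)
res = minimize(nll_logpar, x0=np.log([A, Gamma]),
               args=(xobs, yobs, eobs), method='Nelder-Mead')
print('Best-fit A, Gamma:', np.exp(res.x))
```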
## The kernel trick
Consider a linear basis model with arbitrarily many *basis functions*, or *features*, $\Phi(x)$, and a (Gaussian) prior $\Sigma_{\mathrm{p}}$ over the basis function weights.
One ends up with exactly the same expressions for the predictive distribution and the likelihood so long as:
$$
k(\boldsymbol{x},\boldsymbol{x'}) = \Phi(\boldsymbol{x})^{\mathrm{T}} \Sigma_{\mathrm{p}} \Phi(\boldsymbol{x'}),
$$
or, writing $\Psi(\boldsymbol{x}) = \Sigma_{\mathrm{p}}^{1/2} \Phi(\boldsymbol{x})$,
$$
k(\boldsymbol{x},\boldsymbol{x'}) = \Psi(\boldsymbol{x}) \cdot \Psi(\boldsymbol{x'}).
$$
Thus the covariance function $k$ enables us to go from a (finite) *input space* to a (potentially infinite) *feature space*. This is known as the *kernel trick* and the covariance function is often referred to as the *kernel*.
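To make this concrete, here is a small self-contained check (an illustrative sketch, not part of the original notebook): with the explicit feature map $\Phi(x) = (1, \sqrt{2}x, x^2)^{\mathrm{T}}$ and $\Sigma_{\mathrm{p}} = \mathbb{I}$, the dot product $\Psi(x)\cdot\Psi(x')$ equals the polynomial kernel $(1 + xx')^2$.
```
import numpy as np
def phi(x):
    # Explicit finite feature map: (1, sqrt(2) x, x^2)
    return np.array([1.0, np.sqrt(2) * x, x**2])
xa, xb = 0.7, -1.3
k_feature = np.dot(phi(xa), phi(xb))   # Psi(x) . Psi(x') with Sigma_p = I
k_direct = (1 + xa * xb)**2            # the polynomial kernel evaluated directly
print(k_feature, k_direct)             # identical up to rounding
```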
## Non-zero mean functions
In general (and in astronomy applications in particular) we often want to use non-trivial mean functions.
To do this simply replace $\boldsymbol{y}$ by $\boldsymbol{r}=\boldsymbol{y}-\boldsymbol{m}$ in the expressions for predictive distribution and likelihood.
The mean function represents the *deterministic* component of the model
- e.g.: a linear trend, a Keplerian orbit, a planetary transit, ...
The covariance function encodes the *stochastic* component.
- e.g.: instrumental noise, stellar variability
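As an illustrative sketch of this residual approach (assuming the `SEKernel` and `Pred_GP` functions from the earlier cells; the linear trend and the data below are arbitrary choices), one can condition the GP on the residuals and add the mean function back to the predictions:
```
import numpy as np
def mean_fn(x):
    # Example deterministic component: a linear trend (illustrative choice).
    return 0.5 * x
xobs_m = np.array([-1.0, 1.0, 2.0])
yobs_m = mean_fn(xobs_m) + np.array([0.3, -0.2, 0.1])  # trend + "noise"
eobs_m = 0.1
xtest = np.linspace(-3, 3, 100)
# Condition the GP on the residuals r = y - m(x) ...
r = yobs_m - mean_fn(xobs_m)
m_resid, C = Pred_GP(SEKernel, [1.0, 0.5], xobs_m, r, eobs_m, xtest)
# ... then add the mean function back to the predictive mean.
m_pred = mean_fn(xtest) + m_resid
```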
## Covariance functions
The only requirement for the covariance function is that it should return a positive semi-definite covariance matrix.
The simplest covariance functions have two parameters: an output scale (variance) and an input scale (inverse length scale). The form of the covariance function controls the degree of smoothness.
### The squared exponential
The simplest, most widely used kernel is the squared exponential:
$$
k_{\rm SE}(x,x') = A \exp \left[ - \Gamma (x-x')^2 \right].
$$
This gives rise to *smooth* functions with output scale (variance) $A$ and inverse squared length scale $\Gamma$.
```
def kernel_SE(X1,X2,par):
p0 = 10.0**par[0]
p1 = 10.0**par[1]
D2 = cdist(X1,X2,'sqeuclidean')
K = p0 * np.exp(- p1 * D2)
return np.matrix(K)
def kernel_Mat32(X1,X2,par):
p0 = 10.0**par[0]
p1 = 10.0**par[1]
DD = cdist(X1, X2, 'euclidean')
arg = np.sqrt(3) * abs(DD) / p1
K = p0 * (1 + arg) * np.exp(- arg)
return np.matrix(K)
def kernel_RQ(X1,X2,par):
p0 = 10.0**par[0]
p1 = 10.0**par[1]
alpha = par[2]
D2 = cdist(X1, X2, 'sqeuclidean')
K = p0 * (1 + D2 / (2*alpha*p1))**(-alpha)
return np.matrix(K)
def kernel_Per(X1,X2,par):
p0 = 10.0**par[0]
p1 = 10.0**par[1]
period = par[2]
DD = cdist(X1, X2, 'euclidean')
K = p0 * np.exp(- p1*(np.sin(np.pi * DD / period))**2)
return np.matrix(K)
def kernel_QP(X1,X2,par):
p0 = 10.0**par[0]
p1 = 10.0**par[1]
period = par[2]
p3 = 10.0**par[3]
DD = cdist(X1, X2, 'euclidean')
D2 = cdist(X1, X2, 'sqeuclidean')
K = p0 * np.exp(- p1*(np.sin(np.pi * DD / period))**2 - p3 * D2)
return np.matrix(K)
def add_wn(K,lsig):
sigma=10.0**lsig
N = K.shape[0]
return K + sigma**2 * np.identity(N)
def get_kernel(name):
if name == 'SE': return kernel_SE
elif name == 'RQ': return kernel_RQ
elif name == 'M32': return kernel_Mat32
elif name == 'Per': return kernel_Per
elif name == 'QP': return kernel_QP
else:
print('No kernel called %s - using SE' % name)
return kernel_SE
def pltsamples1(par0=0.0, par1=0.0, wn = 0.0):
x = np.r_[-5:5:201j]
X = np.matrix([x]).T # scipy.spatial.distance expects matrices
kernel = get_kernel('SE')
K = kernel(X,X,[par0,par1])
K = add_wn(K,wn)
fig=pl.figure(figsize=(10,4))
ax1 = pl.subplot2grid((1,3), (0, 0), aspect='equal')
pl.imshow(np.sqrt(K),interpolation='nearest',vmin=0,vmax=10)
pl.title('Covariance matrix')
ax2 = pl.subplot2grid((1,3), (0,1),colspan=2)
np.random.seed(0)
for i in range(3):
y = np.random.multivariate_normal(np.zeros(len(x)),K)
pl.plot(x,y-i*2)
pl.xlim(-5,5)
pl.ylim(-8,5)
pl.xlabel('x')
pl.ylabel('y')
pl.title('Samples from %s prior' % 'SE')
pl.tight_layout()
interact(pltsamples1,
par0=widgets.FloatSlider(min=-1,max=1,step=0.5,description=r'$\log_{10} A$',value=0),
par1=widgets.FloatSlider(min=-1,max=1,step=0.5,description=r'$\log_{10} \Gamma$',value=0),
wn=widgets.FloatSlider(min=-2,max=0,step=1,description=r'$\log_{10} \sigma$',value=-2)
);
```
### The Matern family
The Matern 3/2 kernel
$$
k_{3/2}(x,x')= A \left( 1 + \frac{\sqrt{3}r}{l} \right) \exp \left( - \frac{\sqrt{3}r}{l} \right),
$$
where $r =|x-x'|$.
It produces somewhat rougher behaviour, because it is only differentiable once w.r.t. $r$ (whereas the SE kernel is infinitely differentiable). There is a whole family of Matern kernels with varying degrees of roughness.
## The rational quadratic kernel
is equivalent to a squared exponential with a power-law distribution of input scales
$$
k_{\rm RQ}(x,x') = A \left(1 + \frac{r^2}{2 \alpha l} \right)^{-\alpha},
$$
where $\alpha$ is the index of the power law.
This is useful to model data containing variations on a range of timescales with just one extra parameter.
```
# Function to plot samples from kernel
def pltsamples2(par2=0.5, kernel_shortname='SE'):
x = np.r_[-5:5:201j]
X = np.matrix([x]).T # scipy.spatial.distance expects matrices
kernel = get_kernel(kernel_shortname)
K = kernel(X,X,[0.0,0.0,par2])
fig=pl.figure(figsize=(10,4))
ax1 = pl.subplot2grid((1,3), (0, 0), aspect='equal')
pl.imshow(np.sqrt(K),interpolation='nearest',vmin=0,vmax=10)
pl.title('Covariance matrix')
ax2 = pl.subplot2grid((1,3), (0,1),colspan=2)
np.random.seed(0)
for i in range(3):
y = np.random.multivariate_normal(np.zeros(len(x)),K)
pl.plot(x,y-i*2)
pl.xlim(-5,5)
pl.ylim(-8,5)
pl.xlabel('x')
pl.ylabel('y')
pl.title('Samples from %s prior' % kernel_shortname)
pl.tight_layout()
interact(pltsamples2,
par2=widgets.FloatSlider(min=0.25,max=1,step=0.25,description=r'$\alpha$',value=0.5),
kernel_shortname=widgets.RadioButtons(options=['SE','M32','RQ'], value='SE',description='kernel')
);
```
## Periodic kernels...
...can be constructed by replacing $r$ in any of the above by a periodic function of $r$. For example, the cosine kernel:
$$
k_{\cos}(x,x') = A \cos\left(\frac{2\pi r}{P}\right),
$$
[which follows the dynamics of a simple harmonic oscillator], or...
...the "exponential sine squared" kernel, obtained by mapping the 1-D variable $x$ to the 2-D variable $\mathbf{u}(x)=(\cos(x),\sin(x))$, and then applying a squared exponential in $\boldsymbol{u}$-space:
$$
k_{\sin^2 {\rm SE}}(x,x') = A \exp \left[ -\Gamma \sin^2\left(\frac{\pi r}{P}\right) \right],
$$
which allows for non-harmonic functions.
```
# Function to plot samples from kernel
def pltsamples3(par2=2.0, par3=2.0,kernel_shortname='Per'):
x = np.r_[-5:5:201j]
X = np.matrix([x]).T # scipy.spatial.distance expects matrices
kernel = get_kernel(kernel_shortname)
K = kernel(X,X,[0.0,0.0,par2,par3])
fig=pl.figure(figsize=(10,4))
ax1 = pl.subplot2grid((1,3), (0, 0), aspect='equal')
pl.imshow(np.sqrt(K),interpolation='nearest',vmin=0,vmax=10)
pl.title('Covariance matrix')
ax2 = pl.subplot2grid((1,3), (0,1),colspan=2)
np.random.seed(0)
for i in range(3):
y = np.random.multivariate_normal(np.zeros(len(x)),K)
pl.plot(x,y-i*2)
pl.xlim(-5,5)
pl.ylim(-8,5)
pl.xlabel('x')
pl.ylabel('y')
pl.title('Samples from %s prior' % kernel_shortname)
pl.tight_layout()
interact(pltsamples3,
par2=widgets.FloatSlider(min=1,max=3,step=1,description=r'$P$',value=2),
par3=widgets.FloatSlider(min=-2,max=0,step=1,description=r'$\log\Gamma_2$',value=-1),
kernel_shortname=widgets.RadioButtons(options=['Per','QP'], value='QP',description='kernel')
);
```
## Combining kernels
Any *affine transform*, sum or product of valid kernels is a valid kernel.
For example, a quasi-periodic kernel can be constructed by multiplying a periodic kernel with a non-periodic one. The following is frequently used to model stellar light curves:
$$
k_{\mathrm{QP}}(x,x') = A \exp \left[ -\Gamma_1 \sin^2\left(\frac{\pi r}{P}\right) -\Gamma_2 r^2 \right].
$$
## Example: Mauna Loa CO$_2$ dataset
(From Rasmussen & Williams textbook)
<img height="700" src="images/RW_mauna_kea.png">
### 2 or more dimensions
So far we assumed the inputs were 1-D but that doesn't have to be the case. For example, the SE kernel can be extended to D dimensions...
using a single length scale, giving the *Radial Basis Function* (RBF) kernel:
$$
k_{\rm RBF}(\mathbf{x},\mathbf{x'}) = A \exp \left[ - \Gamma \sum_{j=1}^{D}(x_j-x'_j)^2 \right],
$$
where $\mathbf{x}=(x_1,x_2,\ldots, x_j,\ldots,x_D)^{\mathrm{T}}$ represents a single, multi-dimensional input.
or using separate length scales for each dimension, giving the *Automatic Relevance Determination* (ARD) kernel:
$$
k_{\rm ARD}(\mathbf{x},\mathbf{x'}) = A \exp \left[ - \sum_{j=1}^{D} \Gamma_j (x_j-x'_j)^2 \right].
$$
```
import george
x2d, y2d = np.mgrid[-3:3:0.1,-3:3:0.1]
x = x2d.ravel()
y = y2d.ravel()
N = len(x)
X = np.zeros((N,2))
X[:,0] = x
X[:,1] = y
k1 = george.kernels.ExpSquaredKernel(1.0,ndim=2)
s1 = george.GP(k1).sample(X).reshape(x2d.shape)
k2 = george.kernels.ExpSquaredKernel(1.0,ndim=2,axes=1) + george.kernels.ExpSquaredKernel(0.2,ndim=2,axes=0)
s2 = george.GP(k2).sample(X).reshape(x2d.shape)
pl.figure(figsize=(10,5))
pl.subplot(121)
pl.contourf(x2d,y2d,s1)
pl.xlim(x.min(),x.max())
pl.ylim(y.min(),y.max())
pl.xlabel(r'$x$')
pl.ylabel(r'$y$')
pl.title('RBF')
pl.subplot(122)
pl.contourf(x2d,y2d,s2)
pl.xlim(x.min(),x.max())
pl.ylim(y.min(),y.max())
pl.xlabel(r'$x$')
pl.title('ARD');
# Function to plot samples from kernel
def pltsamples3(par2=0.5,par3=0.5, kernel_shortname='SE'):
x = np.r_[-5:5:201j]
X = np.matrix([x]).T # scipy.spatial.distance expects matrices
kernel = get_kernel(kernel_shortname)
K = kernel(X,X,[0.0,0.0,par2,par3])
fig=pl.figure(figsize=(10,4))
ax1 = pl.subplot2grid((1,3), (0, 0), aspect='equal')
pl.imshow(np.sqrt(K),interpolation='nearest',vmin=0,vmax=10)
pl.title('Covariance matrix')
ax2 = pl.subplot2grid((1,3), (0,1),colspan=2)
np.random.seed(0)
for i in range(5):
y = np.random.multivariate_normal(np.zeros(len(x)),K)
pl.plot(x,y)
pl.xlim(-5,5)
pl.ylim(-5,5)
pl.xlabel('x')
pl.ylabel('y')
pl.title('Samples from %s prior' % kernel_shortname)
pl.tight_layout()
interact(pltsamples3,
par2=widgets.FloatSlider(min=1,max=3,step=1,description=r'$P$',value=2),
par3=widgets.FloatSlider(min=-2,max=0,step=1,description=r'$\log_{10}\Gamma_2$',value=-1.),
kernel_shortname=widgets.RadioButtons(options=['Per','QP'], value='Per',description='kernel')
);
```
```
import sys
sys.path.append('src/')
import numpy as np
import torch, torch.nn
from library_function import library_1D
from neural_net import LinNetwork
from DeepMod import *
import matplotlib.pyplot as plt
plt.style.use('seaborn-notebook')
import torch.nn as nn
from torch.autograd import grad
from scipy.io import loadmat
from scipy.optimize import curve_fit
%load_ext autoreload
%autoreload 2
```
# Preparing data
```
rawdata = loadmat('data/kinetics_new.mat')
raw = np.real(rawdata['Expression1'])
raw= raw.reshape((1901,3))
t = raw[:-1,0].reshape(-1,1)
X1= raw[:-1,1]
X2 = raw[:-1,2]
X = np.float32(t.reshape(-1,1))
y= np.vstack((X1,X2))
y = np.transpose(y)
number_of_samples = 1800
idx = np.random.permutation(y.shape[0])
X_train = torch.tensor(X[idx, :][:number_of_samples], dtype=torch.float32, requires_grad=True)
y_train = torch.tensor(y[idx, :][:number_of_samples], dtype=torch.float32)
y_train.shape
```
# Building network
```
optim_config ={'lambda':1e-6,'max_iteration':20000}
lib_config={'poly_order':1, 'diff_order':2, 'total_terms':4}
network_config={'input_dim':1, 'hidden_dim':20, 'layers':8, 'output_dim':2}
```
# MSE Run
```
prediction, network, y_t, theta = DeepMod_mse(X_train, y_train,network_config, lib_config, optim_config)
```
# Least square fit
```
plt.scatter(X,y[:,0])
plt.scatter(X_train.detach().numpy(),prediction[:,0].detach().numpy())
plt.plot(X,np.gradient(y[:,0])/0.001,'r--')
plt.scatter(X_train.detach().numpy()[:,0], y_t.detach().numpy()[:,0])
def func(X, a, b, c, d):
x1,x2 = X
return a + b*x1 + c*x2 + d*x1*x2
def func_simple(X, a, b, c):
x1,x2 = X
return a + b*x1 + c*x2
x1 = np.squeeze(prediction[:,0].detach().numpy())
x2 = np.squeeze(prediction[:,1].detach().numpy())
z1 = y_t.detach().numpy()[:,0]
z2 = y_t.detach().numpy()[:,1]
z1_ref = np.gradient(np.squeeze(prediction[:,0].detach().numpy()),np.squeeze(X_train.detach().numpy()))
z2_ref = np.gradient(np.squeeze(prediction[:,1].detach().numpy()),np.squeeze(X_train.detach().numpy()))
plt.scatter(X_train[:,0].detach().numpy(),z1_ref)
plt.scatter(X_train[:,0].detach().numpy(),z1)
x1 = y[:,0]
x2 = y[:,1]
z1 = np.gradient(y[:,0],np.squeeze(X))
z2 = np.gradient(y[:,1],np.squeeze(X))
# initial guesses for a, b, c, d:
p0 = 0., 0., 0., 0.
w1 = curve_fit(func, (x1,x2), z1, p0)[0]
w2 = curve_fit(func, (x1,x2), z2, p0)[0]
init_coeff=torch.tensor(np.transpose(np.array((w1,w2))), dtype=torch.float32, requires_grad=True)
print(init_coeff)
np.sum((0.2*theta[:,0]-theta[:,1]-y_t[:,0]).detach().numpy())
plt.scatter(x1,z2)
plt.scatter(x1,w2[0]+w2[1]*x1+w2[2]*x2+w2[3]*x1*x2)
plt.scatter(x1,x1-0.25*x2)
plt.scatter(x1,z2)
plt.scatter(x1,w2[0]+w2[1]*x1+w2[2]*x2+w2[3]*x1*x2)
plt.scatter(x1,x1-0.25*x2)
y_t,theta, weight_vector = DeepMod_single(X_train, y_train, network_config, lib_config, optim_config,network,init_coeff)
plt.scatter(X[:,0],y[:,0])
plt.scatter(X[:,0],y[:,1])
plt.scatter(X_train.detach().numpy(),prediction.detach().numpy()[:,0])
plt.scatter(X_train.detach().numpy(),prediction.detach().numpy()[:,1])
plt.show()
from scipy.optimize import curve_fit
def func(X, a, b, c, d):
x1,x2 = X
return a + b*x1 + c*x2 + d*x1*x2
x1 = y[:,0]
x2 = y[:,1]
z1 = np.gradient(y[:,0],np.squeeze(X))
z2 = np.gradient(y[:,1],np.squeeze(X))
# initial guesses for a, b, c, d:
p0 = 0., 0., 0., 0.0
curve_fit(func, (x1,x2), z1, p0)[0]
sparse_weight_vector, sparsity_pattern, prediction, network = DeepMod(X_train, y_train,network_config, lib_config, optim_config)
testlib = np.array([y[:,0],y[:,1],y[:,0]*y[:,1]])
X.shape
import statsmodels.api as sm
def reg_m(y, x):
ones = np.ones(len(x[0]))
X = sm.add_constant(np.column_stack((x[0], ones)))
for ele in x[1:]:
X = sm.add_constant(np.column_stack((ele, X)))
results = sm.OLS(y, X).fit()
return results
print(reg_m(np.gradient(y[:,0]), x).summary())
plt.scatter(X_train.detach().numpy(),prediction.detach().numpy()[:,0])
plt.scatter(X_train.detach().numpy(),prediction.detach().numpy()[:,1])
prediction = network(torch.tensor(X, dtype=torch.float32))
prediction = prediction.detach().numpy()
x, y = np.meshgrid(X[:,0], X[:,1])
mask = torch.tensor((0,1,3))
mask
sparse_coefs = torch.tensor((0.1,0.2,0.4)).reshape(-1,1)
sparse_coefs
dummy = torch.ones((5,3,1))
dummy2 = torch.ones((5,1,4))
(dummy @ dummy2).shape
dummy.shape
dummy.reshape(-1,3,1).shape
dummy = dummy.reshape(2,2)
torch.where(coefs(mask),coefs,dummy)
x = np.linspace(0, 1, 100)
X, Y = np.meshgrid(x, x)
Z = np.sin(X)*np.sin(Y)
b = torch.ones((10, 2), dtype=torch.float32, requires_grad=True)
a = torch.tensor(np.ones((2,10)), dtype=torch.float32)
test=torch.tensor([[0.3073, 0.4409],
[0.0212, 0.6602]])
torch.where(test>torch.tensor(0.3),test, torch.zeros_like(test))
```
```
test2[0,:].reshape(-1,1)
mask=torch.nonzero(test2[0,:])
mask=torch.reshape(torch.nonzero(test2), (1,4))
mask
test2[mask[1]]
a.shape[1]
```
# Extreme Gradient Boosting Regressor
### Required Packages
```
!pip install xgboost
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import xgboost as xgb
import matplotlib.pyplot as plt
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path = ""
```
List of features which are required for model training .
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target = ''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the `head` function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to lower the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since most of the machine learning models in the scikit-learn library do not handle string categories or null values, we have to explicitly replace or remove them. The snippet below defines functions that fill null values (with the mean for numeric columns and the mode otherwise) and one-hot encode string categories.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 123)
```
### Model
XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible and portable. It implements machine learning algorithms under the Gradient Boosting framework. XGBoost provides parallel tree boosting (also known as GBDT or GBM) that solves many data science problems in a fast and accurate way.
For details on tuning parameters, refer to the official API documentation: [Tuning Parameters](https://xgboost.readthedocs.io/en/latest/python/python_api.html#module-xgboost.sklearn)
```
model = XGBRegressor(random_state = 123,n_jobs=-1)
model.fit(X_train, y_train)
```
#### Model Accuracy
We will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination <code>R<sup>2</sup></code>, i.e. the proportion of the variance in the target that is explained by the model.
> **mae**: The **mean absolute error** function calculates the average absolute difference between the actual and the predicted values.
> **mse**: The **mean squared error** function calculates the average squared difference between the actual and the predicted values, penalizing large errors more heavily.
```
y_pred=model.predict(X_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Feature Importances
The Feature importance refers to techniques that assign a score to features based on how useful they are for making the prediction.
```
xgb.plot_importance(model,importance_type="gain",show_values=False)
plt.rcParams['figure.figsize'] = [5, 5]
plt.show()
```
#### Prediction Plot
Finally, we plot the first 20 actual target values from the test set alongside the model's predictions for the same records, to visually compare predicted and true values.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Thilakraj Devadiga , Github: [Profile](https://github.com/Thilakraj1998)
# Simple Model of a Car on a Bumpy Road
This notebook allows you to compute and visualize the car model presented in Example 2.4.2 the book.
The road is described as:
$$y(t) = Ysin\omega_b t$$
And $\omega_b$ is a function of the car's speed.
```
import numpy as np
def x_h(t, wn, zeta, x0, xd0):
"""Returns the transient vertical deviation from the equilibrium of the mass.
Parameters
==========
t : ndarray, shape(n,)
An array of monotonically increasing values for time.
wn : float
The natural frequency of the system in radians per second.
zeta : float
The damping ratio of the system.
x0 : float
The initial displacement from the equilibrium.
xd0 : float
The initial velocity of the mass.
Returns
========
x_h : ndarray, shape(n,)
An array containing the displacement from equilibrium as a function of time.
"""
wd = wn * np.sqrt(1 - zeta**2)
A = np.sqrt(x0**2 + ((xd0 + zeta * wn * x0) / wd)**2)
phi = np.arctan2(x0 * wd, xd0 + zeta * wn * x0)
return A * np.exp(-zeta * wn * t) * np.sin(wd * t + phi)
def x_p(t, wn, zeta, Y, wb):
"""Returns the steady state vertical deviation from the equilibrium of the mass.
Parameters
==========
t : ndarray, shape(n,)
An array of monotonically increasing values for time.
wn : float
The natural frequency of the system in radians per second.
zeta : float
The damping ratio of the system.
Y : float
The amplitude of the road bumps in meters.
wb : float
The frequency of the road bumps in radians per second.
Returns
========
x_p : ndarray, shape(n,)
An array containing the displacement from equilibrium as a function of time.
"""
theta1 = np.arctan2(2 * zeta * wn * wb, (wn**2 - wb**2))
theta2 = np.arctan2(wn, 2 * zeta * wb)
amp = wn * Y * ((wn**2 + (2 * zeta * wb)**2) / ((wn**2 - wb**2)**2 + (2 * zeta * wn * wb)**2))**0.5
return amp * np.cos(wb * t - theta1 - theta2)
def compute_force(t, m, c, k, Y, wb):
"""Returns the total force acting on the mass.
Parameters
==========
t : ndarray, shape(n,)
An array of monotonically increasing values for time.
m : float
The mass of the vehicle in kilograms.
c : float
The damping coefficient in Newton seconds per meter.
k : float
The spring stiffness in Newtons per meter.
Y : float
The amplitude of the road bumps in meters.
wb : float
The frequency of the road bumps in radians per second.
Returns
========
f : ndarray, shape(n,)
An array containing the force acting on the mass as a function of time.
"""
wn = np.sqrt(k / m)
zeta = c / 2 / m / wn
r = wb / wn
amp = k * Y * r**2 * np.sqrt((1 + (2 * zeta * r)**2) / ((1 - r**2)**2 + (2 * zeta * r)**2))
theta1 = np.arctan2(2 * zeta * wn * wb, (wn**2 - wb**2))
theta2 = np.arctan2(wn, 2 * zeta * wb)
return -amp * np.cos(wb * t - theta1 - theta2)
def compute_trajectory(t, m, c, k, Y, wb, x0, xd0):
"""Returns the combined transient and steady state deviation of the mass from equilibrium.
Parameters
==========
t : ndarray, shape(n,)
An array of monotonically increasing values for time.
m : float
The mass of the vehicle in kilograms.
c : float
The damping coefficient in Newton seconds per meter.
k : float
The spring stiffness in Newtons per meter.
Y : float
The amplitude of the road bumps in meters.
wb : float
The frequency of the road bumps in radians per second.
x0 : float
The initial displacement from the equilibrium.
xd0 : float
The initial velocity of the mass.
Returns
========
x : ndarray, shape(n,)
An array containing the combined transient and steady state displacement from equilibrium as a function of time.
"""
wn = np.sqrt(k / m)
zeta = c / 2 / m / wn
return x_h(t, wn, zeta, x0, xd0) + x_p(t, wn, zeta, Y, wb)
```
Now start with the parameters given in the book.
```
Y = 0.01 # m
v = 20 # km/h
m = 1007 # kg
k = 4e4 # N/m
c = 20e2 # Ns/m
x0 = -0.05 # m
xd0 = 0 # m/s
bump_distance = 6 # m
```
The bump frequency is a function of the distance between the bumps and the speed of the vehicle.
```
wb = v / bump_distance * 1000 / 3600 * 2 * np.pi # rad /s
```
It is worth noting what the frequency ratio is:
```
r = wb / np.sqrt(k / m)
r
```
Now pick some time values and compute the displacement and the force trajectories.
```
t = np.linspace(0, 20, num=500)
x = compute_trajectory(t, m, c, k, Y, wb, x0, xd0)
f = compute_force(t, m, c, k, Y, wb)
```
Plot the trajectories.
```
import matplotlib.pyplot as plt
%matplotlib notebook
fig, axes = plt.subplots(2, 1, sharex=True)
axes[0].plot(t, x)
axes[1].plot(t, f / k)
axes[1].set_xlabel('Time [s]')
axes[0].set_ylabel('$x(t)$')
axes[1].set_ylabel('F / k [m]')
```
Now animate the simulation of the model showing the motion and a vector that represents the force per stiffness value.
```
from matplotlib.patches import Rectangle
import matplotlib.animation as animation
fig, ax = plt.subplots(1, 1)
ax.set_ylim((-0.1, 0.6))
ax.set_ylabel('Height [m]')
#ax.set_aspect('equal')
xeq = 0.1 # m
view_width = 4 # m
rect_width = 1.0 # m
rect_height = rect_width / 4 # m
bump_distance = 6 # m
lat_pos = 0
lat = np.linspace(lat_pos - view_width / 2, lat_pos + view_width / 2, num=100)
ax.set_xlim((lat[0], lat[-1]))
rect = Rectangle(
(-rect_width / 2, xeq + x0), # (x,y)
rect_width, # width
rect_height, # height
)
car = ax.add_patch(rect)
road = ax.plot(lat, Y * np.sin(2 * np.pi / bump_distance * lat), color='black')[0]
suspension = ax.plot([lat_pos, lat_pos],
[Y * np.sin(2 * np.pi / bump_distance * lat_pos), xeq + x0],
linewidth='4', marker='o', color='yellow')[0]
force_vec = ax.plot([lat_pos, lat_pos],
[xeq + x0 + rect_height / 2, xeq + x0 + rect_height / 2 + 0.2],
'r', linewidth=4)[0]
def kph2mps(speed):
# km 1 hr 1 min 1000 m
# -- * ------ * ------ * ------
# hr 60 min 60 sec 1 km
return speed * 1000 / 3600
def animate(i):
# update the data for all the drawn elements in this function
lat_pos = kph2mps(v) * t[i]
ax.set_xlim((lat_pos - view_width / 2, lat_pos + view_width / 2))
rect.set_xy([lat_pos - rect_width / 2, xeq + x[i]])
road.set_xdata(lat + lat_pos)
road.set_ydata(Y * np.sin(2 * np.pi / bump_distance * (lat + lat_pos)))
suspension.set_xdata([lat_pos, lat_pos])
suspension.set_ydata([Y * np.sin(2 * np.pi / bump_distance * lat_pos), xeq + x[i]])
force_vec.set_xdata([lat_pos, lat_pos])
force_vec.set_ydata([xeq + x[i] + rect_height / 2,
xeq + x[i] + rect_height / 2 + f[i] / k])
ani = animation.FuncAnimation(fig, animate, frames=len(t), interval=25)
```
# Questions
Explore the model and see if you can select a damping value that keeps the car pretty stable as it traverses the road.
What happens if you change the driving speed? Do the values of $k$ and $c$ work well for all speeds?
Can you detect the difference in displacement transmissibility and force transmissibility for different driving speeds?
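As a starting point for these explorations, the sketch below (not part of the original notebook; it reuses `compute_trajectory` and the parameters defined above, and the specific damping values and speeds are arbitrary choices) compares the response for a few damping coefficients and speeds:
```
# Compare the response for a few (arbitrary) damping values,
# reusing compute_trajectory and the parameters defined above.
fig, ax = plt.subplots(1, 1)
for c_trial in [5e2, 20e2, 80e2]:
    x_trial = compute_trajectory(t, m, c_trial, k, Y, wb, x0, xd0)
    ax.plot(t, x_trial, label='c = {:.0f} Ns/m'.format(c_trial))
ax.set_xlabel('Time [s]')
ax.set_ylabel('$x(t)$ [m]')
ax.legend()
# Changing the speed changes the bump frequency (and hence the response);
# the late-time slice is a rough way to look past the transient.
for v_trial in [10, 20, 40]:  # km/h
    wb_trial = v_trial / bump_distance * 1000 / 3600 * 2 * np.pi
    amp = np.max(np.abs(compute_trajectory(t, m, c, k, Y, wb_trial, x0, xd0)[200:]))
    print('v = {} km/h -> max |x| (late time) = {:.4f} m'.format(v_trial, amp))
```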
```
import numpy as np

def cosamp(Phi, u, s, tol=1e-10, max_iter=1000):
"""
@Brief: "CoSaMP: Iterative signal recovery from incomplete and inaccurate
samples" by Deanna Needell & Joel Tropp
@Input: Phi - Sampling matrix
u - Noisy sample vector
s - Sparsity level (number of nonzero entries in the solution)
@Return: A s-sparse approximation "a" of the target signal
"""
max_iter -= 1 # Correct the while loop
num_precision = 1e-12
a = np.zeros(Phi.shape[1])
v = u
iter = 0
halt = False
while not halt:
iter += 1
print("Iteration {}\r".format(iter))
y = abs(np.dot(np.transpose(Phi), v))
Omega = [i for (i, val) in enumerate(y) if val > np.sort(y)[::-1][2*s] and val > num_precision] # equivalent to the commented line below
#Omega = np.argwhere(y >= np.sort(y)[::-1][2*s] and y > num_precision)
T = np.union1d(Omega, a.nonzero()[0])
#T = np.union1d(Omega, T)
b = np.dot( np.linalg.pinv(Phi[:,T]), u )
igood = (abs(b) > np.sort(abs(b))[::-1][s]) & (abs(b) > num_precision)
T = T[igood]
a[T] = b[igood]
v = u - np.dot(Phi[:,T], b[igood])
halt = np.linalg.norm(v)/np.linalg.norm(u) < tol or \
iter > max_iter
return a
```
Test the cosamp Python method in the reconstruction of a high-frequency signal from sparse measurements:
```
import numpy as np
import scipy.linalg
import scipy.signal
import scipy.fftpack
import matplotlib.pyplot as plt
n = 4000 # number of measurements
t = np.linspace(0.0, 1.0, num=n)
x = np.sin(91*2*np.pi*t) + np.sin(412*2*np.pi*t) # original signal (to be reconstructed)
# randomly sample signal
p = 103 # random sampling (Note that this is one eighth of the Shannon–Nyquist rate!)
aquis = np.round((n-1) * np.random.rand(p)).astype(int)
y = x[aquis] # our compressed measurement from the random sampling
# Here {y} = [C]{x} = [C][Phi]{s}, where Phi is the inverse discrete cosine transform
Phi = scipy.fftpack.dct(np.eye(n), axis=0, norm='ortho')
CPhi = Phi[aquis,:]
# Sparse recovery via the greedy CoSaMP algorithm (not l1 minimization)
s = cosamp(CPhi, y, 10) # obtain the sparse vector through CoSaMP algorithm
xrec = scipy.fftpack.idct(s, axis=0, norm='ortho') # Reconstructed signal
figw, figh = 7.0, 5.0 # figure width and height
plt.figure(figsize=(figw, figh))
plt.plot(t, s)
plt.title('Sparse vector $s$')
plt.show()
# Visualize the compressed-sensing reconstruction signal
figw, figh = 7.0, 5.0 # figure width and height
plt.figure(figsize=(figw, figh))
plt.plot(t, x, 'b', label='Original signal')
plt.plot(t, xrec, 'r', label='Reconstructed signal')
plt.xlim(0.4, 0.5)
legend = plt.legend(loc='upper center', shadow=True, fontsize='x-large')
# Put a nicer background color on the legend.
legend.get_frame().set_facecolor('C0')
plt.show()
```
# Model Understanding
Simply examining a model's performance metrics is not enough to select a model and promote it for use in a production setting. While developing an ML algorithm, it is important to understand how the model behaves on the data, to examine the key factors influencing its predictions and to consider where it may be deficient. Determination of what "success" may mean for an ML project depends first and foremost on the user's domain expertise.
blocktorch includes a variety of tools for understanding models, from graphing utilities to methods for explaining predictions.
** Graphing methods on Jupyter Notebook and Jupyter Lab require [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/user_install.html) to be installed.
** If graphing on Jupyter Lab, [jupyterlab-plotly](https://plotly.com/python/getting-started/#jupyterlab-support-python-35) is required. To install it, make sure you have [npm](https://nodejs.org/en/download/) installed.
## Graphing Utilities
First, let's train a pipeline on some data.
```
import blocktorch
from blocktorch.pipelines import BinaryClassificationPipeline
X, y = blocktorch.demos.load_breast_cancer()
X_train, X_holdout, y_train, y_holdout = blocktorch.preprocessing.split_data(X, y, problem_type='binary',
test_size=0.2, random_seed=0)
pipeline_binary = BinaryClassificationPipeline(['Simple Imputer', 'Random Forest Classifier'])
pipeline_binary.fit(X_train, y_train)
print(pipeline_binary.score(X_holdout, y_holdout, objectives=['log loss binary']))
```
### Feature Importance
We can get the importance associated with each feature of the resulting pipeline
```
pipeline_binary.feature_importance
```
We can also create a bar plot of the feature importances
```
pipeline_binary.graph_feature_importance()
```
### Permutation Importance
We can also compute and plot [the permutation importance](https://scikit-learn.org/stable/modules/permutation_importance.html) of the pipeline.
```
from blocktorch.model_understanding import calculate_permutation_importance
calculate_permutation_importance(pipeline_binary, X_holdout, y_holdout, 'log loss binary')
from blocktorch.model_understanding import graph_permutation_importance
graph_permutation_importance(pipeline_binary, X_holdout, y_holdout, 'log loss binary')
```
### Partial Dependence Plots
We can calculate the one-way [partial dependence plots](https://christophm.github.io/interpretable-ml-book/pdp.html) for a feature.
```
from blocktorch.model_understanding.graphs import partial_dependence
partial_dependence(pipeline_binary, X_holdout, features='mean radius', grid_resolution=5)
from blocktorch.model_understanding.graphs import graph_partial_dependence
graph_partial_dependence(pipeline_binary, X_holdout, features='mean radius', grid_resolution=5)
```
You can also compute the partial dependence for a categorical feature. We will demonstrate this on the fraud dataset.
```
X_fraud, y_fraud = blocktorch.demos.load_fraud(100, verbose=False)
X_fraud.ww.init(logical_types={"provider": "Categorical", 'region': "Categorical",
"currency": "Categorical", "expiration_date": "Categorical"})
fraud_pipeline = BinaryClassificationPipeline(["DateTime Featurization Component","One Hot Encoder", "Random Forest Classifier"])
fraud_pipeline.fit(X_fraud, y_fraud)
graph_partial_dependence(fraud_pipeline, X_fraud, features='provider')
```
Two-way partial dependence plots are also possible and invoke the same API.
```
partial_dependence(pipeline_binary, X_holdout, features=('worst perimeter', 'worst radius'), grid_resolution=5)
graph_partial_dependence(pipeline_binary, X_holdout, features=('worst perimeter', 'worst radius'), grid_resolution=5)
```
### Confusion Matrix
For binary or multiclass classification, we can view a [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix) of the classifier's predictions. In the DataFrame output of `confusion_matrix()`, the column header represents the predicted labels while row header represents the actual labels.
```
from blocktorch.model_understanding.graphs import confusion_matrix
y_pred = pipeline_binary.predict(X_holdout)
confusion_matrix(y_holdout, y_pred)
from blocktorch.model_understanding.graphs import graph_confusion_matrix
y_pred = pipeline_binary.predict(X_holdout)
graph_confusion_matrix(y_holdout, y_pred)
```
### Precision-Recall Curve
For binary classification, we can view the precision-recall curve of the pipeline.
```
from blocktorch.model_understanding.graphs import graph_precision_recall_curve
# get the predicted probabilities associated with the "true" label
import woodwork as ww
y_encoded = y_holdout.ww.map({'benign': 0, 'malignant': 1})
y_pred_proba = pipeline_binary.predict_proba(X_holdout)["malignant"]
graph_precision_recall_curve(y_encoded, y_pred_proba)
```
### ROC Curve
For binary and multiclass classification, we can view the [Receiver Operating Characteristic (ROC) curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) of the pipeline.
```
from blocktorch.model_understanding.graphs import graph_roc_curve
# get the predicted probabilities associated with the "malignant" label
y_pred_proba = pipeline_binary.predict_proba(X_holdout)["malignant"]
graph_roc_curve(y_encoded, y_pred_proba)
```
The ROC curve can also be generated for multiclass classification problems. For multiclass problems, the graph will show a one-vs-many ROC curve for each class.
```
from blocktorch.pipelines import MulticlassClassificationPipeline
X_multi, y_multi = blocktorch.demos.load_wine()
pipeline_multi = MulticlassClassificationPipeline(['Simple Imputer', 'Random Forest Classifier'])
pipeline_multi.fit(X_multi, y_multi)
y_pred_proba = pipeline_multi.predict_proba(X_multi)
graph_roc_curve(y_multi, y_pred_proba)
```
### Binary Objective Score vs. Threshold Graph
[Some binary classification objectives](./objectives.ipynb) (objectives that have `score_needs_proba` set to False) are sensitive to a decision threshold. For those objectives, we can obtain and graph the scores for thresholds from zero to one, calculated at evenly-spaced intervals determined by `steps`.
```
from blocktorch.model_understanding.graphs import binary_objective_vs_threshold
binary_objective_vs_threshold(pipeline_binary, X_holdout, y_holdout, 'f1', steps=10)
from blocktorch.model_understanding.graphs import graph_binary_objective_vs_threshold
graph_binary_objective_vs_threshold(pipeline_binary, X_holdout, y_holdout, 'f1', steps=100)
```
### Predicted Vs Actual Values Graph for Regression Problems
We can also create a scatterplot comparing predicted vs actual values for regression problems. We can specify an `outlier_threshold` to color values differently if the absolute difference between the actual and predicted values are outside of a given threshold.
```
from blocktorch.model_understanding.graphs import graph_prediction_vs_actual
from blocktorch.pipelines import RegressionPipeline
X_regress, y_regress = blocktorch.demos.load_diabetes()
X_train, X_test, y_train, y_test = blocktorch.preprocessing.split_data(X_regress, y_regress, problem_type='regression')
pipeline_regress = RegressionPipeline(['One Hot Encoder', 'Linear Regressor'])
pipeline_regress.fit(X_train, y_train)
y_pred = pipeline_regress.predict(X_test)
graph_prediction_vs_actual(y_test, y_pred, outlier_threshold=50)
```
Now let's train a decision tree on some data.
```
pipeline_dt = BinaryClassificationPipeline(['Simple Imputer', 'Decision Tree Classifier'])
pipeline_dt.fit(X_train, y_train)
```
### Tree Visualization
We can visualize the structure of the Decision Tree that was fit to that data, and save it if necessary.
```
from blocktorch.model_understanding.graphs import visualize_decision_tree
visualize_decision_tree(pipeline_dt.estimator, max_depth=2, rotate=False, filled=True, filepath=None)
```
## Explaining Predictions
We can explain why the model made certain predictions with the [explain_predictions](../autoapi/blocktorch/model_understanding/prediction_explanations/explainers/index.rst#blocktorch.model_understanding.prediction_explanations.explainers.explain_predictions) function. This will use the [Shapley Additive Explanations (SHAP)](https://github.com/slundberg/shap) algorithm to identify the top features that explain the predicted value.
This function can explain both classification and regression models - all you need to do is provide the pipeline, the input features, and a list of rows corresponding to the indices of the input features you want to explain. The function will return a table that you can print summarizing the top 3 most positive and negative contributing features to the predicted value.
In the example below, we explain the prediction for the third data point in the data set. We see that the `worst concave points` feature increased the estimated probability that the tumor is malignant by 20% while the `worst radius` feature decreased the probability the tumor is malignant by 5%.
```
from blocktorch.model_understanding.prediction_explanations import explain_predictions
table = explain_predictions(pipeline=pipeline_binary, input_features=X_holdout, y=None, indices_to_explain=[3],
top_k_features=6, include_shap_values=True)
print(table)
```
The interpretation of the table is the same for regression problems - but the SHAP value now corresponds to the change in the estimated value of the dependent variable rather than a change in probability. For multiclass classification problems, a table will be output for each possible class.
Below is an example of how you would explain three predictions with [explain_predictions](../autoapi/blocktorch/model_understanding/prediction_explanations/explainers/index.rst#blocktorch.model_understanding.prediction_explanations.explainers.explain_predictions).
```
from blocktorch.model_understanding.prediction_explanations import explain_predictions
report = explain_predictions(pipeline=pipeline_binary,
input_features=X_holdout, y=y_holdout, indices_to_explain=[0, 4, 9], include_shap_values=True,
output_format='text')
print(report)
```
### Explaining Best and Worst Predictions
When debugging machine learning models, it is often useful to analyze the best and worst predictions the model made. The [explain_predictions_best_worst](../autoapi/blocktorch/model_understanding/prediction_explanations/explainers/index.rst#blocktorch.model_understanding.prediction_explanations.explainers.explain_predictions_best_worst) function can help us with this.
This function will display the output of [explain_predictions](../autoapi/blocktorch/model_understanding/prediction_explanations/explainers/index.rst#blocktorch.model_understanding.prediction_explanations.explainers.explain_predictions) for the best 2 and worst 2 predictions. By default, the best and worst predictions are determined by the absolute error for regression problems and [cross entropy](https://en.wikipedia.org/wiki/Cross_entropy) for classification problems.
We can specify our own ranking function by passing in a function to the `metric` parameter. This function will be called on `y_true` and `y_pred`. By convention, lower scores are better.
At the top of each table, we can see the predicted probabilities, target value, error, and row index for that prediction. For a regression problem, we would see the predicted value instead of predicted probabilities.
```
from blocktorch.model_understanding.prediction_explanations import explain_predictions_best_worst
report = explain_predictions_best_worst(pipeline=pipeline_binary, input_features=X_holdout, y_true=y_holdout,
include_shap_values=True, top_k_features=6, num_to_explain=2)
print(report)
```
We use a custom metric ([hinge loss](https://en.wikipedia.org/wiki/Hinge_loss)) for selecting the best and worst predictions. See this example:
```python
import numpy as np
def hinge_loss(y_true, y_pred_proba):
probabilities = np.clip(y_pred_proba.iloc[:, 1], 0.001, 0.999)
y_true[y_true == 0] = -1
return np.clip(1 - y_true * np.log(probabilities / (1 - probabilities)), a_min=0, a_max=None)
report = explain_predictions_best_worst(pipeline=pipeline, input_features=X, y_true=y,
include_shap_values=True, num_to_explain=5, metric=hinge_loss)
print(report)
```
### Changing Output Formats
Instead of getting the prediction explanations as text, you can get the report as a python dictionary or pandas dataframe. All you have to do is pass `output_format="dict"` or `output_format="dataframe"` to either `explain_prediction`, `explain_predictions`, or `explain_predictions_best_worst`.
### Single prediction as a dictionary
```
import json
single_prediction_report = explain_predictions(pipeline=pipeline_binary, input_features=X_holdout, indices_to_explain=[3],
y=y_holdout, top_k_features=6, include_shap_values=True,
output_format="dict")
print(json.dumps(single_prediction_report, indent=2))
```
### Single prediction as a dataframe
```
single_prediction_report = explain_predictions(pipeline=pipeline_binary, input_features=X_holdout,
indices_to_explain=[3],
y=y_holdout, top_k_features=6, include_shap_values=True,
output_format="dataframe")
single_prediction_report
```
### Best and worst predictions as a dictionary
```
report = explain_predictions_best_worst(pipeline=pipeline_binary, input_features=X, y_true=y,
num_to_explain=1, top_k_features=6,
include_shap_values=True, output_format="dict")
print(json.dumps(report, indent=2))
```
### Best and worst predictions as a dataframe
```
report = explain_predictions_best_worst(pipeline=pipeline_binary, input_features=X_holdout, y_true=y_holdout,
num_to_explain=1, top_k_features=6,
include_shap_values=True, output_format="dataframe")
report
```
### Force Plots
Force plots can be generated to explain single or multiple row predictions for binary, multiclass and regression problem types. Here's an example of explaining a single row of a binary classification dataset. The force plots show the predictive power of each of the features in making the negative ("Class: 0") prediction and the positive ("Class: 1") prediction.
```
import shap
from blocktorch.model_understanding.force_plots import graph_force_plot
rows_to_explain = [0] # Should be a list of integer indices of the rows to explain.
results = graph_force_plot(pipeline_binary, rows_to_explain=rows_to_explain,
training_data=X_holdout, y=y_holdout)
for result in results:
for cls in result:
print("Class:", cls)
display(result[cls]["plot"])
```
Here's an example of a force plot explaining multiple predictions on a multiclass problem. These plots show the force plots for each row arranged as consecutive columns that can be ordered by the dropdown above. Clicking the column indicates which row explanation is underneath.
```
rows_to_explain = [0,1,2,3,4] # Should be a list of integer indices of the rows to explain.
results = graph_force_plot(pipeline_multi,
rows_to_explain=rows_to_explain,
training_data=X_multi, y=y_multi)
for idx, result in enumerate(results):
print("Row:", idx)
for cls in result:
print("Class:", cls)
display(result[cls]["plot"])
```
# Determining rigid body transformation using the SVD algorithm
Marcos Duarte
Ideally, three non-collinear markers placed on a moving rigid body are everything we need to describe its movement (translation and rotation) in relation to a fixed coordinate system. However, in practical situations of human motion analysis, markers are placed on the soft tissue of a deformable body and this generates artifacts caused by muscle contraction, skin deformation, marker wobbling, etc. In this situation, the use of only three markers can produce unreliable results. It has been shown that four or more markers on the segment followed by a mathematical procedure to calculate the 'best' rigid-body transformation taking into account all these markers produces more robust results (Söderkvist & Wedin 1993; Challis 1995; Cappozzo et al. 1997).
One mathematical procedure to calculate the transformation with three or more marker positions involves the use of the [singular value decomposition](http://en.wikipedia.org/wiki/Singular_value_decomposition) (SVD) algorithm from linear algebra. The SVD algorithm decomposes a matrix $\mathbf{M}$ (which represents a general transformation between two coordinate systems) into three simple transformations: a rotation $\mathbf{V^T}$, a scaling factor $\mathbf{S}$ along the rotated axes and a second rotation $\mathbf{U}$:
$$ \mathbf{M}= \mathbf{U\;S\;V^T}$$
And the rotation matrix is given by:
$$ \mathbf{R}= \mathbf{U\:V^T}$$
The matrices $\mathbf{U}$ and $\mathbf{V}$ are both orthonormal (det = $\pm$1).
For example, if we have registered the position of four markers placed on a moving segment at 100 different instants, as well as the position of these same markers during what is known in Biomechanics as a static calibration trial, we would use the SVD algorithm to calculate the 100 rotation matrices (between the static trial and the 100 instants) in order to find the Cardan angles for each instant.
The function `svdt.py` (its code is shown at the end of this text) determines the rotation matrix ($R$) and the translation vector ($L$) for a rigid body after the following transformation: $B = R*A + L + err$. Where $A$ and $B$ represent the rigid body in different instants and $err$ is random noise. $A$ and $B$ are matrices with the marker coordinates at different instants (at least three non-collinear markers are necessary to determine the 3D transformation).
The matrix $A$ can be thought to represent a local coordinate system (but $A$ it's not a basis) and matrix $B$ the global coordinate system. The operation $P_g = R*P_l + L$ calculates the coordinates of the point $P_l$ (expressed in the local coordinate system) in the global coordinate system ($P_g$).
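Before using the full `svdt` function, here is a minimal sketch of the core computation for a single pair of marker sets stored as (n, 3) arrays (an illustration of the idea, not the `svdt` function itself; the helper name `rigid_transform` and the example data are arbitrary): subtract the centroids, apply the SVD to the cross-dispersion matrix, and enforce a proper rotation (det(R) = +1).
```
import numpy as np
def rigid_transform(A, B):
    """Minimal sketch: find R, L such that B_i ~ R @ A_i + L for (n, 3) marker arrays."""
    Am, Bm = A.mean(axis=0), B.mean(axis=0)               # centroids
    M = (B - Bm).T @ (A - Am)                             # cross-dispersion matrix
    U, S, Vt = np.linalg.svd(M)
    R = U @ np.diag([1, 1, np.linalg.det(U @ Vt)]) @ Vt   # proper rotation, det = +1
    L = Bm - R @ Am                                       # translation vector
    return R, L
# Recover a known 90 degree rotation about z plus a translation:
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
A = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 1.]])
B = (Rz @ A.T).T + np.array([1., 2., 3.])
R, L = rigid_transform(A, B)
print(np.around(R, 4), np.around(L, 4))
```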
Let's test the `svdt` function:
```
# Import the necessary libraries
import numpy as np
import sys
sys.path.insert(1, r'./../functions')
from svdt import svdt
# markers in different columns (default):
A = np.array([0,0,0, 1,0,0, 0,1,0, 1,1,0]) # four markers
B = np.array([0,0,0, 0,1,0, -1,0,0, -1,1,0]) # four markers
R, L, RMSE = svdt(A, B)
print('Rotation matrix:\n', np.around(R, 4))
print('Translation vector:\n', np.around(L, 4))
print('RMSE:\n', np.around(RMSE, 4))
# markers in different rows:
A = np.array([[0,0,0], [1,0,0], [ 0,1,0], [ 1,1,0]]) # four markers
B = np.array([[0,0,0], [0,1,0], [-1,0,0], [-1,1,0]]) # four markers
R, L, RMSE = svdt(A, B, order='row')
print('Rotation matrix:\n', np.around(R, 4))
print('Translation vector:\n', np.around(L, 4))
print('RMSE:\n', np.around(RMSE, 4))
```
For the matrix of a pure rotation around the z axis, the element in the first row and second column is $-\sin\gamma$, which means the rotation was $90^\circ$, as expected.
A typical use of the `svdt` function is to calculate the transformation between $A$ and $B$ ($B = R*A + L$), where $A$ is the matrix with the markers data in one instant (the calibration or static trial) and $B$ is the matrix with the markers data of more than one instant (the dynamic trial).
Input $A$ as a 1D array `[x1, y1, z1, ..., xn, yn, zn]` where `n` is the number of markers and $B$ a 2D array with the different instants as rows (like in $A$).
The output $R$ has the shape `(3, 3, tn)`, where `tn` is the number of instants, $L$ the shape `(tn, 3)`, and $RMSE$ the shape `(tn)`. If `tn` is equal to one, the outputs have the same shape as in `svdt` (the last dimension of the outputs above is dropped).
Let's show this case:
```
A = np.array([0,0,0, 1,0,0, 0,1,0, 1,1,0]) # four markers
B = np.array([0,0,0, 0,1,0, -1,0,0, -1,1,0]) # four markers
B = np.vstack((B, B)) # simulate two instants (two rows)
R, L, RMSE = svdt(A, B)
print('Rotation matrix:\n', np.around(R, 4))
print('Translation vector:\n', np.around(L, 4))
print('RMSE:\n', np.around(RMSE, 4))
```
## References
- Cappozzo A, Cappello A, Della Croce U, Pensalfini F (1997) [Surface-marker cluster design criteria for 3-D bone movement reconstruction](http://www.ncbi.nlm.nih.gov/pubmed/9401217). IEEE Trans Biomed Eng., 44:1165-1174.
- Challis JH (1995). [A procedure for determining rigid body transformation parameters](http://www.ncbi.nlm.nih.gov/pubmed/7601872). Journal of Biomechanics, 28, 733-737.
- Söderkvist I, Wedin PA (1993) [Determining the movements of the skeleton using well-configured markers](http://www.ncbi.nlm.nih.gov/pubmed/8308052). Journal of Biomechanics, 26, 1473-1477.
### Function svdt.py
```
%load './../functions/svdt.py'
#!/usr/bin/env python
"""Calculates the transformation between two coordinate systems using SVD."""
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = 'svdt.py v.1 2013/12/23'
import numpy as np
def svdt(A, B, order='col'):
"""Calculates the transformation between two coordinate systems using SVD.
This function determines the rotation matrix (R) and the translation vector
(L) for a rigid body after the following transformation [1]_, [2]_:
B = R*A + L + err.
Where A and B represents the rigid body in different instants and err is an
aleatory noise (which should be zero for a perfect rigid body). A and B are
matrices with the marker coordinates at different instants (at least three
non-collinear markers are necessary to determine the 3D transformation).
The matrix A can be thought to represent a local coordinate system (but A
it's not a basis) and matrix B the global coordinate system. The operation
Pg = R*Pl + L calculates the coordinates of the point Pl (expressed in the
local coordinate system) in the global coordinate system (Pg).
A typical use of the svdt function is to calculate the transformation
between A and B (B = R*A + L), where A is the matrix with the markers data
in one instant (the calibration or static trial) and B is the matrix with
the markers data for one or more instants (the dynamic trial).
If the parameter order='row', the A and B parameters should have the shape
(n, 3), i.e., n rows and 3 columns, where n is the number of markers.
If order='col', A can be a 1D array with the shape (n*3,) like
[x1, y1, z1, ..., xn, yn, zn] and B a 1D array with the same structure of A
or a 2D array with the shape (ni, n*3) where ni is the number of instants.
The output R has the shape (ni, 3, 3), L has the shape (ni, 3), and RMSE
has the shape (ni,). If ni is equal to one, the outputs will have the
singleton dimension dropped.
Part of this code is based on the programs written by Alberto Leardini,
Christoph Reinschmidt, and Ton van den Bogert.
Parameters
----------
A : Numpy array
Coordinates [x,y,z] of at least three markers with two possible shapes:
order='row': 2D array (n, 3), where n is the number of markers.
order='col': 1D array (3*nmarkers,) like [x1, y1, z1, ..., xn, yn, zn].
B : 2D Numpy array
Coordinates [x,y,z] of at least three markers with two possible shapes:
order='row': 2D array (n, 3), where n is the number of markers.
order='col': 2D array (ni, n*3), where ni is the number of instants.
If ni=1, B is a 1D array like A.
order : string
'col': specifies that A and B are column oriented (default).
'row': specifies that A and B are row oriented.
Returns
-------
R : Numpy array
Rotation matrix between A and B with two possible shapes:
order='row': (3, 3).
order='col': (ni, 3, 3), where ni is the number of instants.
If ni=1, R will have the singleton dimension dropped.
L : Numpy array
Translation vector between A and B with two possible shapes:
order='row': (3,) if order = 'row'.
order='col': (ni, 3), where ni is the number of instants.
If ni=1, L will have the singleton dimension dropped.
RMSE : array
Root-mean-squared error for the rigid body model: B = R*A + L + err
with two possible shapes:
order='row': (1,).
order='col': (ni,), where ni is the number of instants.
See Also
--------
numpy.linalg.svd
Notes
-----
The singular value decomposition (SVD) algorithm decomposes a matrix M
(which represents a general transformation between two coordinate systems)
into three simple transformations [3]_: a rotation Vt, a scaling factor S
along the rotated axes and a second rotation U: M = U*S*Vt.
The rotation matrix is given by: R = U*Vt.
References
----------
.. [1] Söderkvist I, Wedin PA (1993) Journal of Biomechanics, 26, 1473-1477.
.. [2] http://www.kwon3d.com/theory/jkinem/rotmat.html.
.. [3] http://en.wikipedia.org/wiki/Singular_value_decomposition.
Examples
--------
>>> import numpy as np
>>> from svdt import svdt
>>> A = np.array([0,0,0, 1,0,0, 0,1,0, 1,1,0]) # four markers
>>> B = np.array([0,0,0, 0,1,0, -1,0,0, -1,1,0]) # four markers
>>> R, L, RMSE = svdt(A, B)
>>> B = np.vstack((B, B)) # simulate two instants (two rows)
>>> R, L, RMSE = svdt(A, B)
>>> A = np.array([[0,0,0], [1,0,0], [ 0,1,0], [ 1,1,0]]) # four markers
>>> B = np.array([[0,0,0], [0,1,0], [-1,0,0], [-1,1,0]]) # four markers
>>> R, L, RMSE = svdt(A, B, order='row')
"""
A, B = np.asarray(A), np.asarray(B)
if order == 'row' or B.ndim == 1:
if B.ndim == 1:
A = A.reshape(A.size/3, 3)
B = B.reshape(B.size/3, 3)
R, L, RMSE = _svd(A, B)
else:
A = A.reshape(A.size/3, 3)
ni = B.shape[0]
R = np.empty((ni, 3, 3))
L = np.empty((ni, 3))
RMSE = np.empty(ni)
for i in range(ni):
R[i, :, :], L[i, :], RMSE[i] = _svd(A, B[i, :].reshape(A.shape))
return R, L, RMSE
def _svd(A, B):
"""Calculates the transformation between two coordinate systems using SVD.
See the help of the svdt function.
Parameters
----------
A : 2D Numpy array (n, 3), where n is the number of markers.
Coordinates [x,y,z] of at least three markers
B : 2D Numpy array (n, 3), where n is the number of markers.
Coordinates [x,y,z] of at least three markers
Returns
-------
R : 2D Numpy array (3, 3)
Rotation matrix between A and B
L : 1D Numpy array (3,)
Translation vector between A and B
RMSE : float
Root-mean-squared error for the rigid body model: B = R*A + L + err.
See Also
--------
numpy.linalg.svd
"""
Am = np.mean(A, axis=0) # centroid of m1
Bm = np.mean(B, axis=0) # centroid of m2
M = np.dot((B - Bm).T, (A - Am)) # considering only rotation
# singular value decomposition
U, S, Vt = np.linalg.svd(M)
# rotation matrix
R = np.dot(U, np.dot(np.diag([1, 1, np.linalg.det(np.dot(U, Vt))]), Vt))
# translation vector
L = B.mean(0) - np.dot(R, A.mean(0))
# RMSE
err = 0
for i in range(A.shape[0]):
Bp = np.dot(R, A[i, :]) + L
err += np.sum((Bp - B[i, :])**2)
RMSE = np.sqrt(err/A.shape[0]/3)
return R, L, RMSE
```
```
from PreFRBLE.likelihood import *
from PreFRBLE.plot import *
```
### Identify intervening galaxies
Here we attempt to identify LoS with intervening galaxies.
For this purpose, we compare the likelihood of temporal broadening $L(\tau)$ for scenarios with and without intervening galaxies, as well as a scenario that realistically accounts for the probability of a LoS intersecting an additional galaxy.
```
properties_benchmark = { ## our benchmark scenario, fed to procedures as a kwargs-dict. Models for the different regions are provided as lists (to allow multiple models in the same scenario, e. g. several types of progenitors; use mixed models only when you know what you are doing)
'redshift' : 0.1, ## Scenario must come either with a redshift or a pair of telescope and redshift population
'IGM' : ['primordial'], ## constrained numerical simulation of the IGM (more info in Hackstein et al. 2018, 2019 & 2020 )
'Host' : ['Rodrigues18'], ## ensemble of host galaxies according to Rodrigues et al. 2018
# 'Inter' : ['Rodrigues18'], ## same ensemble for intervening galaxies
'Local' : ['Piro18_wind'], ## local environment of magnetar according to Piro & Gaensler 2018
# 'N_inter' : True, ## if N_Inter = True, then intervening galaxies are considered realistically, i. e. according to the expected number of intervened LoS N_inter
'f_IGM' : 0.9, ## considering baryon content f_IGM=0.9
}
## define our benchmark scenario without intervening galaxies
scenario_nointer = Scenario( **properties_benchmark )
## only LoS with a single intervening galaxy at random redshift, according to prior
scenario_inter = Scenario( Inter='Rodrigues18', **properties_benchmark )
## realistic mix of LoS with and without intervening galaxies, according to expectation from intersection probability
scenario_realistic = Scenario( N_inter=True, Inter='Rodrigues18', **properties_benchmark )
```
### compare likelihoods
First, we compare the distribution of $\tau$ expected to be observed by ASKAP, CHIME and Parkes.
For easier interpretation we plot the complementary cumulative likelihood $P(>\tau)$, which shows how often $\tau$ is observed above a given value.
```
tau_dist = {
'ASKAP_incoh' : 0.06,
'Parkes' : 0.06,
'CHIME' : 1.8
}
fig, axs = plt.subplots( 1, len(telescopes), figsize=(4*len(telescopes), 3) )
#fig, axs = plt.subplots( len(telescopes), 1, figsize=(4, len(telescopes)* 3) )
population = 'SMD'
scenarios = [scenario_inter, scenario_nointer,scenario_realistic]
scenario_labels = ['only intervening', 'no intervening', 'realistic']
linestyles = ['-.', ':', '-']
for telescope, ax in zip( telescopes, axs):
for i_s, scenario in enumerate(scenarios):
tmp = Scenario( population=population, telescope=telescope, **scenario.Properties( identifier=False ) )
L = GetLikelihood( 'tau', tmp )
L.Plot( ax=ax, deviation=True, label=scenario_labels[i_s], linestyle=linestyles[i_s], cumulative=-1 )
# P = GetLikelihood_Telescope( measure='tau', telescope=telescope, population='SMD', dev=True, **scenario )
# PlotLikelihood( *P, ax=ax, label=scenario_labels[i_s], linestyle=linestyles[i_s], measure='tau' , cumulative=-1 )#density=True )
ax.set_title(labels[telescope], fontsize=18)
ax.set_ylim(1e-3,2)
ax.set_xlim(1e-3, 1e2)
if telescope == telescopes[0]:
fig.legend(loc='center', bbox_to_anchor= (.5, 1.01), ncol=3, borderaxespad=0, frameon=False, fontsize=16, handlelength=3 )
ax.set_xscale('log')
ax.set_yscale('log')
ax.vlines( tau_dist[telescope], 1e-4,10 )
# AllSidesTicks(ax)
#for i in range(3):
# axs[i].legend(fontsize=16, loc=4+i)
#axs[1].legend(fontsize=14, loc=1)
fig.tight_layout()
```
### Bayes factor & Posterior
Calculate the Bayes factor $\mathcal{B}$ as the ratio of the two likelihood functions shown above.
To obtain the posterior likelihood $L(\tau)$ for a $\tau$ to mark a LoS with an intervening galaxy, we have to multiply $\mathcal{B}$ by the ratio of prior likelihoods $\pi_{\rm inter}$, i. e. the ratio of the expected number of LoS with and without intervening galaxies.
This amount can be found via
$$
\pi_{\rm inter} = \int N_{\rm Inter}(z) \pi(z) \text{d}z .
$$
For $L(\tau)>10^2$, the LoS is intersected by an intervening galaxy with >99% certainty.
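As a toy illustration of this computation (the numbers below are made up; in the analysis that follows the likelihoods come from `GetLikelihood` and the prior from the redshift distribution):
```
import numpy as np

L_inter = np.array([1e-3, 1e-2, 1e-1, 9e-1])    # L(tau | intervening galaxy)
L_nointer = np.array([9e-1, 1e-1, 1e-2, 1e-4])  # L(tau | no intervening galaxy)
pi_inter = 0.02                                 # prior fraction of intervened LoS

B = L_inter / L_nointer                   # Bayes factor per tau bin
posterior_odds = B * pi_inter / (1 - pi_inter)
print(posterior_odds)
print(posterior_odds > 1e2)               # decisive evidence for an intervening galaxy
```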
```
## first determine amount of intervened LoS
telescope = 'Parkes'
population = 'SMD'
for telescope in telescopes:
scenario = Scenario( telescope=telescope, population=population )
L = GetLikelihood( 'z', scenario )
# Pz = GetLikelihood_Redshift( telescope=telescope, population=population )
N_Inter = np.cumsum(nInter(redshift=redshift_range[-1]))
pi_inter = np.sum( N_Inter * L.Probability() )
##plot results
fig, ax = plt.subplots()
L.Plot( ax=ax, label=r"$\pi$" )
# PlotLikelihood( *Pz, log=False, measure='z', ax=ax, label=r"$\pi$" )
ax.plot( L.x_central(), N_Inter, label=r"$N_{\rm Inter}$" )
print( "{}: {:.1f}% of LoS have intervening galaxy".format( telescope, 100*pi_inter ) )
ax.set_ylabel('')
ax.legend()
plt.show()
## Compute and plot posterior likelihood of $\tau$ to indicate an intervening galaxy
fig, ax = plt.subplots( figsize=(5,3))
population = 'SMD'
#for telescope, ax in zip( telescopes, axs):
for telescope, color, linestyle in zip(telescopes, colors_telescope, linestyles_population):
L_i = GetLikelihood( 'tau', Scenario( population=population, telescope=telescope, **scenario_inter.Properties( identifier=False ) ) )
L = GetLikelihood( 'tau', Scenario( population=population, telescope=telescope, **scenario_nointer.Properties( identifier=False ) ) )
## force both P to same range
## give both P ranges the same min and max
x_max = np.min( [L.x[-1],L_i.x[-1]] )
x_min = np.max( [L.x[0],L_i.x[0]] )
L_i.Measureable( min=x_min, max=x_max )
L.Measureable( min=x_min, max=x_max, bins=L_i.P.size ) ### use identical bins
## check if successful
if not np.all(L.x == L_i.x):
print("try harder!")
print(P[1], P_i[1])
break
# """
B = BayesFactors( P1=L_i.P, P2=L.P )
dev_B = np.sqrt( L_i.dev**2 + L.dev**2 )
## obtain prior
Lz = GetLikelihood( 'z', Scenario( population=population, telescope=telescope ) )
# Pz = GetLikelihood_Redshift( telescope=telescope, population=population )
N_Inter = np.cumsum(nInter())
pi_inter = np.sum( N_Inter * Lz.Probability() )
## compute posterior
B *= pi_inter/(1-pi_inter)
try:
i_tau = first( range(len(B)), lambda i: B[i] > 1e2 )
except:
print( "could not find L>100")
## highest value of P_nointer>0
i_tau = -1 ### by construction, last value
# print( "B_last {} at {}".format( B[i_tau],i_tau) )
tau_decisive = L.x[1:][i_tau]
print( "{}: decisive for intervening galaxies: tau>{:.3f} ms".format(telescope,tau_decisive))
## better aesthetics for results with P = 0
B[B==1] = B[B!=1][-1] * 10.**np.arange(sum(B==1))
ax.errorbar( L.x[1:], B, yerr=B*dev_B, label=r"%s, $\tau_{\rm decisive} = %.2f$ ms" % (labels[telescope], tau_decisive), color=color, linestyle=linestyle )
ax.set_xlabel( r"$\tau$ / ms", fontdict={'size':18 } )
ax.set_ylabel( r"$L$", fontdict={'size':18 } )
# ax.set_ylabel( r"$\mathcal{B}$", fontdict={'size':18 } )
## compute how many LoS with intervening galaxies are identified / false positives
### reload the shrinked likelihood functions
L_i = GetLikelihood( 'tau', Scenario( population=population, telescope=telescope, **scenario_inter.Properties( identifier=False ) ) )
L = GetLikelihood( 'tau', Scenario( population=population, telescope=telescope, **scenario_nointer.Properties( identifier=False ) ) )
#P_i, x_i = GetLikelihood_Telescope( measure='tau', telescope=telescope, population='SMD', **scenario_inter )
#P, x = GetLikelihood_Telescope( measure='tau', telescope=telescope, population='SMD', **scenario_nointer )
i_tau = first( range(len(L_i.P)), lambda i: L_i.x[i] > tau_decisive )
print( "%s, %.6f %% of interveners identified" % ( telescope, 100*np.sum( L_i.Probability()[i_tau:] ) ) )
try:
i_tau = first( range(len(L.P)), lambda i: L.x[i] > tau_decisive )
except: ## fails, if chosen highest value of noInter
i_tau = -1
print( "%s, %.6f %% of others give false positives" % ( telescope, 100*np.sum( L.Probability()[i_tau:] ) ) )
ax.legend(loc='lower right', fontsize=14)
ax.loglog()
#ax.set_xlim(1e-2,1e2)
ax.set_ylim(1e-16,1e11)
PlotLimit(ax=ax, x=ax.get_xlim(), y=[1e2,1e2], lower_limit=True, label='decisive', shift_text_vertical=3e3, shift_text_horizontal=-0.95)
ax.tick_params(axis='both', which='major', labelsize=16)
#AllSidesTicks(ax)
# """
```
### compare to FRBcat
How many FRBs in FRBcat show $\tau > \tau_{\rm dist}$?
How many do we expect?
```
tau_dist = {
'ASKAP_incoh' : 0.06,
'Parkes' : 0.06,
'CHIME' : 1.8
}
population='SMD'
for telescope in telescopes:
FRBs = GetFRBcat( telescopes=[telescope])
N_tau = sum(FRBs['tau'] > tau_dist[telescope])
N_tot = len(FRBs)
print("{}: {} of {} > {} ms, {:.2f}%".format(telescope, N_tau, N_tot, tau_dist[telescope], 100*N_tau/N_tot))
scenario = Scenario( telescope=telescope, population=population, **scenario_realistic.Properties( identifier=False ) )
L = GetLikelihood( 'tau', scenario )
# L = GetLikelihood_Telescope( measure='tau', telescope=telescope, population=population, **scenario_inter_realistic)
ix = first( range(len(L.P)), lambda i: L.x[i] >= tau_dist[telescope] )
# ix = np.where(L.x >= tau_dist[telescope])[0][0]
print( "expected: {:.2f} % > {} ms".format( 100*L.Cumulative(-1)[ix], tau_dist[telescope] ) )
L.scenario.Key(measure='tau')
```
### How many tau > 0.06 ms observed by Parkes can intervening galaxies account for?
```
pi_I = 0.0192 ## expected amount of LoS with tau > 0.06 ms (all from intervening)
P_I = 0.4815 ## observed amount of tau > 0.06 ms, derived above
print( "{:.2f} %".format(100*pi_I/P_I) )
```
What about the other regions?
### outer scale of turbulence $L_0$
The observed values of $\tau$ from the IGM depend strongly on the choice of $L_0$, whose true value is only loosely constrained, ranging from a few parsecs to the Hubble scale.
In our estimates, we assume a constant $L_0 = 1 ~\rm Mpc$, a value expected from results by Ryu et al. 2008.
The choice of a constant $L_0$ allows for different choices in post-processing: simply add an "L0" keyword with the required value in kpc.
```
scenario_IGM = {
'IGM' : ['primordial'],
}
scenario_ref = Scenario( telescope='Parkes', population='SMD', **scenario_realistic.Properties( identifier=False ) )
#scenario_ref = scenario_IGM
tmp = scenario_ref.copy()
tmp.IGM_outer_scale = 0.005
measure='tau'
cumulative = -1
fig, ax = plt.subplots()
L = GetLikelihood( measure, scenario_ref )
L.Plot( ax=ax, cumulative=cumulative )
#PlotLikelihood( *L, measure=measure, ax=ax, cumulative=cumulative)
try:
ix = first( range(len(L.P)), lambda i: L.x[i] > 0.06 )
print( "{:.2f} % > 0.06 ms for L0 = 1 Mpc".format( 100*L.Cumulative(-1)[ix] ) )
except:
print( "0 > 0.06 ms for L0 = 1 Mpc")
L = GetLikelihood( measure=measure, scenario=tmp, force=tmp.IGM_outer_scale<1, ) ### L0 < 1 kpc all have same keyword in likelihood file !!! CHANGE THAT
L.Plot( ax=ax, cumulative=cumulative )
#PlotLikelihood( *L, measure=measure, ax=ax, cumulative=cumulative)
ix = first( range(len(L.P)), lambda i: L.x[i] > 0.06 )
print( "{:.2f} % > 0.06 ms for L0 = {} kpc".format( 100*np.cumsum((L[0]*np.diff(L[1]))[::-1])[::-1][ix], tmp['L0'] ) )
ax.set_ylim(1e-3,1)
```
The results above show that very low values of $L_0 < 5 ~\rm pc$ would be required for the IGM.
It is thus more likely that a different model for the source environment can explain the high fraction of $\tau > 0.06 ~\rm ms$ observed by Parkes.
# _*Qiskit Finance: Pricing Asian Barrier Spreads*_
The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorials.
***
### Contributors
Stefan Woerner<sup>[1]</sup>, Daniel Egger<sup>[1]</sup>
### Affiliation
- <sup>[1]</sup>IBMQ
### Introduction
<br>
An Asian barrier spread is a combination of 3 different option types, and as such, combines multiple possible features that the Qiskit Finance option pricing framework supports:
- <a href="https://www.investopedia.com/terms/a/asianoption.asp">Asian option</a>: The payoff depends on the average price over the considered time horizon.
- <a href="https://www.investopedia.com/terms/b/barrieroption.asp">Barrier Option</a>: The payoff is zero if a certain threshold is exceeded at any time within the considered time horizon.
- <a href="https://www.investopedia.com/terms/b/bullspread.asp">(Bull) Spread</a>: The payoff follows a piecewise linear function (depending on the average price) starting at zero, increasing linear, staying constant.
Suppose strike prices $K_1 < K_2$ and time periods $t=1,2$, with corresponding spot prices $(S_1, S_2)$ following a given multivariate distribution (e.g. generated by some stochastic process), and a barrier threshold $B>0$.
The corresponding payoff function is defined as:
<br>
<br>
$$
P(S_1, S_2) =
\begin{cases}
\min\left\{\max\left\{\frac{1}{2}(S_1 + S_2) - K_1, 0\right\}, K_2 - K_1\right\}, & \text{ if } S_1, S_2 \leq B \\
0, & \text{otherwise.}
\end{cases}
$$
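For concreteness, a direct numpy transcription of this payoff could look as follows (the values of $K_1$, $K_2$ and $B$ here are only examples and are not the parameters used later in this notebook):
```
import numpy as np

def payoff(S1, S2, K1=1.5, K2=2.5, B=2.6):
    # bull spread on the average price, knocked out if either price exceeds the barrier B
    spread = np.minimum(np.maximum(0.5 * (S1 + S2) - K1, 0.0), K2 - K1)
    return np.where((S1 <= B) & (S2 <= B), spread, 0.0)

print(payoff(np.array([2.0, 3.0]), np.array([2.4, 2.0])))  # [0.7, 0.0]
```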
<br>
In the following, a quantum algorithm based on amplitude estimation is used to estimate the expected payoff, i.e., the fair price before discounting, for the option:
<br>
<br>
$$\mathbb{E}\left[ P(S_1, S_2) \right].$$
<br>
The approximation of the objective function and a general introduction to option pricing and risk analysis on quantum computers are given in the following papers:
- <a href="https://arxiv.org/abs/1806.06893">Quantum Risk Analysis. Woerner, Egger. 2018.</a>
- <a href="https://arxiv.org/abs/1905.02666">Option Pricing using Quantum Computers. Stamatopoulos et al. 2019.</a>
```
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from scipy.interpolate import griddata
%matplotlib inline
import numpy as np
from qiskit import QuantumRegister, QuantumCircuit, BasicAer, execute
from qiskit.aqua.algorithms import AmplitudeEstimation
from qiskit.aqua.circuits import WeightedSumOperator, FixedValueComparator as Comparator
from qiskit.aqua.components.uncertainty_problems import UnivariatePiecewiseLinearObjective as PwlObjective
from qiskit.aqua.components.uncertainty_problems import MultivariateProblem
from qiskit.aqua.components.uncertainty_models import MultivariateLogNormalDistribution
```
### Uncertainty Model
We construct a circuit factory to load a multivariate log-normal random distribution into a quantum state on $n$ qubits.
For every dimension $j = 1,\ldots,d$, the distribution is truncated to a given interval $[low_j, high_j]$ and discretized using $2^{n_j}$ grid points, where $n_j$ denotes the number of qubits used to represent dimension $j$, i.e., $n_1+\ldots+n_d = n$.
The unitary operator corresponding to the circuit factory implements the following:
$$\big|0\rangle_{n} \mapsto \big|\psi\rangle_{n} = \sum_{i_1,\ldots,i_d} \sqrt{p_{i_1\ldots i_d}}\big|i_1\rangle_{n_1}\ldots\big|i_d\rangle_{n_d},$$
where $p_{i_1\ldots i_d}$ denote the probabilities corresponding to the truncated and discretized distribution and where $i_j$ is mapped to the right interval using the affine map:
$$ \{0, \ldots, 2^{n_j}-1\} \ni i_j \mapsto \frac{high_j - low_j}{2^{n_j} - 1} * i_j + low_j \in [low_j, high_j].$$
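As a quick illustration of this affine map (the values of $n_j$, $low_j$ and $high_j$ below are arbitrary, not those used later):
```
import numpy as np

n_j = 3                      # qubits for this dimension -> 2**3 = 8 grid points
low_j, high_j = 1.0, 3.0
i_j = np.arange(2**n_j)
grid = (high_j - low_j) / (2**n_j - 1) * i_j + low_j
print(grid)                  # equidistant grid from low_j to high_j
```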
For simplicity, we assume both stock prices are independent and identically distributed.
This assumption just simplifies the parametrization below and can be easily relaxed to more complex and also correlated multivariate distributions.
The only important assumption for the current implementation is that the discretization grid of the different dimensions has the same step size.
```
# number of qubits per dimension to represent the uncertainty
num_uncertainty_qubits = 2
# parameters for considered random distribution
S = 2.0 # initial spot price
vol = 0.4 # volatility of 40%
r = 0.05 # annual interest rate of 5%
T = 40 / 365 # 40 days to maturity
# resulting parameters for log-normal distribution
mu = ((r - 0.5 * vol**2) * T + np.log(S))
sigma = vol * np.sqrt(T)
mean = np.exp(mu + sigma**2/2)
variance = (np.exp(sigma**2) - 1) * np.exp(2*mu + sigma**2)
stddev = np.sqrt(variance)
# lowest and highest value considered for the spot price; in between, an equidistant discretization is considered.
low = np.maximum(0, mean - 3*stddev)
high = mean + 3*stddev
# map to higher dimensional distribution
# for simplicity assuming dimensions are independent and identically distributed)
dimension = 2
num_qubits=[num_uncertainty_qubits]*dimension
low=low*np.ones(dimension)
high=high*np.ones(dimension)
mu=mu*np.ones(dimension)
cov=sigma**2*np.eye(dimension)
# construct circuit factory
u = MultivariateLogNormalDistribution(num_qubits=num_qubits, low=low, high=high, mu=mu, cov=cov)
# plot PDF of uncertainty model
x = [ v[0] for v in u.values ]
y = [ v[1] for v in u.values ]
z = u.probabilities
#z = map(float, z)
#z = list(map(float, z))
resolution = np.array([2**n for n in num_qubits])*1j
grid_x, grid_y = np.mgrid[min(x):max(x):resolution[0], min(y):max(y):resolution[1]]
grid_z = griddata((x, y), z, (grid_x, grid_y))
fig = plt.figure(figsize=(10, 8))
ax = fig.gca(projection='3d')
ax.plot_surface(grid_x, grid_y, grid_z, cmap=plt.cm.Spectral)
ax.set_xlabel('Spot Price $S_1$ (\$)', size=15)
ax.set_ylabel('Spot Price $S_2$ (\$)', size=15)
ax.set_zlabel('Probability (\%)', size=15)
plt.show()
```
### Payoff Function
For simplicity, we consider the sum of the spot prices instead of their average.
The result can be transformed to the average by just dividing it by 2.
The payoff function equals zero as long as the sum of the spot prices $(S_1 + S_2)$ is less than the strike price $K_1$ and then increases linearly until the sum of the spot prices reaches $K_2$.
The payoff then stays constant at $K_2 - K_1$, unless either of the two spot prices exceeds the barrier threshold $B$, in which case the payoff immediately drops to zero.
The implementation first uses a weighted sum operator to compute the sum of the spot prices into an ancilla register, and then uses a comparator, that flips an ancilla qubit from $\big|0\rangle$ to $\big|1\rangle$ if $(S_1 + S_2) \geq K_1$ and another comparator/ancilla to capture the case that $(S_1 + S_2) \geq K_2$.
These ancillas are used to control the linear part of the payoff function.
In addition, we add another ancilla variable for each time step and use additional comparators to check whether $S_1$, respectively $S_2$, exceed the barrier threshold $B$. The payoff function is only applied if $S_1, S_2 \leq B$.
The linear part itself is approximated as follows.
We exploit the fact that $\sin^2(y + \pi/4) \approx y + 1/2$ for small $|y|$.
Thus, for a given approximation scaling factor $c_{approx} \in [0, 1]$ and $x \in [0, 1]$ we consider
$$ \sin^2( \pi/2 * c_{approx} * ( x - 1/2 ) + \pi/4) \approx \pi/2 * c_{approx} * ( x - 1/2 ) + 1/2 $$ for small $c_{approx}$.
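A quick numerical check of this approximation (with an arbitrary choice of $c_{approx}$):
```
import numpy as np

c_approx = 0.25
x = np.linspace(0, 1, 5)
lhs = np.sin(np.pi / 2 * c_approx * (x - 0.5) + np.pi / 4) ** 2
rhs = np.pi / 2 * c_approx * (x - 0.5) + 0.5
print(np.round(lhs, 4))
print(np.round(rhs, 4))      # close to lhs for small c_approx
```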
We can easily construct an operator that acts as
$$\big|x\rangle \big|0\rangle \mapsto \big|x\rangle \left( \cos(a*x+b) \big|0\rangle + \sin(a*x+b) \big|1\rangle \right),$$
using controlled Y-rotations.
Eventually, we are interested in the probability of measuring $\big|1\rangle$ in the last qubit, which corresponds to
$\sin^2(a*x+b)$.
Together with the approximation above, this allows to approximate the values of interest.
The smaller we choose $c_{approx}$, the better the approximation.
However, since we are then estimating a property scaled by $c_{approx}$, the number of evaluation qubits $m$ needs to be adjusted accordingly.
For more details on the approximation, we refer to:
<a href="https://arxiv.org/abs/1806.06893">Quantum Risk Analysis. Woerner, Egger. 2018.</a>
Since the weighted sum operator (in its current implementation) can only sum up integers, we need to map from the original ranges to the representable range to estimate the result, and reverse this mapping before interpreting the result. The mapping essentially corresponds to the affine mapping described in the context of the uncertainty model above.
```
# determine number of qubits required to represent total loss
weights = []
for n in num_qubits:
for i in range(n):
weights += [2**i]
n_s = WeightedSumOperator.get_required_sum_qubits(weights)
# create circuit factory
agg = WeightedSumOperator(sum(num_qubits), weights)
# set the strike price (should be within the low and the high value of the uncertainty)
strike_price_1 = 3
strike_price_2 = 4
# set the barrier threshold
barrier = 2.5
# map strike prices and barrier threshold from [low, high] to {0, ..., 2^n-1}
max_value = 2**n_s - 1
low_ = low[0]
high_ = high[0]
mapped_strike_price_1 = (strike_price_1 - dimension*low_) / (high_ - low_) * (2**num_uncertainty_qubits - 1)
mapped_strike_price_2 = (strike_price_2 - dimension*low_) / (high_ - low_) * (2**num_uncertainty_qubits - 1)
mapped_barrier = (barrier - low) / (high - low) * (2**num_uncertainty_qubits - 1)
# condition and condition result
conditions = []
barrier_thresholds = [2]*dimension
for i in range(dimension):
# target dimension of random distribution and corresponding condition (which is required to be True)
conditions += [(i, Comparator(num_qubits[i], mapped_barrier[i] + 1, geq=False))]
# set the approximation scaling for the payoff function
c_approx = 0.25
# setup piecewise linear objective function
breakpoints = [0, mapped_strike_price_1, mapped_strike_price_2]
slopes = [0, 1, 0]
offsets = [0, 0, mapped_strike_price_2 - mapped_strike_price_1]
f_min = 0
f_max = mapped_strike_price_2 - mapped_strike_price_1
bull_spread_objective = PwlObjective(
n_s,
0,
max_value,
breakpoints,
slopes,
offsets,
f_min,
f_max,
c_approx
)
# define overall multivariate problem
asian_barrier_spread = MultivariateProblem(u, agg, bull_spread_objective, conditions=conditions)
# plot exact payoff function
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
x = np.linspace(sum(low), sum(high))
y = (x <= 2*barrier)*np.minimum(np.maximum(0, x - strike_price_1), strike_price_2 - strike_price_1)
plt.plot(x, y, 'r-')
plt.grid()
plt.title('Payoff Function (for $S_1 = S_2$)', size=15)
plt.xlabel('Sum of Spot Prices ($S_1 + S_2)$', size=15)
plt.ylabel('Payoff', size=15)
plt.xticks(size=15, rotation=90)
plt.yticks(size=15)
# plot contour of payoff function with respect to both time steps, including barrier
plt.subplot(1,2,2)
z = np.zeros((17, 17))
x = np.linspace(low[0], high[0], 17)
y = np.linspace(low[1], high[1], 17)
for i, x_ in enumerate(x):
for j, y_ in enumerate(y):
z[i, j] = np.minimum(np.maximum(0, x_ + y_ - strike_price_1), strike_price_2 - strike_price_1)
if x_ > barrier or y_ > barrier:
z[i, j] = 0
plt.title('Payoff Function', size =15)
plt.contourf(x, y, z)
plt.colorbar()
plt.xlabel('Spot Price $S_1$', size=15)
plt.ylabel('Spot Price $S_2$', size=15)
plt.xticks(size=15)
plt.yticks(size=15)
plt.show()
# evaluate exact expected value
sum_values = np.sum(u.values, axis=1)
payoff = np.minimum(np.maximum(sum_values - strike_price_1, 0), strike_price_2 - strike_price_1)
leq_barrier = [ np.max(v) <= barrier for v in u.values ]
exact_value = np.dot(u.probabilities[leq_barrier], payoff[leq_barrier])
print('exact expected value:\t%.4f' % exact_value)
```
### Evaluate Expected Payoff
We first verify the quantum circuit by simulating it and analyzing the resulting probability of measuring the $|1\rangle$ state in the objective qubit.
```
num_req_qubits = asian_barrier_spread.num_target_qubits
num_req_ancillas = asian_barrier_spread.required_ancillas()
q = QuantumRegister(num_req_qubits, name='q')
q_a = QuantumRegister(num_req_ancillas, name='q_a')
qc = QuantumCircuit(q, q_a)
asian_barrier_spread.build(qc, q, q_a)
print('state qubits: ', num_req_qubits)
print('circuit width:', qc.width())
print('circuit depth:', qc.depth())
job = execute(qc, backend=BasicAer.get_backend('statevector_simulator'))
# evaluate resulting statevector
value = 0
for i, a in enumerate(job.result().get_statevector()):
b = ('{0:0%sb}' % asian_barrier_spread.num_target_qubits).format(i)[-asian_barrier_spread.num_target_qubits:]
prob = np.abs(a)**2
if prob > 1e-4 and b[0] == '1':
value += prob
# all other states should have zero probability due to ancilla qubits
if i > 2**num_req_qubits:
break
# map value to original range
mapped_value = asian_barrier_spread.value_to_estimation(value) / (2**num_uncertainty_qubits - 1) * (high_ - low_)
print('Exact Operator Value: %.4f' % value)
print('Mapped Operator value: %.4f' % mapped_value)
print('Exact Expected Payoff: %.4f' % exact_value)
```
Next we use amplitude estimation to estimate the expected payoff.
Note that this can take a while since we are simulating a large number of qubits. The way we designed the operator (asian_barrier_spread) implies that the number of actual state qubits is significantly smaller, thus helping to reduce the overall simulation time a bit.
```
# set number of evaluation qubits (=log(samples))
m = 3
# construct amplitude estimation
ae = AmplitudeEstimation(m, asian_barrier_spread)
# result = ae.run(quantum_instance=BasicAer.get_backend('qasm_simulator'), shots=100)
result = ae.run(quantum_instance=BasicAer.get_backend('statevector_simulator'))
print('Exact value: \t%.4f' % exact_value)
print('Estimated value:\t%.4f' % (result['estimation'] / (2**num_uncertainty_qubits - 1) * (high_ - low_)))
print('Probability: \t%.4f' % result['max_probability'])
# plot estimated values for "a"
plt.bar(result['values'], result['probabilities'], width=0.5/len(result['probabilities']))
plt.xticks([0, 0.25, 0.5, 0.75, 1], size=15)
plt.yticks([0, 0.25, 0.5, 0.75, 1], size=15)
plt.title('"a" Value', size=15)
plt.ylabel('Probability', size=15)
plt.ylim((0,1))
plt.grid()
plt.show()
# plot estimated values for option price (after re-scaling and reversing the c_approx-transformation)
mapped_values = np.array(result['mapped_values']) / (2**num_uncertainty_qubits - 1) * (high_ - low_)
plt.bar(mapped_values, result['probabilities'], width=1/len(result['probabilities']))
plt.plot([exact_value, exact_value], [0,1], 'r--', linewidth=2)
plt.xticks(size=15)
plt.yticks([0, 0.25, 0.5, 0.75, 1], size=15)
plt.title('Estimated Option Price', size=15)
plt.ylabel('Probability', size=15)
plt.ylim((0,1))
plt.grid()
plt.show()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
# Deep Markov Model
## Introduction
We're going to build a deep probabilistic model for sequential data: the deep markov model. The particular dataset we want to model is composed of snippets of polyphonic music. Each time slice in a sequence spans a quarter note and is represented by an 88-dimensional binary vector that encodes the notes at that time step.
Since music is (obviously) temporally coherent, we need a model that can represent complex time dependencies in the observed data. It would not, for example, be appropriate to consider a model in which the notes at a particular time step are independent of the notes at previous time steps. One way to do this is to build a latent variable model in which the variability and temporal structure of the observations is controlled by the dynamics of the latent variables.
One particular realization of this idea is a markov model, in which we have a chain of latent variables, with each latent variable in the chain conditioned on the previous latent variable. This is a powerful approach, but if we want to represent complex data with complex (and in this case unknown) dynamics, we would like our model to be sufficiently flexible to accommodate dynamics that are potentially highly non-linear. Thus a deep markov model: we allow for the transition probabilities governing the dynamics of the latent variables as well as the emission probabilities that govern how the observations are generated by the latent dynamics to be parameterized by (non-linear) neural networks.
The specific model we're going to implement is based on the following reference:
[1] `Structured Inference Networks for Nonlinear State Space Models`,<br />
Rahul G. Krishnan, Uri Shalit, David Sontag
Please note that while we do not assume that the reader of this tutorial has read the reference, it's definitely a good place to look for a more comprehensive discussion of the deep markov model in the context of other time series models.
We've described the model, but how do we go about training it? The inference strategy we're going to use is variational inference, which requires specifying a parameterized family of distributions that can be used to approximate the posterior distribution over the latent random variables. Given the non-linearities and complex time-dependencies inherent in our model and data, we expect the exact posterior to be highly non-trivial. So we're going to need a flexible family of variational distributions if we hope to learn a good model. Happily, together PyTorch and Pyro provide all the necessary ingredients. As we will see, assembling them will be straightforward. Let's get to work.
## The Model
A convenient way to describe the high-level structure of the model is with a graphical model.
Here, we've rolled out the model assuming that the sequence of observations is of length three: $\{{\bf x}_1, {\bf x}_2, {\bf x}_3\}$. Mirroring the sequence of observations we also have a sequence of latent random variables: $\{{\bf z}_1, {\bf z}_2, {\bf z}_3\}$. The figure encodes the structure of the model. The corresponding joint distribution is
$$p({\bf x}_{123} , {\bf z}_{123})=p({\bf x}_1|{\bf z}_1)p({\bf x}_2|{\bf z}_2)p({\bf x}_3|{\bf z}_3)p({\bf z}_1)p({\bf z}_2|{\bf z}_1)p({\bf z}_3|{\bf z}_2)$$
Conditioned on ${\bf z}_t$, each observation ${\bf x}_t$ is independent of the other observations. This can be read off from the fact that each ${\bf x}_t$ only depends on the corresponding latent ${\bf z}_t$, as indicated by the downward pointing arrows. We can also read off the markov property of the model: each latent ${\bf z}_t$, when conditioned on the previous latent ${\bf z}_{t-1}$, is independent of all previous latents $\{ {\bf z}_{t-2}, {\bf z}_{t-3}, ...\}$. This effectively says that everything one needs to know about the state of the system at time $t$ is encapsulated by the latent ${\bf z}_{t}$.
We will assume that the observation likelihoods, i.e. the probability distributions $p({{\bf x}_t}|{{\bf z}_t})$ that control the observations, are given by the bernoulli distribution. This is an appropriate choice since our observations are all 0 or 1. For the probability distributions $p({\bf z}_t|{\bf z}_{t-1})$ that control the latent dynamics, we choose (conditional) gaussian distributions with diagonal covariances. This is reasonable since we assume that the latent space is continuous.
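As a toy illustration of these two distribution families (the dimensions and parameter values below are arbitrary placeholders for the neural networks defined next, not part of the model code):
```python
import torch
import pyro.distributions as dist

z_prev = torch.zeros(2)                   # latent from the previous time step
loc, scale = z_prev, torch.ones(2)        # stand-ins for the transition network output
z_t = dist.Normal(loc, scale).sample()    # p(z_t | z_{t-1}): diagonal gaussian
probs = torch.sigmoid(torch.randn(88))    # stand-in for the emission network output
x_t = dist.Bernoulli(probs).sample()      # p(x_t | z_t): 88-dimensional binary vector
print(z_t.shape, x_t.shape)
```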
The solid black squares represent non-linear functions parameterized by neural networks. This is what makes this a _deep_ markov model. Note that the black squares appear in two different places: in between pairs of latents and in between latents and observations. The non-linear function that connects the latent variables ('Trans' in Fig. 1) controls the dynamics of the latent variables. Since we allow the conditional probability distribution of ${\bf z}_{t}$ to depend on ${\bf z}_{t-1}$ in a complex way, we will be able to capture complex dynamics in our model. Similarly, the non-linear function that connects the latent variables to the observations ('Emit' in Fig. 1) controls how the observations depend on the latent dynamics.
Some additional notes:
- we can freely choose the dimension of the latent space to suit the problem at hand: small latent spaces for simple problems and larger latent spaces for problems with complex dynamics
- note the parameter ${\bf z}_0$ in Fig. 1. as will become more apparent from the code, this is just a convenient way for us to parameterize the probability distribution $p({\bf z}_1)$ for the first time step, where there are no previous latents to condition on.
### The Gated Transition and the Emitter
Without further ado, let's start writing some code. We first define the two PyTorch Modules that correspond to the black squares in Fig. 1. First the emission function:
```python
class Emitter(nn.Module):
"""
Parameterizes the bernoulli observation likelihood p(x_t | z_t)
"""
def __init__(self, input_dim, z_dim, emission_dim):
super().__init__()
# initialize the three linear transformations used in the neural network
self.lin_z_to_hidden = nn.Linear(z_dim, emission_dim)
self.lin_hidden_to_hidden = nn.Linear(emission_dim, emission_dim)
self.lin_hidden_to_input = nn.Linear(emission_dim, input_dim)
# initialize the two non-linearities used in the neural network
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
def forward(self, z_t):
"""
Given the latent z at a particular time step t we return the vector of
probabilities `ps` that parameterizes the bernoulli distribution p(x_t|z_t)
"""
h1 = self.relu(self.lin_z_to_hidden(z_t))
h2 = self.relu(self.lin_hidden_to_hidden(h1))
ps = self.sigmoid(self.lin_hidden_to_input(h2))
return ps
```
In the constructor we define the linear transformations that will be used in our emission function. Note that `emission_dim` is the number of hidden units in the neural network. We also define the non-linearities that we will be using. The forward call defines the computational flow of the function. We take in the latent ${\bf z}_{t}$ as input and do a sequence of transformations until we obtain a vector of length 88 that defines the emission probabilities of our bernoulli likelihood. Because of the sigmoid, each element of `ps` will be between 0 and 1 and will define a valid probability. Taken together the elements of `ps` encode which notes we expect to observe at time $t$ given the state of the system (as encoded in ${\bf z}_{t}$).
Now we define the gated transition function:
```python
class GatedTransition(nn.Module):
"""
Parameterizes the gaussian latent transition probability p(z_t | z_{t-1})
See section 5 in the reference for comparison.
"""
def __init__(self, z_dim, transition_dim):
super().__init__()
# initialize the six linear transformations used in the neural network
self.lin_gate_z_to_hidden = nn.Linear(z_dim, transition_dim)
self.lin_gate_hidden_to_z = nn.Linear(transition_dim, z_dim)
self.lin_proposed_mean_z_to_hidden = nn.Linear(z_dim, transition_dim)
self.lin_proposed_mean_hidden_to_z = nn.Linear(transition_dim, z_dim)
self.lin_sig = nn.Linear(z_dim, z_dim)
self.lin_z_to_loc = nn.Linear(z_dim, z_dim)
# modify the default initialization of lin_z_to_loc
# so that it starts out as the identity function
self.lin_z_to_loc.weight.data = torch.eye(z_dim)
self.lin_z_to_loc.bias.data = torch.zeros(z_dim)
# initialize the three non-linearities used in the neural network
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
self.softplus = nn.Softplus()
def forward(self, z_t_1):
"""
Given the latent z_{t-1} corresponding to the time step t-1
we return the mean and scale vectors that parameterize the
(diagonal) gaussian distribution p(z_t | z_{t-1})
"""
# compute the gating function
_gate = self.relu(self.lin_gate_z_to_hidden(z_t_1))
gate = self.sigmoid(self.lin_gate_hidden_to_z(_gate))
# compute the 'proposed mean'
_proposed_mean = self.relu(self.lin_proposed_mean_z_to_hidden(z_t_1))
proposed_mean = self.lin_proposed_mean_hidden_to_z(_proposed_mean)
# assemble the actual mean used to sample z_t, which mixes
# a linear transformation of z_{t-1} with the proposed mean
# modulated by the gating function
loc = (1 - gate) * self.lin_z_to_loc(z_t_1) + gate * proposed_mean
# compute the scale used to sample z_t, using the proposed
# mean from above as input. the softplus ensures that scale is positive
scale = self.softplus(self.lin_sig(self.relu(proposed_mean)))
# return loc, scale which can be fed into Normal
return loc, scale
```
This mirrors the structure of `Emitter` above, with the difference that the computational flow is a bit more complicated. This is for two reasons. First, the output of `GatedTransition` needs to define a valid (diagonal) gaussian distribution. So we need to output two parameters: the mean `loc`, and the (square root) covariance `scale`. These both need to have the same dimension as the latent space. Second, we don't want to _force_ the dynamics to be non-linear. Thus our mean `loc` is a sum of two terms, only one of which depends non-linearly on the input `z_t_1`. This way we can support both linear and non-linear dynamics (or indeed have the dynamics of part of the latent space be linear, while the remainder of the dynamics is non-linear).
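A quick shape check of the two modules may help make this concrete (this assumes the class definitions above and the usual `torch`/`torch.nn` imports; the dimensions are arbitrary):
```python
import torch

emitter = Emitter(input_dim=88, z_dim=100, emission_dim=100)
trans = GatedTransition(z_dim=100, transition_dim=200)

z = torch.zeros(5, 100)          # a mini-batch of 5 latent vectors
ps = emitter(z)                  # (5, 88) bernoulli probabilities
loc, scale = trans(z)            # each of shape (5, 100)
print(ps.shape, loc.shape, scale.shape)
```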
### Model - a Pyro Stochastic Function
So far everything we've done is pure PyTorch. To finish translating our model into code we need to bring Pyro into the picture. Basically we need to implement the stochastic nodes (i.e. the circles) in Fig. 1. To do this we introduce a callable `model()` that contains the Pyro primitive `pyro.sample`. The `sample` statements will be used to specify the joint distribution over the latents ${\bf z}_{1:T}$. Additionally, the `obs` argument can be used with the `sample` statements to specify how the observations ${\bf x}_{1:T}$ depend on the latents. Before we look at the complete code for `model()`, let's look at a stripped down version that contains the main logic:
```python
def model(...):
z_prev = self.z_0
# sample the latents z and observed x's one time step at a time
for t in range(1, T_max + 1):
# the next two lines of code sample z_t ~ p(z_t | z_{t-1}).
# first compute the parameters of the diagonal gaussian
# distribution p(z_t | z_{t-1})
z_loc, z_scale = self.trans(z_prev)
# then sample z_t according to dist.Normal(z_loc, z_scale)
z_t = pyro.sample("z_%d" % t, dist.Normal(z_loc, z_scale))
# compute the probabilities that parameterize the bernoulli likelihood
emission_probs_t = self.emitter(z_t)
# the next statement instructs pyro to observe x_t according to the
# bernoulli distribution p(x_t|z_t)
pyro.sample("obs_x_%d" % t,
dist.Bernoulli(emission_probs_t),
obs=mini_batch[:, t - 1, :])
# the latent sampled at this time step will be conditioned upon
# in the next time step so keep track of it
z_prev = z_t
```
The first thing we need to do is sample ${\bf z}_1$. Once we've sampled ${\bf z}_1$, we can sample ${\bf z}_2 \sim p({\bf z}_2|{\bf z}_1)$ and so on. This is the logic implemented in the `for` loop. The parameters `z_loc` and `z_scale` that define the probability distributions $p({\bf z}_t|{\bf z}_{t-1})$ are computed using `self.trans`, which is just an instance of the `GatedTransition` module defined above. For the first time step at $t=1$ we condition on `self.z_0`, which is a (trainable) `Parameter`, while for subsequent time steps we condition on the previously drawn latent. Note that each random variable `z_t` is assigned a unique name by the user.
Once we've sampled ${\bf z}_t$ at a given time step, we need to observe the datapoint ${\bf x}_t$. So we pass `z_t` through `self.emitter`, an instance of the `Emitter` module defined above to obtain `emission_probs_t`. Together with the argument `dist.Bernoulli()` in the `sample` statement, these probabilities fully specify the observation likelihood. Finally, we also specify the slice of observed data ${\bf x}_t$: `mini_batch[:, t - 1, :]` using the `obs` argument to `sample`.
This fully specifies our model and encapsulates it in a callable that can be passed to Pyro. Before we move on let's look at the full version of `model()` and go through some of the details we glossed over in our first pass.
```python
def model(self, mini_batch, mini_batch_reversed, mini_batch_mask,
mini_batch_seq_lengths, annealing_factor=1.0):
# this is the number of time steps we need to process in the mini-batch
T_max = mini_batch.size(1)
# register all PyTorch (sub)modules with pyro
# this needs to happen in both the model and guide
pyro.module("dmm", self)
# set z_prev = z_0 to setup the recursive conditioning in p(z_t | z_{t-1})
z_prev = self.z_0.expand(mini_batch.size(0), self.z_0.size(0))
# we enclose all the sample statements in the model in a plate.
# this marks that each datapoint is conditionally independent of the others
with pyro.plate("z_minibatch", len(mini_batch)):
# sample the latents z and observed x's one time step at a time
for t in range(1, T_max + 1):
# the next chunk of code samples z_t ~ p(z_t | z_{t-1})
# note that (both here and elsewhere) we use poutine.scale to take care
# of KL annealing. we use the mask() method to deal with raggedness
# in the observed data (i.e. different sequences in the mini-batch
# have different lengths)
# first compute the parameters of the diagonal gaussian
# distribution p(z_t | z_{t-1})
z_loc, z_scale = self.trans(z_prev)
# then sample z_t according to dist.Normal(z_loc, z_scale).
# note that we use the reshape method so that the univariate
# Normal distribution is treated as a multivariate Normal
# distribution with a diagonal covariance.
with poutine.scale(None, annealing_factor):
z_t = pyro.sample("z_%d" % t,
dist.Normal(z_loc, z_scale)
.mask(mini_batch_mask[:, t - 1:t])
.to_event(1))
# compute the probabilities that parameterize the bernoulli likelihood
emission_probs_t = self.emitter(z_t)
# the next statement instructs pyro to observe x_t according to the
# bernoulli distribution p(x_t|z_t)
pyro.sample("obs_x_%d" % t,
dist.Bernoulli(emission_probs_t)
.mask(mini_batch_mask[:, t - 1:t])
.to_event(1),
obs=mini_batch[:, t - 1, :])
# the latent sampled at this time step will be conditioned upon
# in the next time step so keep track of it
z_prev = z_t
```
The first thing to note is that `model()` takes a number of arguments. For now let's just take a look at `mini_batch` and `mini_batch_mask`. `mini_batch` is a three dimensional tensor, with the first dimension being the batch dimension, the second dimension being the temporal dimension, and the final dimension being the features (88-dimensional in our case). To speed up the code, whenever we run `model` we're going to process an entire mini-batch of sequences (i.e. we're going to take advantage of vectorization).
This is sensible because our model is implicitly defined over a single observed sequence. The probability of a set of sequences is just given by the products of the individual sequence probabilities. In other words, given the parameters of the model the sequences are conditionally independent.
This vectorization introduces some complications because sequences can be of different lengths. This is where `mini_batch_mask` comes in. `mini_batch_mask` is a two dimensional 0/1 mask of dimensions `mini_batch_size` x `T_max`, where `T_max` is the maximum length of any sequence in the mini-batch. This encodes which parts of `mini_batch` are valid observations.
So the first thing we do is grab `T_max`: we have to unroll our model for at least this many time steps. Note that this will result in a lot of 'wasted' computation, since some of the sequences will be shorter than `T_max`, but this is a small price to pay for the big speed-ups that come with vectorization. We just need to make sure that none of the 'wasted' computations 'pollute' our model computation. We accomplish this by passing the mask appropriate to time step $t$ to the `mask` method (which acts on the distribution that needs masking).
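For concreteness, here is one way such a mask can be built from the sequence lengths (in the full example this is handled by the data-loading helpers; the sketch below just shows the idea):
```python
import torch

seq_lengths = torch.tensor([4, 2, 3])   # lengths of the sequences in the mini-batch
T_max = int(seq_lengths.max())
mini_batch_mask = (torch.arange(T_max)[None, :] < seq_lengths[:, None]).float()
print(mini_batch_mask)
# tensor([[1., 1., 1., 1.],
#         [1., 1., 0., 0.],
#         [1., 1., 1., 0.]])
```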
Finally, the line `pyro.module("dmm", self)` is equivalent to a bunch of `pyro.param` statements for each parameter in the model. This lets Pyro know which parameters are part of the model. Just like for the `sample` statement, we give the module a unique name. This name will be incorporated into the name of the `Parameters` in the model. We leave a discussion of the KL annealing factor for later.
## Inference
At this point we've fully specified our model. The next step is to set ourselves up for inference. As mentioned in the introduction, our inference strategy is going to be variational inference (see [SVI Part I](svi_part_i.ipynb) for an introduction). So our next task is to build a family of variational distributions appropriate to doing inference in a deep markov model. However, at this point it's worth emphasizing that nothing about the way we've implemented `model()` ties us to variational inference. In principle we could use _any_ inference strategy available in Pyro. For example, in this particular context one could imagine using some variant of Sequential Monte Carlo (although this is not currently supported in Pyro).
### Guide
The purpose of the guide (i.e. the variational distribution) is to provide a (parameterized) approximation to the exact posterior $p({\bf z}_{1:T}|{\bf x}_{1:T})$. Actually, there's an implicit assumption here which we should make explicit, so let's take a step back.
Suppose our dataset $\mathcal{D}$ consists of $N$ sequences
$\{ {\bf x}_{1:T_1}^1, {\bf x}_{1:T_2}^2, ..., {\bf x}_{1:T_N}^N \}$. Then the posterior we're actually interested in is given by
$p({\bf z}_{1:T_1}^1, {\bf z}_{1:T_2}^2, ..., {\bf z}_{1:T_N}^N | \mathcal{D})$, i.e. we want to infer the latents for _all_ $N$ sequences. Even for small $N$ this is a very high-dimensional distribution that will require a very large number of parameters to specify. In particular if we were to directly parameterize the posterior in this form, the number of parameters required would grow (at least) linearly with $N$. One way to avoid this nasty growth with the size of the dataset is *amortization* (see the analogous discussion in [SVI Part II](svi_part_ii.ipynb)).
#### Aside: Amortization
This works as follows. Instead of introducing variational parameters for each sequence in our dataset, we're going to learn a single parametric function $f({\bf x}_{1:T})$ and work with a variational distribution that has the form $\prod_{n=1}^N q({\bf z}_{1:T_n}^n | f({\bf x}_{1:T_n}^n))$. The function $f(\cdot)$—which basically maps a given observed sequence to a set of variational parameters tailored to that sequence—will need to be sufficiently rich to capture the posterior accurately, but now we can handle large datasets without having to introduce an obscene number of variational parameters.
So our task is to construct the function $f(\cdot)$. Since in our case we need to support variable-length sequences, it's only natural that $f(\cdot)$ have a RNN in the loop. Before we look at the various component parts that make up our $f(\cdot)$ in detail, let's look at a computational graph that encodes the basic structure:
At the bottom of the figure we have our sequence of three observations. These observations will be consumed by a RNN that reads the observations from right to left and outputs three hidden states $\{ {\bf h}_1, {\bf h}_2,{\bf h}_3\}$. Note that this computation is done _before_ we sample any latent variables. Next, each of the hidden states will be fed into a `Combiner` module whose job is to output the mean and covariance of the conditional distribution $q({\bf z}_t | {\bf z}_{t-1}, {\bf x}_{t:T})$, which we take to be given by a diagonal gaussian distribution. (Just like in the model, the conditional structure of ${\bf z}_{1:T}$ in the guide is such that we sample ${\bf z}_t$ forward in time.) In addition to the RNN hidden state, the `Combiner` also takes the latent random variable from the previous time step as input, except for $t=1$, where it instead takes the trainable (variational) parameter ${\bf z}_0^{\rm{q}}$.
#### Aside: Guide Structure
Why do we setup the RNN to consume the observations from right to left? Why not left to right? With this choice our conditional distribution $q({\bf z}_t |...)$ depends on two things:
- the latent ${\bf z}_{t-1}$ from the previous time step; and
- the observations ${\bf x}_{t:T}$, i.e. the current observation together with all future observations
We are free to make other choices; all that is required is that the guide is a properly normalized distribution that plays nice with autograd. This particular choice is motivated by the dependency structure of the true posterior: see reference [1] for a detailed discussion. In brief, while we could, for example, condition on the entire sequence of observations, because of the markov structure of the model everything that we need to know about the previous observations ${\bf x}_{1:t-1}$ is encapsulated by ${\bf z}_{t-1}$. We could condition on more things, but there's no need; and doing so will probably tend to dilute the learning signal. So running the RNN from right to left is the most natural choice for this particular model.
Let's look at the component parts in detail. First, the `Combiner` module:
```python
class Combiner(nn.Module):
"""
Parameterizes q(z_t | z_{t-1}, x_{t:T}), which is the basic building block
of the guide (i.e. the variational distribution). The dependence on x_{t:T} is
through the hidden state of the RNN (see the pytorch module `rnn` below)
"""
def __init__(self, z_dim, rnn_dim):
super().__init__()
# initialize the three linear transformations used in the neural network
self.lin_z_to_hidden = nn.Linear(z_dim, rnn_dim)
self.lin_hidden_to_loc = nn.Linear(rnn_dim, z_dim)
self.lin_hidden_to_scale = nn.Linear(rnn_dim, z_dim)
# initialize the two non-linearities used in the neural network
self.tanh = nn.Tanh()
self.softplus = nn.Softplus()
def forward(self, z_t_1, h_rnn):
"""
Given the latent z at a particular time step t-1 as well as the hidden
state of the RNN h(x_{t:T}) we return the mean and scale vectors that
parameterize the (diagonal) gaussian distribution q(z_t | z_{t-1}, x_{t:T})
"""
# combine the rnn hidden state with a transformed version of z_t_1
h_combined = 0.5 * (self.tanh(self.lin_z_to_hidden(z_t_1)) + h_rnn)
# use the combined hidden state to compute the mean used to sample z_t
loc = self.lin_hidden_to_loc(h_combined)
# use the combined hidden state to compute the scale used to sample z_t
scale = self.softplus(self.lin_hidden_to_scale(h_combined))
# return loc, scale which can be fed into Normal
return loc, scale
```
This module has the same general structure as `Emitter` and `GatedTransition` in the model. The only thing of note is that because the `Combiner` needs to consume two inputs at each time step, it transforms the inputs into a single combined hidden state `h_combined` before it computes the outputs.
Apart from the RNN, we now have all the ingredients we need to construct our guide distribution.
Happily, PyTorch has great built-in RNN modules, so we don't have much work to do here. We'll see where we instantiate the RNN later. Let's instead jump right into the definition of the stochastic function `guide()`.
```python
def guide(self, mini_batch, mini_batch_reversed, mini_batch_mask,
mini_batch_seq_lengths, annealing_factor=1.0):
# this is the number of time steps we need to process in the mini-batch
T_max = mini_batch.size(1)
# register all PyTorch (sub)modules with pyro
pyro.module("dmm", self)
# if on gpu we need the fully broadcast view of the rnn initial state
# to be in contiguous gpu memory
h_0_contig = self.h_0.expand(1, mini_batch.size(0),
self.rnn.hidden_size).contiguous()
# push the observed x's through the rnn;
# rnn_output contains the hidden state at each time step
rnn_output, _ = self.rnn(mini_batch_reversed, h_0_contig)
# reverse the time-ordering in the hidden state and un-pack it
rnn_output = poly.pad_and_reverse(rnn_output, mini_batch_seq_lengths)
# set z_prev = z_q_0 to setup the recursive conditioning in q(z_t |...)
z_prev = self.z_q_0.expand(mini_batch.size(0), self.z_q_0.size(0))
# we enclose all the sample statements in the guide in a plate.
# this marks that each datapoint is conditionally independent of the others.
with pyro.plate("z_minibatch", len(mini_batch)):
# sample the latents z one time step at a time
for t in range(1, T_max + 1):
# the next two lines assemble the distribution q(z_t | z_{t-1}, x_{t:T})
z_loc, z_scale = self.combiner(z_prev, rnn_output[:, t - 1, :])
z_dist = dist.Normal(z_loc, z_scale)
# sample z_t from the distribution z_dist
with pyro.poutine.scale(None, annealing_factor):
z_t = pyro.sample("z_%d" % t,
z_dist.mask(mini_batch_mask[:, t - 1:t])
.to_event(1))
# the latent sampled at this time step will be conditioned
# upon in the next time step so keep track of it
z_prev = z_t
```
The high-level structure of `guide()` is very similar to `model()`. First note that the model and guide take the same arguments: this is a general requirement for model/guide pairs in Pyro. As in the model, there's a call to `pyro.module` that registers all the parameters with Pyro. Also, the `for` loop has the same structure as the one in `model()`, with the difference that the guide only needs to sample latents (there are no `sample` statements with the `obs` keyword). Finally, note that the names of the latent variables in the guide exactly match those in the model. This is how Pyro knows to correctly align random variables.
The RNN logic should be familiar to PyTorch users, but let's go through it quickly. First we prepare the initial state of the RNN, `h_0`. Then we invoke the RNN via its forward call; the resulting tensor `rnn_output` contains the hidden states for the entire mini-batch. Note that because we want the RNN to consume the observations from right to left, the input to the RNN is `mini_batch_reversed`, which is a copy of `mini_batch` with all the sequences running in _reverse_ temporal order. Furthermore, `mini_batch_reversed` has been wrapped in a PyTorch `rnn.pack_padded_sequence` so that the RNN can deal with variable-length sequences. Since we do our sampling in latent space in normal temporal order, we use the helper function `pad_and_reverse` to reverse the hidden state sequences in `rnn_output`, so that we can feed the `Combiner` RNN hidden states that are correctly aligned and ordered. This helper function also unpacks the `rnn_output` so that it is no longer in the form of a PyTorch `rnn.pack_padded_sequence`.
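As a small self-contained illustration of the packing step (this snippet is not part of the DMM code and the shapes are made up), here is how a padded mini-batch gets packed before the RNN and padded again afterwards:
```python
import torch
import torch.nn as nn

# a toy padded mini-batch: 3 sequences of 88-dimensional observations,
# padded to the maximum length 5; true lengths sorted in decreasing order
seqs = torch.zeros(3, 5, 88)
lengths = torch.tensor([5, 3, 2])
packed = nn.utils.rnn.pack_padded_sequence(seqs, lengths, batch_first=True)

rnn = nn.RNN(input_size=88, hidden_size=16, batch_first=True)
packed_output, h_n = rnn(packed)

# undo the packing to recover a padded tensor of hidden states
output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output, batch_first=True)
print(output.shape)  # torch.Size([3, 5, 16])
```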
## Packaging the Model and Guide as a PyTorch Module
At this juncture, we're ready to proceed to inference. But before we do so let's quickly go over how we packaged the model and guide as a single PyTorch Module. This is generally good practice, especially for larger models.
```python
class DMM(nn.Module):
"""
This PyTorch Module encapsulates the model as well as the
variational distribution (the guide) for the Deep Markov Model
"""
def __init__(self, input_dim=88, z_dim=100, emission_dim=100,
transition_dim=200, rnn_dim=600, rnn_dropout_rate=0.0,
num_iafs=0, iaf_dim=50, use_cuda=False):
super().__init__()
# instantiate pytorch modules used in the model and guide below
self.emitter = Emitter(input_dim, z_dim, emission_dim)
self.trans = GatedTransition(z_dim, transition_dim)
self.combiner = Combiner(z_dim, rnn_dim)
self.rnn = nn.RNN(input_size=input_dim, hidden_size=rnn_dim,
nonlinearity='relu', batch_first=True,
bidirectional=False, num_layers=1, dropout=rnn_dropout_rate)
# define a (trainable) parameters z_0 and z_q_0 that help define
# the probability distributions p(z_1) and q(z_1)
# (since for t = 1 there are no previous latents to condition on)
self.z_0 = nn.Parameter(torch.zeros(z_dim))
self.z_q_0 = nn.Parameter(torch.zeros(z_dim))
# define a (trainable) parameter for the initial hidden state of the rnn
self.h_0 = nn.Parameter(torch.zeros(1, 1, rnn_dim))
self.use_cuda = use_cuda
# if on gpu cuda-ize all pytorch (sub)modules
if use_cuda:
self.cuda()
# the model p(x_{1:T} | z_{1:T}) p(z_{1:T})
def model(...):
# ... as above ...
# the guide q(z_{1:T} | x_{1:T}) (i.e. the variational distribution)
def guide(...):
# ... as above ...
```
Since we've already gone over `model` and `guide`, our focus here is on the constructor. First we instantiate the four PyTorch modules that we use in our model and guide. On the model-side: `Emitter` and `GatedTransition`. On the guide-side: `Combiner` and the RNN.
Next we define PyTorch `Parameter`s for the initial state of the RNN as well as `z_0` and `z_q_0`, which are fed into `self.trans` and `self.combiner`, respectively, in lieu of the non-existent random variable $\bf z_0$.
The important point to make here is that all of these `Module`s and `Parameter`s are attributes of `DMM` (which itself inherits from `nn.Module`). This has the consequence they are all automatically registered as belonging to the module. So, for example, when we call `parameters()` on an instance of `DMM`, PyTorch will know to return all the relevant parameters. It also means that when we invoke `pyro.module("dmm", self)` in `model()` and `guide()`, all the parameters of both the model and guide will be registered with Pyro. Finally, it means that if we're running on a GPU, the call to `cuda()` will move all the parameters into GPU memory.
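As a quick sanity check (not part of the tutorial script), we can confirm that everything is registered by counting the parameters of a freshly constructed `DMM`:
```python
dmm = DMM()
num_params = sum(p.numel() for p in dmm.parameters())
print("number of trainable parameters: %d" % num_params)
```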
## Stochastic Variational Inference
With our model and guide at hand, we're finally ready to do inference. Before we look at the full logic that is involved in a complete experimental script, let's first see how to take a single gradient step. First we instantiate an instance of `DMM` and setup an optimizer.
```python
# instantiate the dmm
dmm = DMM(input_dim, z_dim, emission_dim, transition_dim, rnn_dim,
args.rnn_dropout_rate, args.num_iafs, args.iaf_dim, args.cuda)
# setup optimizer
adam_params = {"lr": args.learning_rate, "betas": (args.beta1, args.beta2),
"clip_norm": args.clip_norm, "lrd": args.lr_decay,
"weight_decay": args.weight_decay}
optimizer = ClippedAdam(adam_params)
```
Here we're using an implementation of the Adam optimizer that includes gradient clipping. This mitigates some of the problems that can occur when training recurrent neural networks (e.g. vanishing/exploding gradients). Next we setup the inference algorithm.
```python
# setup inference algorithm
svi = SVI(dmm.model, dmm.guide, optimizer, Trace_ELBO())
```
The inference algorithm `SVI` uses a stochastic gradient estimator to take gradient steps on an objective function, which in this case is given by the ELBO (the evidence lower bound). As the name indicates, the ELBO is a lower bound to the log evidence: $\log p(\mathcal{D})$. As we take gradient steps that maximize the ELBO, we move our guide $q(\cdot)$ closer to the exact posterior.
The argument `Trace_ELBO()` constructs a version of the gradient estimator that doesn't need access to the dependency structure of the model and guide. Since all the latent variables in our model are reparameterizable, this is the appropriate gradient estimator for our use case. (It's also the default option.)
Assuming we've prepared the various arguments of `dmm.model` and `dmm.guide`, taking a gradient step is accomplished by calling
```python
svi.step(mini_batch, ...)
```
That's all there is to it!
Well, not quite. This will be the main step in our inference algorithm, but we still need to implement a complete training loop with preparation of mini-batches, evaluation, and so on. This sort of logic will be familiar to any deep learner but let's see how it looks in PyTorch/Pyro.
## The Black Magic of Optimization
Actually, before we get to the guts of training, let's take a moment and think a bit about the optimization problem we've setup. We've traded Bayesian inference in a non-linear model with a high-dimensional latent space—a hard problem—for a particular optimization problem. Let's not kid ourselves, this optimization problem is pretty hard too. Why? Let's go through some of the reasons:
- the space of parameters we're optimizing over is very high-dimensional (it includes all the weights in all the neural networks we've defined).
- our objective function (the ELBO) cannot be computed analytically, so our parameter updates will follow noisy Monte Carlo gradient estimates
- data-subsampling serves as an additional source of stochasticity: even if we wanted to, we couldn't in general take gradient steps on the ELBO defined over the whole dataset (actually in our particular case the dataset isn't so large, but let's ignore that).
- given all the neural networks and non-linearities we have in the loop, our (stochastic) loss surface is highly non-trivial
The upshot is that if we're going to find reasonable (local) optima of the ELBO, we better take some care in deciding how to do optimization. This isn't the time or place to discuss all the different strategies that one might adopt, but it's important to emphasize how decisive a good or bad choice in learning hyperparameters (the learning rate, the mini-batch size, etc.) can be.
Before we move on, let's discuss one particular optimization strategy that we're making use of in greater detail: KL annealing. In our case the ELBO is the sum of two terms: an expected log likelihood term (which measures model fit) and a sum of KL divergence terms (which serve to regularize the approximate posterior):
$\rm{ELBO} = \mathbb{E}_{q({\bf z}_{1:T})}[\log p({\bf x}_{1:T}|{\bf z}_{1:T})] - \mathbb{E}_{q({\bf z}_{1:T})}[ \log q({\bf z}_{1:T}) - \log p({\bf z}_{1:T})]$
This latter term can be a quite strong regularizer, and in early stages of training it has a tendency to favor regions of the loss surface that contain lots of bad local optima. One strategy to avoid these bad local optima, which was also adopted in reference [1], is to anneal the KL divergence terms by multiplying them by a scalar `annealing_factor` that ranges between zero and one:
$\mathbb{E}_{q({\bf z}_{1:T})}[\log p({\bf x}_{1:T}|{\bf z}_{1:T})] - \rm{annealing\_factor} \times \mathbb{E}_{q({\bf z}_{1:T})}[ \log q({\bf z}_{1:T}) - \log p({\bf z}_{1:T})]$
The idea is that during the course of training the `annealing_factor` rises slowly from its initial value at/near zero to its final value at 1.0. The annealing schedule is arbitrary; below we will use a simple linear schedule. In terms of code, to scale the log likelihoods by the appropriate annealing factor we enclose each of the latent sample statements in the model and guide with a `pyro.poutine.scale` context.
Finally, we should mention that the main difference between the DMM implementation described here and the one used in reference [1] is that they take advantage of the analytic formula for the KL divergence between two gaussian distributions (whereas we rely on Monte Carlo estimates). This leads to lower variance gradient estimates of the ELBO, which makes training a bit easier. We can still train the model without making this analytic substitution, but training probably takes somewhat longer because of the higher variance. To use analytic KL divergences use [TraceMeanField_ELBO](http://docs.pyro.ai/en/stable/inference_algos.html#pyro.infer.trace_mean_field_elbo.TraceMeanField_ELBO).
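For the curious, switching to the analytic KL variant is a one-line change confined to the ELBO implementation handed to `SVI` (a sketch, reusing the `dmm` and `optimizer` objects constructed above):
```python
from pyro.infer import SVI, TraceMeanField_ELBO

svi = SVI(dmm.model, dmm.guide, optimizer, TraceMeanField_ELBO())
```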
## Data Loading, Training, and Evaluation
First we load the data. There are 229 sequences in the training dataset, each with an average length of ~60 time steps.
```python
jsb_file_loc = "./data/jsb_processed.pkl"
data = pickle.load(open(jsb_file_loc, "rb"))
training_seq_lengths = data['train']['sequence_lengths']
training_data_sequences = data['train']['sequences']
test_seq_lengths = data['test']['sequence_lengths']
test_data_sequences = data['test']['sequences']
val_seq_lengths = data['valid']['sequence_lengths']
val_data_sequences = data['valid']['sequences']
N_train_data = len(training_seq_lengths)
N_train_time_slices = np.sum(training_seq_lengths)
N_mini_batches = int(N_train_data / args.mini_batch_size +
int(N_train_data % args.mini_batch_size > 0))
```
For this dataset we will typically use a `mini_batch_size` of 20, so that there will be 12 mini-batches per epoch. Next we define the function `process_minibatch` which prepares a mini-batch for training and takes a gradient step:
```python
def process_minibatch(epoch, which_mini_batch, shuffled_indices):
if args.annealing_epochs > 0 and epoch < args.annealing_epochs:
# compute the KL annealing factor appropriate
# for the current mini-batch in the current epoch
min_af = args.minimum_annealing_factor
annealing_factor = min_af + (1.0 - min_af) * \
(float(which_mini_batch + epoch * N_mini_batches + 1) /
float(args.annealing_epochs * N_mini_batches))
else:
# by default the KL annealing factor is unity
annealing_factor = 1.0
# compute which sequences in the training set we should grab
mini_batch_start = (which_mini_batch * args.mini_batch_size)
mini_batch_end = np.min([(which_mini_batch + 1) * args.mini_batch_size,
N_train_data])
mini_batch_indices = shuffled_indices[mini_batch_start:mini_batch_end]
# grab the fully prepped mini-batch using the helper function in the data loader
mini_batch, mini_batch_reversed, mini_batch_mask, mini_batch_seq_lengths \
= poly.get_mini_batch(mini_batch_indices, training_data_sequences,
training_seq_lengths, cuda=args.cuda)
# do an actual gradient step
loss = svi.step(mini_batch, mini_batch_reversed, mini_batch_mask,
mini_batch_seq_lengths, annealing_factor)
# keep track of the training loss
return loss
```
We first compute the KL annealing factor appropriate to the mini-batch (according to a linear schedule as described earlier). We then compute the mini-batch indices, which we pass to the helper function `get_mini_batch`. This helper function takes care of a number of different things:
- it sorts each mini-batch by sequence length
- it calls another helper function to get a copy of the mini-batch in reversed temporal order
- it packs each reversed mini-batch in a `rnn.pack_padded_sequence`, which is then ready to be ingested by the RNN
- it cuda-izes all tensors if we're on a GPU
- it calls another helper function to get an appropriate 0/1 mask for the mini-batch
We then pipe all the return values of `get_mini_batch()` into `elbo.step(...)`. Recall that these arguments will be further piped to `model(...)` and `guide(...)` during construction of the gradient estimator in `elbo`. Finally, we return a float which is a noisy estimate of the loss for that mini-batch.
We now have all the ingredients required for the main bit of our training loop:
```python
times = [time.time()]
for epoch in range(args.num_epochs):
# accumulator for our estimate of the negative log likelihood
# (or rather -elbo) for this epoch
epoch_nll = 0.0
# prepare mini-batch subsampling indices for this epoch
shuffled_indices = np.arange(N_train_data)
np.random.shuffle(shuffled_indices)
# process each mini-batch; this is where we take gradient steps
for which_mini_batch in range(N_mini_batches):
epoch_nll += process_minibatch(epoch, which_mini_batch, shuffled_indices)
# report training diagnostics
times.append(time.time())
epoch_time = times[-1] - times[-2]
log("[training epoch %04d] %.4f \t\t\t\t(dt = %.3f sec)" %
(epoch, epoch_nll / N_train_time_slices, epoch_time))
```
At the beginning of each epoch we shuffle the indices pointing to the training data. We then process each mini-batch until we've gone through the entire training set, accumulating the training loss as we go. Finally we report some diagnostic info. Note that we normalize the loss by the total number of time slices in the training set (this allows us to compare to reference [1]).
## Evaluation
This training loop is still missing any kind of evaluation diagnostics. Let's fix that. First we need to prepare the validation and test data for evaluation. Since the validation and test datasets are small enough that we can easily fit them into memory, we're going to process each dataset batchwise (i.e. we will not be breaking up the dataset into mini-batches). [_Aside: at this point the reader may ask why we don't do the same thing for the training set. The reason is that additional stochasticity due to data-subsampling is often advantageous during optimization: in particular it can help us avoid local optima._] And, in fact, in order to get a less noisy estimate of the ELBO, we're going to compute a multi-sample estimate. The simplest way to do this would be as follows:
```python
val_loss = svi.evaluate_loss(val_batch, ..., num_particles=5)
```
This, however, would involve an explicit `for` loop with five iterations. For our particular model, we can do better and vectorize the whole computation. The only way to do this currently in Pyro is to explicitly replicate the data `n_eval_samples` many times. This is the strategy we follow:
```python
# package repeated copies of val/test data for faster evaluation
# (i.e. set us up for vectorization)
def rep(x):
return np.repeat(x, n_eval_samples, axis=0)
# get the validation/test data ready for the dmm: pack into sequences, etc.
val_seq_lengths = rep(val_seq_lengths)
test_seq_lengths = rep(test_seq_lengths)
val_batch, val_batch_reversed, val_batch_mask, val_seq_lengths = poly.get_mini_batch(
np.arange(n_eval_samples * val_data_sequences.shape[0]), rep(val_data_sequences),
val_seq_lengths, cuda=args.cuda)
test_batch, test_batch_reversed, test_batch_mask, test_seq_lengths = \
poly.get_mini_batch(np.arange(n_eval_samples * test_data_sequences.shape[0]),
rep(test_data_sequences),
test_seq_lengths, cuda=args.cuda)
```
With the test and validation data now fully prepped, we define the helper function that does the evaluation:
```python
def do_evaluation():
# put the RNN into evaluation mode (i.e. turn off drop-out if applicable)
dmm.rnn.eval()
# compute the validation and test loss
val_nll = svi.evaluate_loss(val_batch, val_batch_reversed, val_batch_mask,
val_seq_lengths) / np.sum(val_seq_lengths)
test_nll = svi.evaluate_loss(test_batch, test_batch_reversed, test_batch_mask,
test_seq_lengths) / np.sum(test_seq_lengths)
# put the RNN back into training mode (i.e. turn on drop-out if applicable)
dmm.rnn.train()
return val_nll, test_nll
```
We simply call the `evaluate_loss` method of `elbo`, which takes the same arguments as `step()`, namely the arguments that are passed to the model and guide. Note that we have to put the RNN into and out of evaluation mode to account for dropout. We can now stick `do_evaluation()` into the training loop; see [the source code](https://github.com/pyro-ppl/pyro/blob/dev/examples/dmm.py) for details.
## Results
Let's make sure that our implementation gives reasonable results. We can use the numbers reported in reference [1] as a sanity check. For the same dataset and a similar model/guide setup (dimension of the latent space, number of hidden units in the RNN, etc.) they report a normalized negative log likelihood (NLL) of `6.93` on the test set (lower is better)$^{\S}$. This is to be compared to our result of `6.87`. These numbers are very much in the same ball park, which is reassuring. It seems that, at least for this dataset, not using analytic expressions for the KL divergences doesn't degrade the quality of the learned model (although, as discussed above, the training probably takes somewhat longer).
In the figure we show how the test NLL progresses during training for a single sample run (one with a rather conservative learning rate). Most of the progress is during the first 3000 epochs or so, with some marginal gains if we let training go on for longer. On a GeForce GTX 1080, 5000 epochs takes about 20 hours.
| `num_iafs` | test NLL |
|---|---|
| `0` | `6.87` |
| `1` | `6.82` |
| `2` | `6.80` |
Finally, we also report results for guides with normalizing flows in the mix (details to be found in the next section).
${ \S\;}$ Actually, they seem to report two numbers—6.93 and 7.03—for the same model/guide and it's not entirely clear how the two reported numbers are different.
## Bells, whistles, and other improvements
### Inverse Autoregressive Flows
One of the great things about a probabilistic programming language is that it encourages modularity. Let's showcase an example in the context of the DMM. We're going to make our variational distribution richer by adding normalizing flows to the mix (see reference [2] for a discussion). **This will only cost us four additional lines of code!**
First, in the `DMM` constructor we add
```python
iafs = [AffineAutoregressive(AutoRegressiveNN(z_dim, [iaf_dim])) for _ in range(num_iafs)]
self.iafs = nn.ModuleList(iafs)
```
This instantiates `num_iafs` many bijective transforms of the `AffineAutoregressive` type (see references [3,4]); each normalizing flow will have `iaf_dim` many hidden units. We then bundle the normalizing flows in a `nn.ModuleList`; this is just the PyTorchy way to package a list of `nn.Module`s. Next, in the guide we add the lines
```python
if self.iafs.__len__() > 0:
z_dist = TransformedDistribution(z_dist, self.iafs)
```
Here we're taking the base distribution `z_dist`, which in our case is a conditional gaussian distribution, and using the `TransformedDistribution` construct we transform it into a non-gaussian distribution that is, by construction, richer than the base distribution. Voila!
### Checkpointing
If we want to recover from a catastrophic failure in our training loop, there are two kinds of state we need to keep track of. The first is the various parameters of the model and guide. The second is the state of the optimizers (e.g. in Adam this will include the running average of recent gradient estimates for each parameter).
In Pyro, the parameters can all be found in the `ParamStore`. However, PyTorch also keeps track of them for us via the `parameters()` method of `nn.Module`. So one simple way we can save the parameters of the model and guide is to make use of the `state_dict()` method of `dmm` in conjunction with `torch.save()`; see below. In the case that we have `AffineAutoregressive`'s in the loop, this is in fact the only option at our disposal. This is because the `AffineAutoregressive` module contains what are called 'persistent buffers' in PyTorch parlance. These are things that carry state but are not `Parameter`s. The `state_dict()` and `load_state_dict()` methods of `nn.Module` know how to deal with buffers correctly.
To save the state of the optimizers, we have to use functionality inside of `pyro.optim.PyroOptim`. Recall that the typical user never interacts directly with PyTorch `Optimizers` when using Pyro; since parameters can be created dynamically in an arbitrary probabilistic program, Pyro needs to manage `Optimizers` for us. In our case saving the optimizer state will be as easy as calling `optimizer.save()`. The loading logic is entirely analogous. So our entire logic for saving and loading checkpoints only takes a few lines:
```python
# saves the model and optimizer states to disk
def save_checkpoint():
log("saving model to %s..." % args.save_model)
torch.save(dmm.state_dict(), args.save_model)
log("saving optimizer states to %s..." % args.save_opt)
optimizer.save(args.save_opt)
log("done saving model and optimizer checkpoints to disk.")
# loads the model and optimizer states from disk
def load_checkpoint():
assert exists(args.load_opt) and exists(args.load_model), \
"--load-model and/or --load-opt misspecified"
log("loading model from %s..." % args.load_model)
dmm.load_state_dict(torch.load(args.load_model))
log("loading optimizer states from %s..." % args.load_opt)
optimizer.load(args.load_opt)
log("done loading model and optimizer states.")
```
## Some final comments
A deep markov model is a relatively complex model. Now that we've taken the effort to implement a version of the deep markov model tailored to the polyphonic music dataset, we should ask ourselves what else we can do. What if we're handed a different sequential dataset? Do we have to start all over?
Not at all! The beauty of probabilistic programming is that it enables, and encourages, modular approaches to modeling and inference. Adapting our polyphonic music model to a dataset with continuous observations is as simple as changing the observation likelihood. The vast majority of the code could be taken over unchanged. This means that with a little bit of extra work, the code in this tutorial could be repurposed to enable a huge variety of different models.
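To make that concrete, here is a purely illustrative sketch (the class name and wiring are hypothetical, not part of the tutorial code) of an emitter for real-valued observations, where the Bernoulli emission likelihood is replaced by a diagonal gaussian:
```python
class GaussianEmitter(nn.Module):
    """
    Hypothetical emitter for continuous observations: parameterizes
    p(x_t | z_t) as a diagonal gaussian instead of a Bernoulli.
    """
    def __init__(self, input_dim, z_dim, emission_dim):
        super().__init__()
        self.lin_z_to_hidden = nn.Linear(z_dim, emission_dim)
        self.lin_hidden_to_loc = nn.Linear(emission_dim, input_dim)
        self.lin_hidden_to_scale = nn.Linear(emission_dim, input_dim)
        self.relu = nn.ReLU()
        self.softplus = nn.Softplus()

    def forward(self, z_t):
        h = self.relu(self.lin_z_to_hidden(z_t))
        return self.lin_hidden_to_loc(h), self.softplus(self.lin_hidden_to_scale(h))

# inside model(), the observation statement would then read something like:
# loc, scale = self.emitter(z_t)
# pyro.sample("obs_x_%d" % t,
#             dist.Normal(loc, scale).mask(mini_batch_mask[:, t - 1:t]).to_event(1),
#             obs=mini_batch[:, t - 1, :])
```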
See the complete code on [Github](https://github.com/pyro-ppl/pyro/blob/dev/examples/dmm.py).
## References
[1] `Structured Inference Networks for Nonlinear State Space Models`,<br />
Rahul G. Krishnan, Uri Shalit, David Sontag
[2] `Variational Inference with Normalizing Flows`,
<br />
Danilo Jimenez Rezende, Shakir Mohamed
[3] `Improving Variational Inference with Inverse Autoregressive Flow`,
<br />
Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling
[4] `MADE: Masked Autoencoder for Distribution Estimation`,
<br />
Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle
[5] `Modeling Temporal Dependencies in High-Dimensional Sequences:`
<br />
`Application to Polyphonic Music Generation and Transcription`,
<br />
Boulanger-Lewandowski, N., Bengio, Y. and Vincent, P.
This notebook demonstrates [vaquero](https://github.com/jbn/vaquero), as both a library and data cleaning pattern.
```
from vaquero import Vaquero, callables_from
```
# Task
Say you think you have pairs of numbers serialized as comma separated values in a file. You want to extract the pair from each line, then sum over the result (per line).
## Sample Data
```
lines = ["1, 1.0", # An errant float
"1, $", # A bad number
"1,-1", # A good line
"10"] # Missing the second value
```
## Initial Implementation
```
def extract_pairs(s):
return s.split(",")
def to_int(items):
return [int(item) for item in items]
def sum_pair(items):
    return items[0] + items[1]
```
# Iteration 1
First, instantiate a vaquero instance. Here, I've set the maximum number of failures allowed to 5. After that many failures, the `Vaquero` object raises a `VaqueroException`. Generally, you want it to be large enough to collect a lot of unexpected failures. But, you don't want it to be so large you exhaust memory. This is an iterative process.
Also, as a tip, always instantiate the `Vaquero` object in its own cell. This way, you get to inspect it in your notebook even if it raises a `VaqueroException`.
I also registered all functions (well, callables) in this notebook with `vaquero`. The error capturing machinery only operates on the registered functions. And, it always ignores a `KeyboardInterrupt`.
```
vaquero = Vaquero(max_failures=5)
vaquero.register_targets(callables_from(globals()))
```
Just to be sure, I'll check the registered functions. Matching is done by name, which is a bit naive, but it's also surprisingly robust given vaquero usage patterns. Looking at the list, you can see some things that don't belong. But, again, it mostly works well.
```
vaquero.target_funcs
```
Now, run my trivial examples over the initial implementation.
```
results = []
for s in lines:
with vaquero.on_input(s):
results.append(sum_pair(to_int(extract_pairs(s))))
```
It was not successful.
```
vaquero.was_successful
```
So, look at the failures. There were two functions, and both had failures.
```
vaquero.stats()
```
To get a sense of what happened, examine the failing functions.
You can do this by calling `examine` with the name of the function (or the function object). It returns the captured invocations and errors.
Here you can see that the `to_int` function from cell `In [3]` failed with a `ValueError` exception.
```
vaquero.examine('to_int')
```
Often though, we want to query only parts of the capture for a specific function. To do so, you can use [JMESPath](http://jmespath.org/), specifying the selector as an argument to `examine`. Also, you can ask for only the set of values produced by the selector (assuming the results are hashable), to simplify things.
```
vaquero.examine('to_int', '[*].exc_value', as_set=True)
```
And, for `sum_pair`.
```
vaquero.examine('sum_pair')
```
# Iteration 2
We now know that there are some ints encoded as doubles. But, we know from our data source, the value can only be an int. So, in `to_int`, let's parse each string as a `float` first, then create an `int` from it. That's more robust.
Also, we know that some lines don't have two components. Those are just bad lines. Let's assert that there are two parts as a post-condition of `extract_pairs`.
Finally, after a bit of digging, we found that `$` means `NA`. After cursing for a minute -- because that's crazy -- you decide to ignore those entries. Instead of adding this to an existing function, you write a `no_missing_data` function.
```
def no_missing_data(s):
assert '$' not in s, "'{}' has missing data".format(s)
def extract_pairs(s):
parts = s.split(",")
assert len(parts) == 2, "'{}' not in 2 parts".format(s)
return tuple(parts)
def to_int(items):
return [int(float(item)) for item in items]
def sum_pair(items):
assert len(items) == 2, "Line is improperly formatted"
return items[0] + items[1]
vaquero.reset() # Clear logged errors, mostly.
vaquero.register_targets(globals())
results = []
for s in lines:
with vaquero.on_input(s):
no_missing_data(s)
results.append(sum_pair(to_int(extract_pairs(s))))
```
Now, we have one more success, but still two failures.
```
vaquero.stats()
```
Let's quickly examine.
```
vaquero.examine('extract_pairs')
vaquero.examine('no_missing_data')
```
Both these exceptions are bad data. We want to ignore them.
```
vaquero.stats_ignoring('AssertionError')
```
Looking at the results accumulated,
```
results
```
Things look good.
Now that we have something that works, we can use Vaquero in a more production-oriented mode: we allow unlimited errors but don't capture anything. Failures are still counted, but nothing else is stored, since we won't be post-processing.
```
vaquero.reset(turn_off_error_capturing=True)
# Or, Vaquero(capture_error_invocations=False)
results = []
for s in lines:
with vaquero.on_input(s):
no_missing_data(s)
results.append(sum_pair(to_int(extract_pairs(s))))
results
```
They still show up as failures, but it doesn't waste memory storing the captures.
```
vaquero.stats()
```
# Algo - An aside on the travelling salesman problem
The travelling salesman problem (TSP) is the emblematic NP-complete problem: no algorithm is known that finds the optimal solution in polynomial time. The only option that is guaranteed to work is to go through every configuration and keep the best one. This notebook only scratches the surface of the problem.
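To get a feel for the size of that search space (a quick back-of-the-envelope check, not part of the original notebook), note that the number of distinct closed tours over $n$ points is $(n-1)!/2$:
```
import math

for n in [5, 10, 15, 20]:
    print(n, math.factorial(n - 1) // 2)
```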
```
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
```
## Draw points at random and display them
```
import numpy
points = numpy.random.random((6, 2))
points
```
## Length of a path
```
def distance_chemin(points, chemin):
dist = 0
for i in range(1, len(points)):
dx, dy = points[chemin[i], :] - points[chemin[i-1], :]
dist += (dx ** 2 + dy ** 2) ** 0.5
dx, dy = points[chemin[0], :] - points[chemin[-1], :]
dist += (dx ** 2 + dy ** 2) ** 0.5
return dist
distance_chemin(points, list(range(points.shape[0])))
```
## Visualization
```
import matplotlib.pyplot as plt
def plot_points(points, chemin):
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
loop = list(chemin) + [chemin[0]]
p = points[loop]
ax[0].plot(points[:, 0], points[:, 1], 'o')
ax[1].plot(p[:, 0], p[:, 1], 'o-')
ax[1].set_title("dist=%1.2f" % distance_chemin(points, chemin))
return ax
plot_points(points, list(range(points.shape[0])));
```
## Going through all the permutations
```
from itertools import permutations
def optimisation(points, chemin):
dist = distance_chemin(points, chemin)
best = chemin
for perm in permutations(chemin):
d = distance_chemin(points, perm)
if d < dist:
dist = d
best = perm
return best
res = optimisation(points, list(range(points.shape[0])))
plot_points(points, res);
```
## The tqdm module
Only useful in a notebook, and very handy for the impatient.
```
from tqdm import tqdm
def optimisation(points, chemin):
dist = distance_chemin(points, chemin)
best = chemin
loop = tqdm(permutations(chemin))
for perm in loop:
loop.set_description(str(perm))
d = distance_chemin(points, perm)
if d < dist:
dist = d
best = perm
return best
res = optimisation(points, list(range(points.shape[0])))
plot_points(points, res);
```
## Reversals
Going through all the permutations takes time, even on today's machines. Reversing segments of the current path is a much cheaper local improvement.
```
def optimisation_retournement(points, chemin):
dist = distance_chemin(points, chemin)
best = chemin
for i in range(1, len(chemin)):
for j in range(i+1, len(chemin)):
chemin[i: j] = chemin[j-1: i-1: -1]
d = distance_chemin(points, chemin)
if d < dist:
dist = d
else:
chemin[i: j] = chemin[j-1: i-1: -1]
return chemin
res = optimisation_retournement(points, list(range(points.shape[0])))
plot_points(points, res);
```
# The ADMM method (alternating direction method of multipliers)
## Last time
- Subgradient method: the basic method for nonsmooth problems
- The proximal method and its properties: an alternative to gradient descent
- The proximal gradient method: a peek inside the black box
- Acceleration of the proximal gradient method, ISTA and FISTA
## Plan for today
- Using the Lagrangian as a model of the objective function in a constrained optimization problem
- Alternating descent and ascent steps to solve the resulting minimax problem
- Regularizing the Lagrangian
- ADMM
## The dual problem: a reminder
- Primal problem
\begin{align*}
& \min f(x) \\
\text{s.t. } & Ax = b
\end{align*}
- Lagrangian
$$
L(x, \lambda) = f(x) + \lambda^{\top}(Ax - b)
$$
- Dual problem
$$
\max_{\lambda} g(\lambda),
$$
where $g(\lambda) = \inf_x L(x, \lambda)$
- Recovering the solution of the primal problem
$$
x^* = \arg\min_x L(x, \lambda^*)
$$
## Solving the dual problem
- Gradient ascent, since the problem is unconstrained
$$
\lambda_{k+1} = \lambda_k + \alpha_k g'(\lambda_k)
$$
- The gradient of the dual function is
$$
g'(\lambda_k) = A\hat{x} - b,
$$
where $\hat{x} = \arg\min_x L(x, \lambda_k)$
- Combining the two steps into one, we obtain
\begin{align*}
& x_{k+1} = \arg\min_x L(x, \lambda_k)\\
& \lambda_{k+1} = \lambda_k + \alpha_k (Ax_{k+1} - b)
\end{align*}
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc("text", usetex=True)
import cvxpy as cvx
def dual_ascent(update_x, A, b, alpha, x0, lambda0, max_iter):
x = x0.copy()
lam = lambda0.copy()
conv_x = [x]
conv_lam = [lam]
for i in range(max_iter):
x = update_x(x, lam, A, b)
lam = lam + alpha * (A @ x - b)
conv_x.append(x.copy())
conv_lam.append(lam.copy())
return x, lam, conv_x, conv_lam
```
### A model example
\begin{align*}
& \min \frac{1}{2}x^{\top}Px - c^{\top}x\\
\text{s.t. } & Ax = b
\end{align*}
- Lagrangian: $L(x, \lambda) = \frac{1}{2}x^{\top}Px - c^{\top}x + \lambda^{\top}(Ax - b)$
- Update of the primal variables:
$$
x_{k+1} = P^{-1}(c - A^{\top}\lambda_k)
$$
```
m, n = 10, 20
A = np.random.randn(m, n)
b = np.random.randn(m)
P = np.random.randn(n, n)
P = P.T @ P
c = np.random.randn(n)
spec = np.linalg.eigvalsh(P)
mu = spec.min()
print(mu)
x = cvx.Variable(n)
obj = 0.5 * cvx.quad_form(x, P) - c @ x
problem = cvx.Problem(cvx.Minimize(obj), [A @ x == b])
problem.solve(verbose=True)
print(np.linalg.norm(A @ x.value - b))
print(problem.value)
x0 = np.random.randn(n)
lam0 = np.random.randn(m)
max_iter = 100000
alpha = mu / 10
def f(x):
return 0.5 * x @ P @ x - c @ x
def L(x, lam):
return f(x) + lam @ (A @ x - b)
def update_x(x, lam, A, b):
return np.linalg.solve(P, c - A.T @ lam)
x_da, lam_da, conv_x_da, conv_lam_da = dual_ascent(update_x, A, b, alpha, x0, lam0, max_iter)
print(np.linalg.norm(A @ x_da - b))
print(0.5 * x_da @ P @ x_da - c @ x_da)
plt.plot([f(x) for x in conv_x_da], label="Objective")
plt.plot(problem.value * np.ones(len(conv_x_da)), label="Target value")
# plt.yscale("log")
plt.xscale("log")
plt.legend(fontsize=20)
plt.xlabel("\# iterations", fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.plot([L(x, lam) for x, lam in zip(conv_x_da, conv_lam_da)],
label="Lagrangian")
plt.legend(fontsize=20)
plt.xlabel("\# iterations", fontsize=20)
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_x_da], label="$\|Ax - b\|_2$")
plt.legend(fontsize=20)
plt.xlabel("\# iterations", fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid(True)
```
### An important special case
- The objective function is separable
- The $x$ update then decomposes into parallel subproblems, one per coordinate
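To see why, note that if $f(x) = \sum_{i=1}^n f_i(x_i)$, the inner minimization of the Lagrangian splits coordinate-wise:
$$
\min_x L(x, \lambda) = \sum_{i=1}^n \min_{x_i} \left( f_i(x_i) + \lambda^{\top} A_i x_i \right) - \lambda^{\top} b,
$$
where $A_i$ denotes the $i$-th column of $A$, so each $x_i$ can be updated independently and in parallel.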
## Accounting for the constraints explicitly: the augmented Lagrangian
$$
L_{\rho}(x, \lambda) = f(x) + \lambda^{\top}(Ax - b) + \frac{\rho}{2} \|Ax - b\|_2^2
$$
- The method now takes the form
\begin{align*}
& x_{k+1} = \arg\min_x L_{\rho}(x, \lambda_k)\\
& \lambda_{k+1} = \lambda_k + \rho (Ax_{k+1} - b)
\end{align*}
- $\rho$ may be adjusted as the iterations proceed
- Replacing the step size $\alpha_k$ with $\rho$ is motivated by the optimality conditions
```
def augmented_lagrangian(update_x, A, b, rho0, x0, lambda0, max_iter):
x = x0.copy()
lam = lambda0.copy()
conv_x = [x]
conv_lam = [lam]
rho = rho0
for i in range(max_iter):
x = update_x(x, lam, A, b)
lam = lam + rho * (A @ x - b)
conv_x.append(x.copy())
conv_lam.append(lam.copy())
return x, lam, conv_x, conv_lam
def update_x_al(x, lam, A, b):
return np.linalg.solve(P + rho * A.T @ A, c - A.T @ lam + A.T @ b)
rho = 10
max_iter = 1000
x_al, lam_al, conv_x_al, conv_lam_al = augmented_lagrangian(update_x_al, A, b, rho, x0, lam0, max_iter)
print(np.linalg.norm(A @ x_al - b))
print(0.5 * x_al @ P @ x_al - c @ x_al)
plt.plot([f(x) for x in conv_x_da], label="DA")
plt.plot([f(x) for x in conv_x_al], label="AL")
# plt.yscale("log")
plt.xscale("log")
plt.legend(fontsize=20)
plt.xlabel("\# iterations", fontsize=20)
plt.ylabel("Objective", fontsize=20)
plt.plot([L(x, lam) for x, lam in zip(conv_x_da, conv_lam_da)],
label="DA")
plt.plot([L(x, lam) for x, lam in zip(conv_x_al, conv_lam_al)],
label="AL")
plt.legend(fontsize=20)
plt.xscale("log")
plt.xlabel("\# iterations", fontsize=20)
plt.xlabel("Lagrangian", fontsize=20)
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_x_da], label="DA")
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_x_al], label="AL")
plt.legend(fontsize=20)
plt.xscale("log")
plt.xlabel("\# iterations", fontsize=20)
plt.ylabel("$\|Ax - b\|_2$", fontsize=20)
plt.yticks(fontsize=20)
plt.xticks(fontsize=20)
```
### A fundamental problem
- The term $\|Ax - b\|_2^2$ made the Lagrangian NON-separable!
## Making it separable again gives ADMM
The problem becomes
\begin{align*}
& \min f(x) + I_{Ax = b} (z)\\
\text{s.t. } & x = z
\end{align*}
Its augmented Lagrangian takes the form
$$
L_{\rho}(x, z, \lambda) = f(x) + I_{Ax = b} (z) + \lambda^{\top}(x - z) + \frac{\rho}{2}\|x - z\|_2^2
$$
- The method now takes the form
\begin{align*}
& x_{k+1} = \arg\min_x L_{\rho}(x, z_k, \lambda_k)\\
& z_{k+1} = \arg\min_z L_{\rho}(x_{k+1}, z, \lambda_k) \\
& \lambda_{k+1} = \lambda_k + \rho (x_{k+1} - z_{k+1})
\end{align*}
- The $z$ update is the projection $\pi_{Ax = b}\left(x_{k+1} + \frac{\lambda_k}{\rho}\right)$
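For an affine set this projection has a closed form (assuming $A$ has full row rank), which is exactly what `update_z_admm` below implements:
$$
\pi_{Ax = b}(v) = v - A^{\top}(AA^{\top})^{-1}(Av - b)
$$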
```
def admm(update_x, update_z, rho0, x0, z0, lambda0, max_iter):
x = x0.copy()
z = z0.copy()
lam = lambda0.copy()
conv_x = [x]
conv_z = [z]
conv_lam = [lam]
rho = rho0
for i in range(max_iter):
x = update_x(x, z, lam, A, b)
z = update_z(x, z, lam, A, b)
lam = lam + rho * (x - z)
conv_x.append(x.copy())
conv_z.append(z.copy())
conv_lam.append(lam.copy())
return x, z, lam, conv_x, conv_z, conv_lam
def update_x_admm(x, z, lam, A, b):
n = x.shape[0]
return np.linalg.solve(P + rho*np.eye(n), -lam + c + rho * z)
def update_z_admm(x, z, lam, A, b):
x_hat = lam / rho + x
return x_hat - A.T @ np.linalg.solve(A @ A.T, A @ x_hat - b)
z0 = np.random.randn(n)
lam0 = np.random.randn(n)
rho = 1
x_admm, z_admm, lam_admm, conv_x_admm, conv_z_admm, conv_lam_admm = admm(update_x_admm,
update_z_admm,
rho, x0, z0, lam0,
max_iter=10000)
plt.figure(figsize=(10, 8))
plt.plot([f(x) for x in conv_x_da], label="DA")
plt.plot([f(x) for x in conv_x_al], label="AL")
plt.plot([f(x) for x in conv_x_admm], label="ADMM x")
plt.plot([f(z) for z in conv_z_admm], label="ADMM z")
# plt.yscale("log")
plt.xscale("log")
plt.legend(fontsize=20)
plt.xlabel("\# iterations", fontsize=20)
plt.ylabel("Objective", fontsize=20)
plt.grid(True)
plt.yticks(fontsize=20)
plt.xticks(fontsize=20)
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_x_da], label="DA")
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_x_al], label="AL")
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_x_admm], label="ADMM")
plt.legend(fontsize=20)
plt.xscale("log")
plt.xlabel("\# iterations", fontsize=20)
plt.ylabel("$\|Ax - b\|_2$", fontsize=20)
plt.grid(True)
plt.yticks(fontsize=20)
plt.show()
plt.semilogy([np.linalg.norm(x - z) for x, z in zip(conv_x_admm, conv_z_admm)])
plt.grid(True)
plt.xlabel("\# iterations", fontsize=20)
plt.ylabel("$\|x_k - z_k\|_2$", fontsize=20)
plt.yticks(fontsize=20)
plt.show()
```
### Note that all of these properties are preserved under affine transformations
- Our problem in general form can then be written as
\begin{align*}
& \min f(x) + g(z)\\
\text{s.t. } & Ax + Bz = d
\end{align*}
- Its augmented Lagrangian is
$$
L_{\rho}(x, z, \lambda) = f(x) + g(z) + \lambda^{\top}(Ax + Bz - d) + \frac{\rho}{2}\|Ax + Bz - d\|_2^2
$$
- Here we have separability between $x$ and $z$, but not within each of these blocks of variables
- Finally, after absorbing the linear term into the quadratic one, we obtain
\begin{align*}
& x_{k+1} = \arg\min_x \left( f(x) + \frac{\rho}{2}\|Ax + Bz_k - d + u_k \|_2^2 \right)\\
& z_{k+1} = \arg\min_z \left( g(z) + \frac{\rho}{2}\|Ax_{k+1} + Bz - d + u_k \|_2^2 \right)\\
& u_{k+1} = u_k + Ax_{k+1} + Bz_{k+1} - d,
\end{align*}
where $u_k = \lambda_k / \rho$ is the scaled dual variable
### How is all of this used in practice?
- Reducing your problem to the standard form from the previous slide is often inconvenient
- It is therefore better to rewrite the specific problem by hand into a form that admits ADMM
- Write out the solutions of all the subproblems analytically
- Implement those computations as efficiently as possible, e.g. pre-factorize matrices that do not change across iterations (see the sketch below)
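The following is a minimal sketch of that last point (illustrative only; it assumes the matrix $P + \rho I$ from the quadratic example above stays fixed across iterations and uses SciPy's Cholesky routines):
```
import numpy as np
import scipy.linalg as sla

n = 20
P = np.random.randn(n, n)
P = P.T @ P + np.eye(n)
rho = 1.0

# factor P + rho*I once, outside the ADMM loop
chol = sla.cho_factor(P + rho * np.eye(n))

def update_x_cached(rhs):
    # every iteration then only needs two cheap triangular solves
    return sla.cho_solve(chol, rhs)
```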
## Linear programming
\begin{align*}
& \min c^{\top}x\\
\text{s.t. } & Ax = b\\
& x \geq 0
\end{align*}
- Augmented Lagrangian
$$
L_{\rho}(x, z, \lambda) = c^{\top}x + I_{z \geq 0}(z) + \lambda^{\top}(x - z) + \frac{\rho}{2}\|x - z\|_2^2,
$$
where $c^{\top}x$ is restricted to the set $Ax = b$.
- The $x$-update step takes the form
$$
x_{k+1} = \arg\min_{x: \; Ax = b} c^{\top}x +\lambda_k^{\top}x + \frac{\rho}{2}\|x - z_k\|_2^2
$$
- From the optimality conditions we obtain the linear system
$$
\begin{bmatrix}
\rho I & A^{\top} \\
A & 0
\end{bmatrix}
\begin{bmatrix}
x_{k+1}\\
\mu
\end{bmatrix}
=
\begin{bmatrix}
-\lambda_k - c + \rho z_k\\
b
\end{bmatrix}
$$
```
import scipy.optimize as scopt
m, n = 10, 200
A = np.random.rand(m, n)
b = np.random.rand(m)
c = np.random.rand(n)
scipy_linprog_conv = []
def callback_splin(cur_res):
scipy_linprog_conv.append(cur_res)
res = scopt.linprog(c, A_eq=A, b_eq=b,
bounds=[(0, None) for i in range(n)],
callback=callback_splin, method="simplex")
print(res)
def update_x_admm(x, z, lam, A, b):
n = x.shape[0]
m = A.shape[0]
C = np.block([[rho * np.eye(n), A.T], [A, np.zeros((m, m))]])
rhs = np.block([-lam - c + rho * z, b])
return np.linalg.solve(C, rhs)[:n]
def update_z_admm(x, z, lam, A, b):
x_hat = lam / rho + x
return np.clip(x_hat, 0, np.max(x_hat))
x0 = np.random.randn(n)
z0 = np.random.randn(n)
lam0 = np.random.randn(n)
rho = 1
x_admm, z_admm, lam_admm, conv_x_admm, conv_z_admm, conv_lam_admm = admm(update_x_admm,
update_z_admm,
rho, x0, z0, lam0, max_iter=100)
print(c @ x_admm - res.fun, np.linalg.norm(x_admm - res.x))
plt.figure(figsize=(10, 8))
plt.plot([c @ x for x in conv_x_admm], label="ADMM")
plt.plot([c @ res.x for res in scipy_linprog_conv], label="Scipy")
plt.legend(fontsize=20)
plt.grid(True)
plt.xlabel("\# iterations", fontsize=20)
plt.ylabel("$c^{\\top}x_k$", fontsize=20)
plt.yticks(fontsize=20)
plt.xticks(fontsize=20)
```
## Comments
- Convergence per iteration is slower, but the cost of a single iteration is also lower
- The main advantage of ADMM is obtaining a moderately accurate solution **in parallel** and very quickly
- Different ways of casting the problem into an ADMM-friendly form give rise to different methods with different properties
- For example, [this](https://papers.nips.cc/paper/6746-a-new-alternating-direction-method-for-linear-programming.pdf) paper proposes an alternative way of solving linear programs with ADMM
- [The SCS method](https://stanford.edu/~boyd/papers/pdf/scs_long.pdf), used by default in CVXPy, is based on applying ADMM to the conic reformulation of the original problem
# One-Layer Atmosphere Model
Reference: Walter A. Robinson, Modeling Dynamic Climate Systems
```
import numpy as np
import matplotlib.pyplot as plt
plt.style.use("seaborn-dark")
# Step size
dt = 0.01
# Set up a 10 years simulation
tmin = 0
tmax = 10
t = np.arange(tmin, tmax + dt, dt)
n = len(t)
# Seconds per year
seconds_per_year = 365 * 24 * 60 * 60
# Albedo
albedo = 0.3
# Emissivity
e = 0.85
# Atmospheric Absorption
atmospheric_abs = 0.1
# Stefan-Boltzmann constant
sigma = 5.6696e-8 # W/m^2*K^4
# Solar constant
solar_const = 1367 # W/m^2
# Density of water
water_density = 1000 # kg/m^3
# Depth of the mixed layer
depth_mixed_layer = 50 # m
# Specific heat capacity of water
spec_heat_water = 4218 # J/kg*K
# Specific heat capacity of atmosphere
spec_heat_atm = 1004 # J/kg*K
# Gravitational acceleration
g = 9.81 # m/s^2
# Atmospheric pressure
atm_press = 1e5 # Pa (kg/m*s^2)
# Atmospheric mass per unit area
mass_atm = atm_press/g # kg/m^2
# Heat capacity of the ocean mixed layer (per unit area)
heat_capacity_water = water_density * depth_mixed_layer * spec_heat_water # J/(K*m^2)
# Heat capacity of the atmosphere (per unit area)
heat_capacity_atm = mass_atm * spec_heat_atm # J/(K*m^2)
# Initialize temperature
ts = np.zeros((n,)) # Surface temperature
ts[0] = 273.15
ta = np.zeros((n,)) # Atmospheric temperature
ta[0] = 273.15
# Inflows (Solar to earth, IR down) - Surface
# Absorbed solar energy
solar = solar_const/4 * (1 - albedo) * seconds_per_year
# Solar to earth
solar_to_earth = solar * (1 - atmospheric_abs)
# IR Down
ir_down = sigma * e * ta[0] ** 4 * seconds_per_year
# Outflows (IR) - Surface
ir = sigma * ts[0] ** 4 * seconds_per_year
# Inflows (solar to atmosphere, IR) - Atmosphere
# Solar to atmosphere
solar_to_atm = solar * atmospheric_abs
# Outflows (IR down, IR to space) - Atmosphere
# IR to space
ir_to_space = ir * (1 - e) + ir_down
# Flows of energy
# IR norm
ir_norm = np.zeros((n, ))
ir_norm[0] = ir/seconds_per_year
# IR down norm
ir_down_norm = np.zeros((n, ))
ir_down_norm[0] = ir_down/seconds_per_year
# Solar to earth norm
solar_to_earth_norm = np.zeros((n, ))
solar_to_earth_norm[0] = solar_to_earth/seconds_per_year
for k in range(1, n):
# Inflows (Solar to earth, IR down) - Surface
solar_to_earth = (solar_const/4 * (1 - albedo) * seconds_per_year) * (1 - atmospheric_abs)
ir_down = sigma * e * ta[k-1] ** 4 * seconds_per_year
# Outflows (IR) - Surface
ir = sigma * ts[k-1] ** 4 * seconds_per_year
# Calculate the temperature - Surface
energy_ts = ts[k-1] * heat_capacity_water + (solar_to_earth + ir_down - ir) * dt
ts[k] = energy_ts/heat_capacity_water
# Inflows (solar to atmosphere, IR) - Atmosphere
solar_to_atm = (solar_const/4 * (1 - albedo) * seconds_per_year) * atmospheric_abs
# Outflows (IR down, IR to space) - Atmosphere
ir_to_space = ir * (1 - e) + ir_down
# Calculate the temperature - Atmosphere
energy_ta = ta[k-1] * heat_capacity_atm + (ir + solar_to_atm - ir_down - ir_to_space) * dt
ta[k] = energy_ta/heat_capacity_atm
# Calculate IR norm, IR down, Solar to earth norm
ir_norm[k] = ir/seconds_per_year
ir_down_norm[k] = ir_down/seconds_per_year
solar_to_earth_norm[k] = solar_to_earth/seconds_per_year
# Convert to °C
ts = ts - 273.15
ta = ta - 273.15
# Plot Ts and Ta
fig, (ax1, ax2) = plt.subplots(2, 1, figsize = (8, 6))
ax1.set_xlabel("Years")
ax1.set_ylabel("Temperature (°C)")
ax1.set_title("Surface and atmospheric temperatures in the one-layer climate model")
ax1.plot(t, ts, c="tab:red")
ax1.plot(t, ta, c="tab:blue")
ax1.legend(("T surface","T atmosphere"))
ax1.grid()
# Plot the Flows
ax2.set_xlabel("Years")
ax2.set_ylabel("Flow of energy ($W/m^{2}$)")
ax2.set_title("Flows of energy to and from the surface in the one-layer model")
ax2.set_ylim(0, ir_norm.max() + 10)
ax2.plot(t, ir_norm, c="tab:purple")
ax2.plot(t, solar_to_earth_norm, c="tab:orange")
ax2.plot(t, ir_down_norm, c="tab:green")
ax2.grid()
ax2.legend(("IR", "Solar to earth", "IR down"))
plt.tight_layout()
plt.show()
print(f"Final Temperature, T={round(ts[-1])}°C")
```
# Math Operations
```
from __future__ import print_function
import torch
import numpy as np
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/pytorch_exercises"
torch.__version__
np.__version__
```
NOTE on notation
_x, _y, _z, ...: NumPy 0-d or 1-d arrays<br/>
_X, _Y, _Z, ...: NumPy 2-d or higer dimensional arrays<br/>
x, y, z, ...: 0-d or 1-d tensors<br/>
X, Y, Z, ...: 2-d or higher dimensional tensors
## Trigonometric functions
Q1. Calculate sine, cosine, and tangent of x, element-wise.
```
x = torch.FloatTensor([0., 1., 30, 90])
sinx = x.sin()
cosx = x.cos()
tanx = x.tan()
print("• sine x=", sinx)
print("• cosine x=", cosx)
print("• tangent x=", tanx)
assert np.allclose(sinx.numpy(), np.sin(x.numpy()))
assert np.allclose(cosx.numpy(), np.cos(x.numpy()) )
assert np.allclose(tanx.numpy(), np.tan(x.numpy()) )
```
Q2. Calculate inverse sine, inverse cosine, and inverse tangent of x, element-wise.
```
x = torch.FloatTensor([-1., 0, 1.])
asinx = x.asin()
acosx = x.acos()
atanx = x.atan()
print("• inverse sine x=", asinx)
print("• inversecosine x=", acosx)
print("• inverse tangent x=", atanx)
assert np.allclose(asinx.numpy(), np.arcsin(x.numpy()) )
assert np.allclose(acosx.numpy(), np.arccos(x.numpy()) )
assert np.allclose(atanx.numpy(), np.arctan(x.numpy()) )
```
## Hyperbolic functions
Q3. Calculate hyperbolic sine, hyperbolic cosine, and hyperbolic tangent of x, element-wise.
```
x = torch.FloatTensor([-1., 0, 1.])
sinhx = x.sinh()
coshx = x.cosh()
tanhx = x.tanh()
print("• hyperbolic sine x=", sinhx)
print("• hyperbolic cosine x=", coshx)
print("• hyperbolic tangent x=", tanhx)
assert np.allclose(sinhx.numpy(), np.sinh(x.numpy()))
assert np.allclose(coshx.numpy(), np.cosh(x.numpy()))
assert np.allclose(tanhx.numpy(), np.tanh(x.numpy()))
```
## Rounding
Q4. Predict the results of these.
```
x = torch.FloatTensor([2.1, 1.5, 2.5, 2.9, -2.1, -2.5, -2.9])
roundx = x.round()
floorx = x.floor()
ceilx = x.ceil()
truncx = x.trunc()
print("• roundx=", roundx)
print("• floorx=", floorx)
print("• ceilx=", ceilx)
print("• truncx=", truncx)
```
## Sum, product
Q5. Sum the elements of X along the first dimension, retaining that dimension.
```
X = torch.Tensor(
[[1, 2, 3, 4],
[5, 6, 7, 8]])
print(X.sum(dim=0, keepdim=True))
```
Q6. Return the product of all elements in X.
```
X = torch.Tensor(
[[1, 2, 3, 4],
[5, 6, 7, 8]])
print(X.prod())
```
Q7. Return the cumulative sum of all elements along the second axis in X.
```
X = torch.Tensor(
[[1, 2, 3, 4],
[5, 6, 7, 8]])
print(X.cumsum(dim=1))
```
Q8. Return the cumulative product of all elements along the second axis in X.
```
X = torch.Tensor(
[[1, 2, 3, 4],
[5, 6, 7, 8]])
print(X.cumprod(dim=1))
```
## Exponents and logarithms
Q9. Compute $e^x$, element-wise.
```
x = torch.Tensor([1., 2., 3.])
y = x.exp()
print(y)
assert np.array_equal(y.numpy(), np.exp(x.numpy()))
```
Q10. Compute logarithms of x element-wise.
```
x = torch.Tensor([1, np.e, np.e**2])
y = x.log()
print(y)
assert np.allclose(y.numpy(), np.log(x.numpy()))
```
## Arithmetic Ops
Q11. Add x and y element-wise.
```
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([-1, -2, -3])
z = x.add(y)
print(z)
assert np.array_equal(z.numpy(), np.add(x.numpy(), y.numpy()))
```
Q12. Subtract x from y element-wise.
```
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([-1, -2, -3])
z = y.sub(x)
print(z)
assert np.array_equal(z.numpy(), np.subtract(y.numpy(), x.numpy()))
```
Q13. Multiply x by y element-wise.
```
x = torch.Tensor([3, 4, 5])
y = torch.Tensor([1, 0, -1])
x_y = x.mul(y)
print(x_y)
assert np.array_equal(x_y.numpy(), np.multiply(x.numpy(), y.numpy()))
```
Q14. Divide x by y element-wise.
```
x = torch.FloatTensor([3., 4., 5.])
y = torch.FloatTensor([1., 2., 3.])
z = x.div(y)
print(z)
assert np.allclose(z.numpy(), np.true_divide(x.numpy(), y.numpy()))
```
Q15. Compute numerical negative value of x, element-wise.
```
x = torch.Tensor([1, -1])
negx = x.neg()
print(negx)
assert np.array_equal(negx.numpy(), np.negative(x.numpy()))
```
Q16. Compute the reciprocal of x, element-wise.
```
x = torch.Tensor([1., 2., .2])
y = x.reciprocal()
print(y)
assert np.array_equal(y.numpy(), np.reciprocal(x.numpy()))
```
Q17. Compute $X^Y$, element-wise.
```
X = torch.Tensor([[1, 2], [3, 4]])
Y = torch.Tensor([[1, 2], [1, 2]])
X_Y = X.pow(Y)
print(X_Y)
assert np.array_equal(X_Y.numpy(), np.power(X.numpy(), Y.numpy()))
```
Q18. Compute the remainder of x / y element-wise.
```
x = torch.Tensor([-3, -2, -1, 1, 2, 3])
y = 2.
z = x.remainder(y)
print(z)
assert np.array_equal(z.numpy(), np.remainder(x.numpy(), y))
```
Q19. Compute the fractional portion of each element of x.
```
x = torch.Tensor([1., 2.5, -3.2])
y = x.frac()
print(y)
assert np.allclose(y.numpy(), np.modf(x.numpy())[0])
```
## Comparison Ops
Q20. Return True if x and y have the same size and elements, otherwise False.
```
x = torch.randperm(3)
y = torch.randperm(3)
print("x=", x)
print("y=", y)
z = x.equal(y)
print(z)
#np.array_equal(x.numpy(), y.numpy())
```
Q21. Return 1 if an element of X is 0, otherwise 0.
```
X = torch.Tensor( [[-1, -2, -3], [0, 1, 2]] )
Y = X.eq(0)
print(Y)
assert np.allclose(Y.numpy(), np.equal(X.numpy(), 0))
```
Q22. Return 0 if an element of X is 0, otherwise 1.
```
X = torch.Tensor( [[-1, -2, -3], [0, 1, 2]] )
Y = X.ne(0)
print(Y)
assert np.allclose(Y.numpy(), np.not_equal(X.numpy(), 0))
```
Q23. Compute x >= y, x > y, x < y, and x <= y element-wise.
```
x = torch.randperm(3)
y = torch.randperm(3)
print("x=", x)
print("y=", y)
#1. x >= y
z = x.ge(y)
print("#1. x >= y", z)
#2. x > y
z = x.gt(y)
print("#2. x > y", z)
#3. x <= y
z = x.le(y)
print("#3. x <= y", z)
#4. x < y
z = x.lt(y)
print("#4. x < y", z)
```
## Miscellaneous
Q24. If an element of x is smaller than 3, replace it with 3.
And if an element of x is bigger than 7, replace it with 7.
```
x = torch.arange(0, 10)
y = x.clamp(min=3, max=7)
print(y)
assert np.array_equal(y.numpy(), np.clip(x.numpy(), a_min=3, a_max=7))
```
Q25. If an element of x is smaller than 3, replace it with 3.
```
x = torch.arange(0, 10)
y = x.clamp(min=3)
print(y)
assert np.array_equal(y.numpy(), np.clip(x.numpy(), a_min=3, a_max=None))
```
Q26. If an element of x is bigger than 7, replace it with 7.
```
x = torch.arange(0, 10)
y = x.clamp(max=7)
print(y)
assert np.array_equal(y.numpy(), np.clip(x.numpy(), a_min=None, a_max=7))
```
Q27. Compute square root of x element-wise.
```
x = torch.Tensor([1., 4., 9.])
y = x.sqrt()
print(y)
assert np.array_equal(y.numpy(), np.sqrt(x.numpy()))
```
Q28. Compute the reciprocal of the square root of x, element-wise.
```
x = torch.Tensor([1., 4., 9.])
y = x.rsqrt()
print(y)
assert np.allclose(y.numpy(), np.reciprocal(np.sqrt(x.numpy())))
```
Q29. Compute the absolute value of X.
```
X = torch.Tensor([[1, -1], [3, -3]])
Y = X.abs()
print(Y)
assert np.array_equal(Y.numpy(), np.abs(X.numpy()))
```
Q30. Compute an element-wise indication of the sign of x.
```
x = torch.Tensor([1, 3, 0, -1, -3])
y = x.sign()
print(y)
assert np.array_equal(y.numpy(), np.sign(x.numpy()))
```
Q31. Compute the sigmoid of x, element-wise.
```
x = torch.FloatTensor([1.2, 0.7, -1.3, 0.1])
y = x.sigmoid()
print(y)
assert np.allclose(y.numpy(), 1. / (1 + np.exp(-x.numpy())))
```
Q32. Interpolate X and Y linearly with a weight of .9 on Y.
```
X = torch.Tensor([1,2,3,4])
Y = torch.Tensor([10,10,10,10])
Z = X.lerp(Y, .9)
print(Z)
assert np.allclose(Z.numpy(), X.numpy() + (Y.numpy()-X.numpy())*.9)
```
# Introduction
In this tutorial, we will train a regression model with Foreshadow using the [House Pricing](https://www.kaggle.com/c/house-prices-advanced-regression-techniques) dataset from Kaggle.
# Getting Started
To get started with foreshadow, install the package using `pip install foreshadow`. This will also install the dependencies. Now create a simple python script that uses all the defaults with Foreshadow. Note that Foreshadow requires `Python >=3.6, <4.0`.
First import foreshadow related classes. Also import sklearn, pandas and numpy packages.
```
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
from sklearn.metrics import get_scorer
from sklearn.metrics import mean_squared_log_error
from foreshadow import Foreshadow
from foreshadow.intents import IntentType
from foreshadow.utils import ProblemType
pd.options.display.max_columns=None
RANDOM_SEED=42
np.random.seed(RANDOM_SEED)
```
# Load the dataset
```
df_train = pd.read_csv("train.csv")
X_df = df_train.drop(columns="SalePrice")
y_df = df_train[["SalePrice"]]
X_train, X_test, y_train, y_test = train_test_split(X_df, y_df, test_size=0.2)
X_train.head()
```
# Model Training Iteration 1 - ElasticNet
```
def measure(model, X_test, y_test):
y_pred = model.predict(X_test)
rmsle = np.sqrt(mean_squared_log_error(y_test, y_pred))
print('root mean squared log error = %5.4f' % rmsle)
return rmsle
shadow1 = Foreshadow(problem_type=ProblemType.REGRESSION, random_state=RANDOM_SEED, n_jobs=-1, estimator=ElasticNet(random_state=RANDOM_SEED))
_ = shadow1.fit(X_train, y_train)
_ = measure(shadow1, X_test, y_test)
```
### You might be curious how Foreshadow handled the input data. Let's take a look
```
shadow1.get_data_summary()
```
#### Foreshadow uses a machine learning model to identify the 'intent' of each feature. Three intents are supported as of v1.0: 'Categorical', 'Numeric' and 'Text'. Foreshadow transforms each feature intelligently according to its intent and statistics. Features not belonging to these three are tagged as 'Droppable'. For example, Id is droppable since it has a unique value for each row and will not provide any signal to the model. Also, in the above table, 'Label' in the intent row indicates that this is the target column.
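If you want to pull those tags out programmatically, a minimal sketch is below. It assumes `get_data_summary()` returns a pandas DataFrame laid out like the table above (one column per feature and an 'intent' row); this layout is an assumption, not documented behavior.
```
# Hypothetical sketch: list the features Foreshadow tagged as 'Droppable',
# assuming the summary DataFrame has features as columns and an 'intent' row.
summary = shadow1.get_data_summary()
droppable = [col for col in summary.columns if summary.loc['intent', col] == 'Droppable']
print(droppable)
```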
# Model Training Iteration 2 - Override
```
shadow2 = Foreshadow(problem_type=ProblemType.REGRESSION, random_state=RANDOM_SEED, n_jobs=-1, estimator=ElasticNet(random_state=RANDOM_SEED))
shadow2.override_intent('ExterQual', IntentType.CATEGORICAL)
shadow2.override_intent('ExterCond', IntentType.CATEGORICAL)
shadow2.override_intent('LotShape', IntentType.CATEGORICAL)
shadow2.override_intent('HeatingQC', IntentType.CATEGORICAL)
shadow2.override_intent('YearBuilt', IntentType.NUMERIC)
shadow2.override_intent('YearRemodAdd', IntentType.NUMERIC)
shadow2.override_intent('YrSold', IntentType.NUMERIC)
_ = shadow2.fit(X_train, y_train)
_ = measure(shadow2, X_test, y_test)
```
# Model Training Iteration 3 - AutoEstimator
#### Instead of trying one estimator, we can leverage the AutoEstimator to search over ML models and hyper-parameters. When we do not provide an estimator, Foreshadow will create the AutoEstimator automatically.
```
shadow3 = Foreshadow(problem_type=ProblemType.REGRESSION, allowed_seconds=300, random_state=RANDOM_SEED, n_jobs=-1)
shadow3.override_intent('ExterQual', IntentType.CATEGORICAL)
shadow3.override_intent('ExterCond', IntentType.CATEGORICAL)
shadow3.override_intent('LotShape', IntentType.CATEGORICAL)
shadow3.override_intent('HeatingQC', IntentType.CATEGORICAL)
shadow3.override_intent('YearBuilt', IntentType.NUMERIC)
shadow3.override_intent('YearRemodAdd', IntentType.NUMERIC)
shadow3.override_intent('YrSold', IntentType.NUMERIC)
_ = shadow3.fit(X_train, y_train)
_ = measure(shadow3, X_test, y_test)
```
# Model Training Iteration 4 - Customize Scoring Function in Search
#### The Kaggle competition uses root mean squared log error to rank results. Let's ask the AutoEstimator to optimize for this scoring function.
```
shadow4 = Foreshadow(problem_type=ProblemType.REGRESSION, allowed_seconds=300, random_state=RANDOM_SEED,
n_jobs=-1, auto_estimator_kwargs={"scoring": get_scorer('neg_mean_squared_log_error')})
shadow4.override_intent('ExterQual', IntentType.CATEGORICAL)
shadow4.override_intent('ExterCond', IntentType.CATEGORICAL)
shadow4.override_intent('LotShape', IntentType.CATEGORICAL)
shadow4.override_intent('HeatingQC', IntentType.CATEGORICAL)
shadow4.override_intent('YearBuilt', IntentType.NUMERIC)
shadow4.override_intent('YearRemodAdd', IntentType.NUMERIC)
shadow4.override_intent('YrSold', IntentType.NUMERIC)
_ = shadow4.fit(X_train, y_train)
rmsle = measure(shadow4, X_train, y_train)
```
## Compare Kaggle Leaderboard
```
leaderboard = pd.read_csv('house-prices-advanced-regression-techniques-publicleaderboard.csv')
leaderboard.sort_values(by='Score', ascending=True, inplace=True)
better_solutions = leaderboard[leaderboard.Score < rmsle]
ranking = len(better_solutions) * 100.0 / len(leaderboard)
print('Our solution ranked at %dth position within top %0.2f%%' % (len(better_solutions), ranking))
```
# Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.
## Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
* A really good [conceptual overview](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) of word2vec from Chris McCormick
* [First word2vec paper](https://arxiv.org/pdf/1301.3781.pdf) from Mikolov et al.
* [NIPS paper](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) with improvements for word2vec also from Mikolov et al.
* An [implementation of word2vec](http://www.thushv.com/natural_language_processing/word2vec-part-1-nlp-with-deep-learning-with-tensorflow-skip-gram/) from Thushan Ganegedara
* TensorFlow [word2vec tutorial](https://www.tensorflow.org/tutorials/word2vec)
## Word embeddings
When you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
```
import time
import numpy as np
import tensorflow as tf
import utils
```
Load the [text8 dataset](http://mattmahoney.net/dc/textdata.html), a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the `data` folder. Then you can extract it and delete the archive file to save storage space.
```
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
```
## Preprocessing
Here I'm fixing up the text to make training easier. This comes from the `utils` module I wrote. The `preprocess` function converts any punctuation into tokens, so a period is changed to ` <PERIOD> `. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
```
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
```
And here I'm creating dictionaries to convert words to integers and back again, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list `int_words`.
```
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
```
## Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
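To get a feel for the formula before implementing it, the tiny check below plugs a few hypothetical word frequencies into it with the same threshold $t = 10^{-5}$ used in the solution code; the frequencies are made up for illustration only.
```
import numpy as np

t = 1e-5
for f in [1e-2, 1e-3, 1e-5]:   # hypothetical word frequencies
    p_drop = max(0.0, 1 - np.sqrt(t / f))
    print("f(w) = {:g} -> P(discard) = {:.3f}".format(f, p_drop))
# f(w) = 0.01 -> P(discard) = 0.968
# f(w) = 0.001 -> P(discard) = 0.900
# f(w) = 1e-05 -> P(discard) = 0.000
```
So very frequent words are discarded most of the time, while words at or below the threshold frequency are always kept.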
I'm going to leave this up to you as an exercise. Check out my solution to see how I did it.
> **Exercise:** Implement subsampling for the words in `int_words`. That is, go through `int_words` and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to `train_words`.
```
from collections import Counter
import random
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if p_drop[word] < random.random()]
```
## Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From [Mikolov et al.](https://arxiv.org/pdf/1301.3781.pdf):
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
> **Exercise:** Implement a function `get_target` that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
```
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
```
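For a quick sanity check, here's how `get_target` behaves on a toy list of integer "words"; the exact output depends on the random window radius `R` drawn inside the function.
```
# Toy example only: integers stand in for integer-encoded words.
toy = list(range(10))
print(get_target(toy, idx=4, window_size=5))
# e.g. [2, 3, 5, 6] if R happens to be 2 (order may vary because a set is used)
```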
Here's a function that returns batches for our network. The idea is that it grabs `batch_size` words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
```
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
```
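You can peek at a single batch from the generator to confirm the shapes line up; the batch size here is just for illustration.
```
# Inputs and targets come back as flat lists of equal length,
# one entry per (input word, target word) pair.
x_batch, y_batch = next(get_batches(train_words, batch_size=4, window_size=5))
print(len(x_batch), len(y_batch))
```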
## Building the graph
From Chris McCormick's blog, we can see the general structure of our network.

The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the `inputs` and `labels` placeholders like normal.
> **Exercise:** Assign `inputs` and `labels` using `tf.placeholder`. We're going to be passing in integers, so set the data types to `tf.int32`. The batches we're passing in will have varying sizes, so set the batch sizes to [`None`]. To make things work later, you'll need to set the second dimension of `labels` to `None` or `1`.
```
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
```
## Embedding
The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix:

You don't actually need to do the matrix multiplication, you just need to select the row in the embedding matrix that corresponds to the input word. Then, the embedding matrix becomes a lookup table, you're looking up a vector the size of the hidden layer that represents the input word.
<img src="assets/word2vec_weight_matrix_lookup_table.png" width=500>
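Here's a tiny NumPy illustration (toy numbers only) of why the lookup is equivalent to the matrix multiplication: a one-hot vector times the weight matrix simply selects one row.
```
# Illustration only: one_hot @ W is the same as indexing a row of W.
W = np.arange(12).reshape(4, 3)     # pretend vocab of 4 words, 3 embedding features
one_hot = np.array([0, 0, 1, 0])    # "word" with index 2
print(one_hot @ W)                  # [6 7 8]
print(W[2])                         # [6 7 8] -- identical, no matmul needed
```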
> **Exercise:** Tensorflow provides a convenient function [`tf.nn.embedding_lookup`](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup) that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use `tf.nn.embedding_lookup` to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using [tf.random_uniform](https://www.tensorflow.org/api_docs/python/tf/random_uniform).
```
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs)
```
## Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called ["negative sampling"](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). Tensorflow has a convenient function to do this, [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss).
> **Exercise:** Below, create weights and biases for the softmax layer. Then, use [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss) to calculate the loss. Be sure to read the documentation to figure out how it works.
```
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(n_vocab))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b,
labels, embed,
n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
```
## Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
```
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
```
Restore the trained network if you need to:
```
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
```
## Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out [this post from Christopher Olah](http://colah.github.io/posts/2014-10-Visualizing-MNIST/) to learn more about T-SNE and other ways to visualize high-dimensional data.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
```
# Simple 10-class classification
```
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
import matplotlib.pyplot as plt
import warnings
# Suppress warnings (gets rid of some type-conversion warnings)
warnings.filterwarnings("ignore")
%matplotlib inline
```
### Generate some dummy data
```
classes = 10
data = np.random.random((1000, 100))
labels = np.random.randint(classes, size=(1000, 1))
```
### (Optional) Visualization of the data
This is not part of the Keras example, but it helps to understand what we are trying to do.
```
# Plot a 2D representation of the data, using t-SNE
from sklearn.manifold import TSNE
data_viz = TSNE(n_components=2).fit_transform(data)
print("Data dimensions after reduction: {}".format(data_viz.shape))
plt.scatter(data_viz[:,0], data_viz[:,1], c=labels[:,0], cmap=plt.cm.get_cmap("jet", classes))
plt.colorbar(ticks=range(classes))
```
#### Let's see what each example looks like
We can think of them as the images of "digits." We will actually train character recognition in future tutorials.
```
sampleSize = 10
samples = np.random.permutation(data.shape[0])[0:sampleSize].tolist()
fig=plt.figure(figsize=(15, 8))
for i in range(1, sampleSize+1):
fig.add_subplot(1, sampleSize, i)
plt.imshow(np.reshape(data[samples[i-1],:], (10,10)), interpolation='nearest', cmap="Blues")
plt.title('Class {}'.format(labels[samples[i-1]]))
plt.xlabel("Img {}".format(samples[i-1]))
```
## Finally, let's use Keras
### Create the model
```
# For a single-input model with 10 classes (categorical classification):
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(classes, activation='softmax'))
```
### Compile the model
```
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
```
### Transform labels (i.e., the outputs), to the shape expected by the model
```
# Convert labels to categorical one-hot encoding
one_hot_labels = keras.utils.to_categorical(labels[:,0], num_classes=classes)
# Optional: visualize the label transformation
rIdx = np.random.randint(0, labels.shape[0])
print("Label shapes before: {}".format(labels.shape))
print("\tLabel at random index {}:\n\t{}\n".format(rIdx, labels[rIdx]))
print("Label shapes after: {}".format(one_hot_labels.shape))
print("\tOne-hot encoded label at random index {} (same as above):\n\t{}\n".format(rIdx, one_hot_labels[rIdx, :]))
print("(Pos.)\t{}".format(np.array(range(0,10),dtype="float")))
```
### Train the model
Note how the loss decreases, while the accuracy increases, as the training goes through more and more epochs.
```
# Train the model, iterating on the data in batches of 32 samples
model.fit(data, one_hot_labels, epochs=250, batch_size=32, verbose=2)
predSetSize = 5
predData = np.random.random((predSetSize, 100))
samples = np.random.permutation(predData.shape[0])[0:predSetSize].tolist()
fig=plt.figure(figsize=(15, 8))
results = np.round(model.predict(predData, verbose=1), decimals=2)
resultLabels = np.argmax(results, axis=1)  # predicted class per sample (axis=1, not axis=0)
for i in range(1, predSetSize+1):
fig.add_subplot(1, predSetSize, i)
plt.imshow(np.reshape(predData[samples[i-1],:], (10,10)), interpolation='nearest', cmap="Blues")
plt.title('Class {}'.format(resultLabels[samples[i-1]]))
plt.xlabel("Img {}".format(samples[i-1]))
```
## Conclusions
This example is still abstract (i.e., we used random data), but it shows the general workflow. In the next tutorial, we will apply this to a meaningful dataset.
# MODEL ANALYSIS [TEST DATA]
#### Dependencies
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.metrics import brier_score_loss
LEN = range(70, 260, 10)
def decodePhed(x):
return 10**(-x/10.0)
```
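As a quick reminder, `decodePhed` converts a Phred-scaled mapping quality Q back into an error probability of $10^{-Q/10}$, so MQ 60 corresponds to roughly a one-in-a-million chance that the mapping is wrong.
```
# Quick check of the Phred decoding used throughout this notebook.
for q in [10, 30, 60]:
    print(q, decodePhed(q))   # 0.1, 0.001, 1e-06
```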
#### Load csv files
```
test_regular = list()
test_mems = list()
test_bow = list()
test_bow_mems = list()
test_mems_stat = list()
orig = list()
control_60 = list()
reg_fn = "../data/stats/test/tstats_r_{}.tsv"
m_fn = "../data/stats/test/tstats_m_{}.tsv"
b_fn = "../data/stats/test/tstats_b_{}.tsv"
bm_fn = "../data/stats/test/tstats_bm_{}.tsv"
ms_fn = "../data/stats/test/tstats_ms_{}.tsv"
for i in range(70, 260, 10):
test_regular.append(pd.read_csv(reg_fn.format(i), sep='\t'))
test_mems.append(pd.read_csv(m_fn.format(i), sep='\t'))
test_bow.append(pd.read_csv(b_fn.format(i), sep='\t'))
test_bow_mems.append(pd.read_csv(bm_fn.format(i), sep='\t'))
test_mems_stat.append(pd.read_csv(ms_fn.format(i), sep='\t'))
mq_size = test_regular[-1][test_regular[-1].aligner == 'recal']['aligner'].count()
control_60.append(np.ones(mq_size) * 60)
for i in test_regular:
orig.append(i[i.aligner == 'orig'])
```
#### Counting correct and incorrect mappings
```
correct_counts = list()
incorrect_counts = list()
for r, m , b, bm, ms, ori in zip(test_regular, test_mems, test_bow, test_bow_mems, test_mems_stat, orig):
r = r[r.aligner == 'recal']
m = m[m.aligner == 'recal']
b = b[b.aligner == 'recal']
bm = bm[bm.aligner == 'recal']
ms = ms[ms.aligner == 'recal']
rc_counts = r[r.correct == 1].correct.count()
ri_counts = r[r.correct == 0].correct.count()
mc_counts = m[m.correct == 1].correct.count()
mi_counts = m[m.correct == 0].correct.count()
bc_counts = b[b.correct == 1].correct.count()
bi_counts = b[b.correct == 0].correct.count()
bmc_counts = bm[bm.correct == 1].correct.count()
bmi_counts = bm[bm.correct == 0].correct.count()
msc_counts = ms[ms.correct == 1].correct.count()
msi_counts = ms[ms.correct == 0].correct.count()
oric_counts = ori[ori.correct == 1].correct.count()
orii_counts = ori[ori.correct == 0].correct.count()
# print("correct {}, {}, {}, {}, {}, {}".format(rc_counts, mc_counts, bc_counts, bmc_counts, msc_counts, oric_counts))
# print("incorrect {}, {}, {}, {}, {}, {}".format(ri_counts, mi_counts, bi_counts, bmi_counts, msi_counts, orii_counts))
correct_counts.append(rc_counts)
incorrect_counts.append(ri_counts)
incorrect_counts = np.array(incorrect_counts)
correct_counts = np.array(correct_counts)
plt.figure(figsize=(20,8))
plt.subplot(1, 2, 1)
plt.plot(LEN, incorrect_counts)
plt.xlabel('Length', fontsize=20)
plt.ylabel('Incorrect amounts', fontsize=20)
plt.title('DNA reads Length vs Incorrect Amount', fontsize=25);
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.subplot(1, 2, 2)
plt.plot(LEN, correct_counts)
plt.xlabel('Length', fontsize=20)
plt.ylabel('Correct amounts', fontsize=20)
plt.title('DNA reads Length vs Correct Amount', fontsize=25);
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.savefig("LengthVSIncorrect_test.png")
plt.savefig("LengthVSIncorrect_test.pdf")
plt.figure(figsize=(20,8))
plt.plot(LEN, incorrect_counts/correct_counts.astype('float'))
plt.xlabel('Read Length', fontsize=20)
plt.ylabel('Incorrect mapping amount %', fontsize=20)
plt.title('Incorrect mapping amount % vs DNA Read Length ', fontsize=30);
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.savefig("LengthVSIncorrectPer_test.png")
plt.savefig("LengthVSIncorrectPer_test.pdf")
```
#### Brier score
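The Brier score is the mean squared difference between the predicted probability of a correct mapping and the observed 0/1 outcome, so lower is better. A tiny check with the `sklearn` function imported above (made-up numbers):
```
# Confident, correct predictions give a score near 0; confident, wrong ones near 1.
print(brier_score_loss([1, 1, 0], [0.9, 0.8, 0.1]))   # 0.02
print(brier_score_loss([1, 1, 0], [0.1, 0.2, 0.9]))   # ~0.75
```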
```
r_bc = list()
m_bc = list()
b_bc = list()
bm_bc = list()
ms_bc = list()
ori_bc = list()
control60_bc = list()
for r, m , b, bm, ms, ori, con in zip(test_regular, test_mems, test_bow, test_bow_mems, test_mems_stat, orig, control_60):
rp = 1 - decodePhed(r.mq.values)
r = r.assign(p=pd.Series(rp).values)
r_bc.append(brier_score_loss(r.correct.values, r.p.values))
mp = 1 - decodePhed(m.mq.values)
m = m.assign(p=pd.Series(mp).values)
m_bc.append(brier_score_loss(m.correct.values, m.p.values))
bp = 1 - decodePhed(b.mq.values)
b = b.assign(p=pd.Series(bp).values)
b_bc.append(brier_score_loss(b.correct.values, b.p.values))
bmp = 1 - decodePhed(bm.mq.values)
bm = bm.assign(p=pd.Series(bmp).values)
bm_bc.append(brier_score_loss(bm.correct.values, bm.p.values))
msp = 1 - decodePhed(ms.mq.values)
ms = ms.assign(p=pd.Series(msp).values)
ms_bc.append(brier_score_loss(ms.correct.values, ms.p.values))
orip = 1 - decodePhed(ori.mq.values)
ori = ori.assign(p=pd.Series(orip).values)
ori_bc.append(brier_score_loss(ori.correct.values, ori.p.values))
conp = 1 - decodePhed(con)
control60_bc.append(brier_score_loss(ori.correct.values, conp))
r_bc = np.array(r_bc)
m_bc = np.array(m_bc)
b_bc = np.array(b_bc)
bm_bc = np.array(bm_bc)
ms_bc = np.array(ms_bc)
ori_bc = np.array(ori_bc)
control60_bc = np.array(control60_bc)
plt.figure(figsize=(20,10))
plt.plot(LEN, ori_bc, c='k', label='original', linewidth=6)
plt.plot(LEN, r_bc, c='b', label='mapping quality', linewidth=3)
plt.plot(LEN, m_bc, c='r', label='mems', linewidth=3)
plt.plot(LEN, b_bc, c='g', label='bag of words', linewidth=3)
plt.plot(LEN, bm_bc, c='m', label='bag of words with mems', linewidth=3)
plt.plot(LEN, ms_bc, c='y', label='mems stats', linewidth=3)
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.xlabel('Length', fontsize=30)
plt.xticks(fontsize=20)
plt.ylabel('Brier Score', fontsize=30)
plt.title('Brier Score vs DNA Read Length for model', fontsize=30);
plt.legend(fontsize=20);
plt.savefig("DNALengthVsBrierScore_test.png")
plt.savefig("DNALengthVsBrierScore_test.pdf")
```
The goal of this notebook:
1. Utilize a statistic (derived from a hypothesis test) to measure change within each polarization of a time series of SAR images.
2. Use a threshold determined from a single image pair to detect change throughout a time series of L-band imagery.
```
import rasterio
import numpy
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
from astropy.convolution import convolve
from tqdm import tqdm
import matplotlib.patches as mpatches
import geopandas as gpd
from rscube import (get_geopandas_features_from_array,
filter_binary_array_by_min_size,
scale_img,
polygonize_array_to_shapefile
)
```
# Setup Paths to Data
```
DATA_DIR = Path(f'data/')
DATA_DIR.exists()
hh_paths = sorted(list(DATA_DIR.glob('*/*hh*.tif')))
hv_paths = sorted(list(DATA_DIR.glob('*/*hv*.tif')))
vv_paths = sorted(list(DATA_DIR.glob('*/*vv*.tif')))
hv_paths
CHANGE_DIR = Path('out/change_maps')
CHANGE_DIR.mkdir(exist_ok=True, parents=True)
```
This will be a threshold associated with each polarization. We can save it if we want to reuse the threshold across the time series in this area.
```
if 'THRESHOLD_DICT' not in locals():
# keys = 'hh', 'hv', 'vv'
# values = threshold associated with the change statistics
THRESHOLD_DICT = {}
THRESHOLD_DICT
```
# Polarization
```
POL = 'hv'
```
# Read Tifs
```
with rasterio.open(hv_paths[0]) as ds:
profile = ds.profile
def read_arr(path):
with rasterio.open(path) as ds:
arr = (ds.read(1))
return arr
hv_ts = list(map(read_arr, hv_paths))
hh_ts = list(map(read_arr, hh_paths))
vv_ts = list(map(read_arr, vv_paths))
```
# Change Statistic and Inspection
We use a patch change detector elaborated in this [paper](https://www.researchgate.net/publication/229390532_How_to_Compare_Noisy_Patches_Patch_Similarity_Beyond_Gaussian_Noise) and simply add this metric channel by channel.
$$
s_{p} = \log\left(\frac{xy}{(x + y)^2} \right)
$$
where $x$ and $y$ are the backscatter values for a given polarization $p$ at a pixel for different times. We compute $s_p$ within a patch using a convolution.
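To build some intuition before the implementation: when $x = y$ the statistic equals $\log(1/4) \approx -1.39$, and it becomes more negative as the two backscatter values diverge, which is why change pixels fall in the low tail and we later threshold at a low percentile. A quick check with made-up backscatter values:
```
import numpy as np

def s(x, y):
    return np.log(x * y / (x + y) ** 2)

print(s(0.1, 0.1))   # -1.386  (identical backscatter: the "no change" baseline)
print(s(0.1, 0.2))   # -1.504
print(s(0.1, 0.8))   # -2.315  (large change -> much more negative)
```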
```
PATCH_SIZE = 7
def get_change_statistic(img_0, img_1, patch_size=PATCH_SIZE, mask=None):
X_0 = np.clip(img_0, 1e-3, 1)
X_1 = np.clip(img_1, 1e-3, 1)
glr = np.log(X_0 * X_1 / (X_0 + X_1)**2)
if mask is not None:
glr[mask] = np.nan
kernel = np.ones((patch_size, patch_size))
change_statistic = convolve(glr,
kernel,
boundary='extend',
nan_treatment='interpolate',
normalize_kernel=True,
preserve_nan=True)
return change_statistic
```
We need a basic water mask, which we obtain via the HV image.
```
# Ignore the nodata areas and low db (specifically below -22)
mask = np.isnan(hv_ts[0]) | (10 * np.log10(hv_ts[0]) < -22)
# Remove small land areas from water
mask = 1 - filter_binary_array_by_min_size(1 - mask.astype(int), 140).astype(bool)
# Remove small water areas from land
mask = filter_binary_array_by_min_size(mask.astype(int), 140).astype(bool)
plt.imshow(mask, interpolation='none')
p = profile.copy()
p['dtype'] = 'uint8'
p['nodata'] = None
with rasterio.open('out/hv_mask.tif', 'w', **p) as ds:
ds.write(mask.astype(np.uint8), 1)
IND_0 = 0
IND_1 = 2
def get_ts(pol):
if pol == 'hh':
ts = hh_ts
elif pol == 'hv':
ts = hv_ts
elif pol == 'vv':
ts = vv_ts
else:
raise ValueError('choose hh, hv, vv')
return ts
ts = get_ts(POL)
change_statistic = get_change_statistic(ts[IND_0], ts[IND_1],
mask = mask
)
plt.figure(figsize=(10, 10))
plt.imshow(change_statistic, interpolation='none')
sy = np.s_[1_000:2_000]
sx = np.s_[4000:5000]
plt.figure(figsize=(10, 10))
plt.imshow(change_statistic[sy, sx], interpolation='none')
plt.colorbar()
PERCENTILE = 15
T = np.nanpercentile(change_statistic, PERCENTILE)
data_mask = ~np.isnan(change_statistic)
C = np.zeros(change_statistic.shape)
C[data_mask] = (change_statistic[data_mask] < T)
plt.imshow(C, interpolation='none')
plt.figure(figsize=(10, 10))
plt.imshow(C[sy, sx], interpolation='none')
SIZE = PATCH_SIZE**2
def morphological_filter(img):
return filter_binary_array_by_min_size(img, SIZE)
plt.figure(figsize=(10, 10))
C = morphological_filter(C)
plt.imshow(morphological_filter(C), interpolation='none')
plt.figure(figsize=(10, 10))
plt.imshow(C[sy, sx], interpolation='none')
THRESHOLD_DICT[POL] = T
DEST_DIR = (CHANGE_DIR/'_test_pairs')
DEST_DIR.mkdir(exist_ok = True, parents=True)
polygonize_array_to_shapefile(C, profile, DEST_DIR/f'{POL}_{IND_0}_{IND_1}', mask=~(C.astype(bool)))
```
# Inspecting Change Across the Time Series
```
n = len(ts)
pol_thresh = THRESHOLD_DICT[POL]
change_statistic_ts = [get_change_statistic(ts[i],
ts[i + 1],
mask=mask)
for i in tqdm(range(n-1))]
```
Let's make sure the histogram is roughly well behaved across pairs.
```
for cs in change_statistic_ts:
plt.figure(figsize=(5, 3))
data = (cs[~np.isnan(cs)])
plt.hist(data, bins=50)
plt.title(f'median$=${np.median(data):1.2f}')
```
We now apply the threshold and morphological filter to get a change map.
```
def change_determination(change_statistic):
data_mask = ~np.isnan(change_statistic)
C = np.zeros(change_statistic.shape)
C[data_mask] = (change_statistic[data_mask] < pol_thresh)
C = morphological_filter(C)
return C
change_ts = list(map(change_determination, tqdm(change_statistic_ts)))
```
Let's check the results for different indices.
```
plt.figure(figsize=(10, 10))
plt.imshow(change_ts[0][sy, sx], interpolation='none')
plt.figure(figsize=(10, 10))
plt.imshow(change_ts[1][sy, sx], interpolation='none')
plt.figure(figsize=(10, 10))
plt.imshow(change_ts[2][sy, sx], interpolation='none')
```
## Save Each Change Map
We are going to create a dictionary mapping `index in the time series + 1 --> date`. It's based on the naming convention we used for the images. Note that strings can be sliced just like lists in Python, so we can grab the pieces of the file name that are relevant.
```
def format_date(date_str):
return f'{date_str[0:4]}-{date_str[4:6]}-{date_str[6:]}'
date_dict = {(k + 1): format_date(str(path.name)[14:22]) for (k, path) in enumerate(hv_paths[1:])}
date_dict[0] = 'No Change'
date_dict
```
Now, we will write a shapefile for each pair as well as save a binary image (tif file) of the same results.
```
DEST_DIR = (CHANGE_DIR/POL)
DEST_DIR.mkdir(exist_ok = True, parents=True)
TIF_DIR = DEST_DIR/'tifs'
TIF_DIR.mkdir(exist_ok = True, parents=True)
p = profile.copy()
p['dtype'] = 'uint8'
p['nodata'] = None
def write_pairwise_changes(k):
C = change_ts[k]
date_str = date_dict[k+1]
dest_path_shp = DEST_DIR/f'{POL}_{k}_{date_str}'
dest_path_tif = TIF_DIR/f'{POL}_{k}_{date_str}.tif'
polygonize_array_to_shapefile(C, p, dest_path_shp, mask=~(C.astype(bool)))
with rasterio.open(dest_path_tif, 'w', **p) as ds:
ds.write(C.astype(np.uint8), 1)
return dest_path_shp, dest_path_tif
list(map(write_pairwise_changes, tqdm(range(n-1))))
```
## Combine Pairwise Changes into Single Change Map
We'll set each change map to an integer corresponding to its `index + 1` in the time series.
```
change_map = np.zeros(change_ts[0].shape)
for k in range(len(change_ts)):
ind = change_ts[k].astype(bool)
change_map[ind] = (change_ts[k][ind] * (k + 1))
fig, ax = plt.subplots(figsize=(10, 10))
cmap='tab20c'
im = ax.imshow(change_map, cmap=cmap, interpolation='none')
values = range(0, len(change_ts) + 1)
colors = [im.cmap(im.norm(value)) for value in values]
patches = [mpatches.Patch(color=colors[i], label=f'{date_dict[i]}') for i in range(len(values)) ]
plt.legend(handles=patches, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., fontsize=15)
plt.axis('off')
fig, ax = plt.subplots(figsize=(10, 10))
im = ax.imshow(change_map[sy, sx], cmap=cmap, interpolation='none')
plt.legend(handles=patches, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., fontsize=15)
plt.axis('off')
p = profile.copy()
p['dtype'] = 'uint8'
p['nodata'] = 0
with rasterio.open(DEST_DIR/f'{POL}_change_map_combined.tif', 'w', **p) as ds:
ds.write(change_map.astype(np.uint8), 1)
```
## Save Combined Change Map as shapefile
```
m = np.zeros(change_map.shape, dtype=bool)
features = get_geopandas_features_from_array(change_map.astype(np.uint8), profile['transform'], mask=(change_map==0))
df = gpd.GeoDataFrame.from_features(features, crs=profile['crs'])
df.label = df.label.astype('int')
df.head()
def date_trans(label):
return date_dict[label]
df['date'] = df.label.map(date_trans)
df.head()
fig, ax = plt.subplots(figsize=(10, 10))
df.plot(column='date', ax=ax, legend=True)
df.to_file(DEST_DIR/f'{POL}_change_map_combined_shp')
```
## Small introduction to Numpy
## Fast, faster, NumPy
<img src="images/numpy.png" width=200 align=right />
NumPy allows us to run mathematical operations over whole arrays in an efficient manner.
NumPy provides several advantages for Python users:
- powerful n-dimensional arrays
- advanced functions
- can integrate C/C++ and Fortran code
- efficient linear algebra, random number generation and Fourier transformation
### Vectorization
When looping over a list or an array in pure Python, there's a lot of overhead involved. Vectorized operations in NumPy delegate the looping internally to highly optimized C and Fortran functions.
Let's test the speed of NumPy and create an array consisting of True and False values. Assume we want to count how many times we have a transition from False to True. First we will use a classic Python loop.
```
import numpy as np
np.random.seed(123)
x = np.random.choice([False, True], size=100000)
def transitions(x):
count = 0
for i, j in zip(x[:-1], x[1:]):
if j and not i:
count += 1
return count
#transitions(x)
%timeit transitions(x)
```
Now we can try the same with NumPy.
```
%timeit np.count_nonzero(x[:-1] < x[1:])
```
## Numpy arrays
The core class is the numpy ndarray (n-dimensional array). We can initialize a numpy array from nested Python lists.
#### Differences Between Python Lists and Numpy Arrays
- All elements in a numpy arrays must be the same data type
- Numpy arrays support arithmetic and other mathematical operations that run on each element of the array
- Numpy arrays can store data along multiple dimensions. This makes numpy arrays a very efficient data structure for large datasets. (The short sketch after this list illustrates the first two points.)
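A minimal sketch contrasting the two: a mixed-type list is coerced to a single dtype when turned into an array, and `+` means concatenation for lists but element-wise addition for arrays.
```
import numpy as np

py_list = [1, 2, 3]
np_arr = np.array([1, 2, 3])

print(py_list + py_list)            # [1, 2, 3, 1, 2, 3] -> concatenation
print(np_arr + np_arr)              # [2 4 6]            -> element-wise addition

print(np.array([1, 2.5, 3]).dtype)  # float64: everything coerced to one dtype
```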
This first example below shows how to generate different numpy arrays. For numpy arrays, brackets ```[]``` are used to assign and identify the dimensions of the numpy arrays. First we want to create a 1-dimensional array.
```
avg_precip = np.array([0.70, 0.75, 1.85, 1.90, 1.20, 0.9])
print(avg_precip)
```
In order to create a 2-dimensional array, we need to specify two sets of brackets ```[]```, the outer set that defines the entire array structure and inner sets that define the rows of the individual arrays.
```
min_max_temp_monthly = np.array([
[-2.11, -2.34, 1.40, 4.22, 9.34, 12.65, 14.26, 14.33, 11.19, 6.03, 2.33, 0.12],
[3.00, 4.00, 9.33, 13.45, 19.72, 22.94, 24.99, 24.03, 19.28, 13.44, 7.03, 4.33]
])
print(min_max_temp_monthly)
```
Of course we can create as many dimensions we want
```
# 3-dimensional array
multi_array = np.array([[[1,2,3,4], [5,6,7,8], [9,10,11,12],[13,14,15,16]],[[17,18,19,20], [21,22,23,24], [25,26,27,28],[30,31,32,33]]])
multi_array
```
Numpy also has some in-built functions to create certain types of numpy arrays.
```
a = np.zeros((2,2)) # Create an array of all zeros
print(a) # Prints "[[ 0. 0.]
# [ 0. 0.]]"
b = np.ones((1,2)) # Create an array of all ones
print(b) # Prints "[[ 1. 1.]]"
c = np.full((2,2), 7) # Create a constant array
print(c) # Prints "[[ 7. 7.]
# [ 7. 7.]]"
d = np.eye(2) # Create a 2x2 identity matrix
print(d) # Prints "[[ 1. 0.]
# [ 0. 1.]]"
e = np.random.random((2,2)) # Create an array filled with random values
print(e) # Might print "[[ 0.91940167 0.08143941]
# [ 0.68744134 0.87236687]]"
f = np.linspace(1, 15, 3) # The linspace() function returns numbers evenly spaced over a specified intervals.
print(f) # Say we want 3 evenly spaced points from 1 to 15, we can easily use this.
# linspace() takes the third argument as the number of datapoints to be created
g= np.arange(3,10) # Lists the natural numbers from 3 to 9, as the number in the second position is excluded
print(g)
```
Let's import some real world data into NumPy. You can easily create new numpy arrays by importing numeric data from text files using the ```np.genfromtxt``` function. To start, let's have a look at the top few lines of our data file.
```
f= open("../Data/non-spatial/climate-wue/monthly_climate_wuerzburg_18810101_20201231.txt",'r')
fl =f.readlines()
for x in fl:
print(x)
f.close()
```
Lets import the data into numpy and see what are we dealing with. Therefore we can use the ```np.genfromtxt()``` function
```
station_data = np.genfromtxt('../Data/non-spatial/climate-wue/monthly_climate_wuerzburg_18810101_20201231.txt', skip_header=1, delimiter = ';' )
print(station_data)
```
We can now start to work with the data. First let us have a look at it.
```
print(type(station_data))
print(station_data.shape)
print(station_data.dtype)
```
Okay we can see that we now have created a numpy array with 1680 rows and 17 columns. The data are floating point values with 64-bit precision.
### Working with numpy data - Index and slicing
<img src="images/anatomyarray.png" width=800 />
Accessing single values in a 1-dimensional numpy array is straight forward. Always remember that indexing starts with 0.
```
avg_precip = np.array([0.70, 0.75, 1.85, 1.90, 1.20, 0.9])
avg_precip[0]
avg_precip[3]
```
In case of our station data we are dealing with a 2-dimensional datset. So if we want to access one value we just need one index for each dimsension
```
station_data[0,0]
```
Similar to Python lists, numpy arrays can be sliced
```
station_data[:5,0]
station_data[:5,:2]
```
Let's use this technique to slice our data up into different columns and create new variables with the column data. The idea is to cut our 2D data into 3 separate columns that will be easier to work with. We will use the following columns: MESS_DATUM_BEGINN (start of measurement), MO_TN (monthly mean of the air temperature minimum) and MO_TX (monthly mean of the air temperature maximum).
```
date = station_data[:, 1]
temp_max = station_data[:, 8]
temp_min = station_data[:, 9]
```
Now we can already start to do some basic calculations. Useful methods include mean(), min(), max(), and std().
```
print(date.min())
print(date.max())
```
Okay, we now know that we have measurements from January 1881 until December 2020. Let's calculate the mean of the monthly maximum temperatures.
```
print(temp_max.mean())
```
Ooops! This seems wrong. Let's have a look at our data again
```
temp_max
```
We can see that we have a lot of values of -999. These are no-data values. In order to make correct calculations we first have to clean our dataset. First, we need to identify a useful test for identifying missing data. We could convert the data to integer values everywhere and test for values being equal to -999, but there's an even easier option. Since we know all of our data are dates or temperatures, we can simply look for numbers below -998 to identify missing data.
```
data_mask = (station_data < -998)
station_data[data_mask] = np.nan
station_data
date = station_data[:, 1]
temp_max = station_data[:, 8]
temp_min = station_data[:, 9]
temp_max.mean()
```
In the last example we can see that mean() returned nan because of the missing values; the same happens with min() and max(). If we just want the min and max values while ignoring nan, we can use NumPy's nanmin()/nanmax() functions.
```
np.nanmin(temp_max)
np.nanmax(temp_max)
```
But let's assume we want to get rid of the nan values. First of all we can count all missing values in our temp_min array. To do this, we'll need two new NumPy functions, called np.count_nonzero() and np.isnan().
```
print("Number of missing dates:", np.count_nonzero(np.isnan(temp_min)))
```
Now we know the number of nan values in the temp_min array. Let's remove them.
```
clean_data = ~np.isnan(temp_min)
temp_min_clean = temp_min[clean_data ]
temp_min_clean
```
And of course we can use the same mask to clean also the other arrays
```
clean_date = date[clean_data]
temp_max_clean = temp_max[clean_data]
temp_max_clean
```
Of course we can use always matplotlib to visualize our data
```
import matplotlib.pylab as plt
plt.plot(temp_max_clean)
```
OK, now let’s use a range of dates to find the average maximum temperature for one year. In this case, let’s go for 2010.
```
temp_min_2010 = temp_min_clean[(clean_date >= 20100101) & (clean_date <= 20101231)]
temp_min_2010.mean()
```
Next we want to calculate average monthly temperatures. Therefore we first need to convert our dates to strings.
```
date_clean_str = (clean_date.astype(int)).astype(str)
date_clean_str
```
Now we can extract just the year from our dates.
```
year = [datenow[0:4] for datenow in date_clean_str]
year = np.array(year)
year
```
...now we do the same for month and day
```
month = [datenow[4:6] for datenow in date_clean_str]
month = np.array(month)
day = [datenow[6:8] for datenow in date_clean_str]
day = np.array(day)
```
Let’s take 2010 again as our example and find the average temperatures for each month in 2010
```
means_2010 = np.zeros(12)
index = 0
for month_now in np.unique(month):
means_2010[index] = temp_min_clean[(month == month_now) & (year == '2010')].mean()
index = index + 1
print(means_2010)
temp_mean = np.zeros(temp_min_clean.shape)
for i, temp in enumerate(temp_min_clean):
temp_mean[i] = (temp_min_clean[i] + temp_max_clean[i]) / 2
temp_mean
```
Fortunately, we don't need a loop to make element-wise calculations.
```
arr = np.arange(1,21) # Numbers from 1 to 20
arr
arr * arr # Multiplies each element by itself
arr - arr # Subtracts each element from itself
arr + arr # Adds each element to itself
arr / arr # Divides each element by itself
temp_mean = (temp_min_clean + temp_max_clean) / 2
temp_mean
```
You can also use the functions provided by numpy itself
```
x = np.array([[1,2,3,4],[5,6,7,8]])
y = np.array([[7,8,9,10],[11,12,13,14]])
# Elementwise sum
np.add(x, y)
# Elementwise difference
np.subtract(x, y)
# Elementwise product
np.multiply(x, y)
# Elementwise division
np.divide(x, y)
# Elementwise square root
np.sqrt(x)
temp_mean = np.divide(np.add(temp_min_clean, temp_max_clean), 2)
temp_mean
```
### Using functions
This makes it quite easy for us to do math with NumPy. For example, we could convert our temperatures from Celsius to Fahrenheit:
```
mean_clean_F = 1.8 * temp_mean + 32
mean_clean_F
```
# Time Series Prediction
**Objectives**
1. Build a linear, DNN and CNN model in keras to predict stock market behavior.
2. Build a simple RNN model and a multi-layer RNN model in keras.
3. Combine RNN and CNN architecture to create a keras model to predict stock market behavior.
In this lab we will build a custom Keras model to predict stock market behavior using the stock market dataset we created in the previous labs. We'll start with a linear, DNN and CNN model.
Since the features of our model are sequential in nature, we'll next look at how to build various RNN models in keras. We'll start with a simple RNN model and then see how to create a multi-layer RNN in keras. We'll also see how to combine features of 1-dimensional CNNs with a typical RNN architecture.
We will be exploring a lot of different model types in this notebook. To keep track of your results, record the accuracy on the validation set in the table here. In machine learning there is rarely a "one-size-fits-all" solution, so feel free to test out different hyperparameters (e.g. train steps, regularization, learning rates, optimizers, batch size) for each of the models. Keep track of your model performance in the chart below.
| Model | Validation Accuracy |
|----------|:---------------:|
| Baseline | 0.295 |
| Linear | -- |
| DNN | -- |
| 1-d CNN | -- |
| simple RNN | -- |
| multi-layer RNN | -- |
| RNN using CNN features | -- |
| CNN using RNN features | -- |
## Load necessary libraries and set up environment variables
```
import os
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Dense, DenseFeatures,
Conv1D, MaxPool1D,
Reshape, RNN,
LSTM, GRU, Bidirectional)
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint
from tensorflow.keras.optimizers import Adam
# To plot pretty figures
%matplotlib inline
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# For reproducible results.
from numpy.random import seed
seed(1)
tf.random.set_seed(2)
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "your-gcp-bucket-here" # REPLACE WITH YOUR BUCKET
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
%env
PROJECT = PROJECT
BUCKET = BUCKET
REGION = REGION
```
## Explore time series data
We'll start by pulling a small sample of the time series data from BigQuery and write some helper functions to clean up the data for modeling. We'll use the data from the `eps_percent_change_sp500` table in BigQuery. The `close_values_prior_260` column contains the close values for any given stock for the previous 260 days.
```
%%time
bq = bigquery.Client(project=PROJECT)
bq_query = '''
#standardSQL
SELECT
symbol,
Date,
direction,
close_values_prior_260
FROM
`stock_market.eps_percent_change_sp500`
LIMIT
100
'''
df_stock_raw = bq.query(bq_query).to_dataframe()
df_stock_raw.head()
```
The function `clean_data` below does three things:
1. First, we'll remove any inf or NA values
2. Next, we parse the `Date` field to read it as a string.
3. Lastly, we convert the label `direction` into a numeric quantity, mapping 'DOWN' to 0, 'STAY' to 1 and 'UP' to 2.
```
def clean_data(input_df):
"""Cleans data to prepare for training.
Args:
input_df: Pandas dataframe.
Returns:
Pandas dataframe.
"""
df = input_df.copy()
# TF doesn't accept datetimes in DataFrame.
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
df['Date'] = df['Date'].dt.strftime('%Y-%m-%d')
# TF requires numeric label.
df['direction_numeric'] = df['direction'].apply(lambda x: {'DOWN': 0,
'STAY': 1,
'UP': 2}[x])
return df
df_stock = clean_data(df_stock_raw)
df_stock.head()
```
## Read data and preprocessing
Before we begin modeling, we'll preprocess our features by scaling to the z-score. This will ensure that the range of the feature values being fed to the model are comparable and should help with convergence during gradient descent.
```
STOCK_HISTORY_COLUMN = 'close_values_prior_260'
COL_NAMES = ['day_' + str(day) for day in range(0, 260)]
LABEL = 'direction_numeric'
def _scale_features(df):
"""z-scale feature columns of Pandas dataframe.
Args:
features: Pandas dataframe.
Returns:
Pandas dataframe with each column standardized according to the
values in that column.
"""
avg = df.mean()
std = df.std()
return (df - avg) / std
def create_features(df, label_name):
"""Create modeling features and label from Pandas dataframe.
Args:
df: Pandas dataframe.
label_name: str, the column name of the label.
Returns:
Pandas dataframe
"""
# Expand 1 column containing a list of close prices to 260 columns.
time_series_features = df[STOCK_HISTORY_COLUMN].apply(pd.Series)
# Rename columns.
time_series_features.columns = COL_NAMES
time_series_features = _scale_features(time_series_features)
# Concat time series features with static features and label.
label_column = df[LABEL]
return pd.concat([time_series_features,
label_column], axis=1)
df_features = create_features(df_stock, LABEL)
df_features.head()
```
Let's plot a few examples and see that the preprocessing steps were implemented correctly.
```
ix_to_plot = [0, 1, 9, 5]
fig, ax = plt.subplots(1, 1, figsize=(15, 8))
for ix in ix_to_plot:
label = df_features['direction_numeric'].iloc[ix]
example = df_features[COL_NAMES].iloc[ix]
ax = example.plot(label=label, ax=ax)
ax.set_ylabel('scaled price')
ax.set_xlabel('prior days')
ax.legend()
```
### Make train-eval-test split
Next, we'll make repeatable splits for our train/validation/test datasets and save these datasets to local csv files. The query below will take a subsample of the entire dataset and then create a 70-15-15 split for the train/validation/test sets.
```
def _create_split(phase):
"""Create string to produce train/valid/test splits for a SQL query.
Args:
phase: str, either TRAIN, VALID, or TEST.
Returns:
String.
"""
floor, ceiling = '2002-11-01', '2010-07-01'
if phase == 'VALID':
floor, ceiling = '2010-07-01', '2011-09-01'
elif phase == 'TEST':
floor, ceiling = '2011-09-01', '2012-11-30'
return '''
WHERE Date >= '{0}'
AND Date < '{1}'
'''.format(floor, ceiling)
def create_query(phase):
"""Create SQL query to create train/valid/test splits on subsample.
Args:
phase: str, either TRAIN, VALID, or TEST.
sample_size: str, amount of data to take for subsample.
Returns:
String.
"""
basequery = """
#standardSQL
SELECT
symbol,
Date,
direction,
close_values_prior_260
FROM
`stock_market.eps_percent_change_sp500`
"""
return basequery + _create_split(phase)
bq = bigquery.Client(project=PROJECT)
for phase in ['TRAIN', 'VALID', 'TEST']:
# 1. Create query string
query_string = create_query(phase)
# 2. Load results into DataFrame
df = bq.query(query_string).to_dataframe()
# 3. Clean, preprocess dataframe
df = clean_data(df)
df = create_features(df, label_name='direction_numeric')
# 4. Write DataFrame to CSV
if not os.path.exists('../data'):
os.mkdir('../data')
df.to_csv('../data/stock-{}.csv'.format(phase.lower()),
index_label=False, index=False)
print("Wrote {} lines to {}".format(
len(df),
'../data/stock-{}.csv'.format(phase.lower())))
ls -la ../data
```
## Modeling
For experimentation purposes, we'll train various models using data we can fit in memory using the `.csv` files we created above.
```
N_TIME_STEPS = 260
N_LABELS = 3
Xtrain = pd.read_csv('../data/stock-train.csv')
Xvalid = pd.read_csv('../data/stock-valid.csv')
ytrain = Xtrain.pop(LABEL)
yvalid = Xvalid.pop(LABEL)
ytrain_categorical = to_categorical(ytrain.values)
yvalid_categorical = to_categorical(yvalid.values)
```
To monitor training progress and compare evaluation metrics for different models, we'll use the function below to plot metrics captured from the training job such as training and validation loss or accuracy.
```
def plot_curves(train_data, val_data, label='Accuracy'):
"""Plot training and validation metrics on single axis.
Args:
        train_data: list, metrics obtained from training data.
val_data: list, metrics obtained from validation data.
label: str, title and label for plot.
Returns:
Matplotlib plot.
"""
plt.plot(np.arange(len(train_data)) + 0.5,
train_data,
"b.-", label="Training " + label)
plt.plot(np.arange(len(val_data)) + 1,
val_data, "r.-",
label="Validation " + label)
plt.gca().xaxis.set_major_locator(mpl.ticker.MaxNLocator(integer=True))
plt.legend(fontsize=14)
plt.xlabel("Epochs")
plt.ylabel(label)
plt.grid(True)
```
### Baseline
Before we begin modeling in keras, let's create a benchmark using a simple heuristic. Let's see what kind of accuracy we would get on the validation set if we predict the majority class of the training set.
```
sum(yvalid == ytrain.value_counts().idxmax()) / yvalid.shape[0]
```
Ok. So just naively guessing the most common outcome `UP` will give about 29.5% accuracy on the validation set.
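As a quick sanity check (not part of the original lab), we can look at the label distribution of each split. Because the splits are made by date range, the class balance in the validation period can differ from the training period, which is why the majority-class guess lands below the 33% you might expect for three balanced classes.
```
# Hypothetical check: compare label frequencies between train and validation.
# The splits are chronological, so the class balance may drift between them.
print(ytrain.value_counts(normalize=True))
print(yvalid.value_counts(normalize=True))
```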
### Linear model
We'll start with a simple linear model, mapping our sequential input to a single fully dense layer.
```
# TODO 1a
model = Sequential()
model.add(Dense(units=N_LABELS,
activation='softmax',
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=30,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
```
The accuracy seems to level out pretty quickly. To report the accuracy, we'll average the accuracy on the validation set across the last few epochs of training.
```
np.mean(history.history['val_accuracy'][-5:])
```
### Deep Neural Network
The linear model is an improvement on our naive benchmark. Perhaps we can do better with a more complicated model. Next, we'll create a deep neural network with keras. We'll experiment with a two-layer DNN here, but feel free to try a more complex model or add other techniques to try and improve your performance.
```
# TODO 1b
dnn_hidden_units = [16, 8]
model = Sequential()
for layer in dnn_hidden_units:
model.add(Dense(units=layer,
activation="relu"))
model.add(Dense(units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=10,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
```
### Convolutional Neural Network
The DNN does slightly better. Let's see how a convolutional neural network performs.
A 1-dimensional convolution can be useful for extracting features from sequential data or deriving features from shorter, fixed-length segments of the data set. Check out the documentation for how to implement a [Conv1d in Tensorflow](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1D). Max pooling is a downsampling strategy commonly used in conjunction with convolutional neural networks. Next, we'll build a CNN model in keras using the `Conv1D` to create convolution layers and `MaxPool1D` to perform max pooling before passing to a fully connected dense layer.
```
# TODO 1c
model = Sequential()
# Convolutional layer
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
model.add(Conv1D(filters=5,
kernel_size=5,
strides=2,
padding="valid",
input_shape=[None, 1]))
model.add(MaxPool1D(pool_size=2,
strides=None,
padding='valid'))
# Flatten the result and pass through DNN.
model.add(tf.keras.layers.Flatten())
model.add(Dense(units=N_TIME_STEPS//4,
activation="relu"))
model.add(Dense(units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.01),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=10,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
```
### Recurrent Neural Network
RNNs are particularly well-suited for learning sequential data. They retain state information from one iteration to the next by feeding the output from one cell as input for the next step. In the cell below, we'll build an RNN model in keras. The final state of the RNN is captured and then passed through a fully connected layer to produce a prediction.
```
# TODO 2a
model = Sequential()
# Reshape inputs to pass through RNN layer.
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
model.add(LSTM(N_TIME_STEPS // 8,
activation='relu',
return_sequences=False))
model.add(Dense(units=N_LABELS,
activation='softmax',
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
# Create the model.
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=40,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
```
### Multi-layer RNN
Next, we'll build a multi-layer RNN. Just as multiple layers of a deep neural network allow for more complicated features to be learned during training, additional RNN layers can potentially learn complex features in sequential data. For a multi-layer RNN the output of the first RNN layer is fed as the input into the next RNN layer.
```
# TODO 2b
rnn_hidden_units = [N_TIME_STEPS // 16,
N_TIME_STEPS // 32]
model = Sequential()
# Reshape inputs to pass through RNN layer.
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
for layer in rnn_hidden_units[:-1]:
model.add(GRU(units=layer,
activation='relu',
return_sequences=True))
model.add(GRU(units=rnn_hidden_units[-1],
return_sequences=False))
model.add(Dense(units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=50,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
```
### Combining CNN and RNN architecture
Finally, we'll look at some model architectures which combine aspects of both convolutional and recurrent networks. For example, we can use a 1-dimensional convolution layer to process our sequences and create features which are then passed to an RNN model before prediction.
```
# TODO 3a
model = Sequential()
# Reshape inputs for convolutional layer
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
model.add(Conv1D(filters=20,
kernel_size=4,
strides=2,
padding="valid",
input_shape=[None, 1]))
model.add(MaxPool1D(pool_size=2,
strides=None,
padding='valid'))
model.add(LSTM(units=N_TIME_STEPS//2,
return_sequences=False,
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.add(Dense(units=N_LABELS, activation="softmax"))
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=30,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
```
We can also try building a hybrid model which uses a 1-dimensional CNN to create features from the outputs of an RNN.
```
# TODO 3b
rnn_hidden_units = [N_TIME_STEPS // 32,
N_TIME_STEPS // 64]
model = Sequential()
# Reshape inputs and pass through RNN layer.
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
for layer in rnn_hidden_units:
model.add(LSTM(layer, return_sequences=True))
# Apply 1d convolution to RNN outputs.
model.add(Conv1D(filters=5,
kernel_size=3,
strides=2,
padding="valid"))
model.add(MaxPool1D(pool_size=4,
strides=None,
padding='valid'))
# Flatten the convolution output and pass through DNN.
model.add(tf.keras.layers.Flatten())
model.add(Dense(units=N_TIME_STEPS // 32,
activation="relu",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.add(Dense(units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=80,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
```
## Extra Credit
1. The `eps_percent_change_sp500` table also has static features for each example. Namely, the engineered features we created in the previous labs with aggregated information capturing the `MAX`, `MIN`, `AVG` and `STD` across the last 5 days, 20 days and 260 days (e.g. `close_MIN_prior_5_days`, `close_MIN_prior_20_days`, `close_MIN_prior_260_days`, etc.). Try building a model which incorporates these features in addition to the sequence features we used above. Does this improve performance? (One possible way to structure such a model is sketched after this list.)
2. The `eps_percent_change_sp500` table also contains a `surprise` feature which captures information about the earnings per share. Try building a model which uses the `surprise` feature in addition to the sequence features we used above. Does this improve performance?
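For extra credit item 1, the sketch below shows one possible (untested) way to combine the 260-day sequence with static engineered features using the Keras functional API. The static feature count, layer sizes, and input names are hypothetical placeholders, not the lab's reference solution — adapt them to the actual columns you pull from `eps_percent_change_sp500`.
```
import tensorflow as tf
from tensorflow.keras.layers import Concatenate, Dense, Input, LSTM, Reshape
from tensorflow.keras.models import Model

N_TIME_STEPS = 260
N_STATIC_FEATURES = 12  # hypothetical count: MAX/MIN/AVG/STD over 5, 20 and 260 days
N_LABELS = 3

# Sequence branch: the 260 scaled prices reshaped and fed to an LSTM.
seq_in = Input(shape=(N_TIME_STEPS,), name="price_sequence")
seq = Reshape((N_TIME_STEPS, 1))(seq_in)
seq = LSTM(32)(seq)

# Static branch: a small dense layer over the aggregate engineered features.
static_in = Input(shape=(N_STATIC_FEATURES,), name="static_features")
static = Dense(8, activation="relu")(static_in)

# Merge the two branches and classify into the three direction labels.
merged = Concatenate()([seq, static])
output = Dense(N_LABELS, activation="softmax")(merged)

model = Model(inputs=[seq_in, static_in], outputs=output)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x=[X_sequence, X_static], y=y_categorical, ...)  # fit with both inputs
```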
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
<img src="Frame.png" width="320">
# Reference frames
Objects can be created in reference frames in order to share
the same drawing order priority and the same coordinate system.
Objects in a reference frame
- Have $(x,y)$ coordinates determined by the frame geometry.
- Can change geometry all at once by changing the frame geometry.
- Can change visibility all at once by changing the visibility of the frame.
- Can be forgotten all at once by either forgetting or resetting the frame.
- Are drawn all at once in the drawing order of the frame -- objects drawn
after the frame will be "on top" of the frame elements.
Frame geometry does not affect the styling for elements in the frame including
- Line styling.
- Circle radius.
- Rectangle and image width, height, dx, dy, and degrees rotation.
- Polygon rotation.
- Text font and alignment.
Some special object types are adjusted more closely to the frame
- A `frame_circle` has its radius automatically adjusted to reflect the
maximum frame distortion.
- A `frame_rect` is essentially converted to a polygon with all vertices
determined by the frame geometry.
Below we create the `adorn` function to draw example objects on a frame
and show how these objects are affected by differently configured frame
geometries.
```
from jp_doodle import dual_canvas
from IPython.display import display
points = [[50,0], [40,-20], [40,-40], [30,-60],
[30,-50], [10,-40], [20,-30], [20,-20], [30,-10], [0,0]]
def adorn(frame, frame_name):
def text_at(x, y, content):
frame.text(x=x, y=y, text=content, color="black",
background="white", align="center", valign="center")
frame.circle(x=50, y=25, r=25, color="#339")
frame.frame_circle(x=75, y=75, r=25, color="#539")
frame.rect(x=-50, y=25, h=30, w=50, color="#369")
frame.frame_rect(x=-75, y=75, h=30, w=50, color="#93a")
# This local image reference works in "classic" notebook, but not in Jupyter Lab.
mandrill_url = "../mandrill.png"
frame.name_image_url("mandrill", mandrill_url)
frame.named_image(image_name="mandrill", x=-90, y=-90, w=79, h=70)
frame.polygon(points=points, lineWidth=5, color="#4fa", fill=False)
# add some labels for clarity
text_at(50, 25, "circle")
text_at(75, 75, "frame_circle")
text_at(-25, 40, "rect")
text_at(-50, 90, "frame_rect")
cfg = {"color":"salmon"}
frame.lower_left_axes(-100, -100, 100, 100,
max_tick_count=4, tick_text_config=cfg, tick_line_config=cfg)
text_at(0, -120, frame_name)
no_frame = dual_canvas.DualCanvasWidget(width=420, height=420)
display(no_frame)
adorn(no_frame, "Not a frame")
no_frame.fit(None, 20)
# Display the objects "outside" of any frame:
slanted = dual_canvas.SnapshotCanvas("Frame.png", width=420, height=420)
slanted.display_all()
# The vector frame factory provides the most general frame parameters.
slanted_frame = slanted.vector_frame(
x_vector={"x":1, "y":-0.3},
y_vector={"x":1, "y":1},
xy_offset={"x":1100, "y":1200}
)
adorn(slanted_frame, "Slanted")
slanted.fit()
slanted.lower_left_axes()
slanted.fit(None, 20)
```
Above note that the positions of the objects change,
but the styling of the objects is not changed by the
frame geometry except for the `frame_circle` radius which reflects
the `y` axis distortion and the vertices of the `frame_rect` which
reflect the frame geometry.
```
exploded = dual_canvas.DualCanvasWidget(width=420, height=420)
display(exploded)
# The rframe factory creates frames with scaling and translation.
exploded_frame = exploded.rframe(
scale_x=3.5, scale_y=3, translate_x=-700, translate_y=700)
adorn(exploded_frame, "Exploded")
exploded.fit()
exploded.lower_left_axes()
exploded.fit(None, 20)
squashed = dual_canvas.DualCanvasWidget(width=420, height=420)
display(squashed)
# The frame_region frame factory maps a region of "model space"
# to a region in "frame space". It is sometimes easier to think
# in terms of regions rather than vectors and scalings.
squashed_frame = squashed.frame_region(
minx=200, miny=-1200, maxx=400, maxy=-1100,
frame_minx=-100, frame_miny=-100, frame_maxx=100, frame_maxy=100)
adorn(squashed_frame, "Squashed")
squashed.fit()
squashed.lower_left_axes()
squashed.fit(None, 20)
# Note below that the frame_circle radius does not show
# any change because the maximum distortion in the x direction is 1.
```
Copyright ENEOS, Corp., Preferred Computational Chemistry, Inc. and Preferred Networks, Inc. as contributors to Matlantis contrib project
# Reaction analysis on a heterogeneous catalyst surface (NEB method)
Table of contents:
- **[1. Create a slab from the bulk](#chap1)**
- **[2. Place the molecule on the slab and build the initial (before reaction) and final (after reaction) states](#chap2)**
- **[3. NEB calculation](#chap3)**
- **[4. Inspect the NEB results and extract the transition-state structure](#chap4)**
- **[5. Optimize the transition-state structure (with Sella)](#chap5)**
- **[6. Vibrational analysis of the transition state](#chap6)**
- **[7. Additional analysis from the transition state (pseudo-IRC calculation)](#chap7)**
<a id="chap0"></a>
## Setup
```
# time.sleep(3600*10)
# # The kernel shuts down automatically after about 1.5 hours of inactivity in the notebook.
# # Uncomment the line above and run it if you want to keep the kernel alive.
!pip install pfp-api-client
!pip install pandas tqdm matplotlib seaborn optuna sella sklearn torch torch_dftd
# # Install the libraries only on first use.
# General-purpose modules
import numpy as np
import pandas as pd
from tqdm import tqdm_notebook as tqdm
from IPython.display import display_png
from IPython.display import Image as ImageWidget
import ipywidgets as widgets
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.widgets import Slider
from matplotlib.animation import PillowWriter
import seaborn as sns
import math
import optuna
import nglview as nv
import os,sys,csv,glob,shutil,re,time
from pathlib import Path
from PIL import Image, ImageDraw
# sklearn
from sklearn.metrics import mean_absolute_error
# ASE
import ase
from ase import Atoms, units
from ase.units import Bohr,Rydberg,kJ,kB,fs,Hartree,mol,kcal
from ase.io import read, write
from ase.build import surface, molecule, add_adsorbate
from ase.cluster.cubic import FaceCenteredCubic
from ase.constraints import FixAtoms, FixedPlane, FixBondLength, FixBondLengths, ExpCellFilter
from ase.neb import SingleCalculatorNEB
from ase.neb import NEB
from ase.vibrations import Vibrations
from ase.visualize import view
from ase.optimize import QuasiNewton
from ase.thermochemistry import IdealGasThermo
from ase.build.rotate import minimize_rotation_and_translation
from ase.visualize import view
from ase.optimize import BFGS, LBFGS, FIRE
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution, Stationary
from ase.md.verlet import VelocityVerlet
from ase.md.langevin import Langevin
from ase.md.nptberendsen import NPTBerendsen, Inhomogeneous_NPTBerendsen
from ase.md import MDLogger
from ase.io import read, write, Trajectory
# from ase.calculators.dftd3 import DFTD3
from ase.build import sort
from sella import Sella, Constraints
from torch_dftd.torch_dftd3_calculator import TorchDFTD3Calculator
# PFP
from pfp_api_client.pfp.calculators.ase_calculator import ASECalculator
from pfp_api_client.pfp.estimator import Estimator
from pfp_api_client.pfp.estimator import EstimatorCalcMode
estimator = Estimator(calc_mode="CRYSTAL")
calculator = ASECalculator(estimator)
# calculatorD = DFTD3(dft=calculator, xc = 'pbe', label="ase_dftd3_3") # to run D3 calculations with the same code as before
calculatorD = TorchDFTD3Calculator(dft=calculator, xc="pbe", label="dftd3") # to run D3 calculations with the same code as before
# This cell suppresses the warnings raised when calculating experimental or unexpected elements. Run it only when needed.
# from pfp_api_client.utils.messages import MessageEnum
# estimator.set_message_status(message=MessageEnum.ExperimentalElementWarning, message_enable=False)
# estimator.set_message_status(message=MessageEnum.UnexpectedElementWarning, message_enable=False)
def myopt(m,sn = 10,constraintatoms=[],cbonds=[]):
fa = FixAtoms(indices=constraintatoms)
fb = FixBondLengths(cbonds,tolerance=1e-5,)
m.set_constraint([fa,fb])
m.set_calculator(calculator)
maxf = np.sqrt(((m.get_forces())**2).sum(axis=1).max())
print("ini pot:{:.4f},maxforce:{:.4f}".format(m.get_potential_energy(),maxf))
de = -1
s = 1
ita = 50
while ( de < -0.001 or de > 0.001 ) and s <= sn :
opt = BFGS(m,maxstep=0.04*(0.9**s),logfile=None)
old = m.get_potential_energy()
opt.run(fmax=0.0005,steps =ita)
maxf = np.sqrt(((m.get_forces())**2).sum(axis=1).max())
de = m.get_potential_energy() - old
print("{} pot:{:.4f},maxforce:{:.4f},delta:{:.4f}".format(s*ita,m.get_potential_energy(),maxf,de))
s += 1
return m
def opt_cell_size(m,sn = 10, iter_count = False): # m: Atoms object
m.set_constraint() # clear constraint
m.set_calculator(calculator)
    maxf = np.sqrt(((m.get_forces())**2).sum(axis=1).max()) # largest √(fx^2 + fy^2 + fz^2) over all atoms
ucf = ExpCellFilter(m)
print("ini pot:{:.4f},maxforce:{:.4f}".format(m.get_potential_energy(),maxf))
de = -1
s = 1
ita = 50
while ( de < -0.01 or de > 0.01 ) and s <= sn :
opt = BFGS(ucf,maxstep=0.04*(0.9**s),logfile=None)
old = m.get_potential_energy()
opt.run(fmax=0.005,steps =ita)
maxf = np.sqrt(((m.get_forces())**2).sum(axis=1).max())
de = m.get_potential_energy() - old
print("{} pot:{:.4f},maxforce:{:.4f},delta:{:.4f}".format(s*ita,m.get_potential_energy(),maxf,de))
s += 1
if iter_count == True:
return m, s*ita
else:
return m
# Build a surface slab
def makesurface(atoms,miller_indices=(1,1,1),layers=4,rep=[4,4,1]):
s1 = surface(atoms, miller_indices,layers)
s1.center(vacuum=10.0, axis=2)
s1 = s1.repeat(rep)
s1.set_positions(s1.get_positions() - [0,0,min(s1.get_positions()[:,2])])
s1.pbc = True
return s1
import threading
import time
from math import pi
from typing import Dict, List, Optional
import nglview as nv
from ase import Atoms
from ase.constraints import FixAtoms
from ase.optimize import BFGS
from ase.visualize import view
from IPython.display import display
from ipywidgets import (
Button,
Checkbox,
FloatSlider,
GridspecLayout,
HBox,
IntSlider,
Label,
Text,
Textarea,
)
from nglview.widget import NGLWidget
def save_image(filename: str, v: NGLWidget):
"""Save nglview image.
Note that it should be run on another thread.
See: https://github.com/nglviewer/nglview/blob/master/docs/FAQ.md#how-to-make-nglview-view-object-write-png-file
Args:
filename (str):
v (NGLWidget):
"""
image = v.render_image()
while not image.value:
time.sleep(0.1)
with open(filename, "wb") as fh:
fh.write(image.value)
class SurfaceEditor:
"""Structure viewer/editor"""
struct: List[Dict] # structure used for nglview drawing.
def __init__(self, atoms: Atoms):
self.atoms = atoms
self.vh = view(atoms, viewer="ngl")
self.v: NGLWidget = self.vh.children[0] # VIEW
self.v._remote_call("setSize", args=["450px", "450px"])
self.recont() # Add controller
self.set_representation()
self.set_atoms()
self.pots = []
self.traj = []
self.cal_nnp()
def display(self):
display(self.vh)
def recont(self):
self.vh.setatoms = FloatSlider(
min=0, max=50, step=0.1, value=8, description="atoms z>"
)
self.vh.setatoms.observe(self.set_atoms)
self.vh.selected_atoms_label = Label("Selected atoms:")
self.vh.selected_atoms_textarea = Textarea()
selected_atoms_hbox = HBox(
[self.vh.selected_atoms_label, self.vh.selected_atoms_textarea]
)
self.vh.move = FloatSlider(
min=0.1, max=2, step=0.1, value=0.5, description="move"
)
grid1 = GridspecLayout(2, 3)
self.vh.xplus = Button(description="X+")
self.vh.xminus = Button(description="X-")
self.vh.yplus = Button(description="Y+")
self.vh.yminus = Button(description="Y-")
self.vh.zplus = Button(description="Z+")
self.vh.zminus = Button(description="Z-")
self.vh.xplus.on_click(self.move)
self.vh.xminus.on_click(self.move)
self.vh.yplus.on_click(self.move)
self.vh.yminus.on_click(self.move)
self.vh.zplus.on_click(self.move)
self.vh.zminus.on_click(self.move)
grid1[0, 0] = self.vh.xplus
grid1[0, 1] = self.vh.yplus
grid1[0, 2] = self.vh.zplus
grid1[1, 0] = self.vh.xminus
grid1[1, 1] = self.vh.yminus
grid1[1, 2] = self.vh.zminus
self.vh.rotate = FloatSlider(
min=1, max=90, step=1, value=30, description="rotate"
)
grid2 = GridspecLayout(2, 3)
self.vh.xplus2 = Button(description="X+")
self.vh.xminus2 = Button(description="X-")
self.vh.yplus2 = Button(description="Y+")
self.vh.yminus2 = Button(description="Y-")
self.vh.zplus2 = Button(description="Z+")
self.vh.zminus2 = Button(description="Z-")
self.vh.xplus2.on_click(self.rotate)
self.vh.xminus2.on_click(self.rotate)
self.vh.yplus2.on_click(self.rotate)
self.vh.yminus2.on_click(self.rotate)
self.vh.zplus2.on_click(self.rotate)
self.vh.zminus2.on_click(self.rotate)
grid2[0, 0] = self.vh.xplus2
grid2[0, 1] = self.vh.yplus2
grid2[0, 2] = self.vh.zplus2
grid2[1, 0] = self.vh.xminus2
grid2[1, 1] = self.vh.yminus2
grid2[1, 2] = self.vh.zminus2
self.vh.nnptext = Textarea(disabled=True)
self.vh.opt_step = IntSlider(
min=0,
max=100,
step=1,
value=10,
description="Opt steps",
)
self.vh.constraint_checkbox = Checkbox(
value=True,
description="Opt only selected atoms",
)
self.vh.run_opt_button = Button(
description="Run mini opt",
tooltip="Execute BFGS optimization with small step update."
)
self.vh.run_opt_button.on_click(self.run_opt)
opt_hbox = HBox([self.vh.constraint_checkbox, self.vh.run_opt_button])
self.vh.filename_text = Text(value="screenshot.png", description="filename: ")
self.vh.download_image_button = Button(
description="download image",
tooltip="Download current frame to your local PC",
)
self.vh.download_image_button.on_click(self.download_image)
self.vh.save_image_button = Button(
description="save image",
tooltip="Save current frame to file.\n"
"Currently .png and .html are supported.\n"
"It takes a bit time, please be patient.",
)
self.vh.save_image_button.on_click(self.save_image)
self.vh.update_display = Button(
description="update_display",
tooltip="Refresh display. It can be used when target atoms is updated in another cell..",
)
self.vh.update_display.on_click(self.update_display)
r = list(self.vh.control_box.children)
r += [
self.vh.setatoms,
selected_atoms_hbox,
self.vh.move,
grid1,
self.vh.rotate,
grid2,
self.vh.nnptext,
self.vh.opt_step,
opt_hbox,
self.vh.filename_text,
HBox([self.vh.download_image_button, self.vh.save_image_button]),
self.vh.update_display,
]
self.vh.control_box.children = tuple(r)
def set_representation(self, bcolor: str = "white", unitcell: bool = True):
self.v.background = bcolor
self.struct = self.get_struct(self.atoms)
self.v.add_representation(repr_type="ball+stick")
self.v.control.spin([0, 1, 0], pi * 1.1)
self.v.control.spin([1, 0, 0], -pi * 0.45)
thread = threading.Thread(target=self.changestr)
thread.start()
def changestr(self):
time.sleep(2)
self.v._remote_call("replaceStructure", target="Widget", args=self.struct)
def get_struct(self, atoms: Atoms, ext="pdb") -> List[Dict]:
struct = nv.ASEStructure(atoms, ext=ext).get_structure_string()
for c in range(len(atoms)):
struct = struct.replace("MOL 1", "M0 " + str(c).zfill(3), 1)
struct = [dict(data=struct, ext=ext)]
return struct
def cal_nnp(self):
pot = self.atoms.get_potential_energy()
mforce = (((self.atoms.get_forces()) ** 2).sum(axis=1).max()) ** 0.5
self.pot = pot
self.mforce = mforce
self.vh.nnptext.value = f"pot energy: {pot} eV\nmax force : {mforce} eV/A"
self.pots += [pot]
self.traj += [self.atoms.copy()]
def update_display(self, clicked_button: Optional[Button] = None):
print("update display!")
struct = self.get_struct(self.atoms)
self.struct = struct
self.v._remote_call("replaceStructure", target="Widget", args=struct)
self.cal_nnp()
def set_atoms(self, slider: Optional[FloatSlider] = None):
"""Update text area based on the atoms position `z` greater than specified value."""
smols = [
i for i, atom in enumerate(self.atoms) if atom.z >= self.vh.setatoms.value
]
self.vh.selected_atoms_textarea.value = ", ".join(map(str, smols))
def get_selected_atom_indices(self) -> List[int]:
selected_atom_indices = self.vh.selected_atoms_textarea.value.split(",")
selected_atom_indices = [int(a) for a in selected_atom_indices]
return selected_atom_indices
def move(self, clicked_button: Button):
a = self.vh.move.value
for index in self.get_selected_atom_indices():
if clicked_button.description == "X+":
self.atoms[index].position += [a, 0, 0]
elif clicked_button.description == "X-":
self.atoms[index].position -= [a, 0, 0]
elif clicked_button.description == "Y+":
self.atoms[index].position += [0, a, 0]
elif clicked_button.description == "Y-":
self.atoms[index].position -= [0, a, 0]
elif clicked_button.description == "Z+":
self.atoms[index].position += [0, 0, a]
elif clicked_button.description == "Z-":
self.atoms[index].position -= [0, 0, a]
self.update_display()
def rotate(self, clicked_button: Button):
atom_indices = self.get_selected_atom_indices()
deg = self.vh.rotate.value
temp = self.atoms[atom_indices]
if clicked_button.description == "X+":
temp.rotate(deg, "x", center="COP")
elif clicked_button.description == "X-":
temp.rotate(-deg, "x", center="COP")
elif clicked_button.description == "Y+":
temp.rotate(deg, "y", center="COP")
elif clicked_button.description == "Y-":
temp.rotate(-deg, "y", center="COP")
elif clicked_button.description == "Z+":
temp.rotate(deg, "z", center="COP")
elif clicked_button.description == "Z-":
temp.rotate(-deg, "z", center="COP")
rotep = temp.positions
for i, atom in enumerate(atom_indices):
self.atoms[atom].position = rotep[i]
self.update_display()
def run_opt(self, clicked_button: Button):
"""OPT only specified steps and FIX atoms if NOT in text atoms list"""
if self.vh.constraint_checkbox.value:
# Fix non selected atoms. Only opt selected atoms.
print("Opt with selected atoms: fix non selected atoms")
atom_indices = self.get_selected_atom_indices()
constraint_atom_indices = [
i for i in range(len(self.atoms)) if i not in atom_indices
]
self.atoms.set_constraint(FixAtoms(indices=constraint_atom_indices))
opt = BFGS(self.atoms, maxstep=0.04, logfile=None)
steps: Optional[int] = self.vh.opt_step.value
if steps < 0:
steps = None # When steps=-1, opt until converged.
opt.run(fmax=0.0001, steps=steps)
print(f"Run opt for {steps} steps")
self.update_display()
def download_image(self, clicked_button: Optional[Button] = None):
filename = self.vh.filename_text.value
self.v.download_image(filename=filename)
def save_image(self, clicked_button: Optional[Button] = None):
filename = self.vh.filename_text.value
if filename.endswith(".png"):
thread = threading.Thread(
target=save_image, args=(filename, self.v), daemon=True
)
# thread.daemon = True
thread.start()
elif filename.endswith(".html"):
nv.write_html(filename, [self.v]) # type: ignore
else:
print(f"filename {filename}: extension not supported!")
```
<a id="chap1"></a>
## 1. Create a slab from the bulk
### 1-1 Load and prepare the bulk structure
Here we read a cif file downloaded from the Materials Project and placed in the input folder.
Input cif file is from
A. Jain*, S.P. Ong*, G. Hautier, W. Chen, W.D. Richards, S. Dacek, S. Cholia, D. Gunter, D. Skinner, G. Ceder, K.A. Persson (*=equal contributions)
The Materials Project: A materials genome approach to accelerating materials innovation
APL Materials, 2013, 1(1), 011002.
[doi:10.1063/1.4812323](http://dx.doi.org/10.1063/1.4812323)
[[bibtex]](https://materialsproject.org/static/docs/jain_ong2013.349ca3156250.bib)
Licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
```
bulk = read("input/Rh_mp-74_conventional_standard.cif")
print("原子数 =", len(bulk))
print("initial 格子定数 =", bulk.cell.cellpar())
opt_cell_size(bulk)
print ("optimized 格子定数 =", bulk.cell.cellpar())
```
The [sort](https://wiki.fysik.dtu.dk/ase/ase/build/tools.html#ase.build.sort) function sorts the atoms by atomic number.
```
bulk = bulk.repeat([2,2,2])
bulk = sort(bulk)
bulk.positions += [0.01,0,0] # add a small offset so the surface is not cut at an awkward position
v = view(bulk, viewer='ngl')
v.view.add_representation("ball+stick")
display(v)
```
### 1-2 Build the slab structure
Create a slab with an arbitrary Miller index from the bulk structure.<br/>
Specify it with `miller_indices=(x,y,z)`. Internally, `makesurface` uses the [surface](https://wiki.fysik.dtu.dk/ase//ase/build/surface.html#create-specific-non-common-surfaces) function to build the surface structure.
```
slab = makesurface(bulk, miller_indices=(1,1,1), layers=2, rep=[1,1,1])
slab = sort(slab)
slab.positions += [1,1,0] # small positional adjustment
slab.wrap() # then wrap the atoms back into the cell
v = view(slab, viewer='ngl')
v.view.add_representation("ball+stick")
display(v)
```
### 1-3 Check the z coordinates of the slab
Check the highest z coordinate of the slab (needed when building the adsorption structure).<br/>
Check the z coordinate of each layer (needed to decide how many layers to fix).
```
# Check the z positions of the atoms.
z_pos = pd.DataFrame({
"symbol": slab.get_chemical_symbols(),
"z": slab.get_positions()[:, 2]
})
plt.scatter(z_pos.index, z_pos["z"])
plt.grid(True)
plt.xlabel("atom_index")
plt.ylabel("z_position")
plt.show()
print("highest position (z) =", z_pos["z"].max())
```
### 1-4 Optimize the slab structure with the bottom layers fixed
Using [FixAtoms](https://wiki.fysik.dtu.dk/ase//ase/constraints.html#ase.constraints.FixAtoms), you can fix only the atoms in the lower layers of the slab during the optimization.<br/>
Here atoms with z below 1 Å are fixed, so only the bottom layer is constrained.
The surface atom positions should relax.
```
%%time
c = FixAtoms(indices=[atom.index for atom in slab if atom.position[2] <= 1]) # fix atoms with z below 1 Å
slab.set_constraint(c)
slab.calc = calculator
os.makedirs("output", exist_ok=True)
BFGS_opt = BFGS(slab, trajectory="output/slab_opt.traj")#, logfile=None)
BFGS_opt.run(fmax=0.005)
```
Looking at the optimization trajectory, you can see that only the top three layers are relaxed.
```
# v = view(slab, viewer='ngl')
v = view(Trajectory("output/slab_opt.traj"), viewer='ngl')
v.view.add_representation("ball+stick")
display(v)
slabE = slab.get_potential_energy()
print(f"slab E = {slabE} eV")
# Save the slab structure we created
os.makedirs("structures/", exist_ok=True) # create a folder called structures/
write("structures/Slab_Rh_111.xyz", slab) # change the filename as you like
```
<a id="chap2"></a>
## 2. Place the molecule on the slab and build the initial (before reaction) and final (after reaction) states
### 2-1 Load the adsorbing molecule and get its potential energy after geometry optimization
Here we use ASE's [molecule module](https://wiki.fysik.dtu.dk/ase/ase/build/build.html).<br/>
You can also read cif, sdf, and similar files in the same way as the bulk structure.
```
molec = molecule('NO')
# molec = read("structures/xxxxxxxxxx.sdf") # example of reading an sdf file
molec.calc = calculator
BFGS_opt = BFGS(molec, trajectory="output/molec_opt.traj", logfile=None)
BFGS_opt.run(fmax=0.005)
molecE = molec.get_potential_energy()
print(f"molecE = {molecE} eV")
v = view(Trajectory("output/molec_opt.traj"), viewer='ngl')
v.view.add_representation("ball+stick")
display(v)
```
### 2-2 Adsorption energy calculation
Let's build the adsorbed state.
Here the [add_adsorbate](https://wiki.fysik.dtu.dk/ase//ase/build/surface.html#ase.build.add_adsorbate) function places `molec` on top of the `slab`.
```
mol_on_slab = slab.copy()
# Enter the height above the top slab layer and the x, y position where the molecule is placed.
# Rough values are fine; you can adjust them later.
add_adsorbate(mol_on_slab, molec, height=3, position=(8, 4))
c = FixAtoms(indices=[atom.index for atom in mol_on_slab if atom.position[2] <= 1])
mol_on_slab.set_constraint(c)
```
#### SurfaceEditor
We use the `SurfaceEditor` class to optimize the adsorption position of the molecule.
How to use:
1. Display the structure you want to edit with `SurfaceEditor(atoms).display()`.
2. Use "atoms z>" to pick up the indices of the molecule you want to move; the molecule is whatever sits above the highest slab coordinate checked in 1-3.<br/>Once set, only the indices of the selected atoms appear in the box below.
3. Use the XYZ+- buttons under move and rotate to translate and rotate only the molecule and adjust its position.<br/>Adjusting the ball size makes the adsorption site easier to see.
4. The "Run mini opt" button runs a BFGS optimization for the specified number of steps.
Here we build the adsorption structure with reference to the following paper.
”First-Principles Microkinetic Analysis of NO + CO Reactions on Rh(111) Surface toward Understanding NOx Reduction Pathways”
- https://pubs.acs.org/doi/10.1021/acs.jpcc.8b05906
In this example, pressing "X-" three times, "Y+" once, and "Z-" four times gives an initial structure for adsorption at the HCP site.<br/>
See the figure below for the FCC and HCP adsorption sites.
<blockquote>
<figure>
<img src="https://www.researchgate.net/profile/Monica-Pozzo/publication/5521348/figure/fig1/AS:281313635520514@1444081805255/Colour-Possible-adsorption-sites-top-bridge-hollow-hcp-and-hollow-fcc-for-hydrogen.png"/>
<figcaption>(Colour) Possible adsorption sites (top, bridge, hollow-hcp and hollow-fcc) for hydrogen (dark red) on the Mg(0001) surface (light blue).<br/>
from <a href="https://www.researchgate.net/figure/Colour-Possible-adsorption-sites-top-bridge-hollow-hcp-and-hollow-fcc-for-hydrogen_fig1_5521348">https://www.researchgate.net/figure/Colour-Possible-adsorption-sites-top-bridge-hollow-hcp-and-hollow-fcc-for-hydrogen_fig1_5521348</a>
</figcaption>
</figure>
</blockquote>
```
# SurfaceEditor requires a calculator to be set on the atoms.
mol_on_slab.calc = calculator
se = SurfaceEditor(mol_on_slab)
se.display()
c = FixAtoms(indices=[atom.index for atom in mol_on_slab if atom.position[2] <= 1])
mol_on_slab.set_constraint(c)
BFGS_opt = BFGS(mol_on_slab, logfile=None)
BFGS_opt.run(fmax=0.005)
mol_on_slabE = mol_on_slab.get_potential_energy()
print(f"mol_on_slabE = {mol_on_slabE} eV")
os.makedirs("ad_structures/", exist_ok=True)
write("ad_structures/mol_on_Rh(111).cif", mol_on_slab)
```
### 2-3 Adsorption energy
The adsorption energy is obtained from the difference between the energies of the isolated slab and molecule and the energy of the combined system.
The paper above reports 1.79 eV. The discrepancy is likely because the paper uses the RPBE functional, whereas PFP uses the PBE functional.
```
# Calculate adsorption energy
adsorpE = slabE + molecE - mol_on_slabE
print(f"Adsorption Energy: {adsorpE} eV")
```
### 2-4 List the adsorption structures
```
ad_st_path = "ad_structures/*"
ad_stru_list = [(filepath, read(filepath)) for filepath in glob.glob(ad_st_path)]
pd.DataFrame(ad_stru_list)
No = 0
view(ad_stru_list[No][1] , viewer="ngl")
```
### 2-5 Build the IS (initial state) structure
Here you build the IS and FS structures yourself and create the path for the NEB calculation.<br/>
Pre-built structures are provided for this tutorial, so you may skip ahead to [3. NEB calculation](#chap3).
```
filepath, atoms = ad_stru_list[No]
print(filepath)
IS = atoms.copy()
IS.calc = calculator
SurfaceEditor(IS).display()
c = FixAtoms(indices=[atom.index for atom in IS if atom.position[2] <= 1])
IS.set_constraint(c)
BFGS_opt = BFGS(IS, logfile=None)
BFGS_opt.run(fmax=0.05)
IS.get_potential_energy()
```
### 2-6 Build the FS (final state) structure
```
FS = IS.copy()
FS.calc = calculator
SurfaceEditor(FS).display()
FS.calc = calculator
c = FixAtoms(indices=[atom.index for atom in FS if atom.position[2] <= 1])
FS.set_constraint(c)
BFGS_opt = BFGS(FS, logfile=None)
BFGS_opt.run(fmax=0.005)
FS.get_potential_energy()
```
Save the IS and FS structures
```
filepath = Path(filepath).stem
# filepath = Path(ad_stru_list[No][0]).stem
os.makedirs(filepath, exist_ok=True)
write(filepath+"/IS.cif", IS)
write(filepath+"/FS.cif", FS)
```
<a id="chap3"></a>
## 3. NEB calculation
### 3-1 NEB calculation
Here we run an NEB calculation for the reaction NO(fcc) -> N(fcc) + O(fcc), using structures prepared for this tutorial.<br/>
By changing `filepath` you can also try the NEB calculation for the NO(fcc) -> N(hcp) + O(hcp) reaction.
```
!cp -r "input/NO_dissociation_NO(fcc)_N(fcc)_O(fcc)" .
!cp -r "input/NO_dissociation_NO(fcc)_N(hcp)_O(hcp)" .
filepath = "NO_dissociation_NO(fcc)_N(fcc)_O(fcc)"
# filepath = "NO_dissociation_NO(fcc)_N(hcp)_O(hcp)"
```
The prepared IS and FS structures look like this.
```
IS = read(filepath+"/IS.cif")
FS = read(filepath+"/FS.cif")
v = view([IS, FS], viewer='ngl')
#v.view.add_representation("ball+stick")
display(v)
c = FixAtoms(indices=[atom.index for atom in IS if atom.position[2] <= 1])
IS.calc = calculator
IS.set_constraint(c)
BFGS_opt = BFGS(IS, logfile=None)
BFGS_opt.run(fmax=0.005)
print(f"IS {IS.get_potential_energy()} eV")
c = FixAtoms(indices=[atom.index for atom in FS if atom.position[2] <= 1])
FS.calc = calculator
FS.set_constraint(c)
BFGS_opt = BFGS(FS, logfile=None)
BFGS_opt.run(fmax=0.005)
print(f"FS {FS.get_potential_energy()} eV")
beads = 21
b0 = IS.copy()
b1 = FS.copy()
configs = [b0.copy() for i in range(beads-1)] + [b1.copy()]
for config in configs:
    estimator = Estimator() # needed when running NEB with parallel=True and allow_shared_calculator=False
    calculator = ASECalculator(estimator) # needed when running NEB with parallel=True and allow_shared_calculator=False
config.calc = calculator
%%time
steps=2000
# k: spring constant; lowering it to around 0.05 at the end tends to be more stable.
# Setting parallel=True and allow_shared_calculator=False for the NEB makes the run less affected by other jobs, so it proceeds faster.
neb = NEB(configs, k=0.05, parallel=True, climb=True, allow_shared_calculator=False)
neb.interpolate()
relax = FIRE(neb, trajectory=None, logfile=filepath+"/neb_log.txt")
# An fmax of 0.05 or below is recommended. Values that are too small take a long time to converge.
# Run the first NEB with a loose convergence criterion (e.g. 0.2); once a reasonable reaction path appears, tighten the criterion. This is more stable.
# If the path looks unphysical even with a loose criterion, revisit the IS and FS structures.
relax.run(fmax=0.1, steps=steps)
# additional calculation
steps=10000
relax.run(fmax=0.05, steps=steps)
write(filepath+"/NEB_images.xyz", configs)
```
<a id="chap4"></a>
## 4. Inspect the NEB results and extract the transition-state structure
First, let's visualize the results in a few ways. Here we create png images and combine them into a gif.
```
configs = read(filepath+"/NEB_images.xyz", index=":")
for config in configs:
    estimator = Estimator() # needed when running NEB with parallel=True and allow_shared_calculator=False
    calculator = ASECalculator(estimator) # needed when running NEB with parallel=True and allow_shared_calculator=False
config.calc = calculator
os.makedirs(filepath + "/pov_NEB/", exist_ok=True)
os.makedirs(filepath + "/png_NEB/", exist_ok=True)
for i, atoms in enumerate(configs):
m = atoms.copy()
write(filepath + f"/pov_NEB/NEB_{i:03}.pov", m, rotation="-60x, 30y, 15z")
write(filepath + f"/png_NEB/NEB_{i:03}.png", m, rotation="-60x, 30y, 15z")
imgs = []
for i in sorted(glob.glob(filepath + "/png_NEB/*.png"))[:]:
img = Image.open(i)
img.load()
#img = img.resize((250,480))
bg = Image.new("RGB", img.size, (255, 255, 255))
bg.paste(img, mask=img.split()[3])
imgs.append(bg)
imgs[0].save(filepath + "/gif_NEB.gif", save_all=True, append_images=imgs[1:], optimize=False, duration=100, loop=0)
ImageWidget(filepath + "/gif_NEB.gif")
```
Check which image index corresponds to the TS structure.<br/>
Looking at the energy and force, at `index=11` the energy reaches its maximum and the force is close to zero, i.e. the band has reached a saddle point.
```
energies = [config.get_total_energy() for config in configs]
plt.plot(range(len(energies)),energies)
plt.xlabel("replica")
plt.ylabel("energy [eV]")
plt.xticks(np.arange(0, len(energies), 2))
plt.grid(True)
plt.show()
def calc_max_force(atoms):
return ((atoms.get_forces() ** 2).sum(axis=1).max()) ** 0.5
mforces = [calc_max_force(config) for config in configs]
plt.plot(range(len(mforces)), mforces)
plt.xlabel("replica")
plt.ylabel("max force [eV]")
plt.xticks(np.arange(0, len(mforces), 2))
plt.grid(True)
plt.show()
```
The activation energy can be calculated from the energy difference between the initial structure `index=0` and the transition state `index=11`.
```
ts_index = 11
actE = energies[ts_index] - energies[0]
deltaE = energies[ts_index] - energies[-1]
print(f"actE {actE} eV, deltaE {deltaE} eV")
v = view(configs, viewer='ngl')
#v.view.add_representation("ball+stick")
display(v)
```
### Re-running NEB
If an intermediate image from a completed NEB run turns out to be a better initial or final state, extract that structure and re-run the calculation.
```
# IS2 = configs[9].copy()
# FS2 = configs[-1].copy()
# c = FixAtoms(indices=[atom.index for atom in IS2 if atom.position[2] <= 1])
# IS2.calc = calculator
# IS2.set_constraint(c)
# BFGS_opt = BFGS(IS2, logfile=None)
# BFGS_opt.run(fmax=0.005)
# print(IS2.get_potential_energy())
# c = FixAtoms(indices=[atom.index for atom in FS2 if atom.position[2] <= 1])
# FS2.calc = calculator
# FS2.set_constraint(c)
# BFGS_opt = BFGS(FS2, logfile=None)
# BFGS_opt.run(fmax=0.005)
# print(FS2.get_potential_energy())
# write(filepath+"/IS2.cif", IS2)
# write(filepath+"/FS2.cif", FS2)
# v = view([IS2, FS2], viewer='ngl')
# #v.view.add_representation("ball+stick")
# display(v)
# beads = 21
# b0 = IS2.copy()
# b1 = FS2.copy()
# configs = [b0.copy() for i in range(beads-1)] + [b1.copy()]
# for config in configs:
# estimator = Estimator() # needed when running NEB with parallel=True and allow_shared_calculator=False
# calculator = ASECalculator(estimator) # needed when running NEB with parallel=True and allow_shared_calculator=False
# config.calc = calculator
# %%time
# steps=2000
# neb = NEB(configs, k=0.05, parallel=True, climb=True, allow_shared_calculator=False) # k: spring constant; lowering it to around 0.05 at the end tends to be more stable.
# # Setting parallel=True and allow_shared_calculator=False for the NEB makes the run less affected by other jobs, so it proceeds faster.
# neb.interpolate()
# relax = FIRE(neb, trajectory=None, logfile=filepath+"/neb_log_2.txt")
# relax.run(fmax=0.05, steps=steps)
# write(filepath+"/NEB_images2.xyz", configs)
# os.makedirs(filepath + "/pov_NEB2/", exist_ok=True)
# os.makedirs(filepath + "/png_NEB2/", exist_ok=True)
# for h,i in enumerate(configs):
# m = i.copy()
# write(filepath + '/pov_NEB2/NEB_' + str(h).zfill(3) + '.pov', m, rotation='-60x, 30y, 15z')
# write(filepath + '/png_NEB2/NEB_' + str(h).zfill(3) + '.png', m, rotation='-60x, 30y, 15z')
# imgs = []
# for i in sorted(glob.glob(filepath + "/png_NEB2/*.png"))[:]:
# img = Image.open(i)
# img.load()
# #img = img.resize((250,480))
# bg = Image.new("RGB", img.size, (255, 255, 255))
# bg.paste(img, mask=img.split()[3])
# imgs.append(bg)
# imgs[0].save(filepath + "/gif_NEB_2.gif", save_all=True, append_images=imgs[1:], optimize=False, duration=100, loop=0)
# energies = [config.get_total_energy() for config in configs]
# plt.plot(range(len(energies)),energies)
# plt.xlabel("replica")
# plt.ylabel("energy [eV]")
# plt.xticks(np.arange(0, beads, 2))
# plt.grid(True)
# plt.show()
# mforces = [config.get_forces().max() for config in configs]
# plt.plot(range(len(mforces)),mforces)
# plt.xlabel("replica")
# plt.ylabel("max force [eV]")
# plt.xticks(np.arange(0, beads, 2))
# plt.grid(True)
# plt.show()
# energies[13] - energies[0]
# v = view(configs, viewer='ngl')
# #v.view.add_representation("ball+stick")
# display(v)
```
<a id="chap5"></a>
## 5. Optimize the transition-state structure (with Sella)
The TS structure obtained in the previous section has not been converged to an exact saddle point.
Here we use the [sella](https://github.com/zadorlab/sella) library to converge the TS structure.
```
TSNo = 11
TS = configs[TSNo].copy()
c = FixAtoms(indices=[atom.index for atom in TS if atom.position[2] <= 1])
TS.set_constraint(c)
# Check the z positions of the atoms.
z_pos = pd.DataFrame({
"symbol": TS.get_chemical_symbols(),
"z": TS.get_positions()[:, 2]
})
plt.scatter(z_pos.index, z_pos["z"])
plt.grid(True)
plt.xlabel("atom_index")
plt.ylabel("z_position")
#plt.ylim(14,22)
plt.show()
TS.calc = calculator
TSopt = Sella(TS) # TS optimization with Sella
%time TSopt.run(fmax=0.05)
potentialenergy = TS.get_potential_energy()
print (TS.get_potential_energy(), TS.get_forces().max())
write(filepath + "/TS_opt.cif", TS)
# Compare the structures before and after the TS optimization
v = view([configs[TSNo], TS], viewer='ngl')
v.view.add_representation("ball+stick")
display(v)
```
<a id="chap6"></a>
## 6. Vibrational analysis of the transition state
```
# Atoms with z_pos >= zz are included in the vibrational analysis.
vibatoms = z_pos[z_pos["z"] >= 7.0].index
vibatoms
# Vibrational calculation
vibpath = filepath + "/TS_vib/vib"
os.makedirs(vibpath, exist_ok=True)
vib = Vibrations(TS, name=vibpath, indices=vibatoms) # the atoms included in the vibrational calculation are specified here as vibatoms
vib.run()
vib_energies = vib.get_energies()
thermo = IdealGasThermo(vib_energies=vib_energies,
potentialenergy=potentialenergy,
atoms=TS,
geometry='linear', #'monatomic', 'linear', or 'nonlinear'
symmetrynumber=2, spin=0, natoms=len(vibatoms))
G = thermo.get_gibbs_energy(temperature=298.15, pressure=101325.)
vib.summary()
vib.summary(log=filepath+"/vib_summary.txt")
# Write a traj file for viewing each vibrational mode.
vib.write_mode(n=0, kT=300*kB, nimages=30)
vib.clean()
n = 0 # enter the number of the vibrational mode you want to view, based on the summary table
vib_traj = Trajectory(vibpath + f".{n}.traj")
v = view(vib_traj, viewer='ngl')
v.view.add_representation("ball+stick")
display(v)
write(filepath + "/vib_traj.xyz", vib_traj)
vib_traj = read(filepath + "/vib_traj.xyz", index=":")
os.makedirs(filepath + "/pov_VIB/", exist_ok=True)
os.makedirs(filepath + "/png_VIB/", exist_ok=True)
for h,i in enumerate(vib_traj):
m = i.copy()
write(filepath + f"/pov_VIB/VIB_{h:03}.pov", m, rotation='-60x, 30y, 15z')
write(filepath + f"/png_VIB/VIB_{h:03}.png", m, rotation='-60x, 30y, 15z')
# Check that this mode is the imaginary frequency. The middle frame (and frame 0) is the TS.
vib_energies = []
for i in vib_traj:
i.calc = calculator
vib_energies.append(i.get_potential_energy())
plt.plot(range(len(vib_energies)), vib_energies)
plt.grid(True)
plt.show()
```
<a id="chap7"></a>
## 7. Additional analysis from the transition state (pseudo-IRC calculation), work in progress
```
from ase.optimize.basin import BasinHopping
from ase.optimize.minimahopping import MinimaHopping
TS = read("mol_on_Rh(111)/TS_opt.cif")
TS.calc = calculator
# Simply take the structures on either side of the middle of the imaginary mode and optimize them with BFGS.
c = FixAtoms(indices=[atom.index for atom in vib_traj[15] if atom.position[2] <= 1])
IRC_IS = vib_traj[14].copy()
IRC_IS.calc = calculator
IRC_IS.set_constraint(c)
# opt = BFGS(IRC_IS, logfile=None, maxstep=1)
# opt.run(fmax=0.5)
opt = BasinHopping(IRC_IS, temperature=300 * kB, dr=0.5, optimizer=LBFGS, fmax=0.005,)
print ("IS_done")
IRC_FS = vib_traj[16].copy()
IRC_FS.calc = calculator
IRC_FS.set_constraint(c)
# opt = BFGS(IRC_FS, logfile=None, maxstep=1)
# opt.run(fmax=0.5)
opt = BasinHopping(IRC_FS, temperature=300 * kB, dr=0.5, optimizer=LBFGS, fmax=0.005,)
print ("FS_done")
# Simply take the structures on either side of the middle of the imaginary mode and optimize them with BFGS.
c = FixAtoms(indices=[atom.index for atom in TS if atom.position[2] <= 1])
IRC_IS = vib_traj[14].copy()
IRC_IS.calc = calculator
IRC_IS.set_constraint(c)
opt = BFGS(IRC_IS, logfile=None, maxstep=0.5)
opt.run(fmax=0.005, steps=500)
print ("IS_BFGS_done")
opt = MinimaHopping(IRC_IS, T0=0, fmax=0.005,)
opt(totalsteps=10)
print ("IS_MH_done")
IRC_FS = vib_traj[16].copy()
IRC_FS.calc = calculator
IRC_FS.set_constraint(c)
opt = BFGS(IRC_FS, logfile=None, maxstep=0.5)
opt.run(fmax=0.005, steps=500)
print ("FS_BFGS_done")
#opt = MinimaHopping(IRC_FS, T0=0, fmax=0.005,)
#opt(totalsteps=10)
print ("FS_MH_done")
v = view([IRC_IS, TS, IRC_FS], viewer='ngl')
v.view.add_representation("ball+stick")
display(v)
```
```
# Compare the IS, TS, and FS energies from the NEB calculation with the TSopt+IRC results.
plt.plot([0,1,2], [configs[0].get_potential_energy(), configs[TSNo].get_potential_energy(), configs[-1].get_potential_energy()], label="NEB")
plt.plot([0,1,2], [IRC_IS.get_potential_energy(), TS.get_potential_energy(), IRC_FS.get_potential_energy()], label="TSopt+IRC")
plt.legend()
plt.grid(True)
plt.show()
print(TS.get_potential_energy() - IRC_IS.get_potential_energy())
print(TS.get_potential_energy() - IRC_FS.get_potential_energy())
write(filepath + "/IS_IRC.xyz",IRC_IS)
write(filepath + "/FS_IRC.xyz",IRC_FS)
```
```
# ok, lets start by loading our usual stuffs
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Get daily summary data from: https://www.ncdc.noaa.gov/cdo-web/search
# and you can read more about the inputs here: https://www1.ncdc.noaa.gov/pub/data/cdo/documentation/GHCND_documentation.pdf
w = pd.read_csv("/Users/jillnaiman1/Downloads/2018_ChampaignWeather.csv")
w
# lets first sort our weather data by date
w = w.sort_values(by="DATE")
# now, lets change our date values to datetime
# note we did this using the "datetime" python package
# a few lectures ago
w["DATE"] = pd.to_datetime(w["DATE"])
# we've also got data from a few different stations
# so lets pick one!
mask = w["NAME"] == 'CHAMPAIGN 3 S, IL US'
# now lets make a quick plot!
plt.plot(w["DATE"][mask], w["TMIN"][mask], label='Min temp')
plt.plot(w["DATE"][mask], w["TMAX"][mask], label="Max temp")
plt.xlabel('Date')
plt.ylabel('Temp in F')
plt.legend()
# lets label some things
# and again, lets make our plot a little bigger
plt.rcParams["figure.dpi"] = 100 # 100 is better for lecture
```
# Activity #1: Histograms
* Here we will play with different sorts of simple rebinning methods
* We'll think about when we want to use sums vs. averaging
```
# lets also plot the mean and take a look
mean_temp = 0.5*(w["TMIN"]+w["TMAX"])[mask]
#print(len(mean_temp))
# lets also import ipywidgets
import ipywidgets
# now, if we recall last time, we have some methods to take out
# some of the "noisiness" in data
# and! we can do this interactively!
@ipywidgets.interact(window = (1, 40, 1))
def make_plot(window):
mean_temp_avg = mean_temp.rolling(window=window).mean()
plt.plot(mean_temp, marker = '.', linewidth = 0.5, alpha = 0.5)
plt.plot(mean_temp_avg, marker = '.', linewidth = 1.5)
plt.xlabel('Date')
plt.ylabel('Mean Daily Temp in F')
# when the window size is ~20, we can really see the seasonal averages
# as we discussed last lecture, what we are doing here is a
# "rolling average"
# this is now choosing a rolling average over which to do
# an averaging - it takes nearby bins and averages
# this is like (sum(values)/N)
# ok cool. So, this is our first taste of how we might want to
# modify data to show different things - for example if we
# average over more bins, then we see more of the seasonal variation
# lets compare this dataset to a dataset we might want to
# visualize a little differently
w.keys()
# lets grab the amount of rain each day, I *think* in inches
precp = w['PRCP'][mask]
# we can now do a side by side figure using ipywidgets
# lets do some funny things to rejigger our plots
# to use panda's plotting routine
import matplotlib.dates as mdates
# if we've not done this before, only an issue
# if we want to re-run this cell a few times
# for any reason
set_ind = False
for k in w.keys():
if k.find('DATE') != -1:
set_ind = True
print(k,set_ind)
if set_ind: w.set_index('DATE',inplace=True)
# because we have re-indexed we have to re-define our arrays
mask = w["NAME"] == 'CHAMPAIGN 3 S, IL US'
mean_temp = 0.5*(w["TMIN"]+w["TMAX"])[mask]
precp = w['PRCP'][mask]
@ipywidgets.interact(window = (1, 40, 1))
def make_plot(window):
fig, ax = plt.subplots(1,2,figsize=(10,4))
mean_temp_avg = mean_temp.rolling(window=window).mean()
mean_temp.plot(ax=ax[0])
mean_temp_avg.plot(ax=ax[0])
precp_avg = precp.rolling(window=window).mean()
precp.plot(ax=ax[1], marker='.',linewidth=0.5, alpha=0.5)
precp_avg.plot(ax=ax[1], marker='.', linewidth=1.5)
# note: the below also works too
#ax[1].plot(precp, marker = '.', linewidth = 0.5, alpha = 0.5)
#ax[1].plot(precp_avg, marker = '.', linewidth = 1.5)
# format our axis
ax[1].set_xlabel('Date')
ax[1].set_ylabel('Daily rainfall in inches')
ax[0].set_xlabel('Date')
ax[0].set_ylabel('Mean Daily Temp in F')
for i in range(2): ax[i].xaxis.set_major_formatter(mdates.DateFormatter('%m'))
#fig.canvas.draw() # probably don't need this
# now we note something interesting:
# averaging brings out the seasonal
# differences in the temperature data
# however, no such stark trends are present
# when we averaging the rain data
# this is because seasons show up as
# changes in the daily temperature
# measurements, but for rain they show
# up as changes in *TOTAL* rainfall.
# While any single day may or may not
# be rainy, it's the total, accumulated
# over SEVERAL days, that will
# tell you more.
# With this in mind, lets re-do our viz
@ipywidgets.interact(window = (1, 100, 1))
def make_plot(window):
fig, ax = plt.subplots(1,2,figsize=(10,4))
mean_temp_avg = mean_temp.rolling(window=window).mean()
mean_temp.plot(ax=ax[0])
mean_temp_avg.plot(ax=ax[0])
precp_avg = precp.rolling(window=window).sum() # SUM IS THE KEY!!
precp.plot(ax=ax[1], marker='.',linewidth=0.5, alpha=0.5)
precp_avg.plot(ax=ax[1], marker='.', linewidth=1.5)
# format our axis
ax[1].set_xlabel('Date')
ax[1].set_ylabel('Daily rainfall in inches')
ax[0].set_xlabel('Date')
ax[0].set_ylabel('Mean Daily Temp in F')
for i in range(2): ax[i].xaxis.set_major_formatter(mdates.DateFormatter('%m'))
# now, if we really crank up the binning, we can see that the
# overall rainfail shape follows that of the
# smoothed temperature shape, eventhough the first plot is
# of a moving average & the other a moving sum
# now notice that everything is shifted to later dates, why is this?
# this is because of the formatting of our bins
precp.rolling?
# we can make a slight change to how the binning
# is done and get a very different effect
@ipywidgets.interact(window = (1, 100, 1))
def make_plot(window):
fig, ax = plt.subplots(1,2,figsize=(10,4))
mean_temp_avg = mean_temp.rolling(window=window,center=True).mean()
mean_temp.plot(ax=ax[0])
mean_temp_avg.plot(ax=ax[0])
precp_avg = precp.rolling(window=window,center=True).sum()
precp.plot(ax=ax[1], marker='.',linewidth=0.5, alpha=0.5)
precp_avg.plot(ax=ax[1], marker='.', linewidth=1.5)
# format our axis
ax[1].set_xlabel('Date')
ax[1].set_ylabel('Daily rainfall in inches')
ax[0].set_xlabel('Date')
ax[0].set_ylabel('Mean Daily Temp in F')
for i in range(2): ax[i].xaxis.set_major_formatter(mdates.DateFormatter('%m'))
# Now, another thing to note about our sum for
# rainfall is that its "rolling" - that is we
# are somewhat double summing the amount of
# rainfall in each bin - the sum in bin "i"
# includes data from bin "i-1" and bin "i+1"
# I think last time I called this binning and
# not smoothing - but I think the correct
# view is that it is somewhere between the
# two. To think about the difference, lets
# try some straight forward binning by
# making a histogram.
# we can instead make a *HISTOGRAM* in which we
# rebin our data more coarsely
@ipywidgets.interact(window = (1, 100, 1), day_bins=(1,100,5))
def make_plot(window,day_bins):
fig, ax = plt.subplots(1,2,figsize=(10,4))
#ax[1].xaxis_date() # I don't think we need these
#ax[0].xaxis_date()
mean_temp_avg = mean_temp.rolling(window=window,center=True).mean()
mean_temp.plot(ax=ax[0])
mean_temp_avg.plot(ax=ax[0])
precp.plot(ax=ax[1], marker='.',linewidth=0.5, alpha=0.5)
precp_resampled = precp.resample(str(day_bins)+'D').sum()
precp_resampled.plot(ax=ax[1], marker='.')
# label our axis
ax[1].set_xlabel('Date')
ax[1].set_ylabel('Daily rainfall in inches')
ax[0].set_xlabel('Date')
ax[0].set_ylabel('Mean Daily Temp in F')
ax[1].set_xlim(ax[0].get_xlim())
# lets leave this out and see the "natural" pandas formatting
#for i in range(2): ax[i].xaxis.set_major_formatter(mdates.DateFormatter('%m'))
# here now in the left plot each point is representing all
# rainfall within the specified time range
# The lines are the equivalent to overplotting the
# lines ontop of the histogram we made by hand in the
# lecture slides earlier
```
## Take aways:
* so far we've been doing fairly simple averages - the orange plot on the right of our last plot, the total rainfall, is an example of a *HISTOGRAM* that we talked about in the lecture
* the orange plot on the left is a rolling average, which is a variation of a histogram in which each value is also divided by the number of data points used to make it. Note that this makes a lot more sense for this dataset since the "total temperature" across several days doesn't make a whole lot of sense to talk about.
* the *CENTERING* we talked about in lecture applies here too - how we decided where the bins for each window started shifted our plot (a short static comparison follows below)
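To make the centering point concrete, here is a small static comparison (a sketch reusing the `precp` series defined above, not part of the original walkthrough):
```
# Trailing vs. centered 20-day rolling sums of the daily rainfall.
# With the default trailing window each bin only "knows about" earlier days,
# so the whole curve slides toward later dates; centering removes that shift.
window = 20
trailing = precp.rolling(window=window).sum()
centered = precp.rolling(window=window, center=True).sum()
plt.plot(precp, alpha=0.3, label='daily rainfall')
plt.plot(trailing, label='trailing window sum')
plt.plot(centered, label='centered window sum')
plt.xlabel('Date')
plt.ylabel('Rainfall in inches')
plt.legend()
plt.show()
```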
# Activity #2: Smoothing
* we have thus far performed simple transformations of our datasets - some histogramming, something between binning and smoothing - however, we note that in our final plots, while the overall shapes of the seasons are present, much detail is lost
* Can we preserve both? The short answer is yes, but not without some loss of precise statistical information. We will do this with smoothing.
```
# we can check out the end of the rolling window docs to
# see our window options **post in chat too**
windows_avail = [None,'boxcar','triang','blackman','hamming',
'bartlett','parzen', 'bohman',
'blackmanharris','nuttall','barthann']
# **start with bartlett**
@ipywidgets.interact(window = (1, 100, 1), window_type=windows_avail)
def make_plot(window, window_type):
fig, ax = plt.subplots(1,2,figsize=(10,4))
mean_temp_avg = mean_temp.rolling(window=window,center=True, win_type=window_type).mean()
mean_temp.plot(ax=ax[0], marker='.')
mean_temp_avg.plot(ax=ax[0], marker='.')
precp_avg = precp.rolling(window=window,center=True, win_type=window_type).sum()
precp.plot(ax=ax[1], marker='.',linewidth=0.5, alpha=0.5)
precp_avg.plot(ax=ax[1], marker='.', linewidth=1.5)
# format our axis
ax[1].set_xlabel('Date')
ax[1].set_ylabel('Daily rainfall in inches')
ax[0].set_xlabel('Date')
ax[0].set_ylabel('Mean Daily Temp in F')
for i in range(2): ax[i].xaxis.set_major_formatter(mdates.DateFormatter('%m'))
# so, we can see in the above that different
# windows tend to smooth out or
# accentuate different features
# what is this all doing?
# (1) lets make a toy example of a bartlett window
npoints = 10
x = np.arange(0,npoints)
y = np.repeat(1,npoints)
plt.plot(x,y,'o',label='Original Data')
# (2) ok, nothing exciting, but let's
# apply a Bartlett window to our data
# and see what happens
plt.plot(x,y*np.bartlett(npoints),'o', label='Bartlett')
# so we see that our points have been weighted toward the
# center
# (3) we can try one more example
plt.plot(x,y*np.hamming(npoints),'o', label='Hamming')
# so we can see this is a slightly poofier window
# in the center part
plt.legend()
```
# The big ideas:
### So, why would we want to do this?
* When we apply a window like this to data in one of our bins and then take our sums or averages, we are essentially saying that in a particular window, we think the data in the center of the window is more important than the data at the sides of the window.
* This is an example of a *WEIGHTED AVERAGE* (see the short sketch below).
* This can make sense when we want a balance: the data look less noisy overall, but the value plotted at each x-position is still weighted most heavily toward the data actually at that position
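To make the weighted-average idea concrete, here is a minimal sketch (with a small made-up array, not our weather data) comparing a plain mean with a Bartlett-weighted mean over a single window:
```
import numpy as np

# one window's worth of (synthetic) data, with an outlier at the edge
window_data = np.array([2.0, 4.0, 6.0, 8.0, 20.0])

# plain average: every point counts equally
plain_mean = window_data.mean()

# weighted average: points near the center of the window count more,
# so the edge outlier contributes much less
weights = np.bartlett(len(window_data))
weighted_mean = np.sum(window_data * weights) / np.sum(weights)

print(plain_mean, weighted_mean)  # 8.0 vs 6.0
```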
### What do we give up?
* This smoothed representation is, in a sense, "lossy" - i.e. we cannot easily back out the exact value at each point because it has been convolved with a window function (see the convolution sketch below)
* To get back to our original data we would have to *deconvolve* our data with our smoothing function
* It would be less accurate to perform any statistical analysis with smoothed data, but it can make the plot more visually appealing.
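The "convolved with a window function" view can be seen directly with NumPy (a quick sketch): a rolling box-window mean is literally a convolution of the data with a normalized window.
```
import numpy as np

signal = np.array([1.0, 2.0, 6.0, 2.0, 1.0, 3.0])
box = np.ones(3) / 3  # a normalized 3-point "boxcar" window

# smoothing == convolving the data with the window
smoothed = np.convolve(signal, box, mode='same')
print(smoothed)
```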
A nice resource: https://space.mit.edu/cxc/isis/contrib/Smooth_vs_rebin/Smooth_vs_rebin.html
# Activity The Third: Binning 2D Datasets
```
# So, we can apply these ideas to 2D datasets as well
# lets use a different dataset:
ufos = pd.read_csv("/Users/jillnaiman1/Downloads/ufo-scrubbed-geocoded-time-standardized-00.csv",
names = ["date", "city", "state", "country",
"shape", "duration_seconds", "duration",
"comment", "report_date", "latitude", "longitude"],
parse_dates = ["date", "report_date"])
# you might get a memory warning thing, its just not deprecated correctly
# try not to panic :D
ufos
# so, let's start by plotting ufo sightings by lat and long
# nothing fancy
plt.plot(ufos['longitude'],ufos['latitude'],'.')
# hey look neat! A little map of the earth!
# so, we can do things like map
# different values to color
# lets say we want to color each point
# by the duration of the event
# lets first start by importing a color map
import matplotlib.cm as cm
# lets list the available ones
plt.colormaps()
# so, first lets grab a color map we like - choose your favorite
cmap = cm.RdPu
# now, lets map duration to color
plt.scatter(ufos['longitude'],ufos['latitude'],c=ufos['duration_seconds'])
# alright, what is going on here? We are supposed to have
# multiple colors plotted!
# we can see what is going on by plotting only a few points
plt.scatter(ufos['longitude'][0:10],ufos['latitude'][0:10],c=ufos['duration_seconds'][0:10])
# we see here that while yes, points are different colors, there is a lot of small
# times (purple) and these are being plotted over all the rest
# there are a few ways around this, one we can do our plot as a log
plt.scatter(ufos['longitude'],ufos['latitude'],c=np.log10(ufos['duration_seconds']))
# its still not that much detail though
# so, we can use our histogramming techniques
# to rebin this data in the 2d space!
plt.hexbin(ufos["longitude"], ufos["latitude"], ufos["duration_seconds"], gridsize=32, bins='log')
plt.colorbar()
# so here we are now plotting the locations of UFO sightings in
# hexbins, AND we are showing how long, in log(seconds), each
# bin's ufo sightings typically last (as a sum over all values in each bin)
```
# Activity the Last: Smoothing 2D data => Smoothing Images
```
# lets begin by uploading an image to play around with
import PIL.Image as Image
im = Image.open("/Users/jillnaiman1/Downloads/stitch_reworked.png", "r")
# recall what this image looked like:
fig,ax = plt.subplots(figsize=(5,5))
ax.imshow(im)
#data = np.array(im)
# so, let's say we think the little details in our drawing look too sharp - we can smooth them with an image filter
import PIL.ImageFilter as ImageFilter
myFilter = ImageFilter.GaussianBlur(radius=1)
smoothed_image = im.filter(myFilter)
fig,ax = plt.subplots(figsize=(5,5))
ax.imshow(smoothed_image)
# we can also do this interactively to see how
# different sized gaussian filters look
# (note: a gaussian filter looks similar to a Hamming
# window that we played with before)
@ipywidgets.interact(radius=(1,10,1))
def make_plot(radius):
myFilter = ImageFilter.GaussianBlur(radius=radius)
smoothed_image = im.filter(myFilter)
fig,ax = plt.subplots(figsize=(5,5))
ax.imshow(smoothed_image)
# so, again, if we wanted to show off our digitized
# drawing, we could certainly use this to "smooth" out the image
# however, note that if the radius gets large in
# pixels, we lose a lot of information about our
# drawing
# also note that when we apply *ANY* smoothing
# we are changing the color profile
# our original image, if we recall, had a limited
# number of colors:
print(np.unique(im))
# print out unique pixels
myFilter = ImageFilter.GaussianBlur(radius=1)
smoothed_image = im.filter(myFilter)
# this is not the case for the smoothed image
data = np.array(smoothed_image)
print(np.unique(data))
# here, we have *ADDED* data values that are
# *not* in the original dataset
# This is the downside to smoothing
# just for fun (and to see the comparative effects of smoothing):
import PIL.Image as Image
import PIL.ImageFilter as ImageFilter
import matplotlib.pyplot as plt
im = Image.open("/Users/jillnaiman1/Downloads/littleCorgiInHat.png", "r")
import ipywidgets
@ipywidgets.interact(radius=(1,100,5))
def make_plot(radius):
myFilter = ImageFilter.GaussianBlur(radius=radius)
smoothed_image = im.filter(myFilter)
# plot original image
fig,ax = plt.subplots(1,2,figsize=(10,5))
ax[0].imshow(im)
ax[1].imshow(smoothed_image)
# note how pixelated the last image becomes
# the upshot is that when we give up detail we
# can then remap our data onto a larger grid
# if we set radius == 100, we can almost
# represent our little corgi with a single
# pixel at 1 color
# Something to consider going forward
```
# Logistic Regression (scikit-learn) with HDFS/Spark Data Versioning
This example is based on our [basic census income classification example](census-end-to-end.ipynb), using local setups of ModelDB and its client, and [HDFS/Spark data versioning](https://verta.readthedocs.io/en/master/_autogen/verta.dataset.HDFSPath.html).
```
!pip install /path/to/verta-0.15.10-py2.py3-none-any.whl
HOST = "localhost:8080"
PROJECT_NAME = "Census Income Classification - HDFS Data"
EXPERIMENT_NAME = "Logistic Regression"
```
## Imports
```
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import os
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
```
---
# Log Workflow
This section demonstrates logging model metadata and training artifacts to ModelDB.
## Instantiate Client
```
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
```
## Prepare Data
```
from pyspark import SparkContext
sc = SparkContext("local")
from verta.dataset import HDFSPath
hdfs = "hdfs://HOST:PORT"
dataset = client.set_dataset(name="Census Income S3")
blob = HDFSPath.with_spark(sc, "{}/data/census/*".format(hdfs))
version = dataset.create_version(blob)
version
csv = sc.textFile("{}/data/census/census-train.csv".format(hdfs)).collect()
from verta.external.six import StringIO
df_train = pd.read_csv(StringIO('\n'.join(csv)))
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_train.head()
```
## Prepare Hyperparameters
```
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
```
## Train Models
```
def run_experiment(hyperparams):
# create object to track experiment run
run = client.set_experiment_run()
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
# log hyperparameters
run.log_hyperparameters(hyperparams)
print(hyperparams, end=' ')
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_train, y_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
run.log_metric("val_acc", val_acc)
print("Validation accuracy: {:.4f}".format(val_acc))
# save and log model
run.log_model(model)
# log dataset snapshot as version
run.log_dataset_version("train", version)
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
```
---
# Revisit Workflow
This section demonstrates querying and retrieving runs via the Client.
## Retrieve Best Run
```
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
```
## Train on Full Dataset
```
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
```
## Calculate Accuracy on Full Training Set
```
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
```
---
# Modeling and Simulation in Python
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
```
### Low pass filter
```
with units_off():
for i, name in enumerate(dir(UNITS)):
unit = getattr(UNITS, name)
try:
res = 1*unit - 1
if res == 0:
print(name, 1*unit - 1)
except TypeError:
pass
if i > 10000:
break
with units_off():
print(2 * UNITS.farad - 1)
with units_off():
print(2 * UNITS.volt - 1)
with units_off():
print(2 * UNITS.newton - 1)
mN = UNITS.gram * UNITS.meter / UNITS.second**2
with units_off():
print(2 * mN - 1)
```
Now I'll create a `Params` object to contain the quantities we need. Using a Params object is convenient for grouping the system parameters in a way that's easy to read (and double-check).
```
params = Params(
R1 = 1e6, # ohm
C1 = 1e-9, # farad
A = 5, # volt
f = 1000, # Hz
)
```
Now we can pass the `Params` object to `make_system`, which computes some additional parameters and defines `init`.
`make_system` uses the given resistance `R1` and capacitance `C1` to compute the time constant and the cutoff frequency, and the given signal frequency `f` to compute the angular frequency `omega` and the end time `t_end`.
```
def make_system(params):
"""Makes a System object for the given conditions.
params: Params object
returns: System object
"""
unpack(params)
init = State(V_out = 0)
omega = 2 * np.pi * f
tau = R1 * C1
cutoff = 1 / R1 / C1
t_end = 3 / f
return System(params, init=init, t_end=t_end,
omega=omega, cutoff=cutoff)
```
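As a quick sanity check (a standalone sketch, not part of `make_system`), the time constant and cutoff follow directly from `R1` and `C1`. Note that `cutoff = 1 / (R1 * C1)` is in radians per second; the cutoff frequency in Hz divides by $2 \pi$:
```
import numpy as np

R1 = 1e6   # ohm
C1 = 1e-9  # farad

tau = R1 * C1                          # time constant, seconds
cutoff_rad = 1 / (R1 * C1)             # cutoff in rad/s (what make_system stores)
cutoff_hz = 1 / (2 * np.pi * R1 * C1)  # cutoff frequency in Hz

print(tau, cutoff_rad, cutoff_hz)      # 0.001 s, 1000 rad/s, ~159 Hz
```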
Let's make a `System`
```
system = make_system(params)
```
Here's the slope function:
```
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: State object containing V_out
t: time
system: System object
returns: derivative of V_out
"""
V_out, = state
unpack(system)
V_in = A * np.cos(omega * t)
V_R1 = V_in - V_out
I_R1 = V_R1 / R1
I_C1 = I_R1
dV_out = I_C1 / C1
return dV_out
```
As always, let's test the slope function with the initial conditions.
```
slope_func(system.init, 0, system)
```
And then run the simulation.
```
ts = linspace(0, system.t_end, 301)
results, details = run_ode_solver(system, slope_func, t_eval=ts)
details
```
Here are the results.
```
# results
```
Here's the plot of the output voltage as a function of time.
```
def plot_results(results):
xs = results.V_out.index
ys = results.V_out.values
t_end = get_last_label(results)
if t_end < 10:
xs *= 1000
xlabel = 'Time (ms)'
else:
xlabel = 'Time (s)'
plot(xs, ys)
decorate(xlabel=xlabel,
ylabel='$V_{out}$ (volt)',
legend=False)
plot_results(results)
```
And here are the results for a range of input frequencies:
```
fs = [1, 10, 100, 1000, 10000, 100000]
for i, f in enumerate(fs):
system = make_system(Params(params, f=f))
ts = linspace(0, system.t_end, 301)
results, details = run_ode_solver(system, slope_func, t_eval=ts)
subplot(3, 2, i+1)
plot_results(results)
```
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Matplotlib Basics</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250 px"><img src="https://matplotlib.org/_static/logo2.png" alt="NumPy Logo" style="height: 150px;"></div>
## Overview:
* **Teaching:** 30 minutes
* **Exercises:** 30 minutes
### Questions
1. How are line plots created using Matplotlib?
1. What methods exist to customize the look of these plots?
### Objectives
1. Create a basic line plot.
1. Add labels and grid lines to the plot.
1. Plot multiple series of data.
1. Plot imshow, contour, and filled contour plots.
## Plotting with Matplotlib
Matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.
The first step is to set up our notebook environment so that matplotlib plots appear inline as images:
```
%matplotlib inline
```
Next we import the matplotlib library's `pyplot` interface; this interface is the simplest way to create new Matplotlib figures. To shorten this long name, we import it as `plt` to keep things short but clear.
```
import matplotlib.pyplot as plt
import numpy as np
```
Now we generate some data to use while experimenting with plotting:
```
times = np.array([ 93., 96., 99., 102., 105., 108., 111., 114., 117.,
120., 123., 126., 129., 132., 135., 138., 141., 144.,
147., 150., 153., 156., 159., 162.])
temps = np.array([310.7, 308.0, 296.4, 289.5, 288.5, 287.1, 301.1, 308.3,
311.5, 305.1, 295.6, 292.4, 290.4, 289.1, 299.4, 307.9,
316.6, 293.9, 291.2, 289.8, 287.1, 285.8, 303.3, 310.])
```
Now we come to two quick lines to create a plot. Matplotlib has two core objects: the `Figure` and the `Axes`. The `Axes` is an individual plot with an x-axis, a y-axis, labels, etc; it has all of the various plotting methods we use. A `Figure` holds one or more `Axes` on which we draw; think of the `Figure` as the level at which things are saved to files (e.g. PNG, SVG)

Below, the first line asks for a `Figure` 10 inches by 6 inches. We then ask for an `Axes` or subplot on the `Figure`. After that, we call `plot`, with `times` as the data along the x-axis (independent values) and `temps` as the data along the y-axis (the dependent values).
```
# Create a figure
fig = plt.figure(figsize=(10, 6))
# Ask, out of a 1x1 grid, the first axes.
ax = fig.add_subplot(1, 1, 1)
# Plot times as x-variable and temperatures as y-variable
ax.plot(times, temps)
```
From there, we can do things like ask the axis to add labels for x and y:
```
# Add some labels to the plot
ax.set_xlabel('Time')
ax.set_ylabel('Temperature')
# Prompt the notebook to re-display the figure after we modify it
fig
```
We can also add a title to the plot:
```
ax.set_title('GFS Temperature Forecast', fontdict={'size':16})
fig
```
Of course, we can do so much more...
```
# Set up more temperature data
temps_1000 = np.array([316.0, 316.3, 308.9, 304.0, 302.0, 300.8, 306.2, 309.8,
313.5, 313.3, 308.3, 304.9, 301.0, 299.2, 302.6, 309.0,
311.8, 304.7, 304.6, 301.8, 300.6, 299.9, 306.3, 311.3])
```
Here we call `plot` more than once to plot multiple series of temperature on the same plot; when plotting we pass `label` to `plot` to facilitate automatic legend creation, which happens with the `legend` call. We also add gridlines to the plot using the `grid()` call.
```
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
# Plot two series of data
# The label argument is used when generating a legend.
ax.plot(times, temps, label='Temperature (surface)')
ax.plot(times, temps_1000, label='Temperature (1000 mb)')
# Add labels and title
ax.set_xlabel('Time')
ax.set_ylabel('Temperature')
ax.set_title('Temperature Forecast')
# Add gridlines
ax.grid(True)
# Add a legend to the upper left corner of the plot
ax.legend(loc='upper left')
```
We're not restricted to the default look of the plots, but rather we can override style attributes, such as `linestyle` and `color`. `color` can accept a wide array of options for color, such as `red` or `blue` or HTML color codes. Here we use some different shades of red taken from the Tableau color set in matplotlib, by using `tab:red` for color.
```
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
# Specify how our lines should look
ax.plot(times, temps, color='tab:red', label='Temperature (surface)')
ax.plot(times, temps_1000, color='tab:red', linestyle='--',
label='Temperature (isobaric level)')
# Same as above
ax.set_xlabel('Time')
ax.set_ylabel('Temperature')
ax.set_title('Temperature Forecast')
ax.grid(True)
ax.legend(loc='upper left')
```
### Exercise
* Use `add_subplot` to create two different subplots on the figure
* Create one subplot for temperature, and one for dewpoint
* Set the title of each subplot as appropriate
* Use `ax.set_xlim` and `ax.set_ylim` to control the plot boundaries
* **BONUS:** Experiment with passing `sharex` and `sharey` to `add_subplot` to <a href="https://matplotlib.org/gallery/subplots_axes_and_figures/shared_axis_demo.html#sphx-glr-gallery-subplots-axes-and-figures-shared-axis-demo-py">share plot limits</a>
```
# Fake dewpoint data to plot
dewpoint = 0.9 * temps
dewpoint_1000 = 0.9 * temps_1000
# Create the figure
fig = plt.figure(figsize=(10, 6))
# YOUR CODE GOES HERE
```
#### Solution
```
# %load solutions/subplots.py
```
## Scatter Plots
Maybe it doesn't make sense to plot your data as a line plot, but with markers (a scatter plot). We can do this by setting the `linestyle` to none and specifying a marker type, size, color, etc.
```
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
# Specify no line with circle markers
ax.plot(temps, temps_1000, linestyle='None', marker='o', markersize=5)
ax.set_xlabel('Temperature (surface)')
ax.set_ylabel('Temperature (1000 hPa)')
ax.set_title('Temperature Cross Plot')
ax.grid(True)
```
You can also use the `scatter` method, which is slower but gives you more control, such as being able to color the points individually based upon a third variable.
```
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
# Specify no line with circle markers
ax.scatter(temps, temps_1000)
ax.set_xlabel('Temperature (surface)')
ax.set_ylabel('Temperature (1000 hPa)')
ax.set_title('Temperature Cross Plot')
ax.grid(True)
```
### Exercise
* Beginning with our code above, add the `c` keyword argument to the `scatter` call and color the points by the difference between the surface and 1000 hPa temperature.
* Add a 1:1 line to the plot (slope of 1, intercept of zero). Use a black dashed line.
* **BONUS:** Change the color map to be something more appropriate for this plot.
* **BONUS:** Try to add a colorbar to the plot (have a look at the matplotlib documentation for help).
```
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
# YOUR CODE GOES HERE
ax.set_xlabel('Temperature (surface)')
ax.set_ylabel('Temperature (1000 hPa)')
ax.set_title('Temperature Cross Plot')
ax.grid(True)
```
#### Solution
```
# %load solutions/color_scatter.py
```
## imshow/contour
- `imshow` displays the values in an array as colored pixels, similar to a heat map.
- `contour` creates contours around data.
- `contourf` creates filled contours around data.
First let's create some fake data to work with - let's use a bivariate normal distribution.
```
x = y = np.arange(-3.0, 3.0, 0.025)
X, Y = np.meshgrid(x, y)
Z1 = np.exp(-X**2 - Y**2)
Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2)
Z = (Z1 - Z2) * 2
```
Let's start with a simple imshow plot.
```
fig, ax = plt.subplots()
im = ax.imshow(Z, interpolation='bilinear', cmap='RdYlGn',
origin='lower', extent=[-3, 3, -3, 3])
```
We can also create contours around the data.
```
fig, ax = plt.subplots()
ax.contour(X, Y, Z)
fig, ax = plt.subplots()
c = ax.contour(X, Y, Z, levels=np.arange(-2, 2, 0.25))
ax.clabel(c)
fig, ax = plt.subplots()
c = ax.contourf(X, Y, Z)
```
### Exercise
* Create a figure using imshow and contour that is a heatmap in the colormap of your choice. Overlay black contours with a 0.5 contour interval.
```
# YOUR CODE GOES HERE
```
#### Solution
```
# %load solutions/imshow_contour.py
```
## Resources
The goal of this tutorial is to provide an overview of the use of the Matplotlib library. It covers creating simple line plots, but it is by no means comprehensive. For more information, try looking at the:
- [Matplotlib Documentation](http://matplotlib.org)
- [Matplotlib `plot` documentation](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot)
<a href="https://colab.research.google.com/github/whobbes/fastai/blob/master/keras_lesson1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## FastAI setup
```
# Get the file from fast.ai URL, unzip it, and put it into the folder 'data'
# This uses -qq to make the unzipping less verbose.
!wget http://files.fast.ai/data/dogscats.zip && unzip -qq dogscats.zip -d data/
```
## Introduction to our first task: 'Dogs vs Cats'
```
%reload_ext autoreload
# %autoreload 2
# %matplotlib inline
PATH = "data/dogscats/"
sz=224
batch_size=64
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from keras.layers import Dropout, Flatten, Dense
from keras.applications import ResNet50
from keras.models import Model, Sequential
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K
from keras.applications.resnet50 import preprocess_input
train_data_dir = f'{PATH}train'
validation_data_dir = f'{PATH}valid'
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = train_datagen.flow_from_directory(train_data_dir,
target_size=(sz, sz),
batch_size=batch_size, class_mode='binary')
validation_generator = test_datagen.flow_from_directory(validation_data_dir,
shuffle=False,
target_size=(sz, sz),
batch_size=batch_size, class_mode='binary')
base_model = ResNet50(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers: layer.trainable = False
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
%%time
model.fit_generator(train_generator, train_generator.n // batch_size, epochs=3, workers=4,
validation_data=validation_generator, validation_steps=validation_generator.n // batch_size)
split_at = 140
for layer in model.layers[:split_at]: layer.trainable = False
for layer in model.layers[split_at:]: layer.trainable = True
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
%%time
model.fit_generator(train_generator, train_generator.n // batch_size, epochs=1, workers=3,
validation_data=validation_generator, validation_steps=validation_generator.n // batch_size)
```
# Scikit-learn DBSCAN OD Clustering
<img align="right" src="https://anitagraser.github.io/movingpandas/pics/movingpandas.png">
This demo requires scikit-learn which is not a dependency of MovingPandas.
```
%matplotlib inline
import urllib
import os
import numpy as np
import pandas as pd
from geopandas import GeoDataFrame, read_file
from shapely.geometry import Point, LineString, Polygon, MultiPoint
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
from sklearn.cluster import DBSCAN
from geopy.distance import great_circle
import sys
sys.path.append("..")
import movingpandas as mpd
import warnings
warnings.simplefilter("ignore")
```
## Ship movements (AIS data)
```
df = read_file('../data/ais.gpkg')
df['t'] = pd.to_datetime(df['Timestamp'], format='%d/%m/%Y %H:%M:%S')
df = df.set_index('t')
df = df[df.SOG>0]
MIN_LENGTH = 100 # meters
TRIP_ID = 'MMSI'
traj_collection = mpd.TrajectoryCollection(df, TRIP_ID, min_length=MIN_LENGTH)
print("Finished creating {} trajectories".format(len(traj_collection)))
trips = mpd.ObservationGapSplitter(traj_collection).split(gap=timedelta(minutes=5))
print("Extracted {} individual trips from {} continuous vessel tracks".format(len(trips), len(traj_collection)))
KMS_PER_RADIAN = 6371.0088
EPSILON = 0.1 / KMS_PER_RADIAN
trips.get_start_locations()
def make_od_line(row, od_clusters):
return LineString([od_clusters.loc[row['od'][0]].geometry, od_clusters.loc[row['od'][-1]].geometry])
def get_centermost_point(cluster):
centroid = (MultiPoint(cluster).centroid.x, MultiPoint(cluster).centroid.y)
centermost_point = min(cluster, key=lambda point: great_circle(point, centroid).m)
return Point(tuple(centermost_point)[1], tuple(centermost_point)[0])
def extract_od_gdf(trips):
origins = trips.get_start_locations()
origins['type'] = '0'
origins['traj_id'] = [trip.id for trip in trips]
destinations = trips.get_end_locations()
destinations['type'] = '1'
destinations['traj_id'] = [trip.id for trip in trips]
od = origins.append(destinations)
od['lat'] = od.geometry.y
od['lon'] = od.geometry.x
return od
def dbscan_cluster_ods(od_gdf, eps):
matrix = od_gdf[['lat', 'lon']].to_numpy()
db = DBSCAN(eps=eps, min_samples=1, algorithm='ball_tree', metric='haversine').fit(np.radians(matrix))
cluster_labels = db.labels_
num_clusters = len(set(cluster_labels))
clusters = pd.Series([matrix[cluster_labels == n] for n in range(num_clusters)])
return cluster_labels, clusters
def extract_od_clusters(od_gdf, eps):
cluster_labels, clusters = dbscan_cluster_ods(od_gdf, eps)
od_gdf['cluster'] = cluster_labels
od_by_cluster = pd.DataFrame(od_gdf).groupby(['cluster'])
clustered = od_by_cluster['ShipType'].unique().to_frame(name='types')
clustered['n'] = od_by_cluster.size()
clustered['symbol_size'] = clustered['n']*10 # for visualization purposes
clustered['sog'] = od_by_cluster['SOG'].mean()
clustered['geometry'] = clusters.map(get_centermost_point)
clustered = clustered[clustered['n']>0].sort_values(by='n', ascending=False)
return clustered
def extract_od_matrix(trips, eps, directed=True):
od_gdf = extract_od_gdf(trips)
matrix_nodes = extract_od_clusters(od_gdf, eps)
od_by_traj_id = pd.DataFrame(od_gdf).sort_values(['type']).groupby(['traj_id']) # Groupby preserves the order of rows within each group.
od_by_traj_id = od_by_traj_id['cluster'].unique().to_frame(name='clusters') # unique() preserves input order according to https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.unique.html
if directed:
od_matrix = od_by_traj_id.groupby(od_by_traj_id['clusters'].apply(tuple)).count().rename({'clusters':'n'}, axis=1)
else:
od_matrix = od_by_traj_id.groupby(od_by_traj_id['clusters'].apply(sorted).apply(tuple)).count().rename({'clusters':'n'}, axis=1)
od_matrix['od'] = od_matrix.index
od_matrix['geometry'] = od_matrix.apply(lambda x: make_od_line(row=x, od_clusters=matrix_nodes), axis=1 )
return od_matrix, matrix_nodes
od_matrix, matrix_nodes = extract_od_matrix(trips, EPSILON*2, directed=True)
np.max(od_matrix.n)
from holoviews import dim
( GeoDataFrame(od_matrix).hvplot(title='OD flows', geo=True, tiles='OSM', line_width=dim('n'), alpha=0.5, frame_height=600, frame_width=600) *
GeoDataFrame(matrix_nodes).hvplot(c='sog', size='symbol_size', hover_cols=['cluster', 'n'], geo=True, cmap='RdYlGn')
)
```
## Bird migration data
```
df = read_file('../data/gulls.gpkg')
df['t'] = pd.to_datetime(df['timestamp'])
df = df.set_index('t')
traj_collection = mpd.TrajectoryCollection(df, 'individual-local-identifier', min_length=MIN_LENGTH)
print("Finished creating {} trajectories".format(len(traj_collection)))
trips = mpd.TemporalSplitter(traj_collection).split(mode='month')
print("Extracted {} individual trips from {} continuous tracks".format(len(trips), len(traj_collection)))
EPSILON = 100 / KMS_PER_RADIAN
def extract_od_gdf(trips):
origins = trips.get_start_locations()
origins['type'] = '0'
origins['traj_id'] = [trip.id for trip in trips]
destinations = trips.get_end_locations()
destinations['type'] = '1'
destinations['traj_id'] = [trip.id for trip in trips]
od = origins.append(destinations)
od['lat'] = od.geometry.y
od['lon'] = od.geometry.x
return od
def extract_od_clusters(od_gdf, eps):
cluster_labels, clusters = dbscan_cluster_ods(od_gdf, eps)
od_gdf['cluster'] = cluster_labels
od_by_cluster = pd.DataFrame(od_gdf).groupby(['cluster'])
clustered = od_by_cluster.size().to_frame(name='n')
clustered['geometry'] = clusters.map(get_centermost_point)
clustered = clustered[clustered['n']>0].sort_values(by='n', ascending=False)
return clustered
od_matrix, matrix_nodes = extract_od_matrix(trips, EPSILON, directed=False)
( GeoDataFrame(od_matrix).hvplot(title='OD flows', geo=True, tiles='OSM', hover_cols=['n'], line_width=dim('n')*0.05, alpha=0.5, frame_height=600, frame_width=600) *
GeoDataFrame(matrix_nodes).hvplot(c='n', size=dim('n')*0.1, hover_cols=['cluster', 'n'], geo=True, cmap='RdYlGn')
)
```
### Comparing OD flows and TrajectoryCollectionAggregator
```
aggregator = mpd.TrajectoryCollectionAggregator(trips, max_distance=1000000, min_distance=100000, min_stop_duration=timedelta(minutes=5))
flows = aggregator.get_flows_gdf()
clusters = aggregator.get_clusters_gdf()
( flows.hvplot(title='Generalized aggregated trajectories', geo=True, hover_cols=['weight'], line_width='weight', alpha=0.5, color='#1f77b3', tiles='OSM', frame_height=600, frame_width=400) *
clusters.hvplot(geo=True, color='red', size='n')
+
GeoDataFrame(od_matrix).hvplot(title='OD flows', geo=True, tiles='OSM', hover_cols=['n'], line_width=dim('n')*0.05, alpha=0.5, frame_height=600, frame_width=400) *
GeoDataFrame(matrix_nodes).hvplot(c='n', size=dim('n')*0.1, hover_cols=['cluster', 'n'], geo=True, cmap='RdYlGn')
)
```
# MNIST Handwritten Digit Recognition Project using MLP & CNNs
```
import matplotlib.pyplot as plt
from keras.datasets import mnist
(X_train,y_train),(X_test,y_test)=mnist.load_data()
plt.subplot(331)
plt.imshow(X_train[0],cmap=plt.get_cmap('gray'))
plt.subplot(332)
plt.imshow(X_train[1],cmap=plt.get_cmap('rainbow'))
plt.subplot(333)
plt.imshow(X_train[2],cmap=plt.get_cmap('pink'))
plt.subplot(334)
plt.imshow(X_train[3])
plt.subplot(335)
plt.imshow(X_train[4])
plt.subplot(336)
plt.imshow(X_train[5])
plt.subplot(337)
plt.imshow(X_train[6])
plt.subplot(338)
plt.imshow(X_train[7])
plt.subplot(339)
plt.imshow(X_train[8])
X_train[5].shape
```
### Multilayer Perceptron for recognizing handwritten digits (MNIST handwritten digit dataset)
```
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import np_utils
#random seed
np.random.seed(12)
(X_train,y_train),(X_test,y_test)=mnist.load_data()
#flatten each 28*28 images into 784 vectors
total_pixels=X_train.shape[1]*X_train.shape[2]
X_train=X_train.reshape(X_train.shape[0],total_pixels).astype('float32')
X_test=X_test.reshape(X_test.shape[0],total_pixels).astype('float32')
#normalize input of range 0-255 to 0-1
X_train=X_train/255
X_test=X_test/255
#one hot encode the outputs
y_train=np_utils.to_categorical(y_train)
y_test=np_utils.to_categorical(y_test)
total_classes=y_test.shape[1]
#define simple model ( baseline model)
def baseline_model():
model=Sequential()
model.add(Dense(total_pixels,input_dim=total_pixels,init='normal',activation='relu'))
model.add(Dense(total_classes,init='normal',activation='softmax'))
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
return model
#build model
model=baseline_model()
model.fit(X_train,y_train,validation_data=(X_test,y_test),nb_epoch=10,batch_size=200,verbose=2)
scores=model.evaluate(X_test,y_test,verbose=0)
print("Baseline Model Error: %.2f%%" %(100-scores[1]*100))
#making predictions on test data
predictions=model.predict([X_test])
predictions[2000]
#the predicted digit
print(np.argmax(predictions[2000]))
#plot to find what actually the digit is
plt.imshow(X_test[2000].reshape(28,28),cmap=plt.get_cmap('gray'))
predictions[9999]
print(np.argmax(predictions[9999]))
plt.imshow(X_test[9999].reshape(28,28),cmap=plt.get_cmap('gray'))
predictions[5555]
print(np.argmax(predictions[5555]))
plt.imshow(X_test[5555].reshape(28,28),cmap=plt.get_cmap('gray'))
predictions[8876]
print(np.argmax(predictions[8876]))
plt.imshow(X_test[8876].reshape(28,28),cmap=plt.get_cmap('gray'))
predictions[11]
print(np.argmax(predictions[11]))
plt.imshow(X_test[11].reshape(28,28),cmap=plt.get_cmap('gray'))
```
# Gaussian Bayes classifier
In this assignment we will use a Gaussian Bayes classifier to classify our data points.
# Import packages
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
from sklearn.metrics import classification_report,accuracy_score
from matplotlib import cm
```
# Load training data
Our data has 2D features $x1, x2$. Data from the two classes are in $\texttt{class1_train}$ and $\texttt{class2_train}$ respectively. Each file has two columns corresponding to the 2D features.
```
class1_train = pd.read_csv('https://raw.githubusercontent.com/shala2020/shala2020.github.io/master/Lecture_Materials/Assignments/MachineLearning/L3/class1_train').to_numpy()
class2_train = pd.read_csv('https://raw.githubusercontent.com/shala2020/shala2020.github.io/master/Lecture_Materials/Assignments/MachineLearning/L3/class2_train').to_numpy()
print(class1_train)
```
# Visualize training data
Generate 2D scatter plot of the training data. Plot the points from class 1 in red and the points from class 2 in blue.
```
plt.scatter(x=class1_train[:,0],y=class1_train[:,1],color='red',label='class1_train',marker='+')
plt.scatter(x=class2_train[:,0],y=class2_train[:,1],color='blue',label='class2_train',marker='*')
plt.legend()
plt.xlabel('X1')
plt.ylabel('X2')
plt.show()
```
# Maximum likelihood estimate of parameters
We will model the likelihood, $P(\mathbf{x}|C_1)$ and $P(\mathbf{x}|C_2)$, as $\mathcal{N}(\mathbf{\mu_1},\Sigma_1)$ and $\mathcal{N}(\mathbf{\mu_2},\Sigma_2)$ respectively. The prior probabilities of the classes are called $P(C_1)=\pi_1$ and $P(C_2)=\pi_2$.
The maximum likelihood estimates of the parameters are as follows:
\begin{align*}
\pi_k &= \frac{\sum_{i=1}^N \mathbb{1}(t^i=k)}{N}\\
\mathbf{\mu_k} &= \frac{\sum_{i=1}^N \mathbb{1}(t^i=k)\mathbf{x}^i}{\sum_{i=1}^N \mathbb{1}(t^i=k)}\\
\Sigma_k &= \frac{\sum_{i=1}^N \mathbb{1}(t^i=k)(\mathbf{x}^i-\mathbf{\mu_k})(\mathbf{x}^i-\mathbf{\mu_k})^T}{\sum_{i=1}^N \mathbb{1}(t^i=k)}\\
\end{align*}
Here, $t^i$ is the target or class of $i^{th}$ sample. $\mathbb{1}(t^i=k)$ is 1 if $t^i=k$ and 0 otherwise.
Compute the maximum likelihood estimates of $\pi_1$, $\mu_1$, $\Sigma_1$ and $\pi_2$, $\mu_2$, $\Sigma_2$.
Also print these values.
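To connect the formulas to code: since each class is already stored in its own array, the indicator sums reduce to per-array counts, means, and averaged outer products. A minimal, illustrative sketch of the literal translation (the solution cell below uses the equivalent `mean`/`cov` shortcuts; the `X_demo` array here is just made-up data):
```
import numpy as np

def mle_params(X):
    """Literal translation of the MLE formulas for one class."""
    N_k = X.shape[0]
    mu_k = X.sum(axis=0) / N_k        # mean formula
    diff = X - mu_k
    Sigma_k = diff.T @ diff / N_k     # covariance formula (divides by N_k)
    return mu_k, Sigma_k

X_demo = np.random.randn(5, 2)
print(mle_params(X_demo))
```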
```
num_rows1, num_cols1 = class1_train.shape
num_rows2, num_cols2 = class2_train.shape
prior1 = num_rows1 / (num_rows1 + num_rows2)
prior2 = num_rows2 / (num_rows1 + num_rows2)
mean1 = class1_train.mean(axis=0)
mean2 = class2_train.mean(axis=0)
# bias=True divides by N (the MLE above); np.cov defaults to dividing by N-1
sig1 = np.cov(class1_train, rowvar=False, bias=True)
sig2 = np.cov(class2_train, rowvar=False, bias=True)
print('pi1 =', prior1, 'pi2 =', prior2)
print('mu1 =', mean1, 'mu2 =', mean2)
print('Sigma1 =\n', sig1, '\nSigma2 =\n', sig2)
```
# Visualize the likelihood
Now that you have the parameters, let us visualize how the likelihood looks like.
1. Use $\texttt{np.mgrid}$ to generate points uniformly spaced in -5 to 5 along 2 axes
1. Use $\texttt{multivariate_normal.pdf}$ to compute the Gaussian likelihood for each class
1. Use $\texttt{plot_surface}$ to plot the likelihood of each class.
1. Use $\texttt{contourf}$ to plot the likelihood of each class.
You may find the code in the lecture notebook helpful.
For the plots, use $\texttt{cmap=cm.Reds}$ for class 1 and $\texttt{cmap=cm.Blues}$ for class 2. Use $\texttt{alpha=0.5}$ to overlay both plots together.
```
from matplotlib import cm
x, y = np.mgrid[-5:5:.01, -5:5:.01]
pos = np.empty(x.shape + (2,))
pos[:, :, 0] = x; pos[:, :, 1] = y
rv1 = multivariate_normal(mean =mean1, cov =sig1)
rv2 = multivariate_normal(mean =mean2, cov =sig2)
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(131, projection='3d')
plt.xlabel('x')
plt.ylabel('y')
ax.plot_surface(x,y,rv1.pdf(pos), cmap=cm.Reds,alpha=.5)
ax.plot_surface(x,y,rv2.pdf(pos), cmap=cm.Blues,alpha=.5)
plt.subplot(132)
plt.contourf(x, y, rv1.pdf(pos), cmap=cm.Reds)
plt.colorbar()
plt.subplot(133)
plt.contourf(x, y, rv2.pdf(pos), cmap=cm.Blues)
plt.colorbar()
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```
# Visualize the posterior
Use the prior and the likelihood you've computed to obtain the posterior distribution for each class.
As in the case of the likelihood above, make similar surface and contour plots for the posterior.
```
likelihood1=rv1.pdf(pos)
likelihood2=rv2.pdf(pos)
posterior1 = likelihood1* prior1 /(likelihood1*prior1+likelihood2*prior2)
posterior2 = likelihood2* prior2 /(likelihood1*prior1+likelihood2*prior2)
from matplotlib import cm
x, y = np.mgrid[-5:5:.01, -5:5:.01]
pos = np.empty(x.shape + (2,))
pos[:, :, 0] = x; pos[:, :, 1] = y
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(131, projection='3d')
plt.xlabel('x')
plt.ylabel('y')
ax.plot_surface(x,y,posterior1, cmap=cm.Reds,alpha=.5)
ax.plot_surface(x,y,posterior2, cmap=cm.Blues,alpha=.5)
plt.subplot(132)
plt.contourf(x, y,posterior1, cmap=cm.Reds,alpha=.5)
plt.colorbar()
plt.contourf(x, y, posterior2, cmap=cm.Blues,alpha=.5)
plt.colorbar()
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```
# Decision boundary
1. Decision boundary can be obtained by $P(C_2|x)>P(C_1|x)$ in python. Use $\texttt{contourf}$ to plot the decision boundary. Use $\texttt{cmap=cm.Blues}$ and $\texttt{alpha=0.5}$
1. Also overlay the scatter plot of train data points from the 2 classes on the same plot. Use red color for class 1 and blue color for class 2
```
des=posterior2>posterior1
plt.contourf(x, y,posterior1, cmap=cm.Reds,alpha=.5)
plt.contourf(x, y, posterior2, cmap=cm.Blues,alpha=.5)
plt.contourf(x, y,des, cmap=cm.Greens,alpha=.5)
plt.scatter(x=class1_train[:,0],y=class1_train[:,1],color='red',label='class1_train',marker='+')
plt.scatter(x=class2_train[:,0],y=class2_train[:,1],color='blue',label='class2_train',marker='*')
plt.legend()
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```
# Test Data
Now let's use our trained model to classify test data points
1. $\texttt{test_data}$ contains the $x1,x2$ features of different data points
1. $\texttt{test_label}$ contains the true class of the data points. 0 means class 1. 1 means class 2.
1. Classify the test points based on whichever class has higher posterior probability for each data point
1. Use $\texttt{classification_report}$ to test the classification performance
```
test = pd.read_csv('https://raw.githubusercontent.com/shala2020/shala2020.github.io/master/Lecture_Materials/Assignments/MachineLearning/L3/test').to_numpy()
test_data, test_label = test[:,:2], test[:,2]
likelihood1=rv1.pdf(test_data)
likelihood2=rv2.pdf(test_data)
p1_test= likelihood1* prior1 /(likelihood1*prior1+likelihood2*prior2)
p2_test= likelihood2* prior2 /(likelihood1*prior1+likelihood2*prior2)
classify_test_label=p2_test>p1_test
classify_test_label=np.where(classify_test_label,1,0)
classify_test_label
test_label
print(accuracy_score(test_label,classify_test_label))
print(classification_report(test_label,classify_test_label))
```
# Keras Callbacks
- Keras Callbacks provide useful tools to babysit training process
- ModelCheckpoint
- Earlystopping
- ReduceLROnPlateau
```
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.utils.np_utils import to_categorical
from keras import optimizers
from keras.callbacks import *
from keras.layers import *
```
### Load Dataset
```
data = load_digits()
X_data = data.images
y_data = data.target
X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, test_size = 0.3, random_state = 777)
# reshaping X data => flatten into 1-dimensional
X_train = X_train.reshape((X_train.shape[0], -1))
X_test = X_test.reshape((X_test.shape[0], -1))
# converting y data into categorical (one-hot encoding)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
```
## 1. ModelCheckpoint
- **ModelCheckpoint** is used to 'checkpoint' model results on training
- Oftentimes, it is used to save only best model
```
def create_model():
model = Sequential()
model.add(Dense(100, input_shape = (X_train.shape[1],)))
model.add(Activation('relu'))
model.add(Dense(100))
model.add(Activation('relu'))
model.add(Dense(y_train.shape[1]))
model.add(Activation('sigmoid'))
model.compile(optimizer = 'Adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
return model
model = create_model()
```
### Creating callbacks list
- ModelCheckpoint instances are stored in list and passed on when training
```
callbacks = [ModelCheckpoint(filepath = 'saved_model.hdf5', monitor='val_acc', verbose=1, mode='max')]
model.fit(X_train, y_train, epochs = 10, batch_size = 500, callbacks = callbacks, validation_data = (X_test, y_test))
results = model.evaluate(X_test, y_test)
print('Accuracy: ', results[1])
```
### Loading saved weights
- Saved weights can be loaded and used without further training
- This is especially useful when training time is long and model has to be reused a number of times
```
another_model = create_model()
another_model.load_weights('saved_model.hdf5')
another_model.compile(optimizer = 'Adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
results = another_model.evaluate(X_test, y_test)
print('Accuracy: ', results[1])
```
### Selecting best model
- Best model during whole epoch can be selected using ModelCheckpoint
- Set **'save_best_only'** parameter as True
- Usually, validation accuracy (val_acc) is monitored and used as the criterion for the best model
```
callbacks = [ModelCheckpoint(filepath = 'best_model.hdf5', monitor='val_acc', verbose=1, save_best_only = True, mode='max')]
model = create_model()
model.fit(X_train, y_train, epochs = 10, batch_size = 500, callbacks = callbacks, validation_data = (X_test, y_test))
best_model = create_model()
best_model.load_weights('best_model.hdf5')
best_model.compile(optimizer = 'Adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
results = best_model.evaluate(X_test, y_test)
print('Accuracy: ', results[1])
```
## 2. Early stopping
- Cease training when the model seems to overfit, i.e., the target metric has stopped improving for a certain number of epochs
- One can set the **'patience'** parameter, which denotes the number of epochs that the model will endure without any improvement
- e.g., if patience = 1, training will stop when metric has stopped improving for 2 epochs
```
callbacks = [EarlyStopping(monitor = 'acc', patience = 1)]
model = create_model()
# you could see that model stops training after 7 epochs
model.fit(X_train, y_train, epochs = 20, batch_size = 500, callbacks = callbacks, validation_data = (X_test, y_test))
```
## 3. Reduce learning rate
- In general, it is desirable to lower the learning rate (learning rate decay) as training proceeds
- However, coming up with an optimal learning rate decay scheme is not easy
- So, one common heuristic is to reduce the learning rate when a plateau is reached, in other words, when the loss stops decreasing for a certain number of epochs
- The learning rate is multiplied by the 'factor' parameter when the monitored metric has not improved for 'patience' epochs
<br>
<img src="https://i.ytimg.com/vi/s6jC7Wc9iMI/maxresdefault.jpg" style="width: 600px"/>
```
# halve learning rate when validation loss has not reduced for more than 5 epochs
callbacks = [ReduceLROnPlateau(monitor = 'val_loss', factor = 0.5, patience = 5)]
model = create_model()
model.fit(X_train, y_train, epochs = 20, batch_size = 500, callbacks = callbacks, validation_data = (X_test, y_test))
results = model.evaluate(X_test, y_test)
print('Accuracy: ', results[1])
```
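The three callbacks above are often used together. A minimal sketch combining them (reusing `create_model`, `X_train`, etc. from this notebook; the checkpoint filename and the specific `monitor`/`patience` values are just illustrative choices):
```
# combine checkpointing, early stopping, and LR reduction in one callbacks list
callbacks = [
    ModelCheckpoint(filepath = 'combined_best_model.hdf5', monitor = 'val_loss',
                    save_best_only = True, mode = 'min', verbose = 1),
    EarlyStopping(monitor = 'val_loss', patience = 10),
    ReduceLROnPlateau(monitor = 'val_loss', factor = 0.5, patience = 5)
]

model = create_model()
model.fit(X_train, y_train, epochs = 50, batch_size = 500,
          callbacks = callbacks, validation_data = (X_test, y_test))

results = model.evaluate(X_test, y_test)
print('Accuracy: ', results[1])
```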
# ReadData
```
import pandas as pd
import numpy as np
T2path = 'StataReg/0419-base/T2.csv'
T2 = pd.read_csv(T2path)
T3_Whole = T2[T2['Selected'] == 1]
T3path = 'StataReg/0419-base/Data.dta'
T3 = pd.read_stata(T3path)
T3_Whole['WholeNum'].sum()
len(T2['NotInHighMobility'] == 0)
(T2['NotInHighMobility'] == 0).value_counts()
T2[T2['NotInHighMobility'] == 0]['Geographic Area Name'].to_list()
T3_Whole[T3_Whole['NotInHighMobility'] == 0]['Geographic Area Name'].to_list()
```
# MissingNo
```
import missingno as misno
misno.matrix(T2[T3.columns[1:-5]])
```
# Correlation Matrix Heatmap
```
XY_cols = T3.columns[3:-5]
XY_cols
newdf = T3[XY_cols]
newdf
from matplotlib import pyplot as plt
import seaborn as sns
# Compute the correlation matrix
corr = newdf.corr()
# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(corr, dtype=bool))
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(230, 20, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
```
# Republican Rate and CvdVax
```
T3_Whole.columns
# T3_Whole_repub = T3_Whole[...]  # incomplete selection left as-is; the intended column list was never filled in
import seaborn as sns
from matplotlib import pyplot as plt
sns.set_theme(style="white")
dims = (10, 6)
# df = mylib.load_data()
fig, ax = plt.subplots(figsize=dims)
ax = sns.scatterplot(x='republican_rate', y='CvdVax_MWhiteRate', data = T3_Whole, size='WholeNum', sizes = (30, 300))
# ax = sns.regplot(x=OrgDF['republican_rate'], y=OrgDF['Vax_White'])
ax.set_title('Republican Rate and White COVID-19 Vax Rate')
fig, ax = plt.subplots(figsize=dims)
ax = sns.scatterplot(x='republican_rate', y='CvdVax_MBlackRate', data = T3_Whole, size='WholeNum', sizes = (30, 300))
# ax = sns.regplot(x=OrgDF['republican_rate'], y=OrgDF['Vax_Black'])
ax.set_title('Republican Rate and Black COVID-19 Vax Rate')
fig, ax = plt.subplots(figsize=dims)
ax = sns.scatterplot(x='republican_rate', y='FluVax_NHWhiteRate', data = T3_Whole, size='WholeNum', sizes = (30, 300))
# ax = sns.regplot(x=OrgDF['republican_rate'], y=OrgDF['FluVax_White'], )
ax.set_title('Republican Rate and White Flu Vax Rate')
fig, ax = plt.subplots(figsize=dims)
ax = sns.scatterplot(x='republican_rate', y='FluVax_BlackRate', data = T3_Whole, size='WholeNum', sizes = (30, 300))
# ax = sns.regplot(x=OrgDF['republican_rate'], y=OrgDF['FluVax_Black'])
ax.set_title('Republican Rate and Black Flu Vax Rate')
# ax = sns.regplot(x=OrgDF['Vax_Black'], y=OrgDF['republican_rate'], color="g")
```
# Covid Bar Chart (Weighted)
```
import pandas as pd
T2path = 'StataReg/0419-base/T2.csv'
T2 = pd.read_csv(T2path)
T3_Whole = T2[T2['Selected'] == 1].reset_index(drop = True)
T3path = 'StataReg/0419-base/Data.dta'
T3 = pd.read_stata(T3path)
print(T3_Whole.shape)
T3_Whole['Prop_Weights'] = T3_Whole['WholeNum'] / T3_Whole['WholeNum'].sum() * len(T3_Whole)
print(T3_Whole['Prop_Weights'].sum())
import pandas as pd
RawData = T3_Whole
print(RawData.shape)
L = []
for idx, row in RawData.iterrows():
d = row.to_dict()
dn = {}
dn['Vaccination'] = 'COVID-19'
dn['Race'] = 'Black'
dn['VaxRate'] = d['CvdVax_MBlackRate'] # * d['Prop_Weights']
dn['Prop_Weights'] = d['Prop_Weights']
dn['Republican'] = d['republican']
dn['Vaccination Rate (%)'] = d['CvdVax_MBlackRate'] * d['Prop_Weights']
L.append(dn)
dn = {}
dn['Vaccination'] = 'COVID-19'
dn['Race'] = 'White'
dn['VaxRate'] = d['CvdVax_MWhiteRate'] # * d['Prop_Weights']
dn['Prop_Weights'] = d['Prop_Weights']
dn['Republican'] = d['republican']
dn['Vaccination Rate (%)'] = d['CvdVax_MWhiteRate']* d['Prop_Weights']
L.append(dn)
newdf = pd.DataFrame(L)
# print(newdf.shape)
newdf
import numpy as np
df = newdf[newdf['Race'] == 'Black']
values = df['VaxRate']
weights= df['Prop_Weights']
average = np.ma.average(values, weights = weights, axis=0)
variance = np.dot(weights, (values - average) ** 2) / weights.sum()
std = np.sqrt(variance)
print(average,std)
black_w_average = average
import numpy as np
df = newdf[newdf['Race'] == 'White']
values = df['VaxRate']
weights= df['Prop_Weights']
average = np.ma.average(values, weights = weights, axis=0)
variance = np.dot(weights, (values - average) ** 2) / weights.sum()
std = np.sqrt(variance)
print(average,std)
white_w_average = average
print(white_w_average - black_w_average)
newdf
import numpy as np
print('Republican')
df = newdf[newdf['Race'] == 'Black']
df = df[df['Republican'] == 1]
values = df['VaxRate']
weights= df['Prop_Weights']
average = np.ma.average(values, weights = weights, axis=0)
variance = np.dot(weights, (values - average) ** 2) / weights.sum()
std = np.sqrt(variance)
print('Black (w-mean, w-std):', average, std)
black_w_average = average
import numpy as np
df = newdf[newdf['Race'] == 'White']
df = df[df['Republican'] == 1]
values = df['VaxRate']
weights= df['Prop_Weights']
average = np.ma.average(values, weights = weights, axis=0)
variance = np.dot(weights, (values - average) ** 2) / weights.sum()
std = np.sqrt(variance)
print('White (w-mean, w-std):', average,std)
white_w_average = average
print('Diff:', white_w_average - black_w_average)
import numpy as np
print('Democrat')
df = newdf[newdf['Race'] == 'Black']
df = df[df['Republican'] == 0]
values = df['VaxRate']
weights= df['Prop_Weights']
average = np.ma.average(values, weights = weights, axis=0)
variance = np.dot(weights, (values - average) ** 2) / weights.sum()
std = np.sqrt(variance)
print('Black (w-mean, w-std):', average, std)
black_w_average = average
import numpy as np
df = newdf[newdf['Race'] == 'White']
df = df[df['Republican'] == 0]
values = df['VaxRate']
weights= df['Prop_Weights']
average = np.ma.average(values, weights = weights, axis=0)
variance = np.dot(weights, (values - average) ** 2) / weights.sum()
std = np.sqrt(variance)
print('White (w-mean, w-std):', average,std)
white_w_average = average
print('Diff:', white_w_average - black_w_average)
from matplotlib import pyplot as plt
import seaborn as sns
# sns.set_theme(style="ticks")
sns.set_theme(style="whitegrid")
sns.set(font_scale=1.4)
def change_width(ax, new_value) :
for patch in ax.patches :
current_width = patch.get_width()
diff = current_width - new_value
# we change the bar width
patch.set_width(new_value)
# we recenter the bar
patch.set_x(patch.get_x() + diff * .5)
dims = (10, 6)
fig, ax = plt.subplots(figsize=dims)
ax.set(ylim=(0, 65))
ax = sns.barplot(x="Race" , y="Vaccination Rate (%)", #estimator=sum,
palette = ['grey', 'red'], alpha = 0.5,
data=newdf, errwidth = 2, errcolor = 'black', capsize=.4)
change_width(ax, 0.6)
county_num = len(RawData)
date = '2021-04-19'
ax.set_ylabel("Weighted Vaccination Rate (%)")
ax.set_title('COVID-19 Vaccination Rate ({} Counties, {})'.format(county_num, date))
plt.show()
fig.savefig('bar-covid.pdf', dpi=5000)
print('April 19')
Rate = RawData[[ 'CvdVax_Disparity', 'CvdVax_MBlackRate', 'CvdVax_MWhiteRate',]].describe()
Rate.to_clipboard()
Rate
```
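The weighted mean / standard deviation block above is repeated for each race and party subset. A small helper would remove that duplication; this is just a sketch (it reuses `newdf` from the cell above and is not used by the cells that follow):
```
import numpy as np

def weighted_mean_std(values, weights):
    """Population-weighted mean and standard deviation of a series."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    mean = np.average(values, weights=weights)
    variance = np.dot(weights, (values - mean) ** 2) / weights.sum()
    return mean, np.sqrt(variance)

# example: weighted stats for the Black COVID-19 vaccination rates
df_black = newdf[newdf['Race'] == 'Black']
print(weighted_mean_std(df_black['VaxRate'], df_black['Prop_Weights']))
```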
# Covid Bar Chart (Original)
```
newdf
from matplotlib import pyplot as plt
import seaborn
def change_width(ax, new_value) :
for patch in ax.patches :
current_width = patch.get_width()
diff = current_width - new_value
# we change the bar width
patch.set_width(new_value)
# we recenter the bar
patch.set_x(patch.get_x() + diff * .5)
# import mylib
time = '2021-04-19'
dims = (10, 6)
# df = mylib.load_data()
fig, ax = plt.subplots(figsize=dims)
# seaborn.violinplot(ax=ax, data=df, **violin_options)
ax.set(ylim=(0, 65))
# df.to_csv('759_rate.csv')
ax = sns.barplot(x="Race" , y="VaxRate", palette = ['grey', 'red'], alpha = 0.5,
data=newdf, errwidth = 2, errcolor = 'black', capsize=.3)
change_width(ax, 0.5)
ax.set_title('COVID-19 Vaccination Rate ({} Counties, {})'.format(county_num, time))
plt.show()
```
# Covid Distribution
```
import pandas as pd
from matplotlib import pyplot  # as plt
sns.set_theme(style="whitegrid")
sns.set(font_scale=1.4)
RawData = T3_Whole
print(RawData.shape)
dims = (10, 6)
# df = mylib.load_data()
fig, ax = pyplot.subplots(figsize=dims)
ax = sns.distplot(RawData['CvdVax_MBlackRate'], hist=True, kde=True,
bins=int(100), color = 'grey',
kde_kws={'linewidth': 1}, label='Black')
ax = sns.distplot(RawData['CvdVax_MWhiteRate'], hist=True, kde=True,
bins=int(100), color = 'red',
kde_kws={'linewidth': 1}, label='White')
pyplot.legend(loc='best')
county_num = len(RawData)
ax.set_title('COVID-19 Vaccination Rate Distribution ({} Counties, {})'.format(county_num, date))
ax.set(xlabel='COVID-19 Vaccination Rate (%)')
ax.set(ylabel ='Density')
ax.set(xlim=(0, 100))
ax.set(ylim=(0, 0.155))
fig.savefig('dist-covid.pdf', dpi=5000)
```
# Flu Bar Chart (Weighted)
```
import pandas as pd
T2path = 'StataReg/0419-base/T2.csv'
T2 = pd.read_csv(T2path)
T3_Whole = T2[T2['Selected'] == 1].reset_index(drop = True)
T3path = 'StataReg/0419-base/Data.dta'
T3 = pd.read_stata(T3path)
print(T3_Whole.shape)
T3_Whole['Prop_Weights'] = T3_Whole['WholeNum'] / T3_Whole['WholeNum'].sum() * len(T3_Whole)
print(T3_Whole['Prop_Weights'].sum())
import pandas as pd
RawData = T3_Whole
print(RawData.shape)
L = []
for idx, row in RawData.iterrows():
d = row.to_dict()
dn = {}
dn = {}
dn['Vaccination'] = 'Flu'
dn['Race'] = 'Black'
dn['VaxRate'] = d['FluVax_BlackRate'] # * d['Prop_Weights']
dn['Prop_Weights'] = d['Prop_Weights']
dn['Vaccination Rate (%)'] = d['FluVax_BlackRate']* d['Prop_Weights']
# dn['Population'] = d['Total_Whole']
# dn['Rate-Population'] = (dn['Vaccination Rate (%)'] , dn['Population'])
L.append(dn)
dn = {}
dn['Vaccination'] = 'Flu'
dn['Race'] = 'White'
dn['VaxRate'] = d['FluVax_NHWhiteRate'] # * d['Prop_Weights']
dn['Prop_Weights'] = d['Prop_Weights']
dn['Vaccination Rate (%)'] = d['FluVax_NHWhiteRate']* d['Prop_Weights']
L.append(dn)
newdf = pd.DataFrame(L)
# print(newdf.shape)
newdf
import numpy as np
df = newdf[newdf['Race'] == 'Black']
values = df['VaxRate']
weights= df['Prop_Weights']
average = np.ma.average(values, weights = weights, axis=0)
variance = np.dot(weights, (values - average) ** 2) / weights.sum()
std = np.sqrt(variance)
print(average,std)
black_w_average = average
import numpy as np
df = newdf[newdf['Race'] == 'White']
values = df['VaxRate']
weights= df['Prop_Weights']
average = np.ma.average(values, weights = weights, axis=0)
variance = np.dot(weights, (values - average) ** 2) / weights.sum()
std = np.sqrt(variance)
print(average,std)
white_w_average = average
print(white_w_average - black_w_average)
from matplotlib import pyplot as plt
import seaborn as sns
# sns.set_theme(style="ticks")
sns.set_theme(style="whitegrid")
sns.set(font_scale=1.4)
def change_width(ax, new_value) :
for patch in ax.patches :
current_width = patch.get_width()
diff = current_width - new_value
# we change the bar width
patch.set_width(new_value)
# we recenter the bar
patch.set_x(patch.get_x() + diff * .5)
# import mylib
newdf = newdf[newdf['Vaccination'] == 'Flu']
dims = (10, 6)
# df = mylib.load_data()
fig, ax = plt.subplots(figsize=dims)
# seaborn.violinplot(ax=ax, data=df, **violin_options)
ax.set(ylim=(0, 65))
if True:
# df.to_csv('759_rate.csv')
ax = sns.barplot(x="Race" , y="Vaccination Rate (%)", palette = ['grey', 'red'], # ci = 'sd',
alpha = 0.5, # estimator = sum,
data=newdf, errwidth = 2, errcolor = 'black', capsize=.4)
change_width(ax, 0.6)
else:
ax = sns.barplot(x="Vaccination" , y="Vaccination Rate (%)", hue = 'Race', palette = ['grey', 'red'], alpha = 0.5,
data=newdf, errwidth = 2, errcolor = 'black', capsize=.4)
# ax.set_ylabel("Weighted Vaccination Rate (%)")
# ax.set_title('Flu Vaccination Rate (759 Counties, 2019)')
county_num = len(RawData)
date = '2019'
ax.set_ylabel("Weighted Vaccination Rate (%)")
ax.set_title('Flu Vaccination Rate ({} Counties, {})'.format(county_num, date))
plt.show()
print('April 19')
Rate = RawData[[ 'FluVax_Disparity', 'FluVax_BlackRate', 'FluVax_NHWhiteRate',]].describe()
Rate.to_clipboard()
Rate
fig.savefig('bar-flu.pdf', dpi=5000)
```
# Flu Bar Chart (Original)
```
from matplotlib import pyplot as plt
import seaborn
def change_width(ax, new_value) :
for patch in ax.patches :
current_width = patch.get_width()
diff = current_width - new_value
# we change the bar width
patch.set_width(new_value)
# we recenter the bar
patch.set_x(patch.get_x() + diff * .5)
# import mylib
time = '2019'
dims = (10, 6)
# df = mylib.load_data()
fig, ax = plt.subplots(figsize=dims)
# seaborn.violinplot(ax=ax, data=df, **violin_options)
ax.set(ylim=(0, 65))
# df.to_csv('759_rate.csv')
ax = sns.barplot(x="Race" , y="VaxRate", palette = ['grey', 'red'], alpha = 0.5,
data=newdf, errwidth = 2, errcolor = 'black', capsize=.3)
change_width(ax, 0.5)
ax.set_title('Flu Vaccination Rate ({} Counties, {})'.format(county_num, time))
ax.set_ylabel('Flu Vaccination Rate (%)')
plt.show()
```
# Flu Distribution
```
import pandas as pd
from matplotlib import pyplot
sns.set_theme(style="whitegrid")
sns.set(font_scale=1.4)
# Rate = T2[['Rate_Diff', 'Rate_Black', 'Rate_White', 'FluDiff', 'FluBlack', 'FluWhite',]].mean()*100
# Rate
RawData = T3_Whole
print(RawData.shape)
dims = (10, 6)
# df = mylib.load_data()
fig, ax = pyplot.subplots(figsize=dims)
ax = sns.distplot(RawData['FluVax_BlackRate'], hist=True, kde=True,
bins=int(100), color = 'grey',
kde_kws={'linewidth': 1}, label='Black')
ax = sns.distplot(RawData['FluVax_NHWhiteRate'], hist=True, kde=True,
bins=int(100), color = 'red',
kde_kws={'linewidth': 1}, label='White')
pyplot.legend(loc='best')
county_num = len(RawData)
date = '2019'
ax.set_title('Flu Vaccination Rate Distribution ({} Counties, {})'.format(county_num, date))
ax.set(xlabel='Flu Vaccination Rate (%)')
ax.set(ylabel ='Density')
ax.set(xlim=(0, 100))
ax.set(ylim=(0, 0.155))
fig.savefig('dist-flu.pdf', dpi=5000)
```
# Black and NHWhite Weighted Proportion
```
import pandas as pd
T2path = 'StataReg/0419-base/T2.csv'
T2 = pd.read_csv(T2path)
T3_Whole = T2[T2['Selected'] == 1].reset_index(drop = True)
T3path = 'StataReg/0419-base/Data.dta'
T3 = pd.read_stata(T3path)
print(T3_Whole.shape)
T3_Whole['Prop_Weights'] = T3_Whole['WholeNum'] / T3_Whole['WholeNum'].sum() * len(T3_Whole)
print(T3_Whole['Prop_Weights'].sum())
import pandas as pd
RawData = T3_Whole
print(RawData.shape)
L = []
for idx, row in RawData.iterrows():
d = row.to_dict()
dn = {}
# dn['Vaccination'] = 'COVID-19'
dn['Race'] = 'Black'
# dn['VaxRate'] = d['CvdVax_MBlackRate'] # * d['Prop_Weights']
dn['Prop_Weights'] = d['Prop_Weights']
dn['Republican'] = d['republican']
# dn['Vaccination Rate (%)'] = d['CvdVax_MBlackRate'] * d['Prop_Weights']
dn['RacePropotion'] = d['BlackNum'] / d['WholeNum']
L.append(dn)
dn = {}
# dn['Vaccination'] = 'COVID-19'
dn['Race'] = 'White'
# dn['VaxRate'] = d['CvdVax_MWhiteRate'] # * d['Prop_Weights']
dn['Prop_Weights'] = d['Prop_Weights']
dn['Republican'] = d['republican']
dn['RacePropotion'] = d['NHWhiteNum'] / d['WholeNum']
L.append(dn)
dn = {}
# dn['Vaccination'] = 'COVID-19'
dn['Race'] = 'White-Black'
# dn['VaxRate'] = d['CvdVax_MWhiteRate'] # * d['Prop_Weights']
dn['Prop_Weights'] = d['Prop_Weights']
dn['Republican'] = d['republican']
dn['RacePropotion'] = d['NHWhiteNum'] / d['WholeNum'] - d['BlackNum'] / d['WholeNum']
L.append(dn)
newdf = pd.DataFrame(L)
# print(newdf.shape)
newdf
2268 / 3
import numpy as np
print('Republican')
df = newdf[newdf['Race'] == 'Black']
df = df[df['Republican'] == 1]
repu_black_values = df['RacePropotion']
repu_black_weights= df['Prop_Weights']
average = np.ma.average(repu_black_values, weights = repu_black_weights, axis=0)
variance = np.dot(repu_black_weights, (repu_black_values - average) ** 2) / repu_black_weights.sum()
std = np.sqrt(variance)
print('Black (w-mean, w-std):', average, std, ', county number:', len(df))
black_w_average = average
import numpy as np
df = newdf[newdf['Race'] == 'White']
df = df[df['Republican'] == 1]
repu_white_values = df['RacePropotion']
repu_white_weights= df['Prop_Weights']
average = np.ma.average(repu_white_values, weights = repu_white_weights, axis=0)
variance = np.dot(repu_white_weights, (repu_white_values - average) ** 2) / repu_white_weights.sum()
std = np.sqrt(variance)
print('White (w-mean, w-std):', average,std, ', county number:', len(df))
white_w_average = average
print('Diff:', white_w_average - black_w_average)
import numpy as np
df = newdf[newdf['Race'] == 'White-Black']
df = df[df['Republican'] == 1]
repu_diff_values = df['RacePropotion']
repu_diff_weights= df['Prop_Weights']
average = np.ma.average(repu_diff_values, weights = repu_diff_weights, axis=0)
variance = np.dot(repu_diff_weights, (repu_diff_values - average) ** 2) / repu_diff_weights.sum()
std = np.sqrt(variance)
print('White-Black (w-mean, w-std):', average, std, ', county number:', len(df))
import numpy as np
print('Democratic')
df = newdf[newdf['Race'] == 'Black']
df = df[df['Republican'] == 0]
demo_black_values = df['RacePropotion']
demo_black_weights= df['Prop_Weights']
average = np.ma.average(demo_black_values, weights = demo_black_weights, axis=0)
variance = np.dot(demo_black_weights, (demo_black_values - average) ** 2) / demo_black_weights.sum()
std = np.sqrt(variance)
print('Black (w-mean, w-std):', average, std, ', county number:', len(df))
black_w_average = average
import numpy as np
df = newdf[newdf['Race'] == 'White']
df = df[df['Republican'] == 0]
demo_white_values = df['RacePropotion']
demo_white_weights= df['Prop_Weights']
average = np.ma.average(demo_white_values, weights = demo_white_weights, axis=0)
variance = np.dot(demo_white_weights, (demo_white_values - average) ** 2) / demo_white_weights.sum()
std = np.sqrt(variance)
print('White (w-mean, w-std):', average,std, ', county number:', len(df))
white_w_average = average
print('Diff:', white_w_average - black_w_average)
import numpy as np
df = newdf[newdf['Race'] == 'White-Black']
df = df[df['Republican'] == 0]
demo_diff_values = df['RacePropotion']
demo_diff_weights= df['Prop_Weights']
average = np.ma.average(demo_diff_values, weights = demo_diff_weights, axis=0)
variance = np.dot(demo_diff_weights, (demo_diff_values - average) ** 2) / demo_diff_weights.sum()
std = np.sqrt(variance)
print('White-Black (w-mean, w-std):', average, std, ', county number:', len(df))
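# The weighted mean / standard deviation calculation above is repeated for
# every subgroup. A small helper like the sketch below (hypothetical; it is
# not used elsewhere in this notebook) would remove that duplication. It
# assumes `values` and `weights` are aligned 1-D arrays or pandas Series.
def weighted_stats(values, weights):
    average = np.average(values, weights=weights)
    variance = np.dot(weights, (values - average) ** 2) / weights.sum()
    return average, np.sqrt(variance)
# Example: weighted_stats(demo_diff_values, demo_diff_weights)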
# np.ma.average?
from scipy import stats
norm_weight = repu_diff_weights / repu_diff_weights.sum()
repu_diff_weighted = (repu_diff_values * norm_weight).values
print(len(repu_diff_weighted))
norm_weight = demo_diff_weights / demo_diff_weights.sum()
demo_diff_weighted = (demo_diff_values * norm_weight).values
print(len(demo_diff_weighted))
print(stats.ttest_ind(repu_diff_values, demo_diff_values))
print(stats.ttest_ind(repu_diff_weighted, demo_diff_weighted))
from scipy import stats
norm_weight = repu_white_weights / repu_white_weights.sum()
repu_white_weighted = (repu_white_values * norm_weight).values
print(len(repu_white_weighted))
norm_weight = demo_white_weights / demo_white_weights.sum()
demo_white_weighted = (demo_white_values * norm_weight).values
print(len(demo_white_weighted))
print('For white')
print(stats.ttest_ind(repu_white_values, demo_white_values))
print(stats.ttest_ind(repu_white_weighted, demo_white_weighted))
from scipy import stats
norm_weight = repu_black_weights / repu_black_weights.sum()
repu_black_weighted = (repu_black_values * norm_weight).values
print(len(repu_black_weighted))
norm_weight = demo_black_weights / demo_black_weights.sum()
demo_black_weighted = (demo_black_values * norm_weight).values
print(len(demo_black_weighted))
print('For Black')
print(stats.ttest_ind(repu_black_values, demo_black_values))
print(stats.ttest_ind(repu_black_weighted, demo_black_weighted))
# demo_diff_values
from scipy import stats
stats.ttest_ind(repu_diff_values, demo_diff_values)
```
# Classification
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py and create directories
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
from utils import Or70, Pu50, Gr30
color_list3 = [Or70, Pu50, Gr30]
import matplotlib.pyplot as plt
from cycler import cycler
marker_cycle = cycler(marker=['s', 'o', '^'])
color_cycle = cycler(color=color_list3)
line_cycle = cycler(linestyle=['-', '--', ':'])
plt.rcParams['axes.prop_cycle'] = (color_cycle +
marker_cycle +
line_cycle)
```
Classification might be the most well-known application of Bayesian methods, made famous in the 1990s as the basis of the first generation of [spam filters](https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering).
In this chapter, I'll demonstrate Bayesian classification using data collected and made available by Dr. Kristen Gorman at the Palmer Long-Term Ecological Research Station in Antarctica (see Gorman, Williams, and Fraser, ["Ecological Sexual Dimorphism and Environmental Variability within a Community of Antarctic Penguins (Genus *Pygoscelis*)"](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0090081), March 2014).
We'll use this data to classify penguins by species.
The following cell downloads the raw data.
```
# Load the data files from
# https://github.com/allisonhorst/palmerpenguins
# With gratitude to Allison Horst (@allison_horst)
import os
if not os.path.exists('penguins_raw.csv'):
!wget https://github.com/allisonhorst/palmerpenguins/raw/master/inst/extdata/penguins_raw.csv
```
## Penguin Data
I'll use Pandas to load the data into a `DataFrame`.
```
import pandas as pd
df = pd.read_csv('penguins_raw.csv')
df.shape
```
The dataset contains one row for each penguin and one column for each variable.
```
df.head()
```
For convenience, I'll create a new column called `Species2` that contains a shorter version of the species names.
```
def shorten(species):
return species.split()[0]
df['Species2'] = df['Species'].apply(shorten)
```
Three species of penguins are represented in the dataset: Adélie, Chinstrap and Gentoo.
These species are shown in this illustration (by Allison Horst, available under the [CC-BY](https://creativecommons.org/licenses/by/2.0/) license):
<img width="400" src="https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/images/EaAWkZ0U4AA1CQf.jpeg" alt="Drawing of three penguin species">
The measurements we'll use are:
* Body Mass in grams (g).
* Flipper Length in millimeters (mm).
* Culmen Length in millimeters.
* Culmen Depth in millimeters.
If you are not familiar with the word "culmen", it refers to the [top margin of the beak](https://en.wikipedia.org/wiki/Bird_measurement#Culmen).
The culmen is shown in the following illustration (also by Allison Horst):
<img width="300" src="https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/images/EaAXQn8U4AAoKUj.jpeg">
These measurements will be most useful for classification if there are substantial differences between species and small variation within species. To see whether that is true, and to what degree, I'll plot cumulative distribution functions (CDFs) of each measurement for each species.
The following function takes the `DataFrame` and a column name.
It returns a dictionary that maps from each species name to a `Cdf` of the values in the column named `colname`.
```
def make_cdf_map(df, colname, by='Species2'):
"""Make a CDF for each species."""
cdf_map = {}
grouped = df.groupby(by)[colname]
for species, group in grouped:
cdf_map[species] = Cdf.from_seq(group, name=species)
return cdf_map
```
The following function plots a `Cdf` of the values in the given column for each species:
```
from empiricaldist import Cdf
from utils import decorate
def plot_cdfs(df, colname, by='Species2'):
"""Make a CDF for each species.
df: DataFrame
colname: string column name
by: string column name
returns: dictionary from species name to Cdf
"""
cdf_map = make_cdf_map(df, colname, by)
for species, cdf in cdf_map.items():
cdf.plot(label=species, marker='')
decorate(xlabel=colname,
ylabel='CDF')
```
Here's what the distributions look like for culmen length.
```
colname = 'Culmen Length (mm)'
plot_cdfs(df, colname)
```
It looks like we can use culmen length to identify Adélie penguins, but the distributions for the other two species almost entirely overlap.
Here are the distributions for flipper length.
```
colname = 'Flipper Length (mm)'
plot_cdfs(df, colname)
```
Using flipper length, we can distinguish Gentoo penguins from the other two species. So with just these two features, it seems like we should be able to classify penguins with some accuracy.
All of these CDFs show the sigmoid shape characteristic of the normal distribution; I will take advantage of that observation in the next section.
Here are the distributions for culmen depth.
```
colname = 'Culmen Depth (mm)'
plot_cdfs(df, colname)
```
And here are the distributions of body mass.
```
colname = 'Body Mass (g)'
plot_cdfs(df, colname)
```
Culmen depth and body mass distinguish Gentoo penguins from the other two species, but these features might not add a lot of additional information, beyond what we get from flipper length and culmen length.
## Normal Models
Let's use these features to classify penguins. We'll proceed in the usual Bayesian way:
1. Define a prior distribution with the three possible species and a prior probability for each,
2. Compute the likelihood of the data for each hypothetical species, and then
3. Compute the posterior probability of each hypothesis.
To compute the likelihood of the data under each hypothesis, I'll use the data to estimate the parameters of a normal distribution for each species.
The following function takes a `DataFrame` and a column name; it returns a dictionary that maps from each species name to a `norm` object.
`norm` is defined in SciPy; it represents a normal distribution with a given mean and standard deviation.
```
from scipy.stats import norm
def make_norm_map(df, colname, by='Species2'):
"""Make a map from species to norm object."""
norm_map = {}
grouped = df.groupby(by)[colname]
for species, group in grouped:
mean = group.mean()
std = group.std()
norm_map[species] = norm(mean, std)
return norm_map
```
For example, here's the dictionary of `norm` objects for flipper length:
```
flipper_map = make_norm_map(df, 'Flipper Length (mm)')
flipper_map.keys()
```
Now suppose we measure a penguin and find that its flipper is 193 mm long. What is the probability of that measurement under each hypothesis?
The `norm` object provides `pdf`, which computes the probability density function (PDF) of the normal distribution. We can use it to compute the likelihood of the observed data in a given distribution.
```
data = 193
flipper_map['Adelie'].pdf(data)
```
The result is a probability density, so we can't interpret it as a probability. But it is proportional to the likelihood of the data, so we can use it to update the prior.
Here's how we compute the likelihood of the data in each distribution.
```
hypos = flipper_map.keys()
likelihood = [flipper_map[hypo].pdf(data) for hypo in hypos]
likelihood
```
Now we're ready to do the update.
## The Update
As usual I'll use a `Pmf` to represent the prior distribution. For simplicity, let's assume that the three species are equally likely.
```
from empiricaldist import Pmf
prior = Pmf(1/3, hypos)
prior
```
Now we can do the update in the usual way.
```
posterior = prior * likelihood
posterior.normalize()
posterior
```
A penguin with a 193 mm flipper is unlikely to be a Gentoo, but might be either an Adélie or Chinstrap (assuming that the three species were equally likely before the measurement).
The following function encapsulates the steps we just ran.
It takes a `Pmf` representing the prior distribution, the observed data, and a map from each hypothesis to the distribution of the feature.
```
def update_penguin(prior, data, norm_map):
"""Update hypothetical species."""
hypos = prior.qs
likelihood = [norm_map[hypo].pdf(data) for hypo in hypos]
posterior = prior * likelihood
posterior.normalize()
return posterior
```
The return value is the posterior distribution.
Here's the previous example again, using `update_penguin`:
```
posterior1 = update_penguin(prior, 193, flipper_map)
posterior1
```
As we saw in the CDFs, flipper length does not distinguish strongly between Adélie and Chinstrap penguins.
But culmen length *can* make this distinction, so let's use it to do a second round of classification.
First we estimate distributions of culmen length for each species like this:
```
culmen_map = make_norm_map(df, 'Culmen Length (mm)')
```
Now suppose we see a penguin with culmen length 48 mm.
We can use this data to update the prior.
```
posterior2 = update_penguin(prior, 48, culmen_map)
posterior2
```
A penguin with culmen length 48 mm is about equally likely to be a Chinstrap or Gentoo.
Using one feature at a time, we can often rule out one species or another, but we generally can't identify species with confidence.
We can do better using multiple features.
## Naive Bayesian Classification
To make it easier to do multiple updates, I'll use the following function, which takes a prior `Pmf`, a sequence of measurements and a corresponding sequence of dictionaries containing estimated distributions.
```
def update_naive(prior, data_seq, norm_maps):
"""Naive Bayesian classifier
prior: Pmf
data_seq: sequence of measurements
norm_maps: sequence of maps from species to distribution
returns: Pmf representing the posterior distribution
"""
posterior = prior.copy()
for data, norm_map in zip(data_seq, norm_maps):
posterior = update_penguin(posterior, data, norm_map)
return posterior
```
It performs a series of updates, using one variable at a time, and returns the posterior `Pmf`.
To test it, I'll use the same features we looked at in the previous section: culmen length and flipper length.
```
colnames = ['Flipper Length (mm)', 'Culmen Length (mm)']
norm_maps = [flipper_map, culmen_map]
```
Now suppose we find a penguin with flipper length 193 mm and culmen length 48 mm.
Here's the update:
```
data_seq = 193, 48
posterior = update_naive(prior, data_seq, norm_maps)
posterior
```
It is almost certain to be a Chinstrap.
```
posterior.max_prob()
```
We can loop through the dataset and classify each penguin with these two features.
```
import numpy as np
df['Classification'] = np.nan
for i, row in df.iterrows():
data_seq = row[colnames]
posterior = update_naive(prior, data_seq, norm_maps)
df.loc[i, 'Classification'] = posterior.max_prob()
```
This loop adds a column called `Classification` to the `DataFrame`; it contains the species with the maximum posterior probability for each penguin.
So let's see how many we got right.
```
len(df)
valid = df['Classification'].notna()
valid.sum()
same = df['Species2'] == df['Classification']
same.sum()
```
There are 344 penguins in the dataset, but two of them are missing measurements, so we have 342 valid cases.
Of those, 324 are classified correctly, which is almost 95%.
```
same.sum() / valid.sum()
```
The following function encapsulates these steps.
```
def accuracy(df):
"""Compute the accuracy of classification."""
valid = df['Classification'].notna()
same = df['Species2'] == df['Classification']
return same.sum() / valid.sum()
```
The classifier we used in this section is called "naive" because it ignores correlations between the features. To see why that matters, I'll make a less naive classifier: one that takes into account the joint distribution of the features.
## Joint Distributions
I'll start by making a scatter plot of the data.
```
import matplotlib.pyplot as plt
def scatterplot(df, var1, var2):
"""Make a scatter plot."""
grouped = df.groupby('Species2')
for species, group in grouped:
plt.plot(group[var1], group[var2],
label=species, lw=0, alpha=0.3)
decorate(xlabel=var1, ylabel=var2)
```
Here's a scatter plot of culmen length and flipper length for the three species.
```
var1 = 'Flipper Length (mm)'
var2 = 'Culmen Length (mm)'
scatterplot(df, var1, var2)
```
Within each species, the joint distribution of these measurements forms an oval shape, at least roughly. The orientation of the ovals is along a diagonal, which indicates that there is a correlation between culmen length and flipper length.
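To quantify that visual impression, here's a quick check (not needed for the classifier) of the within-species correlation between the two measurements:

```
for species, group in df.groupby('Species2'):
    print(species, group[var1].corr(group[var2]))
```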
If we ignore these correlations, we are assuming that the features are independent. To see what that looks like, I'll make a joint distribution for each species assuming independence.
The following function makes a discrete `Pmf` that approximates a normal distribution.
```
def make_pmf_norm(dist, sigmas=3, n=101):
"""Make a Pmf approximation to a normal distribution."""
mean, std = dist.mean(), dist.std()
low = mean - sigmas * std
high = mean + sigmas * std
qs = np.linspace(low, high, n)
ps = dist.pdf(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
```
We can use it, along with `make_joint`, to make a joint distribution of culmen length and flipper length for each species.
```
from utils import make_joint
joint_map = {}
for species in hypos:
pmf1 = make_pmf_norm(flipper_map[species])
pmf2 = make_pmf_norm(culmen_map[species])
joint_map[species] = make_joint(pmf1, pmf2)
```
The following figure compares a scatter plot of the data to the contours of the joint distributions, assuming independence.
```
from utils import plot_contour
scatterplot(df, var1, var2)
for species in hypos:
plot_contour(joint_map[species], alpha=0.5)
```
The contours of a joint normal distribution form ellipses.
In this example, because the features are uncorrelated, the ellipses are aligned with the axes.
But they are not well aligned with the data.
We can make a better model of the data, and use it to compute better likelihoods, with a multivariate normal distribution.
## Multivariate Normal Distribution
As we have seen, a univariate normal distribution is characterized by its mean and standard deviation.
A multivariate normal distribution is characterized by the means of the features and the **covariance matrix**, which contains **variances**, which quantify the spread of the features, and the **covariances**, which quantify the relationships among them.
We can use the data to estimate the means and covariance matrix for the population of penguins.
First I'll select the columns we want.
```
features = df[[var1, var2]]
```
And compute the means.
```
mean = features.mean()
mean
```
We can also compute the covariance matrix:
```
cov = features.cov()
cov
```
The result is a `DataFrame` with one row and one column for each feature. The elements on the diagonal are the variances; the elements off the diagonal are covariances.
By themselves, variances and covariances are hard to interpret. We can use them to compute standard deviations and correlation coefficients, which are easier to interpret, but the details of that calculation are not important right now.
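If you're curious, here's a quick sketch of that calculation: the standard deviations are the square roots of the diagonal of the covariance matrix, and each correlation is the covariance divided by the product of the corresponding standard deviations (Pandas can also compute this directly with `features.corr()`).

```
import numpy as np

std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)
corr
```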
Instead, we'll pass the covariance matrix to `multivariate_normal` which is a SciPy function that creates an object that represents a multivariate normal distribution.
As arguments it takes a sequence of means and a covariance matrix:
```
from scipy.stats import multivariate_normal
multinorm = multivariate_normal(mean, cov)
```
The following function makes a `multivariate_normal` object for each species.
```
def make_multinorm_map(df, colnames):
"""Make a map from each species to a multivariate normal."""
multinorm_map = {}
grouped = df.groupby('Species2')
for species, group in grouped:
features = group[colnames]
mean = features.mean()
cov = features.cov()
multinorm_map[species] = multivariate_normal(mean, cov)
return multinorm_map
```
Here's how we make this map for the first two features, flipper length and culmen length.
```
multinorm_map = make_multinorm_map(df, [var1, var2])
```
## Visualizing a Multivariate Normal Distribution
This section uses some NumPy magic to generate contour plots for multivariate normal distributions. If that's interesting for you, great! Otherwise, feel free to skip to the results. In the next section we'll do the actual classification, which turns out to be easier than the visualization.
I'll start by making a contour map for the distribution of features among Adélie penguins.
Here are the univariate distributions for the two features we'll use and the multivariate distribution we just computed.
```
norm1 = flipper_map['Adelie']
norm2 = culmen_map['Adelie']
multinorm = multinorm_map['Adelie']
```
I'll make a discrete `Pmf` approximation for each of the univariate distributions.
```
pmf1 = make_pmf_norm(norm1)
pmf2 = make_pmf_norm(norm2)
```
And use them to make a mesh grid that contains all pairs of values.
```
X, Y = np.meshgrid(pmf1.qs, pmf2.qs)
X.shape
```
The mesh is represented by two arrays: the first contains the quantities from `pmf1` along the `x` axis; the second contains the quantities from `pmf2` along the `y` axis.
In order to evaluate the multivariate distribution for each pair of values, we have to "stack" the arrays.
```
pos = np.dstack((X, Y))
pos.shape
```
The result is a 3-D array that you can think of as a 2-D array of pairs. When we pass this array to `multinorm.pdf`, it evaluates the probability density function of the distribution for each pair of values.
```
densities = multinorm.pdf(pos)
densities.shape
```
The result is an array of probability densities. If we put them in a `DataFrame` and normalize them, the result is a discrete approximation of the joint distribution of the two features.
```
from utils import normalize
joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs)
normalize(joint)
```
Here's what the result looks like.
```
plot_contour(joint)
decorate(xlabel=var1,
ylabel=var2)
```
The contours of a multivariate normal distribution are still ellipses, but now that we have taken into account the correlation between the features, the ellipses are no longer aligned with the axes.
The following function encapsulates the steps we just did.
```
def make_joint(norm1, norm2, multinorm):
"""Make a joint distribution.
norm1: `norm` object representing the distribution of the first feature
norm2: `norm` object representing the distribution of the second feature
multinorm: `multivariate_normal` object representing the joint distribution
"""
pmf1 = make_pmf_norm(norm1)
pmf2 = make_pmf_norm(norm2)
X, Y = np.meshgrid(pmf1.qs, pmf2.qs)
pos = np.dstack((X, Y))
densities = multinorm.pdf(pos)
joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs)
return joint
```
The following figure shows a scatter plot of the data along with the contours of the multivariate normal distribution for each species.
```
scatterplot(df, var1, var2)
for species in hypos:
norm1 = flipper_map[species]
norm2 = culmen_map[species]
multinorm = multinorm_map[species]
joint = make_joint(norm1, norm2, multinorm)
plot_contour(joint, alpha=0.5)
```
Because the multivariate normal distribution takes into account the correlations between features, it is a better model for the data. And there is less overlap in the contours of the three distributions, which suggests that they should yield better classifications.
## A Less Naive Classifier
In a previous section we used `update_penguin` to update a prior `Pmf` based on observed data and a collection of `norm` objects that model the distribution of observations under each hypothesis. Here it is again:
```
def update_penguin(prior, data, norm_map):
"""Update hypothetical species."""
hypos = prior.qs
likelihood = [norm_map[hypo].pdf(data) for hypo in hypos]
posterior = prior * likelihood
posterior.normalize()
return posterior
```
Last time we used this function, the values in `norm_map` were `norm` objects, but it also works if they are `multivariate_normal` objects.
We can use it to classify a penguin with flipper length 193 and culmen length 48:
```
data = 193, 48
update_penguin(prior, data, multinorm_map)
```
A penguin with those measurements is almost certainly a Chinstrap.
Now let's see if this classifier does any better than the naive Bayesian classifier.
I'll apply it to each penguin in the dataset:
```
df['Classification'] = np.nan
for i, row in df.iterrows():
data = row[colnames]
posterior = update_penguin(prior, data, multinorm_map)
df.loc[i, 'Classification'] = posterior.idxmax()
```
And compute the accuracy:
```
accuracy(df)
```
It turns out to be only a little better: the accuracy is 95.3%, compared to 94.7% for the naive Bayesian classifier.
## Summary
In this chapter, we implemented a naive Bayesian classifier, which is "naive" in the sense that it assumes that the features it uses for classification are independent.
To see how bad that assumption is, we also implemented a classifier that uses a multivariate normal distribution to model the joint distribution of the features, which includes their dependencies.
In this example, the non-naive classifier is only marginally better.
In one way, that's disappointing. After all that work, it would have been nice to see a bigger improvement.
But in another way, it's good news. In general, a naive Bayesian classifier is easier to implement and requires less computation. If it works nearly as well as a more complex algorithm, it might be a good choice for practical purposes.
Speaking of practical purposes, you might have noticed that this example isn't very useful. If we want to identify the species of a penguin, there are easier ways than measuring its flippers and beak.
But there *are* scientific uses for this type of classification. One of them is the subject of the research paper we started with: [sexual dimorphism](https://en.wikipedia.org/wiki/Sexual_dimorphism), that is, differences in shape between male and female animals.
In some species, like angler fish, males and females look very different. In other species, like mockingbirds, they are difficult to tell apart.
And dimorphism is worth studying because it provides insight into social behavior, sexual selection, and evolution.
One way to quantify the degree of sexual dimorphism in a species is to use a classification algorithm like the one in this chapter. If you can find a set of features that makes it possible to classify individuals by sex with high accuracy, that's evidence of high dimorphism.
As an exercise, you can use the dataset from this chapter to classify penguins by sex and see which of the three species is the most dimorphic.
## Exercises
**Exercise:** In my example I used culmen length and flipper length because they seemed to provide the most power to distinguish the three species. But maybe we can do better by using more features.
Make a naive Bayesian classifier that uses all four measurements in the dataset: culmen length and depth, flipper length, and body mass.
Is it more accurate than the model with two features?
```
# Solution goes here
# Solution goes here
# Solution goes here
```
**Exercise:** One of the reasons the penguin dataset was collected was to quantify sexual dimorphism in different penguin species, that is, physical differences between male and female penguins. One way to quantify dimorphism is to use measurements to classify penguins by sex. If a species is more dimorphic, we expect to be able to classify them more accurately.
As an exercise, pick a species and use a Bayesian classifier (naive or not) to classify the penguins by sex. Which features are most useful? What accuracy can you achieve?
Note: One Gentoo penguin has an invalid value for `Sex`. I used the following code to select one species and filter out invalid data.
```
gentoo = (df['Species2'] == 'Gentoo')
subset = df[gentoo].copy()
subset['Sex'].value_counts()
valid = df['Sex'] != '.'
valid.sum()
subset = df[valid & gentoo].copy()
```
OK, you can finish it off from here.
```
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
# Reinforcement Learning
# Introduction
- "A gazelle calf struggles to its feet minutes after being born. Half an hour later it is running at 20 miles per hour." - Sutton and Barto
<img src="images/gazelle.jpeg" style="width: 600px;"/>
- Google's AlphaGo used deep reinforcement learning in order to defeat world champion Lee Sedol at Go.
<img src="images/go.jpg" style="width: 600px;"/>
# Goal
- Agent interacts dynamically with its environment, moves from one state to another.
- Based on the actions taken by the agent, rewards are given.
- Guidelines for which action to take in each state is called a policy.
- Try to efficiently find an optimal policy in which rewards are maximized.
<img src="images/RL_diagram.png" style="width: 600px;">
## This is Different from Supervised Learning
* Supervised Learning
* "learning from examples provided by a knowledgeable external supervisor"
* For any state that the agent may be in, the supervisor can supply enough relevant examples of the outcomes which result from similar states so that we may make an accurate prediction.
* Reinforcement Learning
* No supervisor exists
* Agent must learn from experience as it explores the range of possible states
* Continuously update policy in response to new information.
# Examples
<table class="table table-bordered">
<font size="3">
<tr>
<th>
agent
</th>
<th>
environment
</th>
<th>
actions
</th>
<th>
rewards
</th>
<th>
policy
</th>
</tr>
<tr>
<td>
robot arm
</td>
<td>
set of arm positions
</td>
<td>
bend elbow, close hand, extend arm, etc.
</td>
<td>
reward when door successfully opened
</td>
<td>
most efficient set of movements to open door
</td>
</tr>
<tr>
<td>
board game player
</td>
<td>
set of all game configs.
</td>
<td>
legal moves
</td>
<td>
winning the game
</td>
<td>
optimal strategy
</td>
</tr>
<tr>
<td>
mouse
</td>
<td>
maze
</td>
<td>
running, turning
</td>
<td>
cheese
</td>
<td>
most direct path to cheese
</td>
</tr>
<tr>
<td>
credit card company
</td>
<td>
set of all customers in default
</td>
<td>
set of collections actions
</td>
<td>
cost for each attempt, reward for successful collection
</td>
<td>
optimal strategy for debt collections
</td>
</tr>
<tr>
<td>
marketing team
</td>
<td>
sets of potential customers and ads that can be shown
</td>
<td>
showing an ad to a potential customer
</td>
<td>
cost of placing ad, value of customer's business
</td>
<td>
optimal ad placement strategy
</td>
</tr>
<tr>
<td>
call center
</td>
<td>
status of each customer in queue
</td>
<td>
connecting customers to representatives
</td>
<td>
customer satisfaction
</td>
<td>
optimal queueing strategy
</td>
</tr>
<tr>
<td>
Website Designer
</td>
<td>
set of possible layout options
</td>
<td>
changing layout
</td>
<td>
increased click-through rate
</td>
<td>
ideal layout
</td>
</tr>
</font>
</table>
# Exploration vs Exploitation
- In the absence of a supervisor, the agent must explore the environment to gain information about rewards, while exploiting its current information to maximize its reward.
- Balancing this tradeoff is a common theme in reinforcement learning.
# Multi-Armed Bandits - A single state example
Multi-armed bandit problems are some of the simplest reinforcement learning (RL) problems to solve. We have an agent which we allow to choose actions, and each action has a reward that is returned according to a given, underlying probability distribution. The game is played over many episodes (single actions in this case) and the goal is to maximize your reward.
One way to approach this is to try each arm in turn, keep track of how much you received, and then keep going back to the one that paid out the most. This is possible, but, as stated before, each bandit has an underlying probability distribution associated with it, so you may need many samples before you can confidently identify the best arm. And each pull you spend trying to figure out the best bandit to play takes you away from maximizing your reward. This basic balancing act is known as the explore-exploit dilemma.
* Given N different arms to choose from, each with an unknown reward, what strategy should we use to explore and learn the values of each arm, while exploiting our current knowledge to maximize profit?
* This is a very common approach for optimizing online marketing campaigns.
* This can be thought of as a single-state reinforcement learning problem
<img src="images/MAB.jpg" style="width: 400px;"/>
## Epsilon-greedy
- A fraction (1 - $\epsilon$) of the time, choose the arm with the largest estimated value (exploit)
- The other $\epsilon$ of the time, chose a random arm (explore)
- Tune $\epsilon$ in order to balance tradeoff
<img src="images/epsilongreedy.png" style="width: 400px;"/>
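Here is a minimal sketch of the selection rule (assuming `Q` is an array of current reward estimates; this helper is only illustrative and is not used by the class defined later):

```
import numpy as np

def epsilon_greedy(Q, eps, rng=np.random):
    # With probability eps, explore: pick a random arm
    if rng.rand() < eps:
        return rng.choice(len(Q))
    # Otherwise exploit: pick the arm with the largest estimated value
    return np.argmax(Q)
```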
# Problem Setup
To get started, let’s describe the problem in a bit more technical detail. What we wish to do is develop an estimate $Q_n(a)$:
$$Q_n(a)=E[R_n \mid A_n=a]$$
where $Q_n(a)$ is the estimated expected reward $R_n$ when action $A_n=a$ is taken at step $n$. We’re going to iteratively build a model that converges towards the true value of each action. We’ll use a Gaussian (normal) distribution for all of the underlying reward distributions, so that the mean corresponds to the true value (after all, given enough samples, we expect the average reward of an action to converge to its mean).
The simplest way to proceed is to take the greedy action, that is, the action we currently think will maximize our reward at each time step. Another way of writing this is:
$$A_n=\operatorname{argmax}_a Q_n(a)$$
We can denote this maximum expectation or greedy action as $A^*_n$. This is the exploit side of our aforementioned explore-exploit dilemma, and it makes lots of sense if the goal is to maximize our reward. Of course, doing this repeatedly only works well once we have a good sense of the expected reward for each action (unless we get rather lucky). So we need an algorithm that explores enough of the search space so that we can exploit the best actions.
# Average Reward Method
Before jumping into this, there’s one last concept to introduce. In typical RL applications, we may need hundreds of thousands of iterations, if not millions or more. It quickly becomes very computationally intensive to run simulations of this sort and keep track of all that data just to calculate the average reward. To avoid this, we can use a handy formula so that all we need to track are two values: the mean and the number of steps taken. If we need the mean at step $n$, $m_n$, we can compute it from the previous mean $m_{n-1}$ and $n$ as follows:
$$m_n=m_{n-1}+\frac{R_n-m_{n-1}}{n}$$
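Here's a quick standalone check (not part of the bandit class below) that the incremental update reproduces the ordinary sample mean:

```
import numpy as np

rewards = np.random.normal(0, 1, 100)
m = 0.0
for n, R in enumerate(rewards, start=1):
    m = m + (R - m) / n
print(m, rewards.mean())  # the two values agree (up to floating-point error)
```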
## Building a greedy k-Armed Bandit
We’re going to define a class called <b>eps_bandit</b> to be able to run our experiment. This class takes the number of arms k, the epsilon value eps, and the number of iterations iters as inputs. We'll also define a term mu that we can use to adjust the average rewards of each of the arms.
### First the modules:
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
```
**epsilon-greedy k-bandit problem**

Inputs:
* k: number of arms (int)
* eps: probability of random action, 0 < eps < 1 (float)
* iters: number of steps (int)
* mu: sets the average rewards for each of the k arms. Set to "random" for the rewards to be drawn from a normal distribution with mean 0. Set to "sequence" for the means to be ordered from 0 to k-1. Pass a list or array of length k for user-defined values.
```
class eps_bandit:
def __init__(self, k, eps, iters, mu='random'):
# Number of arms
self.k = k
# Search probability
self.eps = eps
# Number of iterations
self.iters = iters
# Step count
self.n = 0
# Step count for each arm
self.k_n = np.zeros(k)
# Total mean reward
self.mean_reward = 0
self.reward = np.zeros(iters)
        # Mean reward for each arm; this is the estimated action value
self.k_reward = np.zeros(k)
if type(mu) == list or type(mu).__module__ == np.__name__:
# User-defined averages
self.mu = np.array(mu)
elif mu == 'random':
# Draw means from probability distribution
self.mu = np.random.normal(0, 1, k)
elif mu == 'sequence':
# Increase the mean for each arm by one
self.mu = np.linspace(0, k-1, k)
def pull(self):
# Generate random number
p = np.random.rand()
if self.eps == 0 and self.n == 0:
a = np.random.choice(self.k)
elif p < self.eps:
# Randomly select an action
a = np.random.choice(self.k)
else:
# Take greedy action
a = np.argmax(self.k_reward)
        # Reward is drawn from a normal distribution with mean mu[a] and variance 1
reward = np.random.normal(self.mu[a], 1)
# Update counts
self.n += 1
self.k_n[a] += 1
# Update total
self.mean_reward = self.mean_reward + (
reward - self.mean_reward) / self.n
# Update results for a_k
self.k_reward[a] = self.k_reward[a] + (
reward - self.k_reward[a]) / self.k_n[a]
def run(self):
for i in range(self.iters):
self.pull()
self.reward[i] = self.mean_reward
def reset(self):
# Resets results while keeping settings
self.n = 0
        self.k_n = np.zeros(self.k)
        self.mean_reward = 0
        self.reward = np.zeros(self.iters)
        self.k_reward = np.zeros(self.k)
```
There are plenty of different ways to define this class. I did it so that once we initialize our problem, we just call the **run()** method and can examine the outputs. By default, the average rewards for each arm are drawn from a normal distribution around 0. Setting mu="sequence" will cause the rewards to range from 0 to k-1 to make it easy to know which actions provide the best rewards when evaluating the results and which actions were taken. Finally, you could also set your own average rewards by passing values to mu.
Let’s set up some comparisons using different values of $\epsilon$. For each of these, we’ll set k=10, run 1,000 steps for each episode, and run 1,000 episodes. After each episode, we will reset the bandits and copy the averages across the different bandits to keep things consistent.
## Let's run the bandit for three different epsilons
```
k = 10
iters = 1000
eps_0_rewards = np.zeros(iters)
eps_01_rewards = np.zeros(iters)
eps_1_rewards = np.zeros(iters)
episodes = 1000
# Run experiments
for i in range(episodes):
# Initialize bandits
eps_0 = eps_bandit(k, 0, iters)
eps_01 = eps_bandit(k, 0.01, iters, eps_0.mu.copy())
eps_1 = eps_bandit(k, 0.1, iters, eps_0.mu.copy())
# Run experiments
eps_0.run()
eps_01.run()
eps_1.run()
# Update long-term averages
eps_0_rewards = eps_0_rewards + (
eps_0.reward - eps_0_rewards) / (i + 1)
eps_01_rewards = eps_01_rewards + (
eps_01.reward - eps_01_rewards) / (i + 1)
eps_1_rewards = eps_1_rewards + (
eps_1.reward - eps_1_rewards) / (i + 1)
plt.figure(figsize=(12,8))
plt.plot(eps_0_rewards, label="$\epsilon=0$ (greedy)")
plt.plot(eps_01_rewards, label="$\epsilon=0.01$")
plt.plot(eps_1_rewards, label="$\epsilon=0.1$")
plt.legend(bbox_to_anchor=(1.3, 0.5))
plt.xlabel("Iterations")
plt.ylabel("Average Reward")
plt.title("Average $\epsilon-greedy$ Rewards after " + str(episodes)
+ " Episodes")
plt.show()
```
Looking at the results, the greedy strategy consistently underperforms the other two, with $\epsilon=0.01$ falling in between and $\epsilon=0.1$ performing best of the three here. Below, the effect is clearer when we use the sequence argument, and we can get a feel for how often the optimal action is taken per episode, because the underlying averages remain consistent across episodes.
## Now we count the average taken action
### To do so, we use the sequence reward setting to make the comparison tractable
```
k = 10
iters = 1000
eps_0_rewards = np.zeros(iters)
eps_01_rewards = np.zeros(iters)
eps_1_rewards = np.zeros(iters)
eps_0_selection = np.zeros(k)
eps_01_selection = np.zeros(k)
eps_1_selection = np.zeros(k)
episodes = 1000
# Run experiments
for i in range(episodes):
# Initialize bandits
eps_0 = eps_bandit(k, 0, iters, mu='sequence')
eps_01 = eps_bandit(k, 0.01, iters, eps_0.mu.copy())
eps_1 = eps_bandit(k, 0.1, iters, eps_0.mu.copy())
# Run experiments
eps_0.run()
eps_01.run()
eps_1.run()
# Update long-term averages
eps_0_rewards = eps_0_rewards + (
eps_0.reward - eps_0_rewards) / (i + 1)
eps_01_rewards = eps_01_rewards + (
eps_01.reward - eps_01_rewards) / (i + 1)
eps_1_rewards = eps_1_rewards + (
eps_1.reward - eps_1_rewards) / (i + 1)
# Average actions per episode
eps_0_selection = eps_0_selection + (
eps_0.k_n - eps_0_selection) / (i + 1)
eps_01_selection = eps_01_selection + (
eps_01.k_n - eps_01_selection) / (i + 1)
eps_1_selection = eps_1_selection + (
eps_1.k_n - eps_1_selection) / (i + 1)
plt.figure(figsize=(12,8))
plt.plot(eps_0_rewards, label="$\epsilon=0$ (greedy)")
plt.plot(eps_01_rewards, label="$\epsilon=0.01$")
plt.plot(eps_1_rewards, label="$\epsilon=0.1$")
for i in range(k):
plt.hlines(eps_0.mu[i], xmin=0,
xmax=iters, alpha=0.5,
linestyle="--")
plt.legend(bbox_to_anchor=(1.3, 0.5))
plt.xlabel("Iterations")
plt.ylabel("Average Reward")
plt.title("Average $\epsilon-greedy$ Rewards after " +
str(episodes) + " Episodes")
plt.show()
bins = np.linspace(0, k-1, k)
plt.figure(figsize=(12,8))
plt.bar(bins, eps_0_selection,
width = 0.33, color='b',
label="$\epsilon=0$")
plt.bar(bins+0.33, eps_01_selection,
width=0.33, color='g',
label="$\epsilon=0.01$")
plt.bar(bins+0.66, eps_1_selection,
width=0.33, color='r',
label="$\epsilon=0.1$")
plt.legend(bbox_to_anchor=(1.2, 0.5))
plt.xlim([0,k])
plt.title("Actions Selected by Each Algorithm")
plt.xlabel("Action")
plt.ylabel("Number of Actions Taken")
plt.show()
opt_per = np.array([eps_0_selection, eps_01_selection,
eps_1_selection]) / iters * 100
df = pd.DataFrame(opt_per, index=['$\epsilon=0$',
'$\epsilon=0.01$', '$\epsilon=0.1$'],
columns=["a = " + str(x) for x in range(0, k)])
print("Percentage of actions selected:")
df
```
Viewing the average selections of the algorithms, we see why the larger $\epsilon$ value performs well: it takes the optimal action about 80% of the time.
Play around with the different values of both ϵ and k to see how these results change. For example, decreasing the search space would likely benefit smaller values of ϵ as exploration would be less beneficial and vice versa. Additionally, increasing the number of iterations will begin to benefit the lower value of ϵ because it will have less random noise.
## ϵ-Decay Strategies
The ϵ-greedy strategies have an obvious weakness in that they continue to include random noise no matter how many examples they see. It would be better for these to settle on an optimal solution and continue to exploit it. To this end, we can introduce ϵ-decay which reduces the probability of exploration with every step. This works by defining ϵ as a function of the number of steps, n.
$$\epsilon(n)=\frac{1}{1+n\beta}$$
where β<1 is a scaling factor that slows the rate of decay so that the algorithm has sufficient opportunity to explore. We also include the +1 in the denominator so that the probability does not blow up at n = 0. Given this, we can make a few small changes to our previous bandit class to define an eps_decay_bandit class that works on the same principles.
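Here's a quick standalone sketch of how fast the exploration probability shrinks. Note that the class below uses the term `1 / (1 + self.n / self.k)`, which corresponds to β = 1/k:

```
import numpy as np

k = 10
beta = 1 / k
n = np.array([0, 10, 100, 1000, 10000])
eps = 1 / (1 + n * beta)
print(np.round(eps, 3))  # approximately [1.0, 0.5, 0.091, 0.010, 0.001]
```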
```
class eps_decay_bandit:
def __init__(self, k, iters, mu='random'):
# Number of arms
self.k = k
# Number of iterations
self.iters = iters
# Step count
self.n = 0
# Step count for each arm
self.k_n = np.zeros(k)
# Total mean reward
self.mean_reward = 0
self.reward = np.zeros(iters)
# Mean reward for each arm
self.k_reward = np.zeros(k)
if type(mu) == list or type(mu).__module__ == np.__name__:
# User-defined averages
self.mu = np.array(mu)
elif mu == 'random':
# Draw means from probability distribution
self.mu = np.random.normal(0, 1, k)
elif mu == 'sequence':
# Increase the mean for each arm by one
self.mu = np.linspace(0, k-1, k)
def pull(self):
# Generate random number
p = np.random.rand()
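        # Exploration probability decays as eps(n) = 1 / (1 + n * beta), with beta = 1/k here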
if p < 1 / (1 + self.n / self.k):
# Randomly select an action
a = np.random.choice(self.k)
else:
# Take greedy action
a = np.argmax(self.k_reward)
reward = np.random.normal(self.mu[a], 1)
# Update counts
self.n += 1
self.k_n[a] += 1
# Update total
self.mean_reward = self.mean_reward + (
reward - self.mean_reward) / self.n
# Update results for a_k
self.k_reward[a] = self.k_reward[a] + (
reward - self.k_reward[a]) / self.k_n[a]
def run(self):
for i in range(self.iters):
self.pull()
self.reward[i] = self.mean_reward
def reset(self):
# Resets results while keeping settings
self.n = 0
        self.k_n = np.zeros(self.k)
        self.mean_reward = 0
        self.reward = np.zeros(self.iters)
        self.k_reward = np.zeros(self.k)
k = 10
iters = 1000
eps_decay_rewards = np.zeros(iters)
eps_1_rewards = np.zeros(iters)
episodes = 1000
# Run experiments
for i in range(episodes):
# Initialize bandits
eps_decay = eps_decay_bandit(k, iters)
eps_1 = eps_bandit(k, 0.1, iters, eps_decay.mu.copy())
# Run experiments
eps_decay.run()
eps_1.run()
# Update long-term averages
eps_decay_rewards = eps_decay_rewards + (
eps_decay.reward - eps_decay_rewards) / (i + 1)
eps_1_rewards = eps_1_rewards + (
eps_1.reward - eps_1_rewards) / (i + 1)
plt.figure(figsize=(12,8))
plt.plot(eps_decay_rewards, label="$\epsilon-decay$")
plt.plot(eps_1_rewards, label="$\epsilon=0.1$")
plt.legend(bbox_to_anchor=(1.2, 0.5))
plt.xlabel("Iterations")
plt.ylabel("Average Reward")
plt.title("Average $\epsilon-decay$ and" +
"$\epsilon-greedy$ Rewards after "
+ str(episodes) + " Episodes")
plt.show()
```
The ϵ-decay strategy outperforms our previous best algorithm as it sticks to the optimal action once it is found.
There’s one last method to balance the explore-exploit dilemma in k-bandit problems, optimistic initial values.
## Optimistic Initial Value
This approach differs significantly from the previous examples because it does not introduce random noise to find the best action, $A^*_n$. Instead, we overestimate the rewards of all the actions and continuously select the maximum. The algorithm explores early on as it seeks to maximize its returns, while additional information allows the estimates to converge toward their true means. This approach does require some background knowledge in the setup, because we need at least a rough idea of the scale of the rewards so that we can overestimate them.
For this implementation, we don’t need a new class. Instead, we can simply use our eps_bandit class, set ϵ=0, and provide high initial values for the estimates. Also, I like to initialize the pull count for each arm as 1 instead of 0 to encourage slightly slower convergence and ensure good exploration.
```
k = 10
iters = 1000
oiv_rewards = np.zeros(iters)
eps_decay_rewards = np.zeros(iters)
eps_1_rewards = np.zeros(iters)
# Select initial values
oiv_init = np.repeat(5., k)
episodes = 1000
# Run experiments
for i in range(episodes):
# Initialize bandits
oiv_bandit = eps_bandit(k, 0, iters)
oiv_bandit.k_reward = oiv_init.copy()
oiv_bandit.k_n = np.ones(k)
eps_decay = eps_decay_bandit(k, iters, oiv_bandit.mu.copy())
eps_1 = eps_bandit(k, 0.1, iters, oiv_bandit.mu.copy())
# Run experiments
oiv_bandit.run()
eps_decay.run()
eps_1.run()
# Update long-term averages
oiv_rewards = oiv_rewards + (
oiv_bandit.reward - oiv_rewards) / (i + 1)
eps_decay_rewards = eps_decay_rewards + (
eps_decay.reward - eps_decay_rewards) / (i + 1)
eps_1_rewards = eps_1_rewards + (
eps_1.reward - eps_1_rewards) / (i + 1)
plt.figure(figsize=(12,8))
plt.plot(oiv_rewards, label="Optimistic")
plt.plot(eps_decay_rewards, label="$\epsilon-decay$")
plt.plot(eps_1_rewards, label="$\epsilon=0.1$")
plt.legend(bbox_to_anchor=(1.2, 0.5))
plt.xlabel("Iterations")
plt.ylabel("Average Reward")
plt.title("Average Bandit Strategy Rewards after " +
str(episodes) + " Episodes")
plt.show()
```
We can see that, in this case, the optimistic initial value approach outperformed both the ϵ−greedy and the ϵ−decay algorithms. We can also look at the estimates the algorithm has for each of the arms in the last episode.
```
df = pd.DataFrame({"number of selections": oiv_bandit.k_n - 1,
"actual reward": oiv_bandit.mu,
"estimated reward": oiv_bandit.k_reward})
df = df.applymap(lambda x: np.round(x, 2))
df['number of selections'] = df['number of selections'].astype('int')
df
```
The estimates are far off the actual rewards in all cases except the one with 977 pulls. This highlights a lot of what we’ll be doing in reinforcement learning more generally. We don’t necessarily care about acquiring accurate representations of the environment we are interacting with. Instead, we intend to learn optimal behavior in those situations and seek to behave accordingly. This can open up a whole discussion about model-free versus model-based learning that we’ll have to postpone for another time.
## Upper Confidence Bound Bandit
The next bandit strategy we’ll examine is the Upper-Confidence-Bound (UCB) method, which explores the action space based on the uncertainty in each action's estimated value.
The selection criterion is given as:
$$A_n=\operatorname{argmax}_a\left(Q_n(a)+c\sqrt{\frac{\log(n)}{N_n(a)}}\right)$$
where $N_n(a)$ is the number of times action $a$ has been selected before step $n$ and $c$ controls the degree of exploration: actions that have been tried only a few times receive a large uncertainty bonus, so they keep being explored until their estimates become trustworthy.
```
class ucb_bandit:
def __init__(self, k, c, iters, mu='random'):
# Number of arms
self.k = k
# Exploration parameter
self.c = c
# Number of iterations
self.iters = iters
# Step count
self.n = 1
# Step count for each arm
self.k_n = np.ones(k)
# Total mean reward
self.mean_reward = 0
self.reward = np.zeros(iters)
# Mean reward for each arm
self.k_reward = np.zeros(k)
if type(mu) == list or type(mu).__module__ == np.__name__:
# User-defined averages
self.mu = np.array(mu)
elif mu == 'random':
# Draw means from probability distribution
self.mu = np.random.normal(0, 1, k)
elif mu == 'sequence':
# Increase the mean for each arm by one
self.mu = np.linspace(0, k-1, k)
def pull(self):
# Select action according to UCB Criteria
a = np.argmax(self.k_reward + self.c * np.sqrt(
(np.log(self.n)) / self.k_n))
reward = np.random.normal(self.mu[a], 1)
# Update counts
self.n += 1
self.k_n[a] += 1
# Update total
self.mean_reward = self.mean_reward + (
reward - self.mean_reward) / self.n
# Update results for a_k
self.k_reward[a] = self.k_reward[a] + (
reward - self.k_reward[a]) / self.k_n[a]
def run(self):
for i in range(self.iters):
self.pull()
self.reward[i] = self.mean_reward
def reset(self, mu=None):
# Resets results while keeping settings
self.n = 1
self.k_n = np.ones(self.k)
self.mean_reward = 0
        self.reward = np.zeros(self.iters)
self.k_reward = np.zeros(self.k)
if mu == 'random':
self.mu = np.random.normal(0, 1, self.k)
k = 10
iters = 1000
ucb_rewards = np.zeros(iters)
# Initialize bandits
ucb = ucb_bandit(k, 2, iters)
eps_decay_rewards = np.zeros(iters)
eps_1_rewards = np.zeros(iters)
episodes = 1000
# Run experiments
for i in range(episodes):
ucb.reset('random')
eps_decay = eps_decay_bandit(k, iters)
eps_1 = eps_bandit(k, 0.1, iters, eps_decay.mu.copy())
# Run experiments
ucb.run()
eps_decay.run()
eps_1.run()
# Update long-term averages
ucb_rewards = ucb_rewards + (
ucb.reward - ucb_rewards) / (i + 1)
eps_decay_rewards = eps_decay_rewards + (
eps_decay.reward - eps_decay_rewards) / (i + 1)
eps_1_rewards = eps_1_rewards + (
eps_1.reward - eps_1_rewards) / (i + 1)
plt.figure(figsize=(12,8))
plt.plot(ucb_rewards, label="UCB")
plt.plot(eps_decay_rewards, label="eps_decay")
plt.plot(eps_1_rewards, label="eps_1")
plt.legend(bbox_to_anchor=(1.2, 0.5))
plt.xlabel("Iterations")
plt.ylabel("Average Reward")
plt.title("Average UCB Rewards after "
+ str(episodes) + " Episodes")
plt.show()
```
## Gradient Bandit
Gradient algorithms take a different approach from the ones we've seen thus far. The algorithm learns a preference, $H_t(a)$, which is a measure of the relative value of a given action over and above the other actions that are available, and it selects the more highly preferred actions more frequently. The selection probabilities are computed from the preferences using a softmax.
$$Pr(A_t=a)=\frac{e^{H_t(a)}}{\sum_{b=1}^k e^{H_t(b)}}=\pi_t(a)$$
A new term, $\pi_t(a)$, has been introduced here. This is the probability of taking action $a$ at time $t$. The algorithm is initialized with $H_0(a)=0$ for all actions, so that initially all actions have an equal probability of selection.
In this case, the algorithm doesn't choose actions based on averaged rewards; instead, it updates the preference $H_t(a)$ for each action using **stochastic gradient ascent**. Each time an action is taken, the difference between the returned reward and the running average reward, weighted by the probability of the action and the learning rate, becomes the adjustment to $H_t(A_t)$. Because the probabilities are all relative to one another, the preferences of the other actions are updated in turn. The procedure can be expressed as follows:
$$H_{t+1}(A_t)=H_{t}(A_t)+\alpha(R_t-\bar{R}_t)(1-\pi_t(A_t))$$
$$H_{t+1}(a)=H_{t}(a)-\alpha(R_t-\bar{R}_t)\pi_t(a)\quad\forall a\neq A_t$$
```
def softmax(x):
return np.exp(x - x.max()) / np.sum(np.exp(x - x.max()), axis=0)
class grad_bandit:
def __init__(self, k, alpha, iters, mu='random'):
# Number of arms
self.k = k
self.actions = np.arange(k)
# Number of iterations
self.iters = iters
# Step count
self.n = 1
# Step count for each arm
self.k_n = np.ones(k)
# Total mean reward
self.mean_reward = 0
self.reward = np.zeros(iters)
# Mean reward for each arm
self.k_reward = np.zeros(k)
# Initialize preferences
self.H = np.zeros(k)
# Learning rate
self.alpha = alpha
if type(mu) == list or type(mu).__module__ == np.__name__:
# User-defined averages
self.mu = np.array(mu)
elif mu == 'random':
# Draw means from probability distribution
self.mu = np.random.normal(0, 1, k)
elif mu == 'sequence':
# Increase the mean for each arm by one
self.mu = np.linspace(0, k-1, k)
def softmax(self):
self.prob_action = np.exp(self.H - np.max(self.H)) \
/ np.sum(np.exp(self.H - np.max(self.H)), axis=0)
def pull(self):
# Update probabilities
self.softmax()
# Select highest preference action
a = np.random.choice(self.actions, p=self.prob_action)
reward = np.random.normal(self.mu[a], 1)
# Update counts
self.n += 1
self.k_n[a] += 1
# Update total
self.mean_reward = self.mean_reward + (
reward - self.mean_reward) / self.n
# Update results for a_k
self.k_reward[a] = self.k_reward[a] + (
reward - self.k_reward[a]) / self.k_n[a]
# Update preferences
self.H[a] = self.H[a] + \
self.alpha * (reward - self.mean_reward) * (1 -
self.prob_action[a])
actions_not_taken = self.actions!=a
self.H[actions_not_taken] = self.H[actions_not_taken] - \
self.alpha * (reward - self.mean_reward)* self.prob_action[actions_not_taken]
def run(self):
for i in range(self.iters):
self.pull()
self.reward[i] = self.mean_reward
def reset(self, mu=None):
# Resets results while keeping settings
self.n = 0
self.k_n = np.zeros(self.k)
self.mean_reward = 0
        self.reward = np.zeros(self.iters)
self.k_reward = np.zeros(self.k)
self.H = np.zeros(self.k)
if mu == 'random':
self.mu = np.random.normal(0, 1, self.k)
k = 10
iters = 1000
# Initialize bandits
grad = grad_bandit(k, 0.1, iters, mu='random')
ucb = ucb_bandit(k, 2, iters, mu=grad.mu)
ucb.mu = grad.mu
ucb_rewards = np.zeros(iters)
grad_rewards = np.zeros(iters)
opt_grad = 0
opt_ucb = 0
episodes = 1000
# Run experiments
for i in range(episodes):
# Reset counts and rewards
grad.reset('random')
ucb.reset()
ucb.mu = grad.mu
# Run experiments
grad.run()
ucb.run()
# Update long-term averages
grad_rewards = grad_rewards + (
grad.reward - grad_rewards) / (i + 1)
ucb_rewards = ucb_rewards + (
ucb.reward - ucb_rewards) / (i + 1)
# Count optimal actions
opt_grad += grad.k_n[np.argmax(grad.mu)]
opt_ucb += ucb.k_n[np.argmax(ucb.mu)]
plt.figure(figsize=(12,8))
plt.plot(grad_rewards, label="Gradient")
plt.plot(ucb_rewards, label="UCB")
plt.legend(bbox_to_anchor=(1.3, 0.5))
plt.xlabel("Iterations")
plt.ylabel("Average Reward")
plt.title("Average Gradient Bandit Rewards after "
+ str(episodes) + " Episodes")
plt.show()
```
We see that the UCB bandit outperformed the gradient bandit over the entire range. Looking a bit deeper, however, we can see that the gradient bandit behaved much better and more consistently once it had learned its preferences.
```
width = 0.45
bins = np.linspace(0, k-1, k) - width/2
plt.figure(figsize=(12,8))
plt.bar(bins, grad.k_n,
width=width,
label="Gradient Bandit")
plt.bar(bins+0.45, ucb.k_n,
width=width,
label="UCB")
plt.legend(bbox_to_anchor=(1.3, 0.5))
plt.title("Actions Selected by Each Algorithm")
plt.xlabel("Action")
plt.ylabel("Number of Actions Taken")
plt.show()
opt_per = np.array([grad.k_n, ucb.k_n]) / iters * 100
df = pd.DataFrame(np.vstack([opt_per,
grad.mu.reshape(-1, 1).T.round(2)]),
index=["Grad", "UCB", "Expected Reward"],
columns=["a = " + str(x) for x in range(0, k)])
print("Percentage of actions selected:")
df
```
The gradient bandit outperformed the UCB approach on the final 1,000-pull run, selecting the optimal action (action 3 in this case) 62.9% of the time versus 19.7%. Although the gradient-based approach wasn't able to perform as well over the entire time horizon, notice that it was much more successful in differentiating the best action from the second-best action than the UCB bandit, which spent 0.9% more of its actions selecting 2 instead of 3.
The gradient bandit performed comparably to the UCB bandit, although it underperformed it across the episodes. It remains important to understand because it relates closely to one of the key concepts in machine learning: stochastic gradient ascent/descent (see section 2.8 of Reinforcement Learning: An Introduction for a derivation). Gradient methods make up the backbone of numerous optimization strategies, adjusting weights in the direction of the maximum or minimum gradient (depending on what is being optimized for). This has an especially powerful analog in reinforcement learning known as policy gradients, which we'll cover in a future article.
## Nonstationary Bandits
All of the environments we've examined so far have been stationary: once the reward means are selected, they remain constant. Most real-world applications don't follow this pattern. Instead, the rewards drift over time, meaning that the underlying reward function is dynamic. This can cause a bandit's once-optimal behavior to become suboptimal as an action degrades or other strategies become more beneficial. To deal with this, we can introduce a step-size parameter $\beta$ to the update, where $0<\beta\leq1$. The parameter $\beta$ weights more recent observations more heavily than older ones and acts like a discount factor stretching back into the past. This leads to the result that our $Q$ estimate can be written as:
$$Q_{n+1}=(1-\beta)^nQ_1+\sum_{i=1}^n\beta(1-\beta)^{n-i}R_i$$
This is essentially a weighted average of all the past rewards and our initial estimate for $Q$, and it can be implemented with the simple incremental update $Q_{n+1}=Q_n+\beta(R_n-Q_n)$. To see this in action, let's define our mean reward as a function of the total number of pulls, so that the mean reward drifts with each action $n$. We'll make it a non-linear function as well, just to make things a bit more interesting as the rewards shift.
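As a quick aside, the equivalence between the incremental update and the closed-form weighted average above is easy to check numerically. The snippet below is only a sanity-check sketch with toy values and is not part of the bandit class that follows.
```
import numpy as np

# Compare the incremental constant step-size update with the closed-form
# exponentially weighted average (toy values only).
np.random.seed(17)
beta = 0.1
Q_1 = 1.0
rewards = np.random.normal(0, 1, 50)

Q = Q_1
for R in rewards:
    Q += beta * (R - Q)  # incremental update

n = len(rewards)
closed_form = (1 - beta)**n * Q_1 + sum(
    beta * (1 - beta)**(n - i) * rewards[i - 1] for i in range(1, n + 1))

print(Q, closed_form)  # the two values agree up to floating point error
```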
We’ll define a new bandit class, nonstationary_bandits with the option of using either ϵ-decay or ϵ-greedy methods. Also note, that if we set our β=1, then we are implementing a non-weighted algorithm, so the greedy move will be to select the highest average action instead of the highest weighted action. Check back with the last post if you need a refresher on the ideas that underpin these bandit types.
```
class nonstationary_bandit:
def __init__(self, k, beta, epsilon, iters, Q_init=None, c='random'):
# Number of arms
self.k = k
self.actions = np.arange(k)
self.epsilon = epsilon
# Number of iterations
self.iters = iters
# Step count
self.n = 0
# Step count for each arm
self.k_n = np.ones(k)
# Total mean reward
self.mean_reward = 0
self.reward = np.zeros(iters)
# Mean reward for each arm
self.k_reward = np.zeros(k)
# Initialize estimates
if not Q_init:
self.Q_init = np.zeros(k)
else:
self.Q_init = Q_init
self.Q = self.Q_init.copy()
# Step size parameter
self.beta = beta
if type(c) == list or type(c).__module__ == np.__name__:
# User-defined averages
self.c = np.array(c)
elif c == 'random':
# Draw value from normal distribution
self.c = np.random.normal(0, 1, k)
elif c == 'sequence':
# Increase the mean for each arm by one
self.c = np.linspace(0, k-1, k)
def pull(self):
# Select highest average
if self.beta == 1:
a = np.argmax(self.k_reward)
else:
a = np.argmax(self.Q)
# Possibly take random action
p = np.random.rand()
if self.epsilon == 'decay':
if p < 1 / (1 + self.n):
a = np.random.choice(self.k)
else:
if p < self.epsilon:
a = np.random.choice(self.k)
exp_reward = self.c[a] + np.sin(self.n * np.pi /
self.iters + self.c[a])
reward = np.random.normal(exp_reward, 1)
# Update counts
self.n += 1
self.k_n[a] += 1
# Update total
self.mean_reward = self.mean_reward + (
reward - self.mean_reward) / self.n
# Update results for a_k
self.k_reward[a] = self.k_reward[a] + (
reward - self.k_reward[a]) / self.k_n[a]
# Update Q-values
self.Q[a] += self.beta * (reward - self.Q[a])
def run(self):
for i in range(self.iters):
self.pull()
self.reward[i] = self.mean_reward
def reset(self, mu=None):
# Resets results while keeping settings
self.n = 0
self.k_n = np.zeros(self.k)
self.mean_reward = 0
        self.reward = np.zeros(self.iters)
        self.k_reward = np.zeros(self.k)
        self.Q = self.Q_init.copy()  # also reset the weighted estimates
k = 10
iters = 1000
# Initialize bandits
ns_eps_decay = nonstationary_bandit(k, 1, 'decay', iters)
ns_eps_decay_weighted = nonstationary_bandit(
k, 0.1, 'decay', iters, c=ns_eps_decay.c)
ns_eps_greedy = nonstationary_bandit(
k, 1, 0.1, iters, c=ns_eps_decay.c)
ns_eps_greedy_weighted = nonstationary_bandit(
k, 0.1, 0.1, iters, c=ns_eps_decay.c)
ns_eps_decay_rewards = np.zeros(iters)
ns_eps_decay_w_rewards = np.zeros(iters)
ns_eps_greedy_rewards = np.zeros(iters)
ns_eps_greedy_w_rewards = np.zeros(iters)
episodes = 1000
# Run experiments
for i in range(episodes):
# Reset counts and rewards
ns_eps_decay.reset()
ns_eps_decay_weighted.reset()
ns_eps_greedy.reset()
ns_eps_greedy_weighted.reset()
# Run experiments
ns_eps_decay.run()
ns_eps_decay_weighted.run()
ns_eps_greedy.run()
ns_eps_greedy_weighted.run()
# Update long-term averages
ns_eps_decay_rewards = ns_eps_decay_rewards + (
ns_eps_decay.reward - ns_eps_decay_rewards) / (i + 1)
ns_eps_decay_w_rewards = ns_eps_decay_w_rewards + (
ns_eps_decay_weighted.reward -
ns_eps_decay_w_rewards) / (i + 1)
ns_eps_greedy_rewards = ns_eps_greedy_rewards + (
ns_eps_greedy.reward - ns_eps_greedy_rewards) / (i + 1)
ns_eps_greedy_w_rewards = ns_eps_greedy_w_rewards + (
ns_eps_greedy_weighted.reward -
ns_eps_greedy_w_rewards) / (i + 1)
x = np.arange(iters) * np.pi / iters
plt.figure(figsize=(12,8))
plt.plot(ns_eps_decay_rewards,
label=r"$\epsilon$-decay")
plt.plot(ns_eps_decay_w_rewards,
label=r"weighted $\epsilon$-decay")
plt.plot(ns_eps_greedy_rewards,
label=r"$\epsilon$-greedy")
plt.plot(ns_eps_greedy_w_rewards,
label=r"weighted $\epsilon$-greedy")
for c in ns_eps_decay.c:
    plt.plot(c + np.sin(x + c), '--')
plt.legend(bbox_to_anchor=(1.3, 0.5))
plt.xlabel("Iterations")
plt.ylabel("Average Reward")
plt.title("Average Rewards after "
+ str(episodes) + " Episodes")
plt.show()
```
The solid lines indicate the average rewards of the different non-stationary bandit algorithms that we implemented above, while the dashed lines show the change in expected rewards over time.
The discounting helps the weighted ϵ algorithms outperform their non-weighted counterparts. Overall, none of the results are spectacular, but that is to be expected: it is very difficult to maintain high returns when the underlying reward functions keep shifting beneath the learner.
# Quickstart guide
This example demonstrates how to build a simple content-based audio retrieval model and evaluate the retrieval accuracy on a small song dataset, CAL500. This dataset consists of 502 western pop songs, performed by 499 unique artists. Each song is tagged by at least three people using a standard survey and a fixed tag vocabulary of 174 musical concepts.
This package includes a loading utility for getting and processing this dataset, which makes loading quite easy.
```
from cbar.datasets import fetch_cal500
X, Y = fetch_cal500()
```
Calling `fetch_cal500()` initially downloads the CAL500 dataset to a subfolder of your home directory. You can specify a different location using the `data_home` parameter (`fetch_cal500(data_home='path')`). Subsequent calls simply load the dataset.
The raw dataset consists of about 10,000 39-dimensional feature vectors per minute of audio content, which were created by
1. Sliding a half-overlapping short-time window of 12 milliseconds over each song's waveform data.
2. Extracting the 13 mel-frequency cepstral coefficients.
3. Appending the instantaneous first-order and second-order derivatives.
Each song is then represented as a *bag-of-frames* of exactly 10,000 randomly subsampled, real-valued feature vectors. The *bag-of-frames* features are further processed into a single *k*-dimensional feature vector by encoding the frame vectors using a codebook and pooling them into one compact vector.
Specifically, *k*-means is used to cluster all frame vectors into *k* clusters. The resulting cluster centers correspond to the codewords of the codebook. Each frame vector is assigned to its closest cluster center, and a song is represented as the counts of frames assigned to each of the *k* cluster centers.
By default, `fetch_cal500()` uses a codebook size of 512 but this size is easily modified with the `codebook_size` parameter (`fetch_cal500(codebook_size=1024)`).
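As a rough illustration of this encoding step (a sketch only, not the package's actual implementation, and assuming scikit-learn is available), the codebook and the pooled song vector could be computed like this:
```
# Illustrative sketch of codebook encoding; `frames` is a stand-in for the
# bag-of-frames of one song, not data loaded from cbar.
import numpy as np
from sklearn.cluster import KMeans

codebook_size = 512
frames = np.random.rand(10000, 39)  # 10,000 39-dimensional frame vectors

kmeans = KMeans(n_clusters=codebook_size).fit(frames)  # codewords = cluster centers
assignments = kmeans.predict(frames)                   # closest codeword per frame
song_vector = np.bincount(assignments, minlength=codebook_size)  # count pooling
```
In the package itself the codebook is of course learned from the frames of all songs rather than from a single one.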
```
X.shape, Y.shape
```
Let's split the data into training data and test data, fit the model on the training data, and evaluate it on the test data. Import and instantiate the model first.
```
from cbar.loreta import LoretaWARP
model = LoretaWARP(n0=0.1, valid_interval=1000)
```
Then split the data and fit the model using the training data.
```
from cbar.cross_validation import train_test_split_plus
(X_train, X_test,
Y_train, Y_test,
Q_vec, weights) = train_test_split_plus(X, Y)
%time model.fit(X_train, Y_train, Q_vec, X_test, Y_test)
```
Now, predict the scores for each query with all songs. Ordering the songs from highest to lowest score corresponds to the ranking.
```
Y_score = model.predict(Q_vec, X_test)
```
Evaluate the predictions.
```
from cbar.evaluation import Evaluator
from cbar.utils import make_relevance_matrix
n_relevant = make_relevance_matrix(Q_vec, Y_train).sum(axis=1)
evaluator = Evaluator()
evaluator.eval(Q_vec, weights, Y_score, Y_test, n_relevant)
evaluator.prec_at
```
## Cross-validation
The `cv` function in the `cross_validation` module offers an easy way to evaluate a retrieval method on multiple splits of the data. Let's run the same experiment on three folds.
```
from cbar.cross_validation import cv
cv('cal500', 512, n_folds=3, method='loreta', n0=0.1, valid_interval=1000)
```
The cross-validation results, including the retrieval method's parameters, are written to a JSON file. For each dataset, three separate result files are written to disk: mean average precision (MAP), precision-at-*k*, and precision-at-10 as a function of the number of relevant training examples. Here are the mean average precision values of the last cross-validation run.
```
import json
import os
from cbar.settings import RESULTS_DIR
results = json.load(open(os.path.join(RESULTS_DIR, 'cal500_ap.json')))
results[results.keys()[-1]]['precision']
```
## Start cross-validation with the CLI
This package comes with a simple CLI which makes it easy to start cross-validation experiments from the command line. The CLI enables you to specify a dataset and a retrieval method as well as additional options in one line.
To start an experiment on the CAL500 dataset with the LORETA retrieval method, use the following command.
```
$ cbar crossval --dataset cal500 loreta
```
This simple command uses all the default parameters for LORETA but you can specify all parameters as arguments to the `loreta` command. To see the available options for the `loreta` command, ask for help like this.
```
$ cbar crossval loreta --help
Usage: cbar crossval loreta [OPTIONS]
Options:
-n, --max-iter INTEGER Maximum number of iterations
-i, --valid-interval INTEGER Rank of parameter matrix W
-k INTEGER Rank of parameter matrix W
--n0 FLOAT Step size parameter 1
--n1 FLOAT Step size parameter 2
-t, --rank-thresh FLOAT Threshold for early stopping
-l, --lambda FLOAT Regularization constant
--loss [warp|auc] Loss function
-d, --max-dips INTEGER Maximum number of dips
-v, --verbose Verbosity
--help Show this message and exit.
```
```
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/spring20/setup_colab.sh -O- | bash
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/grading.py -O ../grading.py
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/week1_intro/submit.py
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
### OpenAI Gym
We're gonna spend the next several weeks learning algorithms that solve decision processes. We are therefore in need of some interesting decision problems to test our algorithms on.
That's where OpenAI gym comes into play. It's a python library that wraps many classical decision problems including robot control, videogames and board games.
So here's how it works:
```
import gym
env = gym.make("MountainCar-v0")
env.reset()
plt.imshow(env.render('rgb_array'))
print("Observation space:", env.observation_space)
print("Action space:", env.action_space)
```
Note: if you're running this on your local machine, you'll see a window pop up with the image above. Don't close it, just alt-tab away.
### Gym interface
The three main methods of an environment are
* __reset()__ - reset environment to initial state, _return first observation_
* __render()__ - show current environment state (a more colorful version :) )
* __step(a)__ - commit action __a__ and return (new observation, reward, is done, info)
 * _new observation_ - an observation right after committing the action __a__
 * _reward_ - a number representing your reward for committing action __a__
 * _is done_ - True if the MDP has just finished, False if still in progress
 * _info_ - some auxiliary stuff about what just happened. Ignore it ~~for now~~.
```
obs0 = env.reset()
print("initial observation code:", obs0)
# Note: in MountainCar, observation is just two numbers: car position and velocity
print("taking action 2 (right)")
new_obs, reward, is_done, _ = env.step(2)
print("new observation code:", new_obs)
print("reward:", reward)
print("is game over?:", is_done)
# Note: as you can see, the car has moved to the right slightly (around 0.0005)
```
### Play with it
Below is the code that drives the car to the right. However, if you simply use the default policy, the car will not reach the flag at the far right due to gravity.
__Your task__ is to fix it. Find a strategy that reaches the flag.
You are not required to build any sophisticated algorithms for now, feel free to hard-code :)
```
from IPython import display
# Create env manually to set time limit. Please don't change this.
TIME_LIMIT = 250
env = gym.wrappers.TimeLimit(
gym.envs.classic_control.MountainCarEnv(),
max_episode_steps=TIME_LIMIT + 1,
)
actions = {'left': 0, 'stop': 1, 'right': 2}
def policy(obs, t):
# Write the code for your policy here. You can use the observation
# (a tuple of position and velocity), the current time step, or both,
# if you want.
position, velocity = obs
if velocity >= 0:
return actions['right']
else:
return actions['left']
plt.figure(figsize=(4, 3))
display.clear_output(wait=True)
obs = env.reset()
for t in range(TIME_LIMIT):
plt.gca().clear()
action = policy(obs, t) # Call your policy
obs, reward, done, _ = env.step(action) # Pass the action chosen by the policy to the environment
# We don't do anything with reward here because MountainCar is a very simple environment,
# and reward is a constant -1. Therefore, your goal is to end the episode as quickly as possible.
# Draw game image on display.
plt.imshow(env.render('rgb_array'))
display.clear_output(wait=True)
display.display(plt.gcf())
if done:
print("Well done!")
break
else:
print("Time limit exceeded. Try again.")
display.clear_output(wait=True)
```
<a href="https://colab.research.google.com/github/sayakpaul/TF-2.0-Hacks/blob/master/TF_2_0_and_cloud_functions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
The purpose of this notebook is to show how easy it is to serve a machine learning model via [Cloud Functions](https://console.cloud.google.com/functions/) on the Google Cloud Platform. It is absolutely possible to do this via Colab. In this notebook, we will be
- building a simple neural network model to classify the apparels as listed in the FashionMNIST dataset
- serializing the model weights in a way that is compatible with the Cloud Functions' ecosystem
- using the `gcloud` CLI to deploy our model on GCP via Cloud Functions
So, let's get started.
```
# install `tensorflow` 2.0 latest
pip install tensorflow==2.0.0b1
# import tensorflow and verify the version
import tensorflow as tf
print(tf.__version__)
# all the imports we care about in this notebook
import matplotlib.pyplot as plt
from tensorflow.keras.layers import *
from tensorflow.keras.models import *
# load and prepare our data
fashion_mnist = mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# the humble model
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# kickstart the training
model.fit(x_train, y_train, validation_data=(x_test, y_test),
epochs=5, batch_size=128,
verbose=1)
# save the weights
model.save_weights('fashion_mnist_weights')
```
This will produce two files:
- fashion_mnist_weights.data-00000-of-00001
- fashion_mnist_weights.index
```
# sample prediction
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
test_img = plt.imread('test.png')
prob = model.predict(test_img.reshape(1, 28, 28))
print(class_names[prob.argmax()])
```
The test image looks like the following, by the way:

Once the model weights are saved we need to create a `.py` file named `main.py` as required by Cloud Functions. The `main.py` file should look like so:
```python
import numpy
import tensorflow as tf
from google.cloud import storage
from tensorflow.keras.layers import *
from tensorflow.keras.models import *
from PIL import Image
# we keep model as global variable so we don't have to reload it
# in case of warm invocations
model = None
def get_me_the_model():
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
def download_blob(bucket_name, source_blob_name, destination_file_name):
"""downloads a blob from the bucket."""
storage_client = storage.Client()
bucket = storage_client.get_bucket(bucket_name)
blob = bucket.blob(source_blob_name)
blob.download_to_filename(destination_file_name)
print('Blob {} downloaded to {}.'.format(
source_blob_name,
destination_file_name))
def handler(request):
global model
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# model load which only happens during cold invocations
if model is None:
download_blob('<your_gs_buckets_name>', 'tensorflow/fashion_mnist_weights.index', '/tmp/fashion_mnist_weights.index')
download_blob('<your_gs_buckets_name>', 'tensorflow/fashion_mnist_weights.data-00000-of-00001', '/tmp/fashion_mnist_weights.data-00000-of-00001')
model = get_me_the_model()
model.load_weights('/tmp/fashion_mnist_weights')
download_blob('<your_gs_buckets_name>', 'tensorflow/test.png', '/tmp/test.png')
image = numpy.array(Image.open('/tmp/test.png'))
input_np = (numpy.array(Image.open('/tmp/test.png'))/255)
input_np = input_np.reshape(1, 28, 28)
predictions = model.predict(input_np)
print(predictions)
print("Image is "+class_names[numpy.argmax(predictions)])
return class_names[numpy.argmax(predictions)]
```
**Note** that in place of `<your_gs_buckets_name>` you should enter the name of the bucket (without `gs://`) in which you have stored the model weights. Also note that I have stored them in a folder named **tensorflow**. When the model is deployed as a cloud function, `main.py` will download the weights from the storage bucket and store them in the **/tmp** folder.
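Before deploying, the weight files produced above (and the test image) have to be uploaded to that bucket. Assuming the `gsutil` CLI is available and the bucket already exists, one way to do this from the notebook is the following; the bucket name and folder are placeholders that should match what `main.py` expects.
```
# Placeholders -- use your own bucket name and keep the folder consistent with main.py.
!gsutil cp fashion_mnist_weights.index gs://<your_gs_buckets_name>/tensorflow/
!gsutil cp fashion_mnist_weights.data-00000-of-00001 gs://<your_gs_buckets_name>/tensorflow/
!gsutil cp test.png gs://<your_gs_buckets_name>/tensorflow/
```
Cloud Functions will also install the Python dependencies listed in a `requirements.txt` placed next to `main.py` (for this example: `tensorflow`, `numpy`, `google-cloud-storage`, and `Pillow`).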
Now to get started with the deployment process, first authenticate yourself.
```
!gcloud auth login
```
Set the GCP project (preferably billing enabled).
```
!gcloud config set project fast-ai-exploration
```
And deploy!
```
!gcloud functions deploy handler --runtime python37 --trigger-http --memory 2048
!gcloud functions call handler
```
**Notice** that the function `handler()` in `main.py` internally downloads the test image and runs the prediction on it, so you don't need to pass any input yourself.
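Because the function is deployed with an HTTP trigger, it can also be invoked from any HTTP client. The URL below is only a sketch of the usual Cloud Functions URL format (the deploy command prints the exact trigger URL for your function), and depending on your project's IAM settings you may need to allow unauthenticated invocations or attach an identity token.
```
import requests

# Placeholder URL -- replace with the trigger URL printed by the deploy command.
url = "https://<region>-<project-id>.cloudfunctions.net/handler"
response = requests.get(url)
print(response.text)  # expected to be the predicted class name
```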
**Reference**:
- https://cloud.google.com/blog/products/ai-machine-learning/how-to-serve-deep-learning-models-using-tensorflow-2-0-with-cloud-functions
# Probabilistic PCA
Probabilistic principal components analysis (PCA) is a
dimensionality reduction technique that
analyzes data via a lower dimensional latent space
(Tipping & Bishop, 1999). It is often
used when there are missing values in the data or for multidimensional
scaling.
We demonstrate with an example in Edward. A webpage version is available at
http://edwardlib.org/tutorials/probabilistic-pca.
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import edward as ed
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from edward.models import Normal
plt.style.use('ggplot')
```
## Data
We use simulated data. We'll talk about the individual variables and
what they stand for in the next section. For this example, each data
point is 2-dimensional, $\mathbf{x}_n\in\mathbb{R}^2$.
```
def build_toy_dataset(N, D, K, sigma=1):
x_train = np.zeros((D, N))
w = np.random.normal(0.0, 2.0, size=(D, K))
z = np.random.normal(0.0, 1.0, size=(K, N))
mean = np.dot(w, z)
for d in range(D):
for n in range(N):
x_train[d, n] = np.random.normal(mean[d, n], sigma)
print("True principal axes:")
print(w)
return x_train
ed.set_seed(142)
N = 5000 # number of data points
D = 2 # data dimensionality
K = 1 # latent dimensionality
x_train = build_toy_dataset(N, D, K)
```
We visualize the data set.
```
plt.scatter(x_train[0, :], x_train[1, :], color='blue', alpha=0.1)
plt.axis([-10, 10, -10, 10])
plt.title("Simulated data set")
plt.show()
```
## Model
Consider a data set $\mathbf{X} = \{\mathbf{x}_n\}$ of $N$ data
points, where each data point is $D$-dimensional, $\mathbf{x}_n \in
\mathbb{R}^D$. We aim to represent each $\mathbf{x}_n$ under a latent
variable $\mathbf{z}_n \in \mathbb{R}^K$ with lower dimension, $K <
D$. The set of principal axes $\mathbf{W}$ relates the latent variables to
the data.
Specifically, we assume that each latent variable is normally distributed,
\begin{equation*}
\mathbf{z}_n \sim N(\mathbf{0}, \mathbf{I}).
\end{equation*}
The corresponding data point is generated via a projection,
\begin{equation*}
\mathbf{x}_n \mid \mathbf{z}_n
\sim N(\mathbf{W}\mathbf{z}_n, \sigma^2\mathbf{I}),
\end{equation*}
where the matrix $\mathbf{W}\in\mathbb{R}^{D\times K}$ are known as
the principal axes. In probabilistic PCA, we are typically interested in
estimating the principal axes $\mathbf{W}$ and the noise term
$\sigma^2$.
Probabilistic PCA generalizes classical PCA. Marginalizing out the
latent variable, the distribution of each data point is
\begin{equation*}
\mathbf{x}_n \sim N(\mathbf{0}, \mathbf{W}\mathbf{W}^\top + \sigma^2\mathbf{I}).
\end{equation*}
Classical PCA is the specific case of probabilistic PCA when the
covariance of the noise becomes infinitesimally small, $\sigma^2 \to 0$.
We set up our model below. In our analysis, we fix $\sigma=2.0$, and
instead of point estimating $\mathbf{W}$ as a model parameter, we
place a prior over it in order to infer a distribution over principal
axes.
```
w = Normal(loc=tf.zeros([D, K]), scale=2.0 * tf.ones([D, K]))
z = Normal(loc=tf.zeros([N, K]), scale=tf.ones([N, K]))
x = Normal(loc=tf.matmul(w, z, transpose_b=True), scale=tf.ones([D, N]))
```
## Inference
The posterior distribution over the principal axes $\mathbf{W}$ cannot
be analytically determined. Below, we set up our inference variables
and then run a chosen algorithm to infer $\mathbf{W}$. Below we use
variational inference to minimize the $\text{KL}(q\|p)$ divergence
measure.
```
qw = Normal(loc=tf.get_variable("qw/loc", [D, K]),
scale=tf.nn.softplus(tf.get_variable("qw/scale", [D, K])))
qz = Normal(loc=tf.get_variable("qz/loc", [N, K]),
scale=tf.nn.softplus(tf.get_variable("qz/scale", [N, K])))
inference = ed.KLqp({w: qw, z: qz}, data={x: x_train})
inference.run(n_iter=500, n_print=100, n_samples=10)
```
## Criticism
To check our inferences, we first inspect the model's learned
principal axes.
```
sess = ed.get_session()
print("Inferred principal axes:")
print(sess.run(qw.mean()))
```
The model has recovered the true principal axes up to finite data and
also up to identifiability (there's a symmetry in the
parameterization).
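As an extra sanity check (not part of the original tutorial), we can also compare the direction of the inferred axes with the leading eigenvector of the sample covariance, which is what classical PCA recovers; up to sign and scale, the two should roughly agree.
```
# Compare the inferred principal axis with the classical PCA direction.
cov = np.cov(x_train)                   # D x D sample covariance
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
top_direction = eigvecs[:, -1]          # eigenvector of the largest eigenvalue
w_hat = sess.run(qw.mean()).flatten()
print("Classical PCA direction:", top_direction)
print("Inferred direction (normalized):", w_hat / np.linalg.norm(w_hat))
```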
Another way to criticize the model is to visualize the observed data
against data generated from our fitted model. The blue dots represent
the original data, while the red dots are generated from the fitted model.
```
# Build and then generate data from the posterior predictive distribution.
x_post = ed.copy(x, {w: qw, z: qz})
x_gen = sess.run(x_post)
plt.scatter(x_gen[0, :], x_gen[1, :], color='red', alpha=0.1)
plt.axis([-10, 10, -10, 10])
plt.title("Data generated from model")
plt.show()
```
The generated data looks close to the true data.
## Acknowledgements
We thank Mayank Agrawal for writing the initial version of this
tutorial.
##### Copyright 2018 The TF-Agents Authors.
### Get Started
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/tf_agents/colabs/2_environments_tutorial.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/agents/blob/master/tf_agents/colabs/2_environments_tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
```
# Note: If you haven't installed tf-agents or gym yet, run:
!pip install tf-agents-nightly
!pip install tf-nightly
!pip install 'gym==0.10.11'
```
### Imports
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import tensorflow as tf
import numpy as np
from tf_agents.environments import py_environment
from tf_agents.environments import tf_environment
from tf_agents.environments import tf_py_environment
from tf_agents.environments import utils
from tf_agents.specs import array_spec
from tf_agents.environments import wrappers
from tf_agents.environments import suite_gym
from tf_agents.trajectories import time_step as ts
tf.compat.v1.enable_v2_behavior()
```
# Introduction
The goal of Reinforcement Learning (RL) is to design agents that learn by interacting with an environment. In the standard RL setting, the agent receives an observation at every time step and chooses an action. The action is applied to the environment and the environment returns a reward and a new observation. The agent trains a policy to choose actions to maximize the sum of rewards, also known as return.
In TF-Agents, environments can be implemented either in Python or TensorFlow. Python environments are usually easier to implement, understand or debug, but TensorFlow environments are more efficient and allow natural parallelization. The most common workflow is to implement an environment in Python and use one of our wrappers to automatically convert it into TensorFlow.
Let us look at Python environments first. TensorFlow environments follow a very similar API.
# Python Environments
Python environments have a `step(action) -> next_time_step` method that applies an action to the environment, and returns the following information about the next step:
1. `observation`: This is the part of the environment state that the agent can observe to choose its actions at the next step.
2. `reward`: The agent is learning to maximize the sum of these rewards across multiple steps.
3. `step_type`: Interactions with the environment are usually part of a sequence/episode. e.g. multiple moves in a game of chess. step_type can be either `FIRST`, `MID` or `LAST` to indicate whether this time step is the first, intermediate or last step in a sequence.
4. `discount`: This is a float representing how much to weight the reward at the next time step relative to the reward at the current time step.
These are grouped into a named tuple `TimeStep(step_type, reward, discount, observation)`.
The interface that all python environments must implement is in `environments/py_environment.PyEnvironment`. The main methods are:
```
class PyEnvironment(object):
def reset(self):
"""Return initial_time_step."""
self._current_time_step = self._reset()
return self._current_time_step
def step(self, action):
"""Apply action and return new time_step."""
if self._current_time_step is None:
return self.reset()
self._current_time_step = self._step(action)
return self._current_time_step
def current_time_step(self):
return self._current_time_step
def time_step_spec(self):
"""Return time_step_spec."""
@abc.abstractmethod
def observation_spec(self):
"""Return observation_spec."""
@abc.abstractmethod
def action_spec(self):
"""Return action_spec."""
@abc.abstractmethod
def _reset(self):
"""Return initial_time_step."""
@abc.abstractmethod
def _step(self, action):
"""Apply action and return new time_step."""
```
In addition to the `step()` method, environments also provide a `reset()` method that starts a new sequence and provides an initial `TimeStep`. It is not necessary to call the `reset` method explicitly. We assume that environments reset automatically, either when they get to the end of an episode or when step() is called the first time.
Note that subclasses do not implement `step()` or `reset()` directly. They instead override the `_step()` and `_reset()` methods. The time steps returned from these methods will be cached and exposed through `current_time_step()`.
The `observation_spec` and the `action_spec` methods return a nest of `(Bounded)ArraySpecs` that describe the name, shape, datatype and ranges of the observations and actions respectively.
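For example, a scalar integer action that can take the values 0 or 1 could be described with a spec like the one below (the same pattern is used later in this notebook when we build the card-game environment):
```
# A bounded, scalar, integer action spec with two valid values: 0 and 1.
action_spec = array_spec.BoundedArraySpec(
    shape=(), dtype=np.int32, minimum=0, maximum=1, name='action')
print(action_spec)
```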
In TF-Agents we repeatedly refer to nests which are defined as any tree like structure composed of lists, tuples, named-tuples, or dictionaries. These can be arbitrarily composed to maintain structure of observations and actions. We have found this to be very useful for more complex environments where you have many observations and actions.
## Using Standard Environments
TF Agents has built-in wrappers for many standard environments like the OpenAI Gym, DeepMind-control and Atari, so that they follow our `py_environment.PyEnvironment` interface. These wrapped environments can be easily loaded using our environment suites. Let's load the CartPole environment from the OpenAI gym and look at the action and time_step_spec.
```
environment = suite_gym.load('CartPole-v0')
print('action_spec:', environment.action_spec())
print('time_step_spec.observation:', environment.time_step_spec().observation)
print('time_step_spec.step_type:', environment.time_step_spec().step_type)
print('time_step_spec.discount:', environment.time_step_spec().discount)
print('time_step_spec.reward:', environment.time_step_spec().reward)
```
So we see that the environment expects actions of type `int64` in [0, 1] and returns `TimeSteps` where the observations are a `float32` vector of length 4 and discount factor is a `float32` in [0.0, 1.0]. Now, let's try to take a fixed action `(1,)` for a whole episode.
```
action = 1
time_step = environment.reset()
print(time_step)
while not time_step.is_last():
time_step = environment.step(action)
print(time_step)
```
## Creating your own Python Environment
For many clients, a common use case is to apply one of the standard agents (see agents/) in TF-Agents to their problem. To do this, they have to frame their problem as an environment. So let us look at how to implement an environment in Python.
Let's say we want to train an agent to play the following (Black Jack inspired) card game:
1. The game is played using an infinite deck of cards numbered 1...10.
2. At every turn the agent can do 2 things: get a new random card, or stop the current round.
3. The goal is to get the sum of your cards as close to 21 as possible at the end of the round, without going over.
An environment that represents the game could look like this:
1. Actions: We have 2 actions. Action 0: get a new card, and Action 1: terminate the current round.
2. Observations: Sum of the cards in the current round.
3. Reward: The objective is to get as close to 21 as possible without going over, so we can achieve this using the following reward at the end of the round:
sum_of_cards - 21 if sum_of_cards <= 21, else -21
```
class CardGameEnv(py_environment.PyEnvironment):
def __init__(self):
self._action_spec = array_spec.BoundedArraySpec(
shape=(), dtype=np.int32, minimum=0, maximum=1, name='action')
self._observation_spec = array_spec.BoundedArraySpec(
shape=(1,), dtype=np.int32, minimum=0, name='observation')
self._state = 0
self._episode_ended = False
def action_spec(self):
return self._action_spec
def observation_spec(self):
return self._observation_spec
def _reset(self):
self._state = 0
self._episode_ended = False
return ts.restart(np.array([self._state], dtype=np.int32))
def _step(self, action):
if self._episode_ended:
# The last action ended the episode. Ignore the current action and start
# a new episode.
return self.reset()
# Make sure episodes don't go on forever.
if action == 1:
self._episode_ended = True
elif action == 0:
new_card = np.random.randint(1, 11)
self._state += new_card
else:
raise ValueError('`action` should be 0 or 1.')
if self._episode_ended or self._state >= 21:
reward = self._state - 21 if self._state <= 21 else -21
return ts.termination(np.array([self._state], dtype=np.int32), reward)
else:
return ts.transition(
np.array([self._state], dtype=np.int32), reward=0.0, discount=1.0)
```
Let's make sure we did everything correctly defining the above environment. When creating your own environment you must make sure the observations and time_steps generated follow the correct shapes and types as defined in your specs. These are used to generate the TensorFlow graph and as such can create hard to debug problems if we get them wrong.
To validate our environment we will use a random policy to generate actions and we will iterate over 5 episodes to make sure things are working as intended. An error is raised if we receive a time_step that does not follow the environment specs.
```
environment = CardGameEnv()
utils.validate_py_environment(environment, episodes=5)
```
Now that we know the environment is working as intended, let's run this environment using a fixed policy: ask for 3 cards and then end the round.
```
get_new_card_action = 0
end_round_action = 1
environment = CardGameEnv()
time_step = environment.reset()
print(time_step)
cumulative_reward = time_step.reward
for _ in range(3):
time_step = environment.step(get_new_card_action)
print(time_step)
cumulative_reward += time_step.reward
time_step = environment.step(end_round_action)
print(time_step)
cumulative_reward += time_step.reward
print('Final Reward = ', cumulative_reward)
```
## Environment Wrappers
An environment wrapper takes a python environment and returns a modified version of the environment. Both the original environment and the modified environment are instances of `py_environment.PyEnvironment`, and multiple wrappers can be chained together.
Some common wrappers can be found in `environments/wrappers.py`. For example:
1. `ActionDiscretizeWrapper`: Converts a continuous action space to a discrete action space.
2. `RunStats`: Captures run statistics of the environment such as number of steps taken, number of episodes completed etc.
3. `TimeLimit`: Terminates the episode after a fixed number of steps.
4. `VideoWrapper`: Captures a video of the environment.
### Example 1: Action Discretize Wrapper
The Pendulum environment loaded below accepts continuous actions. If we want to train a discrete action agent such as DQN on this environment, we have to discretize (quantize) the action space. This is exactly what the `ActionDiscretizeWrapper` does. Compare the `action_spec` before and after wrapping:
```
env = suite_gym.load('Pendulum-v0')
print('Action Spec:', env.action_spec())
discrete_action_env = wrappers.ActionDiscretizeWrapper(env, num_actions=5)
print('Discretized Action Spec:', discrete_action_env.action_spec())
```
The wrapped `discrete_action_env` is an instance of `py_environment.PyEnvironment` and can be treated like a regular python environment.
# TensorFlow Environments
The interface for TF environments is defined in `environments/tf_environment.TFEnvironment` and looks very similar to the Python environments. TF Environments differ from python envs in a couple of ways:
* They generate tensor objects instead of arrays
* TF environments add a batch dimension to the tensors generated when compared to the specs.
Converting the python environments into TFEnvs allows tensorflow to parallelize operations. For example, one could define a `collect_experience_op` that collects data from the environment and adds to a `replay_buffer`, and a `train_op` that reads from the `replay_buffer` and trains the agent, and run them in parallel naturally in TensorFlow.
```
class TFEnvironment(object):
def time_step_spec(self):
"""Describes the `TimeStep` tensors returned by `step()`."""
def observation_spec(self):
"""Defines the `TensorSpec` of observations provided by the environment."""
def action_spec(self):
"""Describes the TensorSpecs of the action expected by `step(action)`."""
def reset(self):
"""Returns the current `TimeStep` after resetting the Environment."""
return self._reset()
def current_time_step(self):
"""Returns the current `TimeStep`."""
return self._current_time_step()
def step(self, action):
"""Applies the action and returns the new `TimeStep`."""
return self._step(action)
@abc.abstractmethod
def _reset(self):
"""Returns the current `TimeStep` after resetting the Environment."""
@abc.abstractmethod
def _current_time_step(self):
"""Returns the current `TimeStep`."""
@abc.abstractmethod
def _step(self, action):
"""Applies the action and returns the new `TimeStep`."""
```
The `current_time_step()` method returns the current time_step and initializes the environment if needed.
The `reset()` method forces a reset in the environment and returns the current_step.
If the `action` doesn't depend on the previous `time_step` a `tf.control_dependency` is needed in `Graph` mode.
For now, let us look at how `TFEnvironments` are created.
## Creating your own TensorFlow Environment
This is more complicated than creating environments in Python, so we will not cover it in this colab. An example is available [here](https://github.com/tensorflow/agents/blob/master/tf_agents/environments/tf_environment_test.py). The more common use case is to implement your environment in Python and wrap it in TensorFlow using our `TFPyEnvironment` wrapper (see below).
## Wrapping a Python Environment in TensorFlow
We can easily wrap any Python environment into a TensorFlow environment using the `TFPyEnvironment` wrapper.
```
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
print(isinstance(tf_env, tf_environment.TFEnvironment))
print("TimeStep Specs:", tf_env.time_step_spec())
print("Action Specs:", tf_env.action_spec())
```
Note the specs are now of type: `(Bounded)TensorSpec`.
## Usage Examples
### Simple Example
```
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
# reset() creates the initial time_step after resetting the environment.
time_step = tf_env.reset()
num_steps = 3
transitions = []
reward = 0
for i in range(num_steps):
action = tf.constant([i % 2])
# applies the action and returns the new TimeStep.
next_time_step = tf_env.step(action)
transitions.append([time_step, action, next_time_step])
reward += next_time_step.reward
time_step = next_time_step
np_transitions = tf.nest.map_structure(lambda x: x.numpy(), transitions)
print('\n'.join(map(str, np_transitions)))
print('Total reward:', reward.numpy())
```
### Whole Episodes
```
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
time_step = tf_env.reset()
rewards = []
steps = []
num_episodes = 5
for _ in range(num_episodes):
episode_reward = 0
episode_steps = 0
while not time_step.is_last():
action = tf.random_uniform([1], 0, 2, dtype=tf.int32)
time_step = tf_env.step(action)
episode_steps += 1
    episode_reward += time_step.reward.numpy()
  rewards.append(episode_reward)
  steps.append(episode_steps)
  time_step = tf_env.reset()
num_steps = np.sum(steps)
avg_length = np.mean(steps)
avg_reward = np.mean(rewards)
print('num_episodes:', num_episodes, 'num_steps:', num_steps)
print('avg_length', avg_length, 'avg_reward:', avg_reward)
```
(dynamic_programming)=
# Dynamic Programming
``` {index} Dynamic Programming
```
Dynamic algorithms are a family of programs which (broadly speaking) share the feature of utilising solutions to subproblems when building up an optimal solution. We will discuss the conditions which problems need to satisfy to be solved dynamically, but let us first have a look at some basic examples. The concept is quite challenging to digest, so there are many examples in this section.
## Simple Examples
* **Dynamic Fibonacci** If you have read the section about [recursion](https://primer-computational-mathematics.github.io/book/b_coding/Fundamentals%20of%20Computer%20Science/2_Recursion.html#recursion) you probably know that the basic implementation for generating Fibonacci numbers is very inefficient (\\(O(2^n)\\)!). Now, as we generate the next numbers in the Fibonacci sequence, we will remember them for further use. The *trade-off between memory and time* is dramatic in this case. Compare the two versions of the function:
```
# For comparing the running times
import time
# Inefficient
def fib(n):
assert n >= 0
if n < 2:
return 1
else:
return fib(n-1) + fib(n-2)
# Dynamic version
def dynamicFib(n):
assert n >= 0
#prepare a table for memoizing
prevFib = 1
prevPrevFib = 1
temp = 0
#build up on your previous results
for i in range(2,n+1):
temp = prevFib + prevPrevFib
prevPrevFib = prevFib
prevFib = temp
return prevFib
start = time.time()
print(fib(32))
end = time.time()
print("Time for brute:" + str(end - start))
start = time.time()
print(dynamicFib(32))
end = time.time()
print("Time for dynamic:" + str(end - start))
```
The time difference is enormous! As you can probably spot, `dynamicFib` is \\(O(n)\\)! With the use of three integer variables (`prevFib`, `prevPrevFib` and `temp`) we brought exponential time complexity down to linear. How did it happen? Let us now depict the work done by `fib(5)` on a graph:
```{figure} algo_images/FibTree.png
:width: 60%
```
Wow! This is quite a tree! However, the worst feature is that many of the nodes are repeated (e.g. node \\(2\\) appears 3 times). These repeated results, which are constantly recomputed, are what drive the exponential complexity. Consider the dynamic solution graph:
```{figure} algo_images/FibDyn.png
:width: 10%
```
Now, the number of nodes grows linearly with \\(n\\). We have `merged` the subproblems to avoid redundancy.
-----------------
* **Shortest Path on a Grid** Dynamic programming is often used for optimisation problems. Consider a square grid with numbers from 0 to 9 in each square. An example would be:
```
+-+-+-+-+
|1|0|8|4|
+-+-+-+-+
|3|5|1|0|
+-+-+-+-+
|6|8|9|3|
+-+-+-+-+
|1|2|4|5|
+-+-+-+-+
```
It is allowed to move only **down or right**. What is the value of the minimum path from the upper-left corner to the lower-right corner?
The initial approach to the problem might be to check all the possible paths and return the minimal one. This is exponential and too slow. To come up with a faster approach we need to find subproblems.
Let us imagine that we have reached the lower-right corner (hurray!). We could get there from the tile above it and left of it. This might at first seem like a *Divide and Conquer* problem, but let us keep on looking for an overlap:
```{figure} algo_images/DownRightGrid.png
:width: 30%
```
In the simplest case of four tiles, we can already see that the upper-left tile is considered twice. We should then leverage this repetition. This overlapping generalises for larger grids. In our algorithm, we will remember the optimal path to the tiles already visited and build on this knowledge:
```
# grid is a square matrix
def shortestPathDownRight(grid):
# dictionary that will keep the optimal paths
# to the already visited tiles
table = {}
# n - length of the side of the square
n = len(grid)
assert n == 0 or len(grid[0]) == n
table[(0,0)] = grid[0][0]
# top and most left column have trival optimal paths
for i in range(1, n):
table[(0,i)] = table[(0,i-1)] + grid[0][i]
table[(i,0)] = table[(i-1,0)] + grid[i][0]
# the dynamic magic
for i in range(1,n):
for j in range(1,n):
table[(i,j)] = min(table[(i-1,j)],table[(i,j-1)]) + grid[i][j]
return table[(n-1,n-1)]
grid = [[1,0,8,4],[3,5,1,0],[6,8,9,3],[1,2,4,5]]
print(shortestPathDownRight(grid))
```
What is the time complexity of this algorithm? Based on the nested loop we can deduce that it is \\(O(n^2)\\). Quite an improvement!
--------
### Space Complexity
We usually do not concern ourselves with the amount of memory a program utilises. Dynamic programs are an exception to this rule. The amount of memory they use can be a limiting factor for some machines, so we need to take it into account. In the case of `dynamicFib` this was \\(O(1)\\) as we only needed to keep track of the last two Fibonacci numbers. In case of `shortestPathDownRight` we need to create a grid of size \\(n \times n\\), so \\(O(n^2)\\). The notion of space complexity is very similar to time complexity, so we will not discuss it in depth.
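For instance, the memory used by `shortestPathDownRight` can be brought down to \\(O(n)\\): each tile only depends on the tile above and the tile to the left, so it is enough to keep a single row of optimal values. A possible sketch of this optimisation (not used in the examples above):
```
# Space-optimised variant of the grid problem: keep only one row of optimal values.
def shortestPathDownRightRow(grid):
    n = len(grid)
    # optimal path values for the top row
    row = [grid[0][0]]
    for j in range(1, n):
        row.append(row[j-1] + grid[0][j])
    # update the row in place, one grid row at a time
    for i in range(1, n):
        row[0] += grid[i][0]
        for j in range(1, n):
            row[j] = min(row[j], row[j-1]) + grid[i][j]
    return row[n-1]

grid = [[1,0,8,4],[3,5,1,0],[6,8,9,3],[1,2,4,5]]
print(shortestPathDownRightRow(grid))  # same result as shortestPathDownRight
```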
--------
* **Cutting Rods** We are given a rod of integer length \\(n\\) and a list `prices` of length \\(n\\). The \\(i\\)th entry in the list corresponds to the profit we can get from selling a rod of length \\(i\\). How should we cut the rod so we maximise our profit?
The key to our dynamic algorithm will be the observation that:
We cut the rod at position \\(k\\). Now we have a rod of length \\(k\\) and one of length \\(n-k\\). Let us assume we know the maximum price for these two shorter rods. We then need to consider all cuts \\(0 \leq k \leq \lfloor \frac{n}{2} \rfloor\\) (the problem is symmetric, so computing the cases \\(k > \frac{n}{2}\\) would be redundant). For \\(k=0\\) we make no cut and just take `prices[n-1]`. The cutting introduces subproblems which are smaller than the initial problem and they are overlapping! Let us put this into code:
```
# For comparing the running times
import time
# For testing
from random import randint
def dynamicCutRod(n,prices):
# setting the initial values of variables
assert n >= 0 and n == len(prices)
# trival cases
if n == 0:
return 0
if n == 1:
return prices[0]
# setting up needed variables
table = {}
currLen = 2
table[0] = 0
table[1] = prices[0]
while currLen < n + 1:
# no cuts for a given len
table[currLen] = prices[currLen - 1]
# considering all possible cuts for a give len
for k in range(1,currLen//2 + 1):
# take the maximal one
if table[currLen] < table[k] + table[currLen - k]:
table[currLen] = table[k] + table[currLen - k]
currLen += 1
return table[n]
# for testing purposes
def bruteForceRecursive(n,prices):
assert n >=0
if n == 0:
return 0
if n == 1:
return prices[0]
currLen = n
res = prices[n-1]
for k in range(1,n//2 + 1):
res = max(res, bruteForceRecursive(k,prices) + bruteForceRecursive(n-k,prices))
return res
# testing
for i in range(1, 11):
prices = []
for j in range(i):
prices.append(randint(1,10))
assert bruteForceRecursive(len(prices),prices) == dynamicCutRod(len(prices),prices)
# comparing times
prices = []
for i in range(20):
prices.append(randint(1,10))
start = time.time()
print(bruteForceRecursive(len(prices),prices))
end = time.time()
print("Time for brute:" + str(end - start))
start = time.time()
print(dynamicCutRod(len(prices),prices))
end = time.time()
print("Time for dynamic:" + str(end - start))
```
What is the time complexity? For each \\(0\leq i \leq n\\) we need to consider roughly \\(i/2\\) different cuts. This is the sum of an arithmetic series, so \\(O(n^2)\\). We memoize only the optimal solutions to the subproblems, therefore the space complexity is \\(O(n)\\).
## Characteristic features of dynamic problems
All problems which can be tackled with the use of Dynamic Programming (DP) have two main features:
1) **Optimal substructure**: In broad terms, the solution to the problem can be formulated in terms of solutions to independent subproblems. You then choose a solution or combination of optimal solutions to subproblems and show that this choice leads to an optimal global solution. For example, for the `shortestPath` problem we choose between the optimal solutions for the tile above and the tile to the left. For the rod problem, we choose between all possible cuts of the rod and assume we have optimal solutions for the shorter rods.
2) **Overlapping subproblems**: The subproblems which we split the main question into should repeat. Eliminating the need to recompute the subproblems (by memoising them) should speed up our algorithm. For example, `dynamicFib` used two variables to keep the results of computing previous Fibonacci numbers.
## Common approaches to DP
There are two main schools of thought when it comes to solving DP algorithms:
1) ***Memoization*** makes use of the *memory-time trade-off*. It remembers the results of the subproblems in a dictionary `memo` and recalls them when needed. If we have the result in memory, we take it from there; otherwise we compute it.
2) The ***bottom-up*** approach starts with the trivial cases of the problem and builds on them. It relies on the fact that the subproblems can be ordered, so we first compute the optimal cutting for a rod of length 1, then for length 2, and so on...
Both usually lead to the same time complexities.
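To make the contrast concrete, here is a small sketch (not part of the lesson code above; the function names are illustrative) computing Fibonacci numbers both ways:
```
# memoization: top-down, remember subproblem results in a dictionary
memo = {}
def fibMemo(n):
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fibMemo(n - 1) + fibMemo(n - 2)
    return memo[n]

# bottom-up: start from the trivial cases and build towards n
def fibBottomUp(n):
    if n <= 1:
        return n
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr

assert fibMemo(30) == fibBottomUp(30) == 832040
```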
## Advanced Examples
* **0-1 Knapsack** The problem is about a backpack (knapsack) of an integer volume \\(S\\). We are given lists `sizes` and `vals` which correspond to the sizes and values of the objects we might want to pack. The trick is to pack the objects of the highest accumulated value which fit into the backpack. We will consider prefixes of the `sizes` list in order of length and put the intermediate results in a table. Compare the bottom-up and memoized approaches:
```
# For comparing the running times
import time
# For testing
from random import randint
# bottom up approach
def knapSack(wt, val, W, n):
    K = [[0 for x in range(W + 1)] for x in range(n + 1)]
    # Build table K[][] in bottom up manner
    for i in range(n + 1):
        for w in range(W + 1):
            if i == 0 or w == 0:
                K[i][w] = 0
            elif wt[i-1] <= w:
                K[i][w] = max(val[i-1] + K[i-1][w-wt[i-1]], K[i-1][w])
            else:
                K[i][w] = K[i-1][w]
    return K[n][W]

# memoized
memo = {}
def knapSackMemo(wt, val, W, n):
    # base case
    if n == 0 or W == 0:
        memo[(n,W)] = 0
        return 0
    # check if memoized
    if (n,W) in memo:
        return memo[(n,W)]
    # if not, calculate
    else:
        if wt[n-1] <= W:
            memo[(n,W)] = max(knapSackMemo(wt,val,W-wt[n-1],n-1)+val[n-1],
                              knapSackMemo(wt,val,W,n-1))
            return memo[(n,W)]
        else:
            memo[(n,W)] = knapSackMemo(wt,val,W,n-1)
            return memo[(n,W)]

# brute force for testing
def bruteForce(wt, val, W):
    res = 0
    # all combinations of the elements
    for i in range(2**len(wt)):
        sumSize = 0
        sumVal = 0
        for j in range(len(wt)):
            if (i >> j) & 1:
                sumSize += wt[j]
                sumVal += val[j]
                if sumSize > W:
                    sumVal = 0
                    break
        res = max(sumVal, res)
    return res

# testing
for _ in range(10):
    sizes = []
    vals = []
    S = randint(0,200)
    memo = {}
    for _ in range(13):
        sizes.append(randint(1,10))
        vals.append(randint(1,20))
    br = bruteForce(sizes,vals, S)
    btup = knapSack(sizes,vals,S,len(sizes))
    mem = knapSackMemo(sizes,vals,S,len(sizes))
    assert btup == br and mem == br
start = time.time()
print(bruteForce(sizes,vals, S))
end = time.time()
print("Time for brute:" + str(end - start))
start = time.time()
print(knapSack(sizes,vals,S,len(sizes)))
end = time.time()
print("Time for bottom-up:" + str(end - start))
memo = {}
start = time.time()
print(knapSackMemo(sizes,vals,S,len(sizes)))
end = time.time()
print("Time for memoized:" + str(end - start))
```
In this case, the memoized approach is the fastest, as it does not usually require considering all `(n,W)` pairs. However, both the bottom-up and the memoized approach have space (and time) complexity of \\(O(nS)\\). The brute force has time complexity of \\(O(2^n)\\), where \\(n\\) is the length of the `sizes` list.
-------------
* **Scheduling** With all the chaos that COVID caused, universities are going fully remote. The head of your department asked the lecturers to send the timeslots in which they can give lectures. This is a list `timeslots` which consists of pairs of integers - the start and end of the timeslot. If timeslots overlap, then we choose only one of them. A lecture can start at the same time the previous lecture finished. You aim to pick the subset of `timeslots` which ensures that the sum of teaching hours is maximal. Return the maximum number of hours.
The question is similar to the previous one. We are given a list of items (`timeslots`) and need to come up with an optimisation. The value of each subset is the sum of hours of its elements. To speed things up in later stages, we will sort the timeslots by their ending time. This is done in \\(O(n\log(n))\\). We will then consider the prefixes of the sorted array and memoize optimal results. `memo[n]` will store the maximum number of hours from the prefix of length `n`.
```
# For comparing the running times
import time
# For testing
from random import randint
# utility function to speed up the search
def binarySearch(timeslots,s,low,high):
    if low >= high:
        return low
    mid = (low + high) //2
    if timeslots[mid][1] <= s:
        return binarySearch(timeslots, s, mid+1, high)
    else:
        return binarySearch(timeslots, s, low, mid)

# init memo
memo = {}
# assumes that timeslots array is sorted by ending time
def schedule(timeslots, n):
    # base case
    if n == 0:
        memo[0] = 0
        return 0
    # memoized case
    elif n in memo:
        return memo[n]
    # else calculate
    else:
        s,e = timeslots[n-1]
        # in log time
        ind = min(binarySearch(timeslots,s,0,len(timeslots)),n-1)
        memo[n] = max(schedule(timeslots,n-1), schedule(timeslots,ind) + (e - s))
        return memo[n]

# brute force for testing
def bruteForce(timeslots):
    res = 0
    # all combinations of the elements
    for i in range(2**len(timeslots)):
        sumHours = 0
        already_chosen = []
        for j in range(len(timeslots)):
            if (i >> j) & 1:
                s, e = timeslots[j]
                sumHours += e - s
                already_chosen.append(timeslots[j])
                # checking if a valid combination of timeslots
                for k in range(len(already_chosen)-1):
                    if not (s >= already_chosen[k][1] or e <= already_chosen[k][0]):
                        sumHours = 0
                        break
        res = max(sumHours, res)
    return res

# testing
for _ in range(10):
    memo = {}
    timeslots = []
    for _ in range(12):
        s = randint(0,100)
        e = randint(s,100)
        timeslots.append((s,e))
    timeslots.sort(key = lambda slot: (slot[1],slot[0]))
    br = bruteForce(timeslots)
    mem = schedule(timeslots,len(timeslots))
    assert br == mem
start = time.time()
print(bruteForce(timeslots))
end = time.time()
print("Time for brute:" + str(end - start))
memo = {}
start = time.time()
print(schedule(timeslots,len(timeslots)))
end = time.time()
print("Time for memo:" + str(end - start))
```
Time complexity of this solution is \\(O(n\log(n))\\). Why?
## Exercises
* **Tiling problem** You are given a \\(2 \times n\\) board and \\(2 \times 1\\) tiles. Count the number of ways the tiles can be arranged (horizontally and vertically) on the board.
**HINT**: You have seen this algorithm before.
````{admonition} Answer
:class: dropdown
Consider the end of the board. There are two cases: we either have to fill in a \\(2 \times 2\\) subboard or a \\(1 \times 2\\) subboard. We assume that the rest of the board is already covered, so this is the same problem but smaller. There are two ways of arranging tiles on the \\(2 \times 2\\) subboard and one way to do this on the \\(1 \times 2\\) subboard. Watch out though! The case in which we place two vertical tiles on the \\(2 \times 2\\) subboard is already covered by the \\(1 \times 2\\) case, so we should not count it again. The final formula is as follows:
\\[ count[n] = count[n-1] + count[n-2] \\]
That is Fibonacci numbers.
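One possible sketch of this recurrence (the function name is illustrative; it is just the Fibonacci iteration with different base cases):
```python
def tilingWays(n):
    # number of ways to tile a 2 x n board with 2 x 1 tiles
    if n <= 2:
        return n
    prev, curr = 1, 2  # ways for boards of length 1 and 2
    for _ in range(n - 2):
        prev, curr = curr, prev + curr
    return curr

print(tilingWays(4)) # -> 5
```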
````
------------
* **Maximum sum subarray** You are given an array with \\(1\\) and \\(-1\\) elements only. Find the contiguous subarray with the maximum sum in \\(O(n)\\). Return this maximal sum. E.g. [-1,1,1,-1,1,1,1,-1] -> 4.
**HINT**: Consider the sums of prefixes of the array.
````{admonition} Answer
:class: dropdown
```python
def maxSubarray(arr):
    assert len(arr) > 0
    sumPref = 0
    minSum = 0
    maxSum = 0
    for i in range(0,len(arr)):
        # current prefix sum
        sumPref = sumPref + arr[i]
        # keep track of the minimal sum (can be negative)
        minSum = min(minSum, sumPref)
        # take away the minSum, this will give max
        maxSum = max(maxSum, sumPref-minSum)
    return maxSum
print(maxSubarray([-1,1,1,-1,1,1,1,-1]))
```
````
---------
* **Longest snake sequence** You are given a grid of natural numbers. A snake is a sequence of numbers which goes down or to the right. Adjacent numbers in a snake differ by at most 1. Define a dynamic programming function which will return the length of the longest snake. E.g.
**9**, 6, 5, 2
**8**, **7**, **6**, **5**
7, 3, 1, **6**
1, 1, 1, **7**
-> 7
````{admonition} Answer
:class: dropdown
```python
def longestSnake(grid):
    # dictionary that will keep the optimal paths to the already visited tiles
    table = {}
    # n - length of the side of the square
    n = len(grid)
    assert n == 0 or len(grid[0]) == n
    table[(0,0)] = 1
    # top row and left-most column have trivial optimal paths
    for i in range(1, n):
        table[(0,i)] = table[(0,i-1)] + 1 if abs(grid[0][i] - grid[0][i-1]) < 2 else 1
        table[(i,0)] = table[(i-1,0)] + 1 if abs(grid[i][0] - grid[i-1][0]) < 2 else 1
    # the dynamic magic
    for i in range(1,n):
        for j in range(1,n):
            table[(i,j)] = max(
                table[(i-1,j)] + 1 if abs(grid[i][j] - grid[i-1][j]) < 2 else 1,
                table[(i,j-1)] + 1 if abs(grid[i][j] - grid[i][j-1]) < 2 else 1)
    return max(table.values())
mat = [[9, 6, 5, 2],
[8, 7, 6, 5],
[7, 3, 1, 6],
[1, 1, 1, 7]]
print(longestSnake(mat))
```
````
## References
* Cormen, Leiserson, Rivest, Stein, "Introduction to Algorithms", Third Edition, MIT Press, 2009
* Prof. Erik Demaine, Prof. Srini Devadas, MIT OCW 6.006 ["Introduction to Algorithms"](https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/), Fall 2011
* GeeksforGeeks, [Find maximum length Snake sequence](https://www.geeksforgeeks.org/find-maximum-length-snake-sequence/)
* Polish Informatics Olympiad, "Adventures of Bajtazar, 25 Years of the Informatics Olympiad", PWN 2018
# Spectral GP Learning with Deltas
In this paper, we demonstrate another approach to spectral learning with GPs, learning a spectral density as a simple mixture of deltas. This has been explored, for example, as early as Lázaro-Gredilla et al., 2010.
```
import gpytorch
import torch
```
## Load Data
For this notebook, we'll be using a sample set of timeseries data of BART ridership on the 5 most commonly traveled stations in San Francisco. This subsample of data was selected and processed from Pyro's examples http://docs.pyro.ai/en/stable/_modules/pyro/contrib/examples/bart.html
```
import os
import urllib.request
smoke_test = ('CI' in os.environ)
if not smoke_test and not os.path.isfile('../BART_sample.pt'):
    print('Downloading \'BART\' sample dataset...')
    urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1A6LqCHPA5lHa5S3lMH8mLMNEgeku8lRG', '../BART_sample.pt')

torch.manual_seed(1)
if smoke_test:
    train_x, train_y, test_x, test_y = torch.randn(2, 100, 1), torch.randn(2, 100), torch.randn(2, 100, 1), torch.randn(2, 100)
else:
    train_x, train_y, test_x, test_y = torch.load('../BART_sample.pt', map_location='cpu')

if torch.cuda.is_available():
    train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda()
print(train_x.shape, train_y.shape, test_x.shape, test_y.shape)
train_x_min = train_x.min()
train_x_max = train_x.max()
train_x = train_x - train_x_min
test_x = test_x - train_x_min
train_y_mean = train_y.mean(dim=-1, keepdim=True)
train_y_std = train_y.std(dim=-1, keepdim=True)
train_y = (train_y - train_y_mean) / train_y_std
test_y = (test_y - train_y_mean) / train_y_std
```
## Define a Model
The only thing of note here is the use of the kernel. For this example, we'll learn a kernel with 1500 deltas in the mixture (matching `num_deltas` in the code below), and initialize by sampling directly from the empirical spectrum of the data.
```
class SpectralDeltaGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, num_deltas, noise_init=None):
        likelihood = gpytorch.likelihoods.GaussianLikelihood(noise_constraint=gpytorch.constraints.GreaterThan(1e-11))
        likelihood.register_prior("noise_prior", gpytorch.priors.HorseshoePrior(0.1), "noise")
        likelihood.noise = 1e-2
        super(SpectralDeltaGP, self).__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        base_covar_module = gpytorch.kernels.SpectralDeltaKernel(
            num_dims=train_x.size(-1),
            num_deltas=num_deltas,
        )
        base_covar_module.initialize_from_data(train_x[0], train_y[0])
        self.covar_module = gpytorch.kernels.ScaleKernel(base_covar_module)

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

model = SpectralDeltaGP(train_x, train_y, num_deltas=1500)

if torch.cuda.is_available():
    model = model.cuda()
```
## Train
```
model.train()
mll = gpytorch.mlls.ExactMarginalLogLikelihood(model.likelihood, model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[40])
num_iters = 1000 if not smoke_test else 4
with gpytorch.settings.max_cholesky_size(0):  # Ensure we dont try to use Cholesky
    for i in range(num_iters):
        optimizer.zero_grad()
        output = model(train_x)
        loss = -mll(output, train_y)
        if train_x.dim() == 3:
            loss = loss.mean()
        loss.backward()
        optimizer.step()
        if i % 10 == 0:
            print(f'Iteration {i} - loss = {loss:.2f} - noise = {model.likelihood.noise.item():e}')
        scheduler.step()

# Get into evaluation (predictive posterior) mode
model.eval()

# Test points are regularly spaced along [0,1]
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.max_cholesky_size(0), gpytorch.settings.fast_pred_var():
    test_x_f = torch.cat([train_x, test_x], dim=-2)
    observed_pred = model.likelihood(model(test_x_f))
    varz = observed_pred.variance
```
## Plot Results
```
from matplotlib import pyplot as plt
%matplotlib inline
_task = 3
plt.subplots(figsize=(15, 15), sharex=True, sharey=True)
for _task in range(2):
    ax = plt.subplot(3, 1, _task + 1)

    with torch.no_grad():
        # Initialize plot
        # f, ax = plt.subplots(1, 1, figsize=(16, 12))

        # Get upper and lower confidence bounds
        lower = observed_pred.mean - varz.sqrt() * 1.98
        upper = observed_pred.mean + varz.sqrt() * 1.98
        lower = lower[_task]  # + weight * test_x_f.squeeze()
        upper = upper[_task]  # + weight * test_x_f.squeeze()

        # Plot training data as black stars
        ax.plot(train_x[_task].detach().cpu().numpy(), train_y[_task].detach().cpu().numpy(), 'k*')
        ax.plot(test_x[_task].detach().cpu().numpy(), test_y[_task].detach().cpu().numpy(), 'r*')
        # Plot predictive means as blue line
        ax.plot(test_x_f[_task].detach().cpu().numpy(), (observed_pred.mean[_task]).detach().cpu().numpy(), 'b')
        # Shade between the lower and upper confidence bounds
        ax.fill_between(test_x_f[_task].detach().cpu().squeeze().numpy(), lower.detach().cpu().numpy(), upper.detach().cpu().numpy(), alpha=0.5)
        # ax.set_ylim([-3, 3])
        ax.legend(['Training Data', 'Test Data', 'Mean', '95% Confidence'], fontsize=16)
        ax.tick_params(axis='both', which='major', labelsize=16)
        ax.tick_params(axis='both', which='minor', labelsize=16)
        ax.set_ylabel('Passenger Volume (Normalized)', fontsize=16)
        ax.set_xlabel('Hours (Zoomed to Test)', fontsize=16)
        ax.set_xticks([])

    plt.xlim([1250, 1680])
plt.tight_layout()
```
_Lambda School Data Science_
# Make Explanatory Visualizations
### Objectives
- identify misleading visualizations and how to fix them
- use Seaborn to visualize distributions and relationships with continuous and discrete variables
- add emphasis and annotations to transform visualizations from exploratory to explanatory
- remove clutter from visualizations
### Links
- [How to Spot Visualization Lies](https://flowingdata.com/2017/02/09/how-to-spot-visualization-lies/)
- [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary)
- [Choosing a Python Visualization Tool flowchart](http://pbpython.com/python-vis-flowchart.html)
- [Seaborn example gallery](http://seaborn.pydata.org/examples/index.html) & [tutorial](http://seaborn.pydata.org/tutorial.html)
- [Strong Titles Are The Biggest Bang for Your Buck](http://stephanieevergreen.com/strong-titles/)
- [Remove to improve (the data-ink ratio)](https://www.darkhorseanalytics.com/blog/data-looks-better-naked)
- [How to Generate FiveThirtyEight Graphs in Python](https://www.dataquest.io/blog/making-538-plots/)
# Avoid Misleading Visualizations
Did you find/discuss any interesting misleading visualizations in your Walkie Talkie?
## What makes a visualization misleading?
[5 Ways Writers Use Misleading Graphs To Manipulate You](https://venngage.com/blog/misleading-graphs/)
## Two y-axes
<img src="https://kieranhealy.org/files/misc/two-y-by-four-sm.jpg" width="800">
Other Examples:
- [Spurious Correlations](https://tylervigen.com/spurious-correlations)
- <https://blog.datawrapper.de/dualaxis/>
- <https://kieranhealy.org/blog/archives/2016/01/16/two-y-axes/>
- <http://www.storytellingwithdata.com/blog/2016/2/1/be-gone-dual-y-axis>
## Y-axis doesn't start at zero.
<img src="https://i.pinimg.com/originals/22/53/a9/2253a944f54bb61f1983bc076ff33cdd.jpg" width="600">
## Pie Charts are bad
<img src="https://i1.wp.com/flowingdata.com/wp-content/uploads/2009/11/Fox-News-pie-chart.png?fit=620%2C465&ssl=1" width="600">
## Pie charts that omit data are extra bad
- A guy makes a misleading chart that goes viral
What does this chart imply at first glance? You don't want your user to have to do a lot of work in order to interpret your graph correctly. You want the first-glance conclusions to be the correct ones.
<img src="https://pbs.twimg.com/media/DiaiTLHWsAYAEEX?format=jpg&name=medium" width='600'>
<https://twitter.com/michaelbatnick/status/1019680856837849090?lang=en>
- It gets picked up by overworked journalists (assuming incompetency before malice)
<https://www.marketwatch.com/story/this-1-chart-puts-mega-techs-trillions-of-market-value-into-eye-popping-perspective-2018-07-18>
- Even after the chart's implications have been refuted, it's hard to stop a bad (although compelling) visualization from being passed around.
<https://www.linkedin.com/pulse/good-bad-pie-charts-karthik-shashidhar/>
**["yea I understand a pie chart was probably not the best choice to present this data."](https://twitter.com/michaelbatnick/status/1037036440494985216)**
## Pie Charts that compare unrelated things are next-level extra bad
<img src="http://www.painting-with-numbers.com/download/document/186/170403+Legalizing+Marijuana+Graph.jpg" width="600">
## Be careful about how you use volume to represent quantities:
radius vs diameter vs volume
<img src="https://static1.squarespace.com/static/5bfc8dbab40b9d7dd9054f41/t/5c32d86e0ebbe80a25873249/1546836082961/5474039-25383714-thumbnail.jpg?format=1500w" width="600">
## Don't cherrypick timelines or specific subsets of your data:
<img src="https://wattsupwiththat.com/wp-content/uploads/2019/02/Figure-1-1.png" width="600">
Look how specifically the writer has selected what years to show in the legend on the right side.
<https://wattsupwiththat.com/2019/02/24/strong-arctic-sea-ice-growth-this-year/>
Try the tool that was used to make the graphic for yourself
<http://nsidc.org/arcticseaicenews/charctic-interactive-sea-ice-graph/>
## Use Relative units rather than Absolute Units
<img src="https://imgs.xkcd.com/comics/heatmap_2x.png" width="600">
## Avoid 3D graphs unless having the extra dimension is effective
Usually you can split 3D graphs into multiple 2D graphs.
3D graphs that are interactive can be very cool. (See Plotly and Bokeh)
<img src="https://thumbor.forbes.com/thumbor/1280x868/https%3A%2F%2Fblogs-images.forbes.com%2Fthumbnails%2Fblog_1855%2Fpt_1855_811_o.jpg%3Ft%3D1339592470" width="600">
## Don't go against typical conventions
<img src="http://www.callingbullshit.org/twittercards/tools_misleading_axes.png" width="600">
# Tips for choosing an appropriate visualization:
## Use Appropriate "Visual Vocabulary"
[Visual Vocabulary - Vega Edition](http://ft.com/vocabulary)
## What are the properties of your data?
- Is your primary variable of interest continuous or discrete?
- Is it in wide or long (tidy) format?
- Does your visualization involve multiple variables?
- How many dimensions do you need to include on your plot?
Can you express the main idea of your visualization in a single sentence?
How hard does your visualization make the user work in order to draw the intended conclusion?
## Which Visualization tool is most appropriate?
[Choosing a Python Visualization Tool flowchart](http://pbpython.com/python-vis-flowchart.html)
## Anatomy of a Matplotlib Plot
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import AutoMinorLocator, MultipleLocator, FuncFormatter
np.random.seed(19680801)
X = np.linspace(0.5, 3.5, 100)
Y1 = 3+np.cos(X)
Y2 = 1+np.cos(1+X/0.75)/2
Y3 = np.random.uniform(Y1, Y2, len(X))
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(1, 1, 1, aspect=1)
def minor_tick(x, pos):
    if not x % 1.0:
        return ""
    return "%.2f" % x
ax.xaxis.set_major_locator(MultipleLocator(1.000))
ax.xaxis.set_minor_locator(AutoMinorLocator(4))
ax.yaxis.set_major_locator(MultipleLocator(1.000))
ax.yaxis.set_minor_locator(AutoMinorLocator(4))
ax.xaxis.set_minor_formatter(FuncFormatter(minor_tick))
ax.set_xlim(0, 4)
ax.set_ylim(0, 4)
ax.tick_params(which='major', width=1.0)
ax.tick_params(which='major', length=10)
ax.tick_params(which='minor', width=1.0, labelsize=10)
ax.tick_params(which='minor', length=5, labelsize=10, labelcolor='0.25')
ax.grid(linestyle="--", linewidth=0.5, color='.25', zorder=-10)
ax.plot(X, Y1, c=(0.25, 0.25, 1.00), lw=2, label="Blue signal", zorder=10)
ax.plot(X, Y2, c=(1.00, 0.25, 0.25), lw=2, label="Red signal")
ax.plot(X, Y3, linewidth=0,
marker='o', markerfacecolor='w', markeredgecolor='k')
ax.set_title("Anatomy of a figure", fontsize=20, verticalalignment='bottom')
ax.set_xlabel("X axis label")
ax.set_ylabel("Y axis label")
ax.legend()
def circle(x, y, radius=0.15):
    from matplotlib.patches import Circle
    from matplotlib.patheffects import withStroke
    circle = Circle((x, y), radius, clip_on=False, zorder=10, linewidth=1,
                    edgecolor='black', facecolor=(0, 0, 0, .0125),
                    path_effects=[withStroke(linewidth=5, foreground='w')])
    ax.add_artist(circle)
def text(x, y, text):
    ax.text(x, y, text, backgroundcolor="white",
            ha='center', va='top', weight='bold', color='blue')
# Minor tick
circle(0.50, -0.10)
text(0.50, -0.32, "Minor tick label")
# Major tick
circle(-0.03, 4.00)
text(0.03, 3.80, "Major tick")
# Minor tick
circle(0.00, 3.50)
text(0.00, 3.30, "Minor tick")
# Major tick label
circle(-0.15, 3.00)
text(-0.15, 2.80, "Major tick label")
# X Label
circle(1.80, -0.27)
text(1.80, -0.45, "X axis label")
# Y Label
circle(-0.27, 1.80)
text(-0.27, 1.6, "Y axis label")
# Title
circle(1.60, 4.13)
text(1.60, 3.93, "Title")
# Blue plot
circle(1.75, 2.80)
text(1.75, 2.60, "Line\n(line plot)")
# Red plot
circle(1.20, 0.60)
text(1.20, 0.40, "Line\n(line plot)")
# Scatter plot
circle(3.20, 1.75)
text(3.20, 1.55, "Markers\n(scatter plot)")
# Grid
circle(3.00, 3.00)
text(3.00, 2.80, "Grid")
# Legend
circle(3.70, 3.80)
text(3.70, 3.60, "Legend")
# Axes
circle(0.5, 0.5)
text(0.5, 0.3, "Axes")
# Figure
circle(-0.3, 0.65)
text(-0.3, 0.45, "Figure")
color = 'blue'
ax.annotate('Spines', xy=(4.0, 0.35), xytext=(3.3, 0.5),
weight='bold', color=color,
arrowprops=dict(arrowstyle='->',
connectionstyle="arc3",
color=color))
ax.annotate('', xy=(3.15, 0.0), xytext=(3.45, 0.45),
weight='bold', color=color,
arrowprops=dict(arrowstyle='->',
connectionstyle="arc3",
color=color))
ax.text(4.0, -0.4, "Made with http://matplotlib.org",
fontsize=10, ha="right", color='.5')
plt.show()
```
# Making Explanatory Visualizations with Seaborn
Today we will reproduce this [example by FiveThirtyEight:](https://fivethirtyeight.com/features/al-gores-new-movie-exposes-the-big-flaw-in-online-movie-ratings/)
```
from IPython.display import display, Image
url = 'https://fivethirtyeight.com/wp-content/uploads/2017/09/mehtahickey-inconvenient-0830-1.png'
example = Image(url=url, width=400)
display(example)
```
Using this data: https://github.com/fivethirtyeight/data/tree/master/inconvenient-sequel
Links
- [Strong Titles Are The Biggest Bang for Your Buck](http://stephanieevergreen.com/strong-titles/)
- [Remove to improve (the data-ink ratio)](https://www.darkhorseanalytics.com/blog/data-looks-better-naked)
- [How to Generate FiveThirtyEight Graphs in Python](https://www.dataquest.io/blog/making-538-plots/)
## Make prototypes
This helps us understand the problem
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.style.use('fivethirtyeight')
fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33],
index=range(1,11))
fake.plot.bar(color='C1', width=0.9);
fake2 = pd.Series(
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
2, 2, 2,
3, 3, 3,
4, 4,
5, 5, 5,
6, 6, 6, 6,
7, 7, 7, 7, 7,
8, 8, 8, 8,
9, 9, 9, 9,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10])
fake2.value_counts().sort_index().plot.bar(color='C1', width=0.9);
```
## Annotate with text
```
```
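As a rough sketch of what could go in the blank cell above (the title and subtitle strings and the coordinates are only guesses at the FiveThirtyEight original; the `fake` Series comes from the prototype cell):
```
# Sketch only: assumes the `fake` Series and the fivethirtyeight style from the prototype cell.
ax = fake.plot.bar(color='C1', width=0.9)
# Title and subtitle in axes coordinates, so they sit above the bars regardless of the data range
ax.text(x=0, y=1.12, s="'An Inconvenient Sequel: Truth to Power' is divisive",
        fontsize=13, fontweight='bold', transform=ax.transAxes)
ax.text(x=0, y=1.05, s='IMDb ratings for the film as of Aug. 29',
        fontsize=11, color='grey', transform=ax.transAxes)
ax.set_xlabel('Rating')
ax.set_ylabel('Percent of total votes')
plt.show()
```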
## Reproduce with real data
```
df = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/inconvenient-sequel/ratings.csv')
```
# ASSIGNMENT
Replicate the lesson code. I recommend that you [do not copy-paste](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit).
# STRETCH OPTIONS
#### 1) Reproduce another example from [FiveThirtyEight's shared data repository](https://data.fivethirtyeight.com/).
#### 2) Reproduce one of the following using a library other than Seaborn or Matplotlib.
For example:
- [thanksgiving-2015](https://fivethirtyeight.com/features/heres-what-your-part-of-america-eats-on-thanksgiving/) (try the [`altair`](https://altair-viz.github.io/gallery/index.html#maps) library)
- [candy-power-ranking](https://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) (try the [`statsmodels`](https://www.statsmodels.org/stable/index.html) library)
- or another example of your choice!
#### 3) Make more charts!
Choose a chart you want to make, from [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary).
Find the chart in an example gallery of a Python data visualization library:
- [Seaborn](http://seaborn.pydata.org/examples/index.html)
- [Altair](https://altair-viz.github.io/gallery/index.html)
- [Matplotlib](https://matplotlib.org/gallery.html)
- [Pandas](https://pandas.pydata.org/pandas-docs/stable/visualization.html)
Reproduce the chart. [Optionally, try the "Ben Franklin Method."](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) If you want, experiment and make changes.
Take notes. Consider sharing your work with your cohort!
```
```
# Python Basics
## Imports
```
# Standard import of library
import math
import spacy
# Import with alias
import pandas as pd
```
## Data Types and Variables
Python is a dynamically typed language, so the data types of variables do not have to be declared explicitly.
```
# String (iterable, immutable)
iAmAStringVariable = 'I am a string variable.'
# Integer (not iterable, immutable)
iAmAnInteger = 123
# Float (not iterable, immutable)
iAmAnFloat = 123.456
# List (iterable, mutable, mixed)
# Access by offset index uses square-bracket notation
iAmAList = [ 1, 2, 'string', 3.45, True ]
iAmAList[0] # -> 1
iAmAList[2] # -> 'string'
# Tuple (iterable, immutable, mixed)
# Tuples can have any number n of entries; contrary to what the name might
# suggest, they are not limited to two entries.
# Access by offset index uses square-bracket notation
iAmATuple = ('Key', 123)
iAmAQuintuple = ('Key', 123, True, 3.45, [])
iAmAQuintuple[2] # -> True
# Dictionary (iterable, mutable, mixed)
# A dictionary is a collection of key-value pairs.
# Access by key uses square-bracket notation
iAmADictionary = {
"key_1": "value_2",
"key_2": 123,
"key_3": [ 1, 2, 3 ]
}
iAmADictionary['key_3'] # -> [ 1, 2, 3 ]
# Bool (not iterable, immutable)
iAmABoolean = False
# Objects [class instances] (depends on implementation)
#iAmAnObject =
```
## List and Dict Comprehensions
List and dict comprehensions process iterable data types and return a list or a dictionary, respectively. They are essentially a for loop condensed onto a single line.
They are frequently used for cleaning data.
List comprehensions
```
# Simple list comprehension:
# Mnemonic: transform( item ) for item in list.
listOfWords = [ 'cat', 'dog', 'monkey', 'horse', 'chicken']
[ len(item) for item in listOfWords ] # -> [ 3, 3, 6, 5, 7 ]
# A simple if condition is placed at the end of the expression.
[ item for item in [1,2,3,4,5] if item %2 == 0 ]
# An if-else condition, by contrast, is placed directly after the first transformation element of the expression.
[ print(item, 'has a remainder.') if item %2 != 0 else print(item, 'has no remainder.') for item in [1,2,3,4,5] ];
# If the transformation element of a list comprehension is itself an iterable,
# it can be replaced by a second, nested list comprehension.
# For the sake of readability and clarity, nested list comprehensions should be used
# with care and, where appropriate, (partially) replaced by clearer for loops.
[ [ str(i) for i in lst if i % 2 == 0 ] for lst in [[1,2,3], [4,5,6], [7,8,9]] ]
# Dict comprehensions work essentially the same way as list comprehensions, except that they are wrapped
# in curly braces and the transformation element has to be structured as a key-value pair.
{ str(i): 0 for i in [1,2,3,4,5] }
# Examples:
# A dictionary can, for instance, be turned into a list of tuples by means of a list comprehension.
[ (i,j) for i,j in { "i": 1, "j": 2 }.items() ] # -> [ ( 'i', 1 ), ( 'j', 2 ) ]
# Swapping the elements in a list of tuples.
[ (j,i) for i,j in [ ( 'y', 'x' ), ( 'b', 'a' ) ] ] # -> [('x', 'y'), ('a', 'b')]
```
## Named Functions and Anonymous Lambda Functions
Named functions are initialised with an individual name and remain available under that name. Anonymous (lambda) functions, on the other hand, have to be stored in a variable to remain available, or be called on the spot as an _instant function_.
```
# Named functions are introduced with the def keyword, followed by the function name and,
# in parentheses, an optional list of parameters that can be filled with arguments
# when the function is called.
def Hello():
    """I am a Hello func."""
    print('Hello!')
    return "Hello!"

Hello()
# Parameters can be given a default value.
def Greet(greeting='Hello'):
    """I am a Greet func."""
    print(f'{greeting}!')
    return f'{greeting}!'

Greet()
Greet('Live long and prosper')
# An anonymous or lambda function has to be written on a single line - so no loops or other
# multi-line constructs are possible. A lambda function is introduced by the keyword lambda, followed by
# the parameters; a colon separates the return value that follows. A lambda function therefore consists only of
# arguments and an expression that is evaluated as the return value. For the function to be
# callable by name, it has to be stored in a variable.
func = lambda name: f'Hello, {name}!'
func('George')
# If the lambda function is to be evaluated on the spot, the function body has to be wrapped in
# parentheses and called immediately with arguments in directly following parentheses.
( lambda name: f'Hello, {name}' )('Timmy')
# Currying means that a function returns another function that is already partially filled with arguments.
# This way, functions can be 'preloaded' with default values.
def Greet(phrase):
    return lambda name: print(f'{phrase}, {name}!')

Hello = Greet('Hello')
GoodMorning = Greet('Good Morning')
Bye = Greet('Bye')
Hello('George')
GoodMorning('George')
Bye('George')
```
# Classes and Objects
Classes are blueprints that model real-world entities in programming languages in a schematic, reduced form. They bundle data together with the methods that process this data. If you want to model a library, every book could be formalised as a class _Buch_. The class would contain attributes such as _Buchtitel_ (title), _ISBN_ and _ausgeliehen_ (on loan), among others. Methods, i.e. functions defined inside the class, could print and/or change the attributes; e.g. a method _Ausleihstatus_ would return a string such as `f'Ausgeliehen: {self.ausgeliehen}.'`.
More on OOP in Python [here](https://realpython.com/python3-object-oriented-programming/).
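A minimal sketch of the `Buch` class described above (the attribute and method names are taken from the paragraph; the title and ISBN are dummy values):
```
class Buch:

    def __init__(self, buchtitel, isbn, ausgeliehen=False):
        self.buchtitel = buchtitel
        self.isbn = isbn
        self.ausgeliehen = ausgeliehen

    def Ausleihstatus(self):
        # Returns the loan status as a string
        return f'Ausgeliehen: {self.ausgeliehen}.'

buch = Buch('Faust', '000-0-00-000000-0', ausgeliehen=True)
print(buch.Ausleihstatus()) # -> Ausgeliehen: True.
```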
```
class Monkey:

    def __init__(self, name, color, favorite_foods):
        self.name = name
        self.color = color
        self.cuteness = 'Over 9000!'
        if type(favorite_foods) == list or type(favorite_foods) == tuple:
            self.favorite_foods = favorite_foods
        elif type(favorite_foods) == str:
            self.favorite_foods = [favorite_foods]
        else:
            print('Error: fav foods not set.')
            self.favorite_foods = []

    def __repr__(self):
        return f'Monkey {self.name}'

    def __str__(self):
        return f'I am a monkey named {self.name} and my favorite foods are {", ".join(self.favorite_foods)}.'

george = Monkey('George', 'red', ['Bananas', 'Pizza'])
george # Gives __repr__
print(george) # Gives __str__
evelyn = Monkey('Evelyn', 'yellow', 'Pudding')
monkeys = [ george, evelyn ]
[ print(monkey) for monkey in monkeys ]
print(monkeys)
```
# Distance and Similarity Measures
## Euclidean Distance
d: the Euclidean distance function takes two vectors with n dimensions. Because of the squaring, it does not matter from which point the distance measurement starts.
$d(p,q) = d(q,p) = \sqrt{\sum_{i=1}^n (q_i-p_i)^2}$
Here n is the number of dimensions (columns) that a vector (row) in the matrix has. To determine the distance between two vectors, the values of the corresponding dimensions are subtracted, each result is squared to avoid negative numbers, the squares are summed, and finally the square root is taken.
If the two three-dimensional vectors p and q are given as
p = [1,3,2]
q = [3,1,2]
then the Euclidean distance of the vectors is
$\sqrt{(1-3)^2 + (3-1)^2 + (2-2)^2} = \sqrt{(-2)^2 + (2)^2 + (0)^2} = \sqrt{4+4+0} = \sqrt{8} = 2.8284271247461903$
Due to the squaring, large differences contribute more strongly to the distance computation than small ones.
```
# Euclidean distance as a named function
def euclidianDistance(p,q, silent=False):
    """Calculates the Euclidean distance between two n-dimensional vectors.
    The vectors are represented as lists of integers or floats.
    Example:
    euclidianDistance([1, 2],[3,4])
    Distance is 2.8284271247461903."""
    squares = [ (p[i] - q[i])**2 for i in range(len(p)) ]
    summed_squares = sum(squares)
    distance = math.sqrt(summed_squares)
    if silent == False:
        print(f'Distance is {distance}.')
    return distance

euclidianDistance([1,1,1],[3,3,3])
# Euclidean distance as a lambda function
euklid = lambda p,q: math.sqrt(sum([ (p[i] - q[i])**2 for i in range(len(p)) ]))
euklid([1,1,1],[3,3,3])
# Euclidean distance as an immediately evaluated lambda function
(lambda p,q: math.sqrt(sum([ (p[i] - q[i])**2 for i in range(len(p)) ])))([1,1,1],[3,3,3])
p = [1,3,2]
q = [3,1,2]
euklid(p,q)
```
## Comparing Person Records
```
# With the magic command % you can run shell commands inside the notebook.
%ls input/
# A ? before a function or method without () returns information about it.
#?pd.read_csv
Personen = pd.read_csv('./input/people.csv',  # path to the file
                       sep=';',               # column separator in the CSV
                       decimal=",",           # decimal sign
                       skip_blank_lines=True) # skip empty lines
Personen.head(10) # head prints the beginning of the dataframe, tail the end
# One or more columns are accessed with square-bracket notation.
# For a single column a string is used; for several columns
# a list of strings is expected. A Series or a DataFrame is returned.
Personen['person']
Personen[['person', 'gender']]
# A row is accessed via the iloc method and an index in square brackets.
Personen.iloc[4]
# The selection can be narrowed further with square-bracket notation,
# down to the level of a single cell.
Personen.iloc[4]['height'] # == Personen.iloc[4].height
Personen['person'].iloc[2]
# Columns are excluded by dropping them in a copy of the dataframe.
decimated_pers = Personen.drop(['person', 'gender'], axis=1)
decimated_pers
personVector = lambda index: [ float(i) for i in Personen[['education', 'family', 'income']].iloc[index].tolist() ]
euclidianDistance(personVector(0), personVector(6))
for index in Personen.index.tolist():
    print(Personen.iloc[index].person)
    for index2 in Personen.index.tolist():
        euclidianDistance(personVector(index), personVector(index2))

# DataFrames can be written back to files with various methods.
decimated_pers.to_csv('./output/decimated.csv',
                      sep=';',
                      quotechar='"',
                      quoting=1,
                      index=False)
```
## Text Comparison
```
txt_0 = "I am a really cute dog. I like jumping."
txt_1 = "I am a really nice cat. I like to cuddle."
txt_2 = "I am a really cute dog. I liked to jump."
txt_3 = "You are a really cute cat. You like to cuddle."
txt_4 = "You were a really nice dog and you liked cuddling."
txt_5 = "You are a really nice cat. You like to cuddle."
nlp = spacy.load('en')
docs = list(nlp.pipe([txt_0, txt_1, txt_2, txt_3, txt_4, txt_5]))
def buildBagOfWords(docs):
    global_lemmata = []
    [[ global_lemmata.append(token.lemma_) if token.lemma_ != '-PRON-' else global_lemmata.append(token.orth_) for token in doc ] for doc in docs ];
    bagOfWords = { lemma: [] for lemma in list(set(global_lemmata)) }
    lemmatized_texts = [[ token.lemma_ if token.lemma_ != '-PRON-' else token.orth_ for token in doc ] for doc in docs ]
    for lemma in list(set(global_lemmata)):
        for text in lemmatized_texts:
            bagOfWords[lemma].append(text.count(lemma))
    return bagOfWords

words = buildBagOfWords(docs)
texts_df = pd.DataFrame.from_dict(words)
texts_df.head(10)
vector = lambda index, df: df.iloc[index].tolist()
euclidianDistance(vector(0, texts_df), vector(2, texts_df))
def buildCrossMatrix(texts_df, silent=True):
    crossMatrix = { str(i): [] for i in texts_df.index.tolist() }
    for i in texts_df.index.tolist():
        for z in texts_df.index.tolist():
            distance = euclidianDistance(vector(i, texts_df), vector(z, texts_df), silent)
            crossMatrix[str(i)].append(distance)
    cross_df = pd.DataFrame.from_dict(crossMatrix)
    cross_df.head()
    return cross_df

buildCrossMatrix(texts_df)
buildCrossMatrix(Personen[['height', 'skin', 'hair', 'education', 'income', 'family']])
```
## Author Similarity (WIP)
```
word_length = { str(i): 0 for i in range(1, max([ max([ len(token.lemma_) if token.lemma_ != '-PRON-' else len(token.orth_) for token in doc ]) for doc in docs ])+1)}
word_length
for token in docs[0]:
    if token.lemma_ != '-PRON-':
        word_length[str(len(token.lemma_))] += 1
    else:
        word_length[str(len(token.orth_))] += 1
word_length
```
# Modules & Packages
```
import sys
import importlib
# The package folders, e.g. distance, live in the folder 'modules'.
# The Python code file 'distances.py' lives in the folder 'distance'.
# sys.path.append(r'./modules/')
# print(sys.path)
# From the folder 'distance', import the file 'distances'.
from modules.distance import distances
importlib.reload(distances)
distance = distances.Distance()
distance.euclidianDistance([1, 2],[3,4])
distance.helloWorld()
?distance.euclidianDistance
```
# Comparison of clustering of node embeddings with a traditional community detection method
<table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/community_detection/attacks_clustering_analysis.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/community_detection/attacks_clustering_analysis.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
## Introduction
The goal of this use case is to demonstrate how node embeddings from graph convolutional neural networks trained in an unsupervised manner compare to standard community detection methods based on graph partitioning.
Specifically, we use the unsupervised [graphSAGE](http://snap.stanford.edu/graphsage/) approach to learn node embeddings of terrorist groups in a publicly available dataset of global terrorism events, and analyse clusters of these embeddings. We compare the clusters to communities produced by [Infomap](http://www.mapequation.org), a state-of-the-art graph partitioning approach based on information theory.
We argue that clustering based on unsupervised graphSAGE node embeddings allows for a richer representation of the data than its graph partitioning counterpart, as the former takes into account node features together with the graph structure, while the latter utilises only the graph structure.
We demonstrate, using the terrorist group dataset, that the infomap communities and the graphSAGE embedding clusters (GSEC) provide qualitatively different insights into underlying data patterns.
### Data description
__The Global Terrorism Database (GTD)__ used in this demo is available here: https://www.kaggle.com/START-UMD/gtd. GTD is an open-source database including information on terrorist attacks around the world from 1970 through 2017. The GTD includes systematic data on domestic as well as international terrorist incidents that have occurred during this time period and now includes more than 180,000 attacks. The database is maintained by researchers at the National Consortium for the Study of Terrorism and Responses to Terrorism (START), headquartered at the University of Maryland. For information refer to the initial data source: https://www.start.umd.edu/gtd/.
Full dataset contains information on more than 180,000 Terrorist Attacks.
### Glossary:
For this particular study we adopt the following terminology:
- __a community__ is a group of nodes produced by a traditional community detection algorithm (infomap community in this use case)
- __a cluster__ is a group of nodes that were clustered together using a clustering algorithm applied to node embeddings (here, [DBSCAN clustering](https://www.aaai.org/Papers/KDD/1996/KDD96-037.pdf) applied to unsupervised GraphSAGE embeddings).
For more detailed explanation of unsupervised graphSAGE see [Unsupervised graphSAGE demo](../embeddings/graphsage-unsupervised-sampler-embeddings.ipynb).
The rest of the demo is structured as follows. First, we load the data and preprocess it (see `utils.py` for detailed steps of network and features generation). Then we apply infomap and visualise results of one selected community obtained from this method. Next, we apply unsupervised graphSAGE on the same data and extract node embeddings. We first tune DBSCAN hyperparameters, and apply DBSCAN that produces sufficient numbers of clusters with minimal number of noise points. We look at the resulting clusters, and investigate a single selected cluster in terms of the graph structure and features of the nodes in this cluster. Finally, we conclude our investigation with the summary of the results.
```
# install StellarGraph if running on Google Colab
import sys
if 'google.colab' in sys.modules:
    %pip install -q stellargraph[demos]==1.3.0b

# verify that we're using the correct version of StellarGraph for this notebook
import stellargraph as sg

try:
    sg.utils.validate_notebook_version("1.3.0b")
except AttributeError:
    raise ValueError(
        f"This notebook requires StellarGraph version 1.3.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>."
    ) from None
import pandas as pd
import numpy as np
import networkx as nx
import igraph as ig
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn import preprocessing, feature_extraction, model_selection
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import random
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
%matplotlib inline
import warnings
warnings.filterwarnings('ignore') # suppress warnings due to some future deprecations
import stellargraph as sg
from stellargraph.data import EdgeSplitter
from stellargraph.mapper import GraphSAGELinkGenerator, GraphSAGENodeGenerator
from stellargraph.layer import GraphSAGE, link_classification
from stellargraph.data import UniformRandomWalk
from stellargraph.data import UnsupervisedSampler
from sklearn.model_selection import train_test_split
from tensorflow import keras
from stellargraph import globalvar
import mplleaflet
from itertools import count
import utils
```
### Data loading
This is the raw data of terrorist attacks that we use as a starting point of our analysis.
```
dt_raw = pd.read_csv(
"~/data/globalterrorismdb_0718dist.csv",
sep=",",
engine="python",
encoding="ISO-8859-1",
)
```
### Loading preprocessed features
The resulting feature set is the aggregation of features for each of the terrorist groups (`gname`). We collect such features as total number of attacks per terrorist group, number of perpetrators etc. We use `targettype` and `attacktype` (number of attacks/targets of particular type) and transform it to a wide representation of data (e.g each type is a separate column with the counts for each group). Refer to `utils.py` for more detailed steps of the preprocessing and data cleaning.
```
gnames_features = utils.load_features(input_data=dt_raw)
gnames_features.head()
```
### Edgelist creation
The raw dataset contains information on terrorist attacks, including information about some incidents being related. However, the resulting graph is very sparse. Therefore, we create our own schema for the graph, where two terrorist groups are connected if they had at least one terrorist attack in the same country and in the same decade. To this end we proceed as follows:
- we group the event data by `gname` - the names of the terrorist organisations.
- we create a new feature `decade` based on `iyear`, where one bin consists of 10 years (attacks of 70s, 80s, and so on).
- we add the concatenation of the decade and the country of the attack: `country_decade`, which will become a link between two terrorist groups.
- finally, we create an edgelist, where two terrorist groups are linked if they have operated in the same country in the same decade. Edges are undirected.
In addition, some edges are created based on the column `related` in the raw event data. The description of the data does not go into detail on how these incidents were related. We utilise this information by creating a link between terrorist groups if the terrorist attacks performed by these groups were related. If related events corresponded to the same terrorist group, they were discarded (we don't use self-loops in the graph). However, the majority of such links are already covered by the `country_decade` edge type. Refer to `utils.py` for more information on graph creation.
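The actual edge construction lives in `utils.py` and is not shown here; as a rough sketch (assuming the GTD columns `gname`, `iyear` and `country_txt`), the `country_decade` edge type could be built roughly like this:
```
from itertools import combinations

def country_decade_edges(events):
    """Rough sketch only: link two groups that attacked in the same country in the same decade."""
    df = events[["gname", "iyear", "country_txt"]].dropna().copy()
    df["decade"] = (df["iyear"] // 10) * 10
    df["country_decade"] = df["country_txt"] + "_" + df["decade"].astype(str)
    edges = set()
    for _, groups in df.groupby("country_decade")["gname"]:
        # undirected edges, no self-loops by construction
        for g1, g2 in combinations(sorted(set(groups)), 2):
            edges.add((g1, g2))
    return list(edges)

# e.g. edge_list = country_decade_edges(dt_raw)
```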
```
G = utils.load_network(input_data=dt_raw)
print(nx.info(G))
```
### Connected components of the network
Note that the graph is disconnected, consisting of 21 connected components.
```
print(nx.number_connected_components(G))
```
Get the sizes of the connected components:
```
Gcc = sorted(nx.connected_component_subgraphs(G), key=len, reverse=True)
cc_sizes = []
for cc in list(Gcc):
    cc_sizes.append(len(cc.nodes()))
print(cc_sizes)
```
The distribution of connected components' sizes shows that there is a single large component, and a few isolated groups. We expect the community detection/node embedding clustering algorithms discussed below to discover non-trivial communities that are not simply the connected components of the graph.
## Traditional community detection
We perform traditional community detection via `infomap` implemented in `igraph`. We translate the original `networkx` graph object to `igraph`, apply `infomap` to it to detect communities, and assign the resulting community memberships back to the `networkx` graph.
```
# translate the object into igraph
g_ig = ig.Graph.Adjacency(
(nx.to_numpy_matrix(G) > 0).tolist(), mode=ig.ADJ_UNDIRECTED
) # convert via adjacency matrix
g_ig.summary()
# perform community detection
random.seed(123)
c_infomap = g_ig.community_infomap()
print(c_infomap.summary())
```
We get 160 communities, meaning that the largest connected components of the graph are partitioned into more granular groups.
```
# plot the community sizes
infomap_sizes = c_infomap.sizes()
plt.title("Infomap community sizes")
plt.xlabel("community id")
plt.ylabel("number of nodes")
plt.bar(list(range(1, len(infomap_sizes) + 1)), infomap_sizes)
# Modularity metric for infomap
c_infomap.modularity
```
The discovered infomap communities have a smooth distribution of cluster sizes, which indicates that the underlying graph structure has a natural partitioning. The modularity score is also pretty high, indicating that nodes are more tightly connected within clusters than expected in a random graph, i.e., the discovered communities are tightly-knit.
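For reference (not part of the original analysis), the modularity being reported is the standard Newman-Girvan modularity,

$$Q = \frac{1}{2m}\sum_{ij}\left(A_{ij} - \frac{k_i k_j}{2m}\right)\delta(c_i, c_j),$$

where $A$ is the adjacency matrix, $k_i$ the degree of node $i$, $m$ the number of edges, and $\delta(c_i, c_j) = 1$ when nodes $i$ and $j$ share a community; values well above 0 mean many more intra-community edges than expected at random.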
```
# assign community membership results back to networkx, keep the dictionary for later comparisons with the clustering
infomap_com_dict = dict(zip(list(G.nodes()), c_infomap.membership))
nx.set_node_attributes(G, infomap_com_dict, "c_infomap")
```
### Visualisation of the infomap communities
We can visualise the resulting communities using maps, as the constructed network is partially based on geolocation. The raw data have lat-lon coordinates for each of the attacks. Terrorist groups might perform attacks in different locations; however, we make the simplifying assumption of averaging the latitude and longitude for each terrorist group. Note that this might result in some positions being "off", but it is good enough to provide a glimpse of whether the communities are consistent with the locations of those groups.
```
# fill NA based on the country name
dt_raw.latitude[
dt_raw["gname"] == "19th of July Christian Resistance Brigade"
] = 12.136389
dt_raw.longitude[
dt_raw["gname"] == "19th of July Christian Resistance Brigade"
] = -86.251389
# filter only groups that are present in a graph
dt_filtered = dt_raw[dt_raw["gname"].isin(list(G.nodes()))]
# collect averages of latitude and longitude for each of gnames
avg_coords = dt_filtered.groupby("gname")[["latitude", "longitude"]].mean()
print(avg_coords.shape)
print(len(G.nodes()))
```
As plotting the whole graph is not feasible, we investigate a single community.
__Specify community id in the range of infomap total number of clusters__ (`len(infomap_sizes)`)
```
com_id = 50 # smaller number - larger community, as it's sorted
# extraction of a subgraph from the nodes in this community
com_G = G.subgraph(
[n for n, attrdict in G.nodes.items() if attrdict["c_infomap"] == com_id]
)
print(nx.info(com_G))
# plot community structure only
pos = nx.random_layout(com_G, seed=123)
plt.figure(figsize=(10, 8))
nx.draw_networkx(com_G, pos, edge_color="#26282b", node_color="blue", alpha=0.3)
plt.axis("off")
plt.show()
# plot on the map
nodes = com_G.nodes()
com_avg_coords = avg_coords[avg_coords.index.isin(list(nodes))]
com_avg_coords.fillna(
com_avg_coords.mean(), inplace=True
) # fill missing values with the average
new_order = [1, 0]
com_avg_coords = com_avg_coords[com_avg_coords.columns[new_order]]
pos = com_avg_coords.T.to_dict("list") # layout is based on the provided coordindates
fig, ax = plt.subplots(figsize=(12, 6))
nx.draw_networkx_edges(com_G, pos, edge_color="grey")
nx.draw_networkx_nodes(
com_G, pos, nodelist=nodes, with_labels=True, node_size=200, alpha=0.5
)
nx.draw_networkx_labels(com_G, pos, font_color="#362626", font_size=50)
mplleaflet.display(fig=ax.figure)
```
(**N.B.:** the above interactive plot will only appear after running the cell, and is not rendered in GitHub!)
### Summary of results based on infomap
Infomap is a robust community detection algorithm, and shows good results that are in line with the expectations. Most of the communities are tightly connected and reflect the geographical position of the events of the terrorist groups. That is because the graph schema is expressed as
> _two terrorist groups are connected if they have terrorist events in the same country in the same decade_.
However, no node features are taken into account in the case of the traditional community detection.
Next, we explore the GSEC approach, where node features are used along with the graph structure.
## Node represenatation learning with unsupervised graphSAGE
Now we apply unsupervised GraphSAGE, which takes into account node features as well as graph structure, to produce *node embeddings*. In our case, similarity of node embeddings reflects similarity of the terrorist groups in terms of their operations, targets and attack types (node features) as well as in terms of time and place of attacks (graph structure).
```
# we reload the graph to get rid of assigned attributes
G = utils.load_network(input_data=dt_raw) # to make sure that graph is clean
# filter features to contain only gnames that are among nodes of the network
filtered_features = gnames_features[gnames_features["gname"].isin(list(G.nodes()))]
filtered_features.set_index("gname", inplace=True)
filtered_features.shape
filtered_features.head() # take a glimpse at the feature data
```
We perform a log-transform of the feature set to rescale feature values.
```
# transforming features to be on log scale
node_features = filtered_features.transform(lambda x: np.log1p(x))
# sanity check that there are no misspelled gnames left
set(list(G.nodes())) - set(list(node_features.index.values))
```
### Unsupervised graphSAGE
(For a detailed unsupervised GraphSAGE workflow with a narrative, see [Unsupervised graphSAGE demo](../embeddings/graphsage-unsupervised-sampler-embeddings.ipynb))
```
Gs = sg.StellarGraph.from_networkx(G, node_features=node_features)
print(Gs.info())
# parameter specification
number_of_walks = 3
length = 5
batch_size = 50
epochs = 10
num_samples = [20, 20]
layer_sizes = [100, 100]
learning_rate = 1e-2
unsupervisedSamples = UnsupervisedSampler(
Gs, nodes=G.nodes(), length=length, number_of_walks=number_of_walks
)
generator = GraphSAGELinkGenerator(Gs, batch_size, num_samples)
train_gen = generator.flow(unsupervisedSamples)
assert len(layer_sizes) == len(num_samples)
graphsage = GraphSAGE(
layer_sizes=layer_sizes, generator=generator, bias=True, dropout=0.0, normalize="l2"
)
```
We now build a Keras model from the GraphSAGE class that we can use for unsupervised predictions. We add a `link_classification` layer, as unsupervised training operates on node pairs.
```
x_inp, x_out = graphsage.in_out_tensors()
prediction = link_classification(
output_dim=1, output_act="sigmoid", edge_embedding_method="ip"
)(x_out)
model = keras.Model(inputs=x_inp, outputs=prediction)
model.compile(
optimizer=keras.optimizers.Adam(lr=learning_rate),
loss=keras.losses.binary_crossentropy,
metrics=[keras.metrics.binary_accuracy],
)
history = model.fit(
train_gen,
epochs=epochs,
verbose=2,
use_multiprocessing=False,
workers=1,
shuffle=True,
)
```
### Extracting node embeddings
```
node_ids = list(Gs.nodes())
node_gen = GraphSAGENodeGenerator(Gs, batch_size, num_samples).flow(node_ids)
embedding_model = keras.Model(inputs=x_inp[::2], outputs=x_out[0])
node_embeddings = embedding_model.predict(node_gen, workers=4, verbose=1)
```
#### 2D t-sne plot of the resulting node embeddings
Here we visually check whether embeddings have some underlying cluster structure.
```
node_embeddings.shape
# TSNE visualisation to check whether the embeddings have some structure:
X = node_embeddings
if X.shape[1] > 2:
transform = TSNE # PCA
trans = transform(n_components=2, random_state=123)
emb_transformed = pd.DataFrame(trans.fit_transform(X), index=node_ids)
else:
emb_transformed = pd.DataFrame(X, index=node_ids)
emb_transformed = emb_transformed.rename(columns={"0": 0, "1": 1})
alpha = 0.7
fig, ax = plt.subplots(figsize=(7, 7))
ax.scatter(emb_transformed[0], emb_transformed[1], alpha=alpha)
ax.set(aspect="equal", xlabel="$X_1$", ylabel="$X_2$")
plt.title("{} visualization of GraphSAGE embeddings".format(transform.__name__))
plt.show()
```
#### t-sne colored by infomap
We also depict the same t-sne plot colored by infomap communities. As we can observe, the t-sne of the GraphSAGE embeddings does not really separate the infomap communities.
```
emb_transformed["infomap_clusters"] = emb_transformed.index.map(infomap_com_dict)
plt.scatter(
emb_transformed[0],
emb_transformed[1],
c=emb_transformed["infomap_clusters"],
cmap="Spectral",
edgecolors="black",
alpha=0.3,
s=100,
)
plt.title("t-sne with colors corresponding to infomap communities")
```
Next, we apply the dbscan algorithm to cluster the embeddings. dbscan has two hyperparameters: `eps` and `min_samples`, and produces clusters along with noise points (points that could not be assigned to any particular cluster, indicated as -1). These tunable parameters directly affect the clustering results. We search over a grid of hyperparameter values and check which candidates look good.
```
db_dt = utils.dbscan_hyperparameters(
node_embeddings, e_lower=0.1, e_upper=0.9, m_lower=5, m_upper=15
)
# print results where there are more clusters than 1, and sort by the number of noise points
db_dt.sort_values(by=["n_noise"])[db_dt.n_clusters > 1]
```
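The `utils.dbscan_hyperparameters` helper above is assumed to sweep such a grid internally; as a rough, hedged sketch (not the notebook's actual implementation), the idea could be expressed as:
```
# Hedged sketch: sweep a small grid of DBSCAN hyperparameters and record
# the number of clusters and noise points for each combination.
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN

def dbscan_grid(embeddings, eps_values, min_samples_values):
    rows = []
    for eps in eps_values:
        for min_samples in min_samples_values:
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(embeddings)
            n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
            n_noise = int(np.sum(labels == -1))
            rows.append(
                {"eps": eps, "min_samples": min_samples,
                 "n_clusters": n_clusters, "n_noise": n_noise}
            )
    return pd.DataFrame(rows)

# example usage with the embeddings computed above:
# dbscan_grid(node_embeddings, np.arange(0.1, 1.0, 0.1), range(5, 16))
```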
Pick the hyperparameters where the clustering results have as few noise points as possible, but still produce a reasonable number of clusters of reasonable size.
```
# perform dbscan with the chosen parameters:
db = DBSCAN(eps=0.1, min_samples=5).fit(node_embeddings)
```
Calculating the clustering statistics:
```
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
n_noise_ = list(labels).count(-1)
print("Estimated number of clusters: %d" % n_clusters_)
print("Estimated number of noise points: %d" % n_noise_)
print("Silhouette Coefficient: %0.3f" % metrics.silhouette_score(node_embeddings, labels))
```
We plot t-sne again but with the colours corresponding to dbscan points.
```
emb_transformed["dbacan_clusters"] = labels
X = emb_transformed[emb_transformed["dbacan_clusters"] != -1]
plt.scatter(
X[0],
X[1],
c=X["dbacan_clusters"],
cmap="Spectral",
edgecolors="black",
alpha=0.3,
s=100,
)
plt.title("t-sne with colors corresponding to dbscan cluster. Without noise points")
```
### Investigating GSEC and infomap qualitative differences
Let's take a look at the resulting GSEC clusters, and explore, as an example, one particular cluster of a reasonable size, which is not a subset of any single infomap community.
Display cluster sizes for the 15 largest clusters:
```
clustered_df = pd.DataFrame(node_embeddings, index=node_ids)
clustered_df["cluster"] = db.labels_
clustered_df.groupby("cluster").count()[0].sort_values(ascending=False)[0:15]
```
We want to display clusters that differ from infomap communities, as they are more interesting in this context. Therefore we calculate for each DBSCAN cluster how many different infomap communities it contains. The results are displayed below.
```
inf_db_cm = clustered_df[["cluster"]]
inf_db_cm["infomap"] = inf_db_cm.index.map(infomap_com_dict)
dbscan_different = inf_db_cm.groupby("cluster")[
"infomap"
].nunique() # if 1 all belong to same infomap cluster
# show only those clusters that are not the same as infomap
dbscan_different[dbscan_different != 1]
```
For example, DBSCAN `cluster_12` has nodes that were assigned to 2 different infomap clusters, while `cluster_31` has nodes from 8 different infomap communities.
### Single cluster visualisation
Now that we've selected a GSEC cluster (id=20) of reasonable size that contains nodes belonging to 4 different infomap communities, let's explore it.
__To visualise a particular cluster, specify its number here:__
```
# specify the cluster id here:
cluster_id = 20
# manually look at the terrorist group names
list(clustered_df.index[clustered_df.cluster == cluster_id])
# create a subgraph from the nodes in the cluster
cluster_G = G.subgraph(list(clustered_df.index[clustered_df.cluster == cluster_id]))
```
For each `gname` (terrorist group name) in the cluster, list the assigned infomap community id. This shows whether a similar community was also produced by infomap.
```
comparison = {
k: v
for k, v in infomap_com_dict.items()
if k in list(clustered_df.index[clustered_df.cluster == cluster_id])
}
comparison
```
As another metric of clustering quality, we display how many edges are inside this cluster vs. how many edges go outside it (only one of the edge endpoints is inside the cluster).
```
external_internal_edges = utils.cluster_external_internal_edges(G, inf_db_cm, "cluster")
external_internal_edges[external_internal_edges.cluster == cluster_id]
# plot the structure only
pos = nx.fruchterman_reingold_layout(cluster_G, seed=123, iterations=30)
plt.figure(figsize=(10, 8))
nx.draw_networkx(cluster_G, pos, edge_color="#26282b", node_color="blue", alpha=0.3)
plt.axis("off")
plt.show()
```
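For reference, a minimal sketch of how internal vs. external edge counts for a single cluster could be computed directly with networkx is shown below (the `utils.cluster_external_internal_edges` helper used above is assumed to do something equivalent for all clusters at once):
```
# Hedged sketch: count edges fully inside a node set vs. edges crossing its boundary.
def internal_external_edge_counts(graph, cluster_nodes):
    cluster_nodes = set(cluster_nodes)
    internal, external = 0, 0
    for u, v in graph.edges():
        if u in cluster_nodes and v in cluster_nodes:
            internal += 1
        elif u in cluster_nodes or v in cluster_nodes:
            external += 1
    return internal, external

# example usage with the cluster selected above:
# internal_external_edge_counts(G, clustered_df.index[clustered_df.cluster == cluster_id])
```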
Recall that terrorist groups (nodes) are connected, when at least one of the attacks was performed in the same decade in the same country. Therefore the connectivity indicates spatio-temporal similarity.
There are quite a few clusters that are similar to infomap clusters. We pick a cluster that is not a subset of any single infomap community. We can see that there are disconnected groups of nodes in this cluster.
So why are these disconnected components combined into one cluster? GraphSAGE embeddings directly depend on both node features and an underlying graph structure. Therefore, it makes sense to investigate similarity of features of the nodes in the cluster. It can highlight why these terrorist groups are combined together by GSEC.
```
cluster_feats = filtered_features[
filtered_features.index.isin(
list(clustered_df.index[clustered_df.cluster == cluster_id])
)
]
# show only non-zero columns
features_nonzero = cluster_feats.loc[:, (cluster_feats != 0).any()]
features_nonzero.style.background_gradient(cmap="RdYlGn_r")
```
We can see that most of the isolated nodes in the cluster have features similar to those in the tight clique, e.g., in most cases they have a high number of attacks, a high success ratio, attacks focused mostly on bombings, and their targets are often the police.
Note that there are terrorist groups that differ from the rest of the groups in the cluster in terms of their features. By taking a closer look we can observe that these terrorist groups are part of a tight clique. For example, _Martyr al-Nimr Battalion_ has a number of bombings equal to 0, but it is part of a fully connected subgraph.
Interestingly, _Al-Qaida in Saudi Arabia_ ends up in the same cluster as _Al-Qaida in Yemen_, though they are not connected directly in the network.
Thus we can observe that clustering on GraphSAGE embeddings combines groups based both on the underlying structure as well as features.
```
nodes = cluster_G.nodes()
com_avg_coords = avg_coords[avg_coords.index.isin(list(nodes))]
new_order = [1, 0]
com_avg_coords = com_avg_coords[com_avg_coords.columns[new_order]]
com_avg_coords.fillna(
com_avg_coords.mean(), inplace=True
) # fill missing values with the average
pos = com_avg_coords.T.to_dict("list")  # layout is based on the provided coordinates
fig, ax = plt.subplots(figsize=(22, 12))
nx.draw_networkx_nodes(
cluster_G, pos, nodelist=nodes, with_labels=True, node_size=200, alpha=0.5
)
nx.draw_networkx_labels(cluster_G, pos, font_color="red", font_size=50)
nx.draw_networkx_edges(cluster_G, pos, edge_color="grey")
mplleaflet.display(fig=ax.figure)
```
(**N.B.:** the above interactive plot will only appear after running the cell, and is not rendered in GitHub!)
These groups are also very closely located, but in contrast with infomap, this is not the only thing that defines the clustering groups.
#### GSEC results
What we can see from the results above is a somewhat different picture from infomap (note that by rerunning this notebook, the results might change due to the nature of the GraphSAGE predictions). Some clusters are identical to the infomap communities, showing that GSEC __captures__ the network structure. But some of the clusters differ - they are not necessarily connected. By observing the feature table we can see that such a cluster contains terrorist groups with __similar characteristics__.
## Conclusion
In this use case we demonstrated the conceptual differences of traditional community detection and unsupervised GraphSAGE embeddings clustering (GSEC) on a real dataset. We can observe that the traditional approach to the community detection via graph partitioning produces communities that are related to the graph structure only, while GSEC combines the features and the structure. However, in the latter case we should be very conscious of the network schema and the features, as these significantly affect both the clustering and the interpretation of the resulting clusters.
For example, if we want clusters where neither the number of kills nor the country plays a role, we might want to exclude `nkills` from the feature set, and create a graph where groups are connected only via the decade, and not via the country. Then, the resulting groups would probably be related to terrorist activities in time, and grouped by the similarity of their targets and types of attacks.
<table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/community_detection/attacks_clustering_analysis.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/community_detection/attacks_clustering_analysis.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
# Handling Text Data
Below are a few examples of how to play with text data. We'll walk through some exercises in class with this!
```
import pandas as pd
text_data = pd.read_csv("pa3_orig/Bills Mafia.csv")
text_data.head()
documents = [t for t in text_data.text]
documents[0]
from sklearn.feature_extraction.text import CountVectorizer
def fit_vectorizer(vectorizer, sample_doc_index=0, documents= documents):
X = vectorizer.fit_transform(documents)
features = vectorizer.get_feature_names()
print(len(vectorizer.get_feature_names()))
print("First ten features: {}".format(", ".join(features[:10])))
print("Sample Doc: {}".format(documents[sample_doc_index]))
print("Sample Doc Features: {}".format(", ".join([features[i] for i in X[doc_index].nonzero()[1]])))
return X, features
X, features = fit_vectorizer(CountVectorizer(analyzer='word'))
```
Things we also might think we want:
- Filtering out stuff (what? Why?)
- Characters instead of words (why?)
- Ngrams (huh? Why?)
- ... what else?
```
X, features = fit_vectorizer(CountVectorizer(analyzer="word",
ngram_range=(1, 3),
min_df=10,
max_df=0.75, stop_words='english'))
char_vectorizer = CountVectorizer(analyzer='char',
ngram_range=(2, 6),
min_df=10, max_df=0.75)
X, features = fit_vectorizer(char_vectorizer)
import spacy
nlp = spacy.load("en_core_web_sm")
def spacy_tokenizer(tweet):
return ["{}".format(c.lemma_) for c in nlp(tweet)]
vectorizer = CountVectorizer(tokenizer=spacy_tokenizer)
X, features = fit_vectorizer(vectorizer)
```
# Dimensionality Reduction
## Toy Example (from Varun)
```
import numpy as np
import math
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale
# First generate some data
mu = np.array([0,0])
Sigma = np.array([[ 46.28249177, 26.12496001],
[ 26.12496001, 19.55457642]])
X = np.random.multivariate_normal(mu,Sigma,1000)
fig = plt.figure(figsize=[8,8])
plt.scatter(X[:,0],X[:,1])
plt.xlabel('x', fontsize=12)
plt.ylabel('y', fontsize=12)
plt.grid(axis='both')
# perform PCA
L,U=np.linalg.eig(Sigma)
# eigenvalues
print(L)
# eigenvectors
U
# first plot the eigenvectors
ah=0.1 # size of arrow head
f=1.1 # axes range
plt.figure(figsize=(8,8))
plt.subplot(111,aspect='equal')
plt.arrow(0,0,U[0,0],U[1,0],color='r',linewidth=2,head_width=ah,head_length=ah)
plt.arrow(0,0,U[0,1],U[1,1],color='r',linewidth=2,head_width=ah,head_length=ah)
plt.text(f*U[0,0],f*U[1,0],r'Eigenvector 1, $\vec{v_1}$ = %.2f $\vec{x}$ + %.2f $\vec{y}$' % (U[0,0],U[1,0]), fontsize=15)
plt.text(f*U[0,1],f*U[1,1],r'Eigenvector 2, $\vec{v_2}$ = %.2f $\vec{x}$ + %.2f $\vec{y}$' % (U[0,1],U[1,1]), fontsize=15)
plt.xlim([-f,f])
plt.ylim([-f,f])
plt.xlabel('x',fontsize=15)
plt.ylabel('y',fontsize=15)
plt.grid()
plt.show()
U[0,0]*math.sqrt(L[0]),U[1,0]*math.sqrt(L[0])
# plot the eigenvectors with the data
plt.figure(figsize=(8,8))
plt.plot(X[:,0],X[:,1],'bo',markersize=5,zorder=0,)
plt.axis('equal')
plt.grid()
plt.title('Principal Components (eigenvectors) of random data', fontsize=12)
plt.xlabel('x', fontsize=12)
plt.ylabel('y', fontsize=12)
plt.arrow(0,0,U[0,0]*math.sqrt(L[0]),U[1,0]*math.sqrt(L[0]),color='r',linewidth=2,head_width=1,head_length=1)
plt.arrow(0,0,U[0,1]*math.sqrt(L[1]),U[1,1]*math.sqrt(L[1]),color='r',linewidth=2,head_width=1,head_length=1)
plt.show()
# projecting data onto the principal components (no dimensionality reduction here)
Z = np.dot(X,U)
plt.figure(figsize=(8,8))
plt.axis('equal')
plt.grid()
plt.plot(Z[:,0],Z[:,1],'bo',markersize=5)
plt.xlabel('Principal Component 1',fontsize=15)
plt.ylabel('Principal Component 2',fontsize=15)
plt.show()
# projecting data onto the first principal component
Z = np.dot(X,U[:,1])
plt.figure(figsize=(8,8))
plt.axis('equal')
plt.grid()
plt.plot(Z,np.zeros([len(Z),]),'bo',markersize=5)
plt.xlabel('Principal Component 1',fontsize=15)
#plt.ylabel('Principal Component 2',fontsize=15)
plt.show()
```
# Applying PCA to text data: An exercise
```
from sklearn.decomposition import PCA
data = pd.read_csv("notebooks/programming_assignments/474_s22_assignments/assignment1/part2_data.csv")
data.head()
X, features = fit_vectorizer(CountVectorizer(analyzer="word",
ngram_range=(1, 3),
min_df=10,
max_df=0.75, stop_words='english'),
documents = data.title)
len(data), X.shape, len(features)
pca = PCA(n_components=10)
pca.fit(X.todense())
print(pca.explained_variance_ratio_)
print(pca.singular_values_)
components = pca.components_
df = pd.DataFrame({"feature": features, "PC1": components[0,:], "PC2": components[1,:]})
df.sort_values("PC2")
```
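Densifying a large document-term matrix with `todense()` can be memory-hungry. As a hedged alternative sketch, `TruncatedSVD` (the LSA-style decomposition) works on the sparse matrix directly:
```
# Hedged sketch: dimensionality reduction on the sparse count matrix without densifying it.
from sklearn.decomposition import TruncatedSVD

svd = TruncatedSVD(n_components=10, random_state=0)
X_reduced = svd.fit_transform(X)  # X is the sparse matrix produced by the vectorizer above
print(svd.explained_variance_ratio_)
print(X_reduced.shape)
```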
# Text Classification using Gradient Boosting Classifier and TfidfVectorizer
This Code Template is for Text Classification using a Gradient Boosting Classifier along with the text feature extraction technique TfidfVectorizer from Scikit-learn in Python.
### Required Packages
```
!pip install nltk
!pip install imblearn
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
import re, string
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords, wordnet
from nltk.stem import SnowballStemmer, WordNetLemmatizer
from nltk.stem import PorterStemmer
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')
from imblearn.over_sampling import RandomOverSampler
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import plot_confusion_matrix,classification_report
from sklearn.ensemble import GradientBoostingClassifier
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
**Target** variable for prediction.
```
target=''
```
Text column containing all the text data
```
text=""
```
## Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file using its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Data Preprocessing
Since most of the machine learning models in the scikit-learn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below has functions that remove null values if any exist, and convert string class labels in the dataset into integer classes by encoding them.
```
#convert to lowercase, strip and remove punctuations
def preprocess(text):
text = text.lower()
text = text.strip()
text = re.compile('<.*?>').sub('', text)
text = re.compile('[%s]' % re.escape(string.punctuation)).sub(' ', text)
text = re.sub('\s+', ' ', text)
text = re.sub(r'\[[0-9]*\]',' ',text)
text = re.sub(r'[^\w\s]', '', str(text).lower().strip())
text = re.sub(r'\d',' ',text)
text = re.sub(r'\s+',' ',text)
return text
# STOPWORD REMOVAL
def stopword(string):
a= [i for i in string.split() if i not in stopwords.words('english')]
return ' '.join(a)
# STEMMING
# Initialize the Stemmer
ps = PorterStemmer()
# Stem the sentence
def stemmer(string):
return ps.stem(string)
# LEMMATIZATION
# Initialize the lemmatizer
wl = WordNetLemmatizer()
# This is a helper function to map NTLK position tags
def get_wordnet_pos(tag):
if tag.startswith('J'):
return wordnet.ADJ
elif tag.startswith('V'):
return wordnet.VERB
elif tag.startswith('N'):
return wordnet.NOUN
elif tag.startswith('R'):
return wordnet.ADV
else:
return wordnet.NOUN
# Lemmatize the sentence
def lemmatizer(string):
word_pos_tags = nltk.pos_tag(word_tokenize(string)) # Get position tags
a=[wl.lemmatize(tag[0], get_wordnet_pos(tag[1])) for idx, tag in enumerate(word_pos_tags)] # Map the position tag and lemmatize the word/token
return " ".join(a)
def textlemmapreprocess(string):
return lemmatizer(stopword(preprocess(string)))
def textstempreprocess(string):
return stemmer(stopword(preprocess(string)))
def textfinalpreprocess(df, modifier = 'stemmer'):
if modifier == 'lemmatization':
return(df[text].apply(lambda x: textlemmapreprocess(x)))
elif modifier == 'stemmer':
return(df[text].apply(lambda x: textstempreprocess(x)))
def data_preprocess(df, target):
df = df.dropna(axis=0, how = 'any')
df[target] = LabelEncoder().fit_transform(df[target])
return df
df = data_preprocess(df, target)
df[text] = textfinalpreprocess(df, modifier = 'stemmer') #modifier has two options: 'stemmer', 'lemmatization'
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[text]
Y=df[target]
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
Since we have a single text feature, we can directly split our data into training and testing subsets. The first subset is used to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new, unseen data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Feature Transformation
**TfidfVectorizer** converts a collection of raw documents to a matrix of TF-IDF features.
It's equivalent to CountVectorizer followed by TfidfTransformer.
For more information on TfidfVectorizer [click here](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html)
```
vectorizer = TfidfVectorizer()
vectorizer.fit(x_train)
x_train = vectorizer.transform(x_train)
x_test = vectorizer.transform(x_test)
```
## Model
Gradient Boosting for classification.
GB builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage n_classes_ regression trees are fit on the negative gradient of the binomial or multinomial deviance loss function. Binary classification is a special case where only a single regression tree is induced.
#### Model Tuning Parameters
1.loss{‘deviance’, ‘exponential’}, default=’deviance’
>The loss function to be optimized. ‘deviance’ refers to deviance (= logistic regression) for classification with probabilistic outputs. For loss ‘exponential’ gradient boosting recovers the AdaBoost algorithm.
2.learning_ratefloat, default=0.1
>Learning rate shrinks the contribution of each tree by learning_rate. There is a trade-off between learning_rate and n_estimators.
3.n_estimatorsint, default=100
>The number of boosting stages to perform. Gradient boosting is fairly robust to over-fitting so a large number usually results in better performance.
4.subsamplefloat, default=1.0
>The fraction of samples to be used for fitting the individual base learners. If smaller than 1.0 this results in Stochastic Gradient Boosting. subsample interacts with the parameter n_estimators. Choosing subsample < 1.0 leads to a reduction of variance and an increase in bias.
5.criterion{‘friedman_mse’, ‘mse’, ‘mae’}, default=’friedman_mse’
>The function to measure the quality of a split. Supported criteria are ‘friedman_mse’ for the mean squared error with improvement score by Friedman, ‘mse’ for mean squared error, and ‘mae’ for the mean absolute error. The default value of ‘friedman_mse’ is generally the best as it can provide a better approximation in some cases.
6.min_samples_splitint or float, default=2
>The minimum number of samples required to split an internal node:
If int, then consider min_samples_split as the minimum number.
If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split.
7.min_samples_leafint or float, default=1
>The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
If int, then consider min_samples_leaf as the minimum number.
If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node.
8.min_weight_fraction_leaffloat, default=0.0
>The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
9.max_depthint, default=3
>The maximum depth of the individual regression estimators. The maximum depth limits the number of nodes in the tree. Tune this parameter for best performance; the best value depends on the interaction of the input variables.
For more information on GBClassifier and it's parameters [click here](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html)
```
model=GradientBoostingClassifier(n_estimators=500, random_state=123)
model.fit(x_train,y_train)
```
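The parameters described above can also be tuned instead of fixed by hand. Below is a minimal, hedged sketch using `GridSearchCV`; the grid values are illustrative assumptions, not recommendations, and the search can be slow on large TF-IDF matrices:
```
# Hedged sketch: tune a few GradientBoostingClassifier hyperparameters with cross-validation.
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingClassifier

param_grid = {
    "n_estimators": [100, 300],
    "learning_rate": [0.05, 0.1],
    "max_depth": [3, 5],
    "subsample": [0.8, 1.0],
}
search = GridSearchCV(
    GradientBoostingClassifier(random_state=123),
    param_grid=param_grid,
    cv=3,
    scoring="accuracy",
    n_jobs=-1,
)
# search.fit(x_train, y_train)  # x_train/y_train are the TF-IDF features and labels from above
# print(search.best_params_, search.best_score_)
```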
#### Model Accuracy
The score() method returns the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A Classification report is used to measure the quality of predictions from a classification algorithm. How many predictions are True, how many are False.
* where:
- Precision:- Accuracy of positive predictions.
- Recall:- Fraction of positives that were correctly identified.
- f1-score:- the harmonic mean of precision and recall.
- support:- Support is the number of actual occurrences of the class in the specified dataset.
```
print(classification_report(y_test,model.predict(x_test)))
```
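As a hedged toy illustration of the metrics listed above (made-up labels, not from this dataset):
```
# Hedged toy example: precision, recall and F1 on made-up binary labels.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true_toy = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred_toy = [1, 0, 1, 0, 0, 1, 1, 0]
print("precision:", precision_score(y_true_toy, y_pred_toy))  # TP / (TP + FP) = 3/4
print("recall:   ", recall_score(y_true_toy, y_pred_toy))     # TP / (TP + FN) = 3/4
print("f1:       ", f1_score(y_true_toy, y_pred_toy))         # harmonic mean of the two
```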
#### Creator: Anu Rithiga B , Github: [Profile - Iamgrootsh7](https://github.com/iamgrootsh7)
# 2 - Updated Sentiment Analysis
In the previous notebook, we got the fundamentals down for sentiment analysis. In this notebook, we'll actually get decent results.
We will use:
- packed padded sequences
- pre-trained word embeddings
- different RNN architecture
- bidirectional RNN
- multi-layer RNN
- regularization
- a different optimizer
This will allow us to achieve ~84% test accuracy.
## Preparing Data
As before, we'll set the seed, define the `Fields` and get the train/valid/test splits.
We'll be using *packed padded sequences*, which will make our RNN only process the non-padded elements of our sequence, and for any padded element the `output` will be a zero tensor. To use packed padded sequences, we have to tell the RNN how long the actual sequences are. We do this by setting `include_lengths = True` for our `TEXT` field. This will cause `batch.text` to now be a tuple with the first element being our sentence (a numericalized tensor that has been padded) and the second element being the actual lengths of our sentences.
```
import torch
from torchtext import data
from torchtext import datasets
SEED = 1234
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
import spacy
#import en_core_web_sm
# spacy.load('en') # IOError: [E050] Can't find model 'en'.
spacy_en = spacy.load('en_core_web_sm')
def tokenizer(text): # create a tokenizer function
return [tok.text for tok in spacy_en.tokenizer(text)]
TEXT = data.Field(sequential=True, tokenize=tokenizer, lower=True, include_lengths=True)
LABEL = data.LabelField(dtype = torch.float)
print(TEXT)
print(LABEL)
```
We then load the IMDb dataset.
```
from torchtext import datasets
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
```
Then create the validation set from our training set.
```
import random
train_data, valid_data = train_data.split(random_state = random.seed(SEED))
```
Next is the use of pre-trained word embeddings. Now, instead of having our word embeddings initialized randomly, they are initialized with these pre-trained vectors.
We get these vectors simply by specifying which vectors we want and passing it as an argument to `build_vocab`. `TorchText` handles downloading the vectors and associating them with the correct words in our vocabulary.
Here, we'll be using the `"glove.6B.100d"` vectors. `glove` is the algorithm used to calculate the vectors, go [here](https://nlp.stanford.edu/projects/glove/) for more. `6B` indicates these vectors were trained on 6 billion tokens and `100d` indicates these vectors are 100-dimensional.
You can see the other available vectors [here](https://github.com/pytorch/text/blob/master/torchtext/vocab.py#L113).
The theory is that these pre-trained vectors already have words with similar semantic meaning close together in vector space, e.g. "terrible", "awful", "dreadful" are nearby. This gives our embedding layer a good initialization as it does not have to learn these relations from scratch.
**Note**: these vectors are about 862MB, so watch out if you have a limited internet connection.
By default, TorchText will initialize words in your vocabulary but not in your pre-trained embeddings to zero. We don't want this, and instead initialize them randomly by setting `unk_init` to `torch.Tensor.normal_`. This will now initialize those words via a Gaussian distribution.
```
MAX_VOCAB_SIZE = 25_000
TEXT.build_vocab(train_data,
max_size = MAX_VOCAB_SIZE,
vectors = "glove.6B.100d",
unk_init = torch.Tensor.normal_)
LABEL.build_vocab(train_data)
```
As before, we create the iterators, placing the tensors on the GPU if one is available.
Another thing about packed padded sequences is that all of the tensors within a batch need to be sorted by their lengths. This is handled in the iterator by setting `sort_within_batch = True`.
```
BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
sort_within_batch = True,
device = device)
```
## Build the Model
The model features the most drastic changes.
### Different RNN Architecture
We'll be using a different RNN architecture called a Long Short-Term Memory (LSTM). Why is an LSTM better than a standard RNN? Standard RNNs suffer from the [vanishing gradient problem](https://en.wikipedia.org/wiki/Vanishing_gradient_problem). LSTMs overcome this by having an extra recurrent state called a _cell_, $c$ - which can be thought of as the "memory" of the LSTM - and using multiple _gates_ which control the flow of information into and out of the memory. For more information, go [here](https://colah.github.io/posts/2015-08-Understanding-LSTMs/). We can simply think of the LSTM as a function of $x_t$, $h_{t-1}$ and $c_{t-1}$, instead of just $x_t$ and $h_{t-1}$.
$$(h_t, c_t) = \text{LSTM}(x_t, h_{t-1}, c_{t-1})$$
Thus, the model using an LSTM looks something like (with the embedding layers omitted):

The initial cell state, $c_0$, like the initial hidden state is initialized to a tensor of all zeros. The sentiment prediction is still, however, only made using the final hidden state, not the final cell state, i.e. $\hat{y}=f(h_T)$.
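As a minimal standalone sketch (separate from the model we build below, with illustrative sizes), this is how `nn.LSTM` exposes the per-step outputs and the final hidden and cell states:
```
# Hedged sketch: nn.LSTM returns the outputs for every time step plus the final (hidden, cell) states.
import torch
import torch.nn as nn

toy_lstm = nn.LSTM(input_size=100, hidden_size=256)
toy_input = torch.randn(7, 64, 100)  # [sent len, batch size, emb dim]
output, (hidden, cell) = toy_lstm(toy_input)
print(output.shape)  # torch.Size([7, 64, 256])
print(hidden.shape)  # torch.Size([1, 64, 256])
print(cell.shape)    # torch.Size([1, 64, 256])
```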
### Bidirectional RNN
The concept behind a bidirectional RNN is simple. As well as having an RNN processing the words in the sentence from the first to the last (a forward RNN), we have a second RNN processing the words in the sentence from the **last to the first** (a backward RNN). At time step $t$, the forward RNN is processing word $x_t$, and the backward RNN is processing word $x_{T-t+1}$.
In PyTorch, the hidden state (and cell state) tensors returned by the forward and backward RNNs are stacked on top of each other in a single tensor.
We make our sentiment prediction using a concatenation of the last hidden state from the forward RNN (obtained from final word of the sentence), $h_T^\rightarrow$, and the last hidden state from the backward RNN (obtained from the first word of the sentence), $h_T^\leftarrow$, i.e. $\hat{y}=f(h_T^\rightarrow, h_T^\leftarrow)$
The image below shows a bi-directional RNN, with the forward RNN in orange, the backward RNN in green and the linear layer in silver.

### Multi-layer RNN
Multi-layer RNNs (also called *deep RNNs*) are another simple concept. The idea is that we add additional RNNs on top of the initial standard RNN, where each RNN added is another *layer*. The hidden state output by the first (bottom) RNN at time-step $t$ will be the input to the RNN above it at time step $t$. The prediction is then made from the final hidden state of the final (highest) layer.
The image below shows a multi-layer unidirectional RNN, where the layer number is given as a superscript. Also note that each layer needs their own initial hidden state, $h_0^L$.

### Regularization
Although we've added improvements to our model, each one adds additional parameters. Without going into too much detail about overfitting, the more parameters you have in your model, the higher the probability that your model will overfit (memorize the training data, causing a low training error but high validation/testing error, i.e. poor generalization to new, unseen examples). To combat this, we use regularization. More specifically, we use a method of regularization called *dropout*. Dropout works by randomly *dropping out* (setting to 0) neurons in a layer during a forward pass. The probability that each neuron is dropped out is set by a hyperparameter and each neuron with dropout applied is considered independently. One theory about why dropout works is that a model with parameters dropped out can be seen as a "weaker" (less parameters) model. The predictions from all these "weaker" models (one for each forward pass) get averaged together within the parameters of the model. Thus, your one model can be thought of as an ensemble of weaker models, none of which are over-parameterized and thus should not overfit.
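A tiny hedged illustration of dropout's behavior in training vs. evaluation mode (this is why we call `model.train()` and `model.eval()` later):
```
# Hedged sketch: dropout zeroes roughly p of the activations in train mode and is a no-op in eval mode.
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(10)
drop.train()
print(drop(x))  # roughly half the entries zeroed, the rest scaled by 1/(1-p) = 2
drop.eval()
print(drop(x))  # identical to the input
```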
### Implementation Details
Another addition to this model is that we are not going to learn the embedding for the `<pad>` token. This is because we want to explicitly tell our model that padding tokens are irrelevant to determining the sentiment of a sentence. This means the embedding for the pad token will remain at what it is initialized to (we initialize it to all zeros later). We do this by passing the index of our pad token as the `padding_idx` argument to the `nn.Embedding` layer.
To use an LSTM instead of the standard RNN, we use `nn.LSTM` instead of `nn.RNN`. Also, note that the LSTM returns the `output` and a tuple of the final `hidden` state and the final `cell` state, whereas the standard RNN only returned the `output` and final `hidden` state.
As the final hidden state of our LSTM has both a forward and a backward component, which will be concatenated together, the size of the input to the `nn.Linear` layer is twice that of the hidden dimension size.
Implementing bidirectionality and adding additional layers are done by passing values for the `num_layers` and `bidirectional` arguments for the RNN/LSTM.
Dropout is implemented by initializing an `nn.Dropout` layer (the argument is the probability of dropping out each neuron) and using it within the `forward` method after each layer we want to apply dropout to. **Note**: never use dropout on the input or output layers (`text` or `fc` in this case), you only ever want to use dropout on intermediate layers. The LSTM has a `dropout` argument which adds dropout on the connections between hidden states in one layer to hidden states in the next layer.
As we are passing the lengths of our sentences to be able to use packed padded sequences, we have to add a second argument, `text_lengths`, to `forward`.
Before we pass our embeddings to the RNN, we need to pack them, which we do with `nn.utils.rnn.packed_padded_sequence`. This will cause our RNN to only process the non-padded elements of our sequence. The RNN will then return `packed_output` (a packed sequence) as well as the `hidden` and `cell` states (both of which are tensors). Without packed padded sequences, `hidden` and `cell` are tensors from the last element in the sequence, which will most probably be a pad token, however when using packed padded sequences they are both from the last non-padded element in the sequence.
We then unpack the output sequence, with `nn.utils.rnn.pad_packed_sequence`, to transform it from a packed sequence to a tensor. The elements of `output` from padding tokens will be zero tensors (tensors where every element is zero). Usually, we only have to unpack output if we are going to use it later on in the model. Although we aren't in this case, we still unpack the sequence just to show how it is done.
The final hidden state, `hidden`, has a shape of _**[num layers * num directions, batch size, hid dim]**_. These are ordered: **[forward_layer_0, backward_layer_0, forward_layer_1, backward_layer_1, ..., forward_layer_n, backward_layer_n]**. As we want the final (top) layer forward and backward hidden states, we get the top two hidden layers from the first dimension, `hidden[-2,:,:]` and `hidden[-1,:,:]`, and concatenate them together before passing them to the linear layer (after applying dropout).
```
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers,
bidirectional, dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.rnn = nn.LSTM(embedding_dim,
hidden_dim,
num_layers=n_layers,
bidirectional=bidirectional,
dropout=dropout)
self.fc = nn.Linear(hidden_dim * 2, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text, text_lengths):
#text = [sent len, batch size]
embedded = self.dropout(self.embedding(text))
#embedded = [sent len, batch size, emb dim]
#pack sequence
packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths)
packed_output, (hidden, cell) = self.rnn(packed_embedded)
#unpack sequence
output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output)
#output = [sent len, batch size, hid dim * num directions]
#output over padding tokens are zero tensors
#hidden = [num layers * num directions, batch size, hid dim]
#cell = [num layers * num directions, batch size, hid dim]
#concat the final forward (hidden[-2,:,:]) and backward (hidden[-1,:,:]) hidden layers
#and apply dropout
hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1))
#hidden = [batch size, hid dim * num directions]
return self.fc(hidden)
```
Like before, we'll create an instance of our RNN class, with the new parameters and arguments for the number of layers, bidirectionality and dropout probability.
To ensure the pre-trained vectors can be loaded into the model, the `EMBEDDING_DIM` must be equal to that of the pre-trained GloVe vectors loaded earlier.
We get our pad token index from the vocabulary, getting the actual string representing the pad token from the field's `pad_token` attribute, which is `<pad>` by default.
```
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
HIDDEN_DIM = 256
OUTPUT_DIM = 1
N_LAYERS = 2
BIDIRECTIONAL = True
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model = RNN(INPUT_DIM,
EMBEDDING_DIM,
HIDDEN_DIM,
OUTPUT_DIM,
N_LAYERS,
BIDIRECTIONAL,
DROPOUT,
PAD_IDX)
```
We'll print out the number of parameters in our model.
Notice how we have almost twice as many parameters as before!
```
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
```
The final addition is copying the pre-trained word embeddings we loaded earlier into the `embedding` layer of our model.
We retrieve the embeddings from the field's vocab, and check they're the correct size, _**[vocab size, embedding dim]**_
```
pretrained_embeddings = TEXT.vocab.vectors
print(pretrained_embeddings.shape)
```
We then replace the initial weights of the `embedding` layer with the pre-trained embeddings.
**Note**: this should always be done on the `weight.data` and not the `weight`!
```
model.embedding.weight.data.copy_(pretrained_embeddings)
```
As our `<unk>` and `<pad>` tokens aren't in the pre-trained vocabulary they have been initialized using `unk_init` (an $\mathcal{N}(0,1)$ distribution) when building our vocab. It is preferable to initialize them both to all zeros to explicitly tell our model that, initially, they are irrelevant for determining sentiment.
We do this by manually setting their row in the embedding weights matrix to zeros. We get their row by finding the index of the tokens, which we have already done for the padding index.
**Note**: like initializing the embeddings, this should be done on the `weight.data` and not the `weight`!
```
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
print(model.embedding.weight.data)
```
We can now see the first two rows of the embedding weights matrix have been set to zeros. As we passed the index of the pad token to the `padding_idx` of the embedding layer it will remain zeros throughout training, however the `<unk>` token embedding will be learned.
## Train the Model
Now to training the model.
The only change we'll make here is changing the optimizer from `SGD` to `Adam`. SGD updates all parameters with the same learning rate and choosing this learning rate can be tricky. `Adam` adapts the learning rate for each parameter, giving parameters that are updated more frequently lower learning rates and parameters that are updated infrequently higher learning rates. More information about `Adam` (and other optimizers) can be found [here](http://ruder.io/optimizing-gradient-descent/index.html).
To change `SGD` to `Adam`, we simply change `optim.SGD` to `optim.Adam`, also note how we do not have to provide an initial learning rate for Adam as PyTorch specifies a sensible default initial learning rate.
```
import torch.optim as optim
optimizer = optim.Adam(model.parameters())
```
The rest of the steps for training the model are unchanged.
We define the criterion and place the model and criterion on the GPU (if available)...
```
criterion = nn.BCEWithLogitsLoss()
model = model.to(device)
criterion = criterion.to(device)
```
We implement the function to calculate accuracy...
```
def binary_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
#round predictions to the closest integer
rounded_preds = torch.round(torch.sigmoid(preds))
correct = (rounded_preds == y).float() #convert into float for division
acc = correct.sum() / len(correct)
return acc
```
We define a function for training our model.
As we have set `include_lengths = True`, our `batch.text` is now a tuple with the first element being the numericalized tensor and the second element being the actual lengths of each sequence. We separate these into their own variables, `text` and `text_lengths`, before passing them to the model.
**Note**: as we are now using dropout, we must remember to use `model.train()` to ensure the dropout is "turned on" while training.
```
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
text, text_lengths = batch.text
predictions = model(text, text_lengths).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
```
Then we define a function for testing our model, again remembering to separate `batch.text`.
**Note**: as we are now using dropout, we must remember to use `model.eval()` to ensure the dropout is "turned off" while evaluating.
```
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
text, text_lengths = batch.text
predictions = model(text, text_lengths).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
```
And also create a nice function to tell us how long our epochs are taking.
```
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
```
Finally, we train our model...
```
N_EPOCHS = 5
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut2-model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
```
...and get our new and vastly improved test accuracy!
```
model.load_state_dict(torch.load('tut2-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
```
## User Input
We can now use our model to predict the sentiment of any sentence we give it. As it has been trained on movie reviews, the sentences provided should also be movie reviews.
When using a model for inference it should always be in evaluation mode. If this tutorial is followed step-by-step then it should already be in evaluation mode (from doing `evaluate` on the test set), however we explicitly set it to avoid any risk.
Our `predict_sentiment` function does a few things:
- sets the model to evaluation mode
- tokenizes the sentence, i.e. splits it from a raw string into a list of tokens
- indexes the tokens by converting them into their integer representation from our vocabulary
- gets the length of our sequence
- converts the indexes, which are a Python list into a PyTorch tensor
- add a batch dimension by `unsqueeze`ing
- converts the length into a tensor
- squashes the output prediction to a real number between 0 and 1 with the `sigmoid` function
- converts the tensor holding a single value into a Python number with the `item()` method
We are expecting reviews with a negative sentiment to return a value close to 0 and positive reviews to return a value close to 1.
```
import spacy
nlp = spacy.load('en_core_web_sm')  # the 'en' shortcut model is not available (see the earlier note)
def predict_sentiment(model, sentence):
model.eval()
tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
length = [len(indexed)]
tensor = torch.LongTensor(indexed).to(device)
tensor = tensor.unsqueeze(1)
length_tensor = torch.LongTensor(length)
prediction = torch.sigmoid(model(tensor, length_tensor))
return prediction.item()
```
An example negative review...
```
predict_sentiment(model, "This film is terrible")
```
An example positive review...
```
predict_sentiment(model, "This film is great")
```
## Next Steps
We've now built a decent sentiment analysis model for movie reviews! In the next notebook we'll implement a model that gets comparable accuracy with far fewer parameters and trains much, much faster.
# From the solar wind to the ground
> Abstract: We demonstrate a basic analysis of a geomagnetic storm using hapiclient & viresclient to access data from the solar wind (OMNI IMF), Low Earth Orbit (Swarm-derived auroral electrojet estimates), and the ground (INTERMAGNET observatory magnetic measurements).
## Packages to use
- [`hapiclient`](https://github.com/hapi-server/client-python) to access solar wind data from [OMNI](https://omniweb.gsfc.nasa.gov/) (alternatively we could use [`pysat`](https://pysat.readthedocs.io/en/latest/quickstart.html))
- For more examples with hapiclient, take a look at [the demonstration notebooks](https://github.com/hapi-server/client-python-notebooks)
- [`viresclient`](https://github.com/ESA-VirES/VirES-Python-Client/) to access AEJs from Swarm, and B-field from ground observatories
- [`xarray`](https://xarray.pydata.org/) and [`matplotlib`](https://matplotlib.org/) for data wrangling and plotting
- See the [xarray tutorial website](https://xarray-contrib.github.io/xarray-tutorial/) to learn more
```
%load_ext watermark
%watermark -i -v -p viresclient,hapiclient,pandas,xarray,matplotlib
from copy import deepcopy
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
from viresclient import SwarmRequest
from hapiclient import hapi, hapitime2datetime
```
## Time selection
Let's choose an interesting time period to study - the ["St. Patrick's day storm" of 17th March 2015](https://doi.org/10.1186/s40623-016-0525-y). You can look at the wider context of this event using the interactive [Space Weather Data Portal from the University of Colorado](https://lasp.colorado.edu/space-weather-portal/data/display?active-range=%5B1425967200000,1426831200000%5D&outer-range=%5B1262552105447,1559362748308%5D&plots=%5B%7B%22datasets%22:%7B%22sdo_eve_diodes_l2%22:%5B%22diode171%22%5D%7D,%22options%22:%7B%7D%7D,%7B%22datasets%22:%7B%22sdo_aia_0094_0335_0193_image_files%22:%5B%22url%22%5D%7D,%22options%22:%7B%7D%7D,%7B%22datasets%22:%7B%22ac_h0_mfi%22:%5B%22Magnitude%22%5D%7D,%22options%22:%7B%7D%7D,%7B%22datasets%22:%7B%22ac_h1_epm%22:%5B%22P7%22,%22P8%22%5D%7D,%22options%22:%7B%7D%7D,%7B%22datasets%22:%7B%22ac_h0_swe%22:%5B%22Vp%22%5D%7D,%22options%22:%7B%7D%7D,%7B%22datasets%22:%7B%22gracea_density%22:%5B%22neutral_density%22%5D%7D,%22options%22:%7B%7D%7D,%7B%22datasets%22:%7B%22usgs_geomag_brw_definitive%22:%5B%22X%22%5D,%22usgs_geomag_frn_definitive%22:%5B%22X%22%5D,%22usgs_geomag_cmo_definitive%22:%5B%22X%22%5D%7D,%22options%22:%7B%7D%7D,%7B%22datasets%22:%7B%22usgs_geomag_brw_definitive%22:%5B%22Y%22%5D,%22usgs_geomag_frn_definitive%22:%5B%22Y%22%5D,%22usgs_geomag_cmo_definitive%22:%5B%22Y%22%5D%7D,%22options%22:%7B%7D%7D,%7B%22datasets%22:%7B%22usgs_geomag_brw_definitive%22:%5B%22Z%22%5D,%22usgs_geomag_frn_definitive%22:%5B%22Z%22%5D,%22usgs_geomag_cmo_definitive%22:%5B%22Z%22%5D%7D,%22options%22:%7B%7D%7D,%7B%22datasets%22:%7B%22swt_bfield_maps%22:%5B%22url%22%5D%7D,%22options%22:%7B%7D%7D,%7B%22datasets%22:%7B%22swt_efield_maps%22:%5B%22url%22%5D%7D,%22options%22:%7B%7D%7D,%7B%22datasets%22:%7B%22swt_voltage_maps%22:%5B%22url%22%5D%7D,%22options%22:%7B%7D%7D%5D)
We will use the same time window to fetch data from the different sources:
```
START_TIME = '2015-03-15T00:00:00Z'
END_TIME = '2015-03-20T00:00:00Z'
```
## Solar wind data (OMNI)
HAPI is an access protocol supported by a wide array of heliophysics datasets. We can use the Python package "hapiclient" to retrieve data from HAPI servers. In this case we will access the [OMNI HRO2 dataset](https://omniweb.gsfc.nasa.gov/html/HROdocum.html) which provides consolidated solar wind data, and then we will show how we can load these data into pandas and xarray objects.
> OMNI Combined, Definitive 1-minute IMF and Definitive Plasma Data Time-Shifted to the Nose of the Earth's Bow Shock, plus Magnetic Indices - J.H. King, N. Papatashvilli (AdnetSystems, NASA GSFC)
To generate code snippets to use, and to see what data are available:
http://hapi-server.org/servers/#server=CDAWeb&dataset=OMNI_HRO2_1MIN¶meters=flow_speed&start=2000-01-01T00:00:00Z&stop=2000-02-01T00:00:00Z&return=script&format=python
Here we will access five-minute-resolution measurements of the Interplanetary Magnetic Field (IMF) vector and the bulk flow speed of the solar wind:
```
def fetch_omni_data(start, stop):
server = 'https://cdaweb.gsfc.nasa.gov/hapi'
dataset = 'OMNI_HRO2_5MIN'
parameters = 'BX_GSE,BY_GSM,BZ_GSM,flow_speed';
data, meta = hapi(server, dataset, parameters, start, stop)
return data, meta
data, meta = fetch_omni_data(START_TIME, END_TIME)
```
Data are automatically loaded as a [NumPy structured array](https://numpy.org/doc/stable/user/basics.rec.html) and metadata as a dictionary:
```
data
meta
```
We are now able to extract an array for a particular value like `data["BZ_GSM"]`, and use the metadata to get full descriptions and units for the chosen parameter.
The metadata sometimes contains fill values used during data gaps (e.g. the 9999... values appearing above). Let's use those to replace the gaps with NaN values:
```
def fill2nan(hapidata_in, hapimeta):
"""Replace bad values (fill values given in metadata) with NaN"""
hapidata = deepcopy(hapidata_in)
# HAPI returns metadata for parameters as a list of dictionaries
# - Loop through them
for metavar in hapimeta['parameters']:
varname = metavar['name']
fillvalstr = metavar['fill']
if fillvalstr is None:
continue
vardata = hapidata[varname]
mask = vardata==float(fillvalstr)
nbad = np.count_nonzero(mask)
        print('{}: {} fill values replaced with NaN'.format(varname, nbad))
vardata[mask] = np.nan
return hapidata, hapimeta
data, meta = fill2nan(data,meta)
data
```
We can load the data into a pandas DataFrame to more readily use for analysis:
```
def to_pandas(hapidata):
df = pd.DataFrame(
columns=hapidata.dtype.names,
data=hapidata,
).set_index("Time")
# Convert from hapitime to Python datetime
df.index = hapitime2datetime(df.index.values.astype(str))
# df.index = pd.DatetimeIndex(df.index.values.astype(str))
# Remove timezone awareness
df.index = df.index.tz_convert("UTC").tz_convert(None)
# Rename to Timestamp to match viresclient
df.index.name = "Timestamp"
return df
df = to_pandas(data)
df
```
How can we get the extra information like the units from the metadata? Let's construct dictionaries, `units` and `description`, that allow easier access to these:
```
def get_units_description(meta):
units = {}
description = {}
for paramdict in meta["parameters"]:
units[paramdict["name"]] = paramdict.get("units", None)
description[paramdict["name"]] = paramdict.get("description", None)
return units, description
units, description = get_units_description(meta)
units, description
```
The [`xarray.Dataset`](http://xarray.pydata.org/en/stable/data-structures.html#dataset) object has advantages for handling multi-dimensional data and for attaching of metadata like units. Let's convert the data to an `xarray.Dataset`:
```
def to_xarray(hapidata, hapimeta):
# Here we will conveniently re-use the pandas function we just built,
# and use the pandas API to build the xarray Dataset.
# NB: if performance is important, it's better to build the Dataset directly
ds = to_pandas(hapidata).to_xarray()
units, description = get_units_description(hapimeta)
for param in ds:
ds[param].attrs = {
"units": units[param],
"description": description[param]
}
return ds
ds_sw = to_xarray(data, meta)
ds_sw
```
Now let's plot these data:
```
def plot_solar_wind(ds_sw):
fig, axes = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(15, 5))
for IMF_var in ["BX_GSE", "BY_GSM", "BZ_GSM"]:
ds_sw[IMF_var].plot.line(
x="Timestamp", linewidth=1, alpha=0.8, ax=axes[0], label=IMF_var
)
axes[0].legend()
axes[0].set_ylabel("IMF\n[nT]")
axes[0].set_xlabel("")
ds_sw["flow_speed"].plot.line(
x="Timestamp", linewidth=1, alpha=0.8, ax=axes[1]
)
axes[1].set_ylabel("flow_speed\n[km/s]")
axes[1].set_xlabel("")
axes[0].grid()
axes[1].grid()
fig.suptitle("Interplanetary Magnetic Field and Solar Wind flow")
return fig, axes
fig_sw, axes_sw = plot_solar_wind(ds_sw)
```
## Auroral electrojets as measured by Swarm
Since spacecraft move, it is difficult to extract a simple time series that can be easily tracked. From the complex Swarm product portfolio, we will pick a particular derived parameter: the peak auroral electrojet intensities derived from each pass over the current system. This signal tracks reasonably well from one orbit to the next (when separated into four orbital segments - accounting for two passes over the auroral oval in different local time sectors, and over the northern and southern hemispheres).
To keep things a bit simpler, we will retrieve data only from Swarm Alpha over the Northern Hemisphere. The auroral electrojet peaks and boundaries for Swarm Alpha are contained within the product named `SW_OPER_AEJAPBL_2F`. Here is how we can access these data:
```
def fetch_Swarm_AEJ(start_time, end_time):
request = SwarmRequest()
# Meaning of AEJAPBL: (AEJ) Auroral electrojets
# (A) Swarm Alpha
# (PBL) Peaks and boundaries from LC method
# J_QD is the current intensity along QD-latitude contours
# QDOrbitDirection is a flag (1, -1) marking the direction of the
# satellite (ascending, descending) relative to the QD pole
# MLT is magnetic local time, evaluated according to the
# quasi-dipole magnetic longitude and the sub-solar point
# (see doi.org/10.1007/s11214-016-0275-y)
request.set_collection("SW_OPER_AEJAPBL_2F")
request.set_products(
measurements=["J_QD", "PointType"],
auxiliaries=["QDOrbitDirection", "MLT"]
)
# PointType of 0 refers to WEJ (westward electrojet) peaks
# PointType of 1 refers to EEJ (eastward electrojet) peaks
# See https://nbviewer.jupyter.org/github/pacesm/jupyter_notebooks/blob/master/AEBS/AEBS_00_data_access.ipynb#AEJxPBL-product
request.set_range_filter("Latitude", 0, 90) # Northern hemisphere
request.set_range_filter("PointType", 0, 1) # Extract only peaks
    data = request.get_between(start_time, end_time, asynchronous=False, show_progress=False)
ds_AEJ_peaks = data.as_xarray()
return ds_AEJ_peaks
ds_AEJ_peaks = fetch_Swarm_AEJ(START_TIME, END_TIME)
ds_AEJ_peaks
```
Now we need some complex logic to plot the eastward and westward electrojet intensities, separated for each local time sector:
```
def plot_AEJ_envelope(ds_AEJ_peaks):
# Masks to identify which sector the satellite is in
# and which current type (WEJ/EEJ) is given
mask_asc = ds_AEJ_peaks["QDOrbitDirection"] == 1
mask_desc = ds_AEJ_peaks["QDOrbitDirection"] == -1
mask_WEJ = ds_AEJ_peaks["PointType"] == 0
mask_EEJ = ds_AEJ_peaks["PointType"] == 1
fig, axes = plt.subplots(nrows=2, sharex=True, sharey=True, figsize=(15, 5))
# Select and plot from the ascending orbital segments
# on axes 0
# Eastward electrojet:
_ds = ds_AEJ_peaks.where(mask_EEJ & mask_asc, drop=True)
_ds["J_QD"].plot.line(x="Timestamp", ax=axes[0], label="EEJ")
# Westward electrojet:
_ds = ds_AEJ_peaks.where(mask_WEJ & mask_asc, drop=True)
_ds["J_QD"].plot.line(x="Timestamp", ax=axes[0], label="WEJ")
# Identify approximate MLT of sector
_ds = ds_AEJ_peaks.where(mask_asc, drop=True)
mlt = round(float(_ds["MLT"].mean()))
axes[0].set_ylabel(axes[0].get_ylabel() + f"\nMLT: ~{mlt}")
# ... and for descending segments
# on axes 1
# Eastward electrojet:
_ds = ds_AEJ_peaks.where(mask_EEJ & mask_desc, drop=True)
_ds["J_QD"].plot.line(x="Timestamp", ax=axes[1], label="EEJ")
# Westward electrojet:
_ds = ds_AEJ_peaks.where(mask_WEJ & mask_desc, drop=True)
_ds["J_QD"].plot.line(x="Timestamp", ax=axes[1], label="WEJ")
# Identify approximate MLT of sector
_ds = ds_AEJ_peaks.where(mask_desc, drop=True)
mlt = round(float(_ds["MLT"].mean()))
axes[1].set_ylabel(axes[1].get_ylabel() + f"\nMLT: ~{mlt}")
axes[1].legend()
axes[0].set_xlabel("")
axes[1].set_xlabel("")
axes[0].grid()
axes[1].grid()
fig.suptitle("Auroral electrojet envelope measured by Swarm Alpha")
return fig, axes
fig_aej, axes_aej = plot_AEJ_envelope(ds_AEJ_peaks)
```
This shows us the envelope of the auroral electrojet system - how the strength of the Eastward (EEJ) and Westward (WEJ) electrojets evolve over time - but only over the two local time sectors that the spacecraft is moving through. The strengths of the electric current along the contours of Quasi-Dipole latitude, `J_QD`, have been calculated.
### Peak ground magnetic disturbances below satellite tracks
Swarm also provides predictions of the location and strength of the peak disturbance on the ground (along the satellite ground-track) caused by the auroral electrojets. Note that this is from the AEJ_PBS (using the SECS method) collection rather than the AEJ_PBL (using the LC method) used above.
```
def fetch_Swarm_AEJ_disturbances(start_time, end_time):
request = SwarmRequest()
request.set_collection("SW_OPER_AEJAPBS_2F:GroundMagneticDisturbance")
request.set_products(
measurements=["B_NE"],
auxiliaries=["OrbitNumber", "QDOrbitDirection"]
)
request.set_range_filter("Latitude", 0, 90) # Northern hemisphere only
data = request.get_between(start_time, end_time, asynchronous=False, show_progress=False)
ds = data.as_xarray()
# Add vector magnitude
ds["B_Total"] = "Timestamp", np.sqrt((ds["B_NE"].data**2).sum(axis=1))
ds["B_Total"].attrs["units"] = "nT"
return ds
ds_AEJ_disturbances = fetch_Swarm_AEJ_disturbances(START_TIME, END_TIME)
ds_AEJ_disturbances
```
This dataset contains two samples per pass over each half of the auroral oval, estimating the ground location of the peak magnetic disturbance due to each of the EEJ and WEJ currents, and the associated strength (`B_NE`) of the North and East components of the disturbance. Let's look at an approximation of the overall strongest ground disturbances, by inspecting the maximum strength found over 90-minute windows (i.e. approximately each orbit):
```
def plot_Swarm_ground_disturbance(ds_AEJ_disturbances):
fig, ax = plt.subplots(figsize=(15, 3))
ds_resample = ds_AEJ_disturbances.resample({'Timestamp':'90Min'}).max()
ds_resample["B_Total"].plot.line(x="Timestamp", ax=ax)
fig.suptitle("Peak ground disturbance estimated from Swarm Alpha")
ax.set_ylabel("Magnetic disturbance\n[nT]")
ax.set_xlabel("")
ax.grid()
return fig, ax
fig_Sw_ground, ax_Sw_ground = plot_Swarm_ground_disturbance(ds_AEJ_disturbances)
```
## Ground observatory data (INTERMAGNET)
> We acknowledge usage of INTERMAGNET data
> See <https://intermagnet.github.io/data_conditions.html> for more
As well as access to Swarm data, VirES also provides access to ground observatory data from INTERMAGNET. We can fetch data from the minute resolution dataset (`SW_OPER_AUX_OBSM2_`), specifying desired observatories according to their [3-letter IAGA codes](https://www.intermagnet.org/imos/imomap-eng.php). These data have been rotated from the geodetic reference frame to the geocentric frame (NEC).
We'll select three observatories in Sweden: Abisko (ABK), Lycksele (LYC) and Uppsala (UPS), which form a chain across about 10 degrees of latitude along a similar longitude.
```
def fetch_ground_obs(IAGA_codes, start_time, end_time):
request = SwarmRequest()
request.set_collection(*[f"SW_OPER_AUX_OBSM2_:{c}" for c in IAGA_codes], verbose=False)
request.set_products(
measurements=["B_NEC", "IAGA_code"],
)
data = request.get_between(start_time, end_time, asynchronous=False, show_progress=False)
ds = data.as_xarray(reshape=True)
return ds
ds_ground_obs = fetch_ground_obs(["ABK", "LYC", "UPS"], START_TIME, END_TIME)
ds_ground_obs
```
By specifying `reshape=True` when loading the xarray object, a multi-dimensional dataset is formed with a new `IAGA_code` axis. Here we show the three vector components from each observatory:
```
ds_ground_obs["B_NEC"].plot.line(x="Timestamp", row="NEC", col="IAGA_code", sharey=False);
```
Let's calculate $|\frac{dB}{dt}|$ and plot that instead. This is a good indication of the GIC risk, as a more rapidly changing magnetic field will induce a larger electric field in the ground.
```
def plot_groundobs_dbdt(ds_ground_obs):
ds_ground_obs = ds_ground_obs.assign(
dBdt=(ds_ground_obs["B_NEC"].diff("Timestamp")**2).sum(dim="NEC").pipe(np.sqrt)
)
fig, axes = plt.subplots(nrows=3, figsize=(15, 7), sharey=True, sharex=True)
for i, obs in enumerate(ds_ground_obs["IAGA_code"].values):
_ds = ds_ground_obs.sel(IAGA_code=obs)
lat = np.round(float(_ds["Latitude"]), 1)
lon = np.round(float(_ds["Longitude"]), 1)
label = f"{obs} (Lat {lat}, Lon {lon})"
ds_ground_obs["dBdt"].sel(IAGA_code=obs).plot.line(x="Timestamp", ax=axes[i], label=label)
axes[i].set_title("")
axes[i].legend()
axes[i].set_xlabel("")
axes[i].set_ylabel("dB/dt\n[nT / min]")
axes[i].grid()
fig.tight_layout()
return fig, axes
fig_grdbdt, axes_grdbdt = plot_groundobs_dbdt(ds_ground_obs)
```
# Airbnb price prediction
## Data exploration
```
import numpy as np
import pandas as pd
import seaborn as sns
import os
import matplotlib.pyplot as plt
sns.set_style(style= "darkgrid")
```
### Load data
```
dir_seatle = "data/Seatle"
dir_boston = "data/Boston/"
seatle_data = pd.read_csv(os.path.join(dir_seatle, "listings.csv"))
boston_data = pd.read_csv(os.path.join(dir_boston, "listings.csv"))
seatle_data.head()
seatle_data.columns
boston_data.columns
seatle_data.describe()
boston_data.describe()
```
## Data cleaning
First of all, the price column is not in the right format; we need to convert it to float.
```
# Convert columns with dollar symbol ($) and , symbol (,) to float
seatle_data["price"] = seatle_data["price"].str.replace(',', '').str.replace('$', '').astype(float)
boston_data["price"] = boston_data["price"].str.replace(',', '').str.replace('$', '').astype(float)
seatle_data.dropna(subset=["price"], inplace=True)
boston_data.dropna(subset=["price"], inplace=True)
```
Remove outliers
```
# remove instance with more than 500 dollars
seatle_data = seatle_data[seatle_data['price'] < 500]
boston_data = boston_data[boston_data['price'] < 500]
print("Seattle listings: {}".format(len(seatle_data)))
print("Boston listings: {}".format(len(boston_data)))
#make a list of wanted columns
cols_to_keep = [
'id', 'space', 'neighborhood_overview','host_since', 'host_response_time', 'host_response_rate',
'host_is_superhost', 'neighbourhood','zipcode','latitude', 'longitude',
'is_location_exact', 'property_type', 'room_type', 'accommodates',
'bathrooms', 'bedrooms', 'beds', 'bed_type', 'amenities',
'price', 'extra_people', 'minimum_nights','maximum_nights',
'availability_30', 'availability_60', 'availability_90',
'availability_365', 'number_of_reviews',
'review_scores_rating', 'review_scores_accuracy', 'review_scores_cleanliness',
'review_scores_checkin', 'review_scores_communication','review_scores_location', 'review_scores_value',
'reviews_per_month']
seatle_data = seatle_data[cols_to_keep]
boston_data = boston_data[cols_to_keep]
def get_cat_num_columns(df):
'''return the list of categorical and numeric columns'''
num_columns = df.select_dtypes(include=np.number).columns.tolist()
cat_columns = df.columns.drop(num_columns)
return cat_columns, num_columns
def get_nan_percentage(df):
''' return the nan percentage for each column in df.
'''
# percentage of values that are missing
total_nan = df.isna().sum().sort_values(ascending=False)
percentage_nan = (total_nan / df.shape[0]) * 100
tabel = pd.concat([total_nan, percentage_nan], axis=1, keys=['Total_nan_values', 'Percentage_of_nan_values'])
return tabel
seatle_cat_cols, seatle_num_cols = get_cat_num_columns(seatle_data)
boston_cat_cols, boston_num_cols = get_cat_num_columns(boston_data)
nan_data = get_nan_percentage(seatle_data)
nan_perc = 10
nan_data["Percentage_of_nan_values"][nan_data["Percentage_of_nan_values"] > nan_perc].plot.barh()
plt.title("Seatle columns with nan values percentage higher than {}".format(nan_perc))
nan_data = get_nan_percentage(boston_data)
nan_perc = 10
nan_data["Percentage_of_nan_values"][nan_data["Percentage_of_nan_values"] > nan_perc].plot.barh()
plt.title("Seatle columns with nan values percentage higher than {}".format(nan_perc))
```
### Fill missing values
First, we verify whether the target has missing values in both datasets.
```
# count nan values
print("Seatle price has {} nan values".format(seatle_data["price"].isnull().sum()))
print("Boston price has {} nan values".format(boston_data["price"].isnull().sum()))
```
#### Handling values by type
We handle numeric and categorical values separately.
**Categorical fill nan**
For categorical missing values, the most frequent class for each feature is used for imputation.
```
# impute missing values with the most frequent class
# seatle data
for var in seatle_cat_cols:
seatle_data[var].fillna(seatle_data[var].value_counts().index[0], inplace=True)
# Boston data
for var in boston_cat_cols:
boston_data[var].fillna(boston_data[var].value_counts().index[0], inplace=True)
```
**Numerical fill nan**
Outliers typically skew the mean, so the median is preferred in such cases. Since some features have skewed distributions, we impute with the median to limit the impact of outliers.
```
# an example of skewed feature
feat15 = seatle_num_cols[15]
sns.distplot(seatle_data[feat15])
# imputation using median
# Seatle data
for var in seatle_num_cols:
seatle_data[var].fillna((seatle_data[var].median()), inplace=True)
# Boston data
for var in boston_num_cols:
boston_data[var].fillna((boston_data[var].median()), inplace=True)
# verify if there is nan values
seatle_data.isnull().sum().max()
# verify if there is nan values
boston_data.isnull().sum().max()
```
Before working with the numeric and categorical data, note that some numeric columns are stored as objects because they contain extra symbols. Therefore, we start by converting these columns to float.
```
# convert host_response_rate column which have % symbol to float
seatle_data["host_response_rate"] = seatle_data["host_response_rate"].str.replace('%', '').astype(float)
boston_data["host_response_rate"] = boston_data["host_response_rate"].str.replace('%', '').astype(float)
# Create 3 new col which holds only year, month and month-year seperately for host since col.
# This will help us in our analysis to answer business questions.
seatle_data['host_since_Year'] = pd.DatetimeIndex(seatle_data['host_since']).year.astype(int)
seatle_data['host_since_month'] = pd.DatetimeIndex(seatle_data['host_since']).month.astype(int)
seatle_data['host_since_year-month'] = pd.to_datetime(seatle_data['host_since']).dt.to_period('M')
boston_data['host_since_Year'] = pd.DatetimeIndex(boston_data['host_since']).year.astype(int)
boston_data['host_since_month'] = pd.DatetimeIndex(boston_data['host_since']).month.astype(int)
boston_data['host_since_year-month'] = pd.to_datetime(boston_data['host_since']).dt.to_period('M')
```
The zipcode column is not in the right format and has a small number of missing values. Moreover, the zipcode is pertinent location information and cannot simply be imputed.
```
zip_percs = seatle_data["zipcode"].isnull().sum()/len(seatle_data) * 100
zip_percb = boston_data["zipcode"].isnull().sum()/len(boston_data) * 100
print("Seatle missing zipcode percentage: {:0.2f} %".format(zip_perc))
print("Boston missing zipcode percentage: {:0.2f} %".format(zip_percb))
# Convert zip code to numeric
seatle_data['zipcode'] = pd.to_numeric(seatle_data['zipcode'], errors='coerce')
boston_data['zipcode'] = pd.to_numeric(boston_data['zipcode'], errors='coerce')
# remove rows with misisng zipcode
seatle_data = seatle_data.dropna(subset=['zipcode'], how='any', axis =0)
boston_data = boston_data.dropna(subset=['zipcode'], how='any', axis =0)
# convert zip code to int
seatle_data['zipcode']=seatle_data['zipcode'].astype(int)
boston_data['zipcode']=boston_data['zipcode'].astype(int)
```
# Three insights
In this notebook we explore three insights from both the Seattle and Boston datasets. We address three main questions:
* Which city is more expensive?
* Which property types are hosted most often?
* Which are the most expensive and cheapest neighbourhoods in Seattle and Boston?
As a bonus, we predict the Airbnb price using the LightGBM algorithm; a minimal sketch is shown below.
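The actual price-prediction code is not part of this excerpt, so here is a minimal sketch of how a LightGBM regressor could be fit on the cleaned numeric features; the chosen features, split, and hyperparameters are illustrative assumptions, not the notebook's own modelling code.
```
# Minimal sketch only: fit a LightGBM regressor on a few numeric Seattle features.
# The feature list, split and hyperparameters below are illustrative assumptions.
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

features = ["accommodates", "bathrooms", "bedrooms", "beds",
            "number_of_reviews", "review_scores_rating", "availability_365"]
X = seatle_data[features]
y = seatle_data["price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05, random_state=42)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```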
## Price comparison
First of all, we plot the price histograms of both datasets to inspect the distributions. Because the price has outliers, the plot is clipped at a maximum of 500 dollars for better visualization.
```
def plot_multiple_hist(df1,df2, col, thresh = None, bins = 20):
"""Plot multiple histogram in one gfigure
Arguments:
- df1: first dataframe
- df2: second fdataframe
- col: the variable name (column)
- thresh: threshold used for shifted data, if given threshold values < thresh are plotted
- bins: used for histogram plot
Outputs:
- The two histograms of col in df1 and df2
"""
f = plt.figure(figsize=(10,6))
if thresh:
data1 = df1[col][df1[col]< thresh]
data2 = df2[col][df2[col]< thresh]
else:
data1 = df1[col]
data2 = df2[col]
data1.hist(bins = bins, alpha = 0.5, label='Seatle')
data2.hist(bins = bins, alpha = 0.5, label='Boston')
plt.legend(fontsize=20)
plt.xlabel(col, fontsize=20)
plt.ylabel("Counts", fontsize=20)
if thresh :
plt.title("{} histogram (< {} )".format(col, thresh), fontsize=20)
else:
plt.title(col+ " histogram")
plt.savefig("figures/price_histogram.png", dpi = 600, bbox_inches='tight')
plot_multiple_hist(seatle_data,boston_data, "price", thresh=500)
```
It is clear that Boston prices are a little higher, mostly grouped between 70 and 300 dollars, while most Seattle prices are between 10 and 200 dollars.
In the following chart, the mean, the median, and the 3rd quartile of the price are compared for both datasets.
```
# get mean, median and 3rd quartile of the price.
se = seatle_data["price"].describe().drop(["count", "min", "max","std","25%"])
bo = boston_data["price"].describe().drop(["count", "min", "max","std","25%"])
# plot mean, median, 3rd Q
fig = plt.figure(figsize=(5,4))
ax = fig.add_subplot(111)
ind = np.array([1, 2, 3])
rects1 = ax.bar(ind, bo, 0.35, color='royalblue', label = 'Boston')
rects2 = ax.bar(ind+0.35, se, 0.35, color='seagreen', label = 'Seatle')
plt.xticks(ind+0.35/2 , ('mean', '50% (median)', '75% (3rd Q)'), fontsize=15)
plt.ylabel("Price", fontsize=20)
plt.xlabel("",fontsize=20)
# Finding the best position for legends and putting it
plt.legend(loc='best')
plt.savefig("figures/price_metrics.png", dpi = 600, bbox_inches='tight')
```
**Based on the mean, median and third quartile, it is clear that Boston is more expensive than Seattle.**
## What type of property is most hosted
```
prop_seatle = seatle_data.groupby("property_type")["id"].count()
prop_boston = boston_data.groupby("property_type")["id"].count()
print("Seattle has {} properties ".format(len(prop_seatle)))
print("Boston has {} properties ".format(len(prop_boston)))
# properties only in Seattle
prop_not_in_boston = [x for x in prop_seatle.index.to_list() if x not in prop_boston.index.to_list()]
prop_not_in_boston
# Properties only in Boston
prop_not_in_seatle = [x for x in prop_boston.index.to_list() if x not in prop_seatle.index.to_list()]
prop_not_in_seatle
def plot_prop_byhosts(df, name="Seattle", color = "royalblue"):
"""Plot the numbers of hosting for each type of property"""
perc_seatle = df.groupby("property_type")["id"].count().sort_values(ascending = False)
    perc_seatle.plot(kind = 'bar', width= 0.9, color=color, label = name, fontsize=13)
plt.ylabel("Hosts", fontsize=20)
plt.xlabel("Property type", fontsize=20)
# Finding the best position for legends and putting it
plt.legend(loc='best', fontsize=15)
plt.savefig("figures/{}_propType.png".format(name), dpi = 600, bbox_inches='tight')
plt.show()
plot_prop_byhosts(seatle_data, name="Seattle")
plot_prop_byhosts(boston_data, name="Boston", color = "violet")
boston_data.groupby("property_type")["id"].count().sort_values(ascending = False)[:6].index.to_list()
```
We can notice that Apartment, House, Condominium, and Townhouse are the most hosted property types. Let's see what percentage of all hosts they cover!
```
perc_seatle = (seatle_data.groupby("property_type")["id"].count().sort_values(ascending = False)/len(seatle_data)*100)[:6].sum()
perc_boston = (boston_data.groupby("property_type")["id"].count().sort_values(ascending = False)/len(boston_data) * 100)[:6].sum()
print("Six property types cover over {} % hosts for Seattle, and {} % hosts for Boston ".format(perc_seatle, perc_boston))
```
## Price by Neighbourhood
We create a `neighbourhoods_impact` function that plots and compares neighbourhoods based on price.
```
seatl_nei = seatle_data["neighbourhood"].nunique()
print("there are {} neighbourhoods in Seatle city".format(seatl_nei))
boston_nei = boston_data["neighbourhood"].nunique()
print("there are {} neighbourhoods in Boston city".format(boston_nei))
def neighbourhoods_impact(data, group_feat, sort_feat, cols_to_plot, labels, name):
"""plot features based on neighbourhood impact
Arguments:
- data: dataframe of data.
- group_feat: feature used to goupeby.
- sort_feat: feature used to sort the grouped values.
- cols_to_plot: list of columns names to be plotted.
- labels: list of labels to describe the plot (same size as cols_to_plot).
- name: the name of the data used to save figure.
Outputs:
- Plot of two features based on grouped data
Example:
data = seattle_data
group_feat = "neghbourhood"
sort_feat = "count"
labels = ["N° of hosts", "mean price"]
name = "Seattle"
neighbourhoods_impact(data, group_feat, sort_feat, cols_to_plot, labels, name)
"""
gr = data.groupby(group_feat)["price"].describe().sort_values(by = sort_feat , ascending = False)[cols_to_plot][:10]
gr[group_feat] = gr.index
# plot top 10
gr.plot(x=group_feat , y=cols_to_plot,kind = "bar", figsize=(7,5), grid=True, label = labels, fontsize=13)
plt.ylabel("Counts/Price", fontsize=20)
plt.xlabel("{} Neighbourhood".format(name), fontsize=20)
plt.savefig("figures/{}_neigh_counts_price.png".format(name), dpi = 600, bbox_inches='tight')
group_feat = "neighbourhood"
sort_feat = "count"
cols_to_plot = ["count", "mean"]
labels = ["N° of hosts", "mean price"]
name = "Seattle"
neighbourhoods_impact(seatle_data, group_feat, sort_feat, cols_to_plot, labels, name)
group_feat = "neighbourhood"
sort_feat = "count"
cols_to_plot = ["count", "mean"]
labels = ["N° of hosts", "mean price"]
name = "Boston"
neighbourhoods_impact(boston_data, group_feat, sort_feat, cols_to_plot, labels, name)
group_feat = "neighbourhood"
sort_feat = "mean"
cols_to_plot = ["mean", "count"]
labels = [ "mean price", "N° of hosts"]
name = "Seattle expensive"
neighbourhoods_impact(seatle_data, group_feat, sort_feat, cols_to_plot, labels, name)
group_feat = "neighbourhood"
sort_feat = "mean"
cols_to_plot = ["mean", "count"]
labels = [ "mean price", "N° of hosts"]
name = "Boston expensive"
neighbourhoods_impact(boston_data, group_feat, sort_feat, cols_to_plot, labels, name)
```
Looking at the mean price, the cheapest neighbourhoods are the following:
```
cheap_nei_se = seatle_data.groupby("neighbourhood")["price"].describe()["mean"].sort_values(ascending = False).index[-5:]
print("cheapest neighbourhoods in Seatle city are {} ".format(list(cheap_nei_se)))
cheap_nei_bo = boston_data.groupby("neighbourhood")["price"].describe()["mean"].sort_values(ascending = False).index[-5:]
print("cheapest neighbourhoods in Boston city are {} ".format(list(cheap_nei_bo)))
```
# Conclusion
In this article, we explored Airbnb data from Seattle and Boston to understand three areas of interest: pricing, property type, and neighborhood impact. While we found useful information at each level, many questions remain, like which characteristics and their associated impact make each neighborhood different from the others. In addition, further inspection of the listings by seasonality could yield more information to accurately select the best features for the price prediction task.
The purpose of this code is to compute the Absolute Magnitude of the candidates (Both KDE and RF) and plot them as a function of redshift. The conversion is as follows:
$M = m - 5\log_{10}\left(\frac{d}{10\,\mathrm{pc}}\right)$ or, with $d$ in Mpc,
$M = m - 5\log_{10}\left(\frac{d}{1\,\mathrm{Mpc}}\right) - 25$
where $d$ is the luminosity distance. The distance is estimated using the photoz's and a $\Lambda$CDM cosmology with parameters:
$H_0 = 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}},\ \Omega_{\Lambda} = 0.725,\ \Omega_{M} = 0.275$
Using the Friedmann equation, the luminosity distance in a flat $\Lambda$CDM universe is
$d_L = (1+z)\frac{c}{H_0}\int_0^z \frac{dz'}{\sqrt{\Omega_M(1+z')^3 + \Omega_\Lambda}}$
With this distance and the i-band magnitude we can determine the absolute magnitude at $z=0$, and finally convert to $M_i[z=2]$ using the $\alpha_\nu$ values from Richards 2006 / Ross 2013.
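As an illustration of this conversion (the notebook itself computes distances with CAMB below), here is a minimal sketch using astropy's `FlatLambdaCDM` with the cosmology quoted above; the input magnitude and redshift are placeholder values.
```
# Minimal sketch (assumes astropy is available); the notebook's own pipeline uses CAMB below.
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.275)  # flat LCDM, so Omega_Lambda = 0.725

def absolute_mag_i_z2(m_i, z, alpha_nu=-0.5):
    """Apparent i magnitude + redshift -> M_i[z=2] via d_L and a continuum K-correction."""
    d_L = cosmo.luminosity_distance(z).value         # luminosity distance in Mpc
    M_z0 = m_i - 5.0 * np.log10(d_L) - 25.0          # M = m - 5 log10(d_L / 1 Mpc) - 25
    # same continuum K-correction form as the Kcorr helper defined later in this notebook
    K = -2.5 * np.log10(1 + z) - 2.5 * alpha_nu * np.log10(1 + z) + 2.5 * alpha_nu * np.log10(3.0)
    return M_z0 - K

print(absolute_mag_i_z2(m_i=20.2, z=3.5))  # placeholder values
```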
```
%matplotlib inline
import os
import sys
sys.path.insert(0, '/home/john/densityplot/densityplot')
from densityplot.hex_scatter import hex_contour as hex_contour
import numpy as np
from astropy.io import fits as pf
import camb
from camb import model
import matplotlib.pyplot as plt
from matplotlib import gridspec
#open the candidate data
#path = '/Users/johntimlin/Clustering/Combine_SpIES_Shela/Data_sets/Match_SpSh_Cand_wzcorrected_nooutlier_allinfo.fits'
#path = '/Users/johntimlin/Catalogs/QSO_candidates/201606/All_hzcandidate_correctphotoz_fromgaussian_allinfo.fits'
#path = '../Data_Sets/QSO_Candidates_allcuts_with_errors_visualinsp.fits'
path = '../Data_Sets/Only_point_sources.fits'
data = pf.open(path)[1].data
print data['imag']
print data.zphotNW
rz = (data.zphotNW>=2.9) & (data.zphotNW<=5.4) & (data.Good_obj == 0) & (data.dec>=-1.2) & (data.dec<=1.2)& (data['imag']>=20.2)
rshift = data.zphotNW[rz]
print rshift.dtype
r = rshift.astype('float64') #camb's luminosity distance calculator only accepts float 64 data types
print r.dtype
print len(r)
#mag_list.append(-1.0*pogson*(math.asinh(5.0*fluxs[flux_name][i]/bsoft[flux_name]) + ln10_min10 + math.log(bsoft[flux_name])) - extinctions[flux_name][i] )
pog_m = 22.5-2.5*np.log10(data.iflux[rz])
# b=1.8 × 10-10 for i-band
ash_m = -2.5/np.log(10) * (np.arcsinh((data.iflux[rz]/1e9)/(2*1.8e-10))+np.log(1.8e-10)) - 1.698/5.155 * data.extinctu[rz]
print pog_m
print ash_m
print 1.698/4.239 * data.extinctu[rz]
print -2.5*np.log10(1/3631.0e5)
#Open the Shen2007 data
shendat = '/Users/johntimlin/Clustering/Shen_test/Data/Shen2007_Clustering_sample.fits'
sdat = pf.open(shendat)[1].data
#Cut to their objects and get array of redshifts
sdx = (sdat.Sfl == 1) #& (sdat.z>=3.5) & (sdat.z<=5.4)
srz = sdat.z[sdx]
sr = srz.astype('float64')
#print simag
#First define Planck 2015 cosmological parameters
H = 70 #H0.
oc = 0.229 #physical density of CDM
ob = 0.046 #physical density of baryons
#Set up parameters in CAMB
pars = camb.CAMBparams()
#Conversion to density param: Omega_Matter = (oc+ob)/(H0/100.)**2
#Hard code the cosmolgy params
pars.H0=H #hubble param (No h!!)
pars.omegab=ob #Baryon density parameter
pars.omegac=oc #CDM density parameter
pars.omegav=0.725 #Vacuum density parameter
pars.set_dark_energy()
#Set up parameters in CAMB
pars = camb.CAMBparams()
#H0 is hubble parameter at z=0, ombh2 is the baryon density (physical), omch2 is the matter density (physical)
#mnu is sum of neutrino masses, omk is curvature parameter (set to 0 for flat), meffsterile is effective mass of sterile neutrinos
pars.set_cosmology(H0=H,ombh2=ob, omch2=oc,omk=0)#,mnu=0,meffsterile=0)
pars.set_dark_energy()
bkg = camb.get_background(pars) #Background parameters
Ldist = bkg.luminosity_distance(r) #Luminosity distance for SpIES cand
ShLdist = bkg.luminosity_distance(sr) #Luminosity distance for Shen qso
#Make the i_mag line targeted for shen 2007
sampz = np.linspace(0.5,5.4,10000)
line = bkg.luminosity_distance(sampz)
const_mag = np.ones(len(line))*20.2
const_magsp = np.ones(len(line))*22.5
#Compute the absolute magnitude at z=0
#M = (22.5-2.5*np.log10(data.iflux[rz])) - 5.0*np.log10(Ldist) - 25.0
M = ash_m - 5.0*np.log10(Ldist) - 25.0
M202 = const_mag - 5.0*np.log10(line) - 25.0
M23 = const_magsp - 5.0*np.log10(line) - 25.0
shenM = sdat['imag'][sdx] - 5.0*np.log10(ShLdist) - 25.0
#Compute the corrections to apply to M[z=0] to get M[z=2]
def Kcorr(z,alpha = -0.5):
#Ross13
K13 = -2.5*np.log10(1+z) - 2.5*alpha*np.log10(1+z)+2.5*alpha*np.log10(1.0+2.0)
return K13
#import the K-corrections from Richards 2006
K06 = pf.open('./K_correct_Richards06.fits')[1].data
K13 = pf.open('./K_correct_Ross13.fits')[1].data
#Pull out the redshift information from the data for SpIES and Shen
rshifts = data.zphotNW[rz]
srshift = sdat.z[sdx]
#Round the redshifts to 2 decimal places so that I can match to the correction values in Richards 2006
roundz = np.round(rshifts,decimals = 2)
roundt = np.round(sampz,decimals = 2)
roundsz = np.round(srshift,decimals = 2)
#Find the correction value that corresponds to the redshift in the file
Kcor=[]
Ktest = []
Ktestsp = []
Kshen = []
for i in roundz:
kc = K06.KCorr[np.where(K06.z == i)]
Kcor.append(kc[0])
for j in roundt:
kt = K06.KCorr[np.where(K06.z == j)]
Ktest.append(kt[0])
Ktestsp.append(kt[0])
for m in roundsz:
    kt = K06.KCorr[np.where(K06.z == m)]
Kshen.append(kt[0])
KC = np.asarray(Kcor)
KT = np.asarray(Ktest)
KS = np.asarray(Kshen)
#Correct the Absolute values using the K-corrections found above
dcorrect = M-KC
lcorrect = M202-Ktest
spcorrect= M23 - Ktestsp
scorrect = shenM - Kshen
#Plotting Parameters (Replace with Group code call!)
params = {'legend.fontsize': 16, 'xtick.labelsize': 20, 'ytick.labelsize': 20, 'xtick.major.width':2, 'xtick.minor.width':2, 'ytick.major.width':2, 'ytick.minor.width':2, 'xtick.major.size':8, 'xtick.minor.size':6, 'ytick.major.size':8, 'ytick.minor.size':6}
plt.rcParams.update(params)
plt.rc("axes", linewidth=3.0)
plt.figure(1,figsize = (8,8))
plt.scatter(rshift,dcorrect,color = '#fd8d3c',edgecolor = None,s=1,alpha = 0.9)#,label = 'Timlin 2016 QSO sample' )
plt.scatter(1,1,s=80,color = '#fd8d3c',label = 'This study')
#plt.scatter(srz,scorrect,color='#e31a1c',edgecolor = None,s=1,alpha = 0.9)#,label = 'Shen 2007 QSO sample')
plt.scatter(1,1,s=80,color = '#e31a1c',label = 'Shen 2007')
plt.plot(sampz,lcorrect,color = 'k',linewidth = 2,label = r'$i$=20.2')
#plt.plot(sampz,spcorrect,color = 'g',linewidth = 2,label = r'$i$=22.5')
#plt.plot(sampz,spcorrect,color = 'k',linestyle = '--', dashes=(10,5,10,5),linewidth = 2,label = r'$i$=23.3')
plt.xlim(2.8,5.3)
plt.ylim(-30.5,-22.5)
plt.xlabel('Redshift',fontsize = 18)
plt.ylabel(r'$M_i$[z=2]',fontsize = 18)
plt.gca().invert_yaxis()
plt.minorticks_on()
plt.legend(loc=4,scatterpoints = 1)
#plt.savefig('Absolute_Mag_SpIES_Shen.pdf')
imag = -2.5/np.log(10) * (np.arcsinh((data.iflux[rz]/1e9)/(2*1.8e-10))+np.log(1.8e-10)) - 1.698/4.239 * data.extinctu[rz]
num,bins = np.histogram(imag,bins='fd')
print bins
fig = plt.figure(5,figsize = (8,4))
plt.hist(imag,bins, histtype = 'step',normed = False,color = '#FFA500',linewidth = 2)
plt.hist(sdat['imag'][sdx],bins='fd', histtype = 'step',normed = False,color = 'r',linewidth = 2)
plt.xlabel(r'$i$ Magnitude',fontsize = 14)
plt.ylabel('Number',fontsize = 14)
#plt.savefig('imag_hist.pdf',bbox_inches='tight')
plt.show()
```
# Plot for the paper
```
imag = 22.5-2.5*np.log10(data.iflux[rz])
num,bins = np.histogram(imag,bins='fd')
fig = plt.figure(5,figsize = (6,12))
gs = gridspec.GridSpec(2, 1, height_ratios=[0.6,0.4])
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1],)
plt.axes(ax0)
plt.scatter(rshift,dcorrect,color = '#fd8d3c',edgecolor = None,s=1,alpha = 0.9)#,label = 'Timlin 2016 QSO sample' )
plt.scatter(1,1,s=80,color = '#fd8d3c',label = 'This study')
plt.scatter(srz,scorrect,color='#e31a1c',edgecolor = None,s=1,alpha = 0.9)#,label = 'Shen 2007 QSO sample')
plt.scatter(1,1,s=80,color = '#e31a1c',label = 'Shen 2007')
plt.plot(sampz,lcorrect,color = 'k',linewidth = 2,label = r'$i$=20.2')
#plt.plot(sampz,spcorrect,color = 'k',linestyle = '--', dashes=(10,5,10,5),linewidth = 2,label = r'$i$=23.3')
plt.xlim(2.8,5.3)
plt.ylim(-30.5,-22.5)
plt.xlabel('Redshift',fontsize = 16)
plt.ylabel(r'$M_i$[$z$=2] (AB mag)',fontsize = 16)
plt.gca().invert_yaxis()
plt.minorticks_on()
leg =plt.legend(loc=4,scatterpoints = 1)
leg.get_frame().set_alpha(0.35)
plt.axes(ax1)
plt.hist(sdat['imag'][sdx],bins='fd', histtype = 'step',normed = False,color = 'r',linewidth = 2,label= 'Shen 2007')
plt.hist(imag,bins, histtype = 'step',normed = False,color = '#FFA500',linewidth = 2,label = 'This study')
plt.xlabel(r'$i$-Magnitude (AB mag)',fontsize = 16)
plt.ylabel('Number of quasars',fontsize = 16)
plt.minorticks_on()
leg =plt.legend(loc=2)
leg.get_frame().set_alpha(0.35)
#plt.savefig('Absolute_Mag_SpIES_Shen.pdf',bbox_inches='tight',pad_inches=0.5)
```
# Find and save the brightest candidates
```
good = dcorrect[dcorrect <=-25.0]
print len(good)
plt.scatter(rshift[dcorrect<=-25],good,color = '#fd8d3c',edgecolor = None,s=1,alpha = 0.9)#,label = 'Timlin 2016 QSO sample' )
plt.scatter(1,1,s=80,color = '#fd8d3c',label = 'This study')
plt.xlim(2.8,5.3)
plt.ylim(-30.5,-22.5)
plt.show()
print len(data.ra[rz]), len(dcorrect)
print len(data.ra[rz][dcorrect<=-25.0])
tbhdu=pf.BinTableHDU.from_columns([pf.Column(name='RA',format='D',array=data.ra[rz][dcorrect<=-25.0]),
pf.Column(name='DEC',format='D',array=data.dec[rz][dcorrect<=-25.0])])
prihdr=pf.Header()
prihdr['COMMENT']="Brightest SpIES quasars"
prihdu=pf.PrimaryHDU(header=prihdr)
hdulist = pf.HDUList([prihdu,tbhdu])
#hdulist=pf.HDUList(data[dx])
hdulist.writeto('../Data_Sets/Brightest_candidates.fits')
```
# Assignment 2
Before working on this assignment please read these instructions fully. In the submission area, you will notice that you can click the link to **Preview the Grading** for each step of the assignment. This is the criteria that will be used for peer grading. Please familiarize yourself with the criteria before beginning the assignment.
An NOAA dataset has been stored in the file `data/C2A2_data/BinnedCsvs_d400/fb441e62df2d58994928907a91895ec62c2c42e6cd075c2700843b89.csv`. The data for this assignment comes from a subset of The National Centers for Environmental Information (NCEI) [Daily Global Historical Climatology Network](https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt) (GHCN-Daily). The GHCN-Daily is comprised of daily climate records from thousands of land surface stations across the globe.
Each row in the assignment datafile corresponds to a single observation.
The following variables are provided to you:
* **id** : station identification code
* **date** : date in YYYY-MM-DD format (e.g. 2012-01-24 = January 24, 2012)
* **element** : indicator of element type
* TMAX : Maximum temperature (tenths of degrees C)
* TMIN : Minimum temperature (tenths of degrees C)
* **value** : data value for element (tenths of degrees C)
For this assignment, you must:
1. Read the documentation and familiarize yourself with the dataset, then write some python code which returns a line graph of the record high and record low temperatures by day of the year over the period 2005-2014. The area between the record high and record low temperatures for each day should be shaded.
2. Overlay a scatter of the 2015 data for any points (highs and lows) for which the ten-year (2005-2014) record high or record low was broken in 2015.
3. Watch out for leap days (i.e. February 29th), it is reasonable to remove these points from the dataset for the purpose of this visualization.
4. Make the visual nice! Leverage principles from the first module in this course when developing your solution. Consider issues such as legends, labels, and chart junk.
The data you have been given is near **Ann Arbor, Michigan, United States**, and the stations the data comes from are shown on the map below.
```
import matplotlib.pyplot as plt
import mplleaflet
import pandas as pd
def leaflet_plot_stations(binsize, hashid):
df = pd.read_csv('data/C2A2_data/BinSize_d{}.csv'.format(binsize))
station_locations_by_hash = df[df['hash'] == hashid]
lons = station_locations_by_hash['LONGITUDE'].tolist()
lats = station_locations_by_hash['LATITUDE'].tolist()
plt.figure(figsize=(8,8))
plt.scatter(lons, lats, c='r', alpha=0.7, s=200)
return mplleaflet.display()
leaflet_plot_stations(400,'fb441e62df2d58994928907a91895ec62c2c42e6cd075c2700843b89')
df = pd.read_csv('data/C2A2_data/BinnedCsvs_d400/fb441e62df2d58994928907a91895ec62c2c42e6cd075c2700843b89.csv')
df.head()
df['Year'], df['Month-Date'] = zip(*df['Date'].apply(lambda x: (x[:4], x[5:])))
df = df[df['Month-Date'] != '02-29']
df.head()
import numpy as np
temp_max = df[(df['Element'] == 'TMAX') & (df['Year'] != '2015')].groupby('Month-Date').aggregate({'Data_Value':np.max})
temp_min = df[(df['Element'] == 'TMIN') & (df['Year'] != '2015')].groupby('Month-Date').aggregate({'Data_Value':np.min})
temp_max_15 = df[(df['Element'] == 'TMAX') & (df['Year'] == '2015')].groupby('Month-Date').aggregate({'Data_Value':np.max})
temp_min_15 = df[(df['Element'] == 'TMIN') & (df['Year'] == '2015')].groupby('Month-Date').aggregate({'Data_Value':np.min})
broken_max = np.where(temp_max_15['Data_Value'] > temp_max['Data_Value'])[0]
broken_min = np.where(temp_min_15['Data_Value'] < temp_min['Data_Value'])[0]
print(broken_max)
print(broken_min)
plt.figure()
plt.plot(temp_max.values, label='Maximum Temp (2005-2014)')
plt.plot(temp_min.values, label='Minimum Temp (2005-2014)')
plt.gca().fill_between(range(len(temp_min)), temp_min['Data_Value'],temp_max['Data_Value'], facecolor='blue', alpha=0.25)
plt.xticks(range(0, len(temp_min), 20), temp_min.index[range(0, len(temp_min), 20)], rotation = '45')
plt.scatter(broken_max, temp_max_15.iloc[broken_max], s=10, color='red', label='High temp record broken (2015)')
plt.scatter(broken_min, temp_min_15.iloc[broken_min], s=10, color='green', label='Low temp record broken (2015)')
plt.legend(frameon = False)
plt.xlabel('Day of the Year')
plt.ylabel('Temperature (tenths of $^\circ$C)')
plt.title('Temperature Plot: Ann Arbor, Michigan, United States')
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.show()
```
# Readout Cavity Calibration
*Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*
## Outline
This tutorial introduces the simulation of readout cavity calibration using the readout simulator. The outline of this tutorial is as follows:
- Introduction
- Preparation
- Calibrating the Readout Cavity Transition Frequencies
- Calibrating the Dispersive Shift and Coupling Strength
- Measuring the decay rate
- Summary
## Introduction
In a superconducting circuit, to acquire the state of a qubit, we can probe the readout cavity coupled to that qubit and infer the qubit state indirectly. Concretely, we first apply a readout pulse and then detect and analyze the reflected signal. Because the phase shift and amplitude change depend on the qubit state, we are able to tell whether the outcome is "0" or "1" from this change.
In a real calibration experiment, the first step is to find the parameters of the readout cavity. This tutorial introduces how to use Quanlse to simulate the readout cavity calibration.
A coupled cavity-qubit system can be described by the Jaynes-Cummings Hamiltonian in the dispersive regime \[1\]:
$$
\hat{H}_{\rm JC} = \omega_r \hat{a}^\dagger \hat{a} + \frac{1}{2}\omega_q \hat{\sigma}_z + \chi \hat{a}^\dagger \hat{a} \hat{\sigma}_z,
$$
where $\hat{a}$, $\hat{a}^\dagger$ are annihilation and creation operators and $\hat{\sigma}_z$ is the Pauli-Z operator. $\omega_r$ and $\omega_q$ denote the bare frequencies of the readout cavity and the qubit, $\chi$ is the dispersive shift and takes the form \[2\]:
$$
\chi = \frac{g^2 \alpha}{\Delta_{qr}(\Delta_{qr} + \alpha)}.
$$
where $\alpha$ is the qubit anharmonicity, $\Delta_{qr} = \omega_q - \omega_r$ is the qubit-cavity detuning and $g$ is the qubit-cavity coupling strength. The interaction term $\chi \hat{a}^\dagger \hat{a} \hat{\sigma}_z$ in $\hat{H}_{\rm JC}$ gives rise to a shift of $2\chi$ in the transition frequency of the readout cavity between the qubit states $|0\rangle$ and $|1\rangle$. Therefore, in the experiment, by performing a frequency sweep of the cavity with the qubit prepared in $|0\rangle$ or $|1\rangle$ respectively, we can obtain the transition frequencies $f_0$ and $f_1$ and therefore the frequency shift $2\chi$. Finally, the cavity-qubit coupling strength is calculated indirectly using the expression above.
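To make the formula concrete, here is a minimal numerical sketch of $\chi$ and the resulting $2\chi$ separation between the two cavity peaks; all parameter values below are illustrative assumptions, not the values used by `readoutSim3Q()`.
```
# Minimal numeric sketch; all values below are illustrative assumptions.
from math import pi

g = 0.134 * 2 * pi       # assumed qubit-cavity coupling strength, in 2 pi GHz
alpha = -0.22 * 2 * pi   # assumed qubit anharmonicity, in 2 pi GHz
wq = 5.2 * 2 * pi        # assumed qubit bare frequency, in 2 pi GHz
wr = 7.115 * 2 * pi      # assumed cavity bare frequency, in 2 pi GHz

delta_qr = wq - wr
chi = g**2 * alpha / (delta_qr * (delta_qr + alpha))

# The two calibrated cavity peaks (qubit in |0> vs |1>) are separated by 2*chi.
print(f"chi ~ {abs(chi) * 1e3 / (2 * pi):.2f} MHz, "
      f"peak separation 2*chi ~ {2 * abs(chi) * 1e3 / (2 * pi):.2f} MHz")
```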
In addition to the transition frequency and dispersive shift, the linewidth $\kappa$ can be measured to determine the photon decay rate of the readout cavity. To simulate the interaction between cavity-qubit system and the environment, the evolution of the system density matrix $\hat{\rho}(t)$ is given by Lindblad master equation \[3, 4\]:
$$
\frac{d \hat{\rho}(t)}{dt} = -i[\hat{H}(t), \hat{\rho}(t)] + \frac{\kappa}{2}[2 \hat{a} \hat{\rho}(t) \hat{a}^\dagger - \hat{\rho}(t) \hat{a}^\dagger \hat{a} - \hat{a}^\dagger \hat{a} \hat{\rho}(t)].
$$
The decay rate is therefore acquired by fitting the spectrum and extracting the linewidth of the Lorentzian function.
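As a sketch of how $\kappa$ enters the fit (Quanlse's own `fitLorentzian` is used later in this tutorial), a Lorentzian whose full width at half maximum is $\kappa$ can be fitted with `scipy.optimize.curve_fit`; the synthetic spectrum below is purely illustrative.
```
# Minimal illustrative sketch; the tutorial itself uses Quanlse's fitLorentzian below.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_model(w, w0, kappa, a, b):
    """Lorentzian centred at w0 whose full width at half maximum is kappa."""
    return a * (kappa / 2) ** 2 / ((w - w0) ** 2 + (kappa / 2) ** 2) + b

# synthetic spectrum with kappa = 2 pi * 2 MHz around 2 pi * 7.118 GHz
w = np.linspace(7.10, 7.13, 300) * 2 * np.pi
signal = lorentzian_model(w, 7.118 * 2 * np.pi, 0.002 * 2 * np.pi, 1.0, 0.0)
signal += np.random.default_rng(0).normal(0, 0.01, w.size)

popt, _ = curve_fit(lorentzian_model, w, signal, p0=[7.118 * 2 * np.pi, 0.01, 1.0, 0.0])
print(f"fitted linewidth ~ {abs(popt[1]) * 1e3 / (2 * np.pi):.2f} MHz")
```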
The observable quantities we take are the two orthogonal quadratures $\hat{X} = \frac{1}{2}(\hat{a}^\dagger + \hat{a})$ and $\hat{Y} = \frac{i}{2}(\hat{a}^\dagger - \hat{a})$. In the experiment, through a series of signal processing on the pulse reflected from the readout cavity, we can obtain the voltage $V_I$ and $V_Q$ related to these two orthogonal quadratures.
In this tutorial, we simulate the calibration of the readout cavity by solving the qubit-cavity dynamics, extracting the cavity transition frequencies for the qubit states $|0\rangle$ and $|1\rangle$, $\omega_{r0}$ and $\omega_{r1}$, the linewidth $\kappa$, and the dispersive shift $\chi$.
## Preparation
To run this tutorial we need to import the following necessary packages from Quanlse and other commonly-used Python libraries.
```
# Import tools from Quanlse
from Quanlse.Simulator.ReadoutSim3Q import readoutSim3Q
from Quanlse.Calibration.Readout import resonatorSpec, fitLorentzian, lorentzian
# Import tools from other python libraries
from scipy.signal import find_peaks
import numpy as np
import matplotlib.pyplot as plt
from math import pi
```
## Calibrating the Readout Cavity Transition Frequencies
In this section, we will calibrate the transition frequencies of the readout cavity when the qubit is in the ground state and the first excited state, respectively. Here, we first use the predefined function `readoutSim3Q()` to return the Class object `readoutModel` containing information about the readout cavity.
```
readoutModel = readoutSim3Q() # Initialize a readoutModel object
```
Then, we set the range of frequency sweep `freqRange`, the drive amplitude `amp` and the duration of the readout pulse `duration`.
```
freqRange = np.linspace(7.105, 7.125, 60) * 2 * pi # the range of frequency to probe the resonator, in 2 pi GHz
amp = 0.0005 * (2 * pi) # drive amplitude, in 2 pi GHz
duration = 1000 # duration of the readout pulse, in nanoseconds
```
Use the function `resonatorSpec` to simulate the frequency sweep of the readout cavity when qubit is in the ground state, and input the index of the resonator `onRes`, the range of the frequency sweep `freqRange`, the amplitude `amp` and the duration `duration` with `qubitState` set to be in the ground state.
```
vi0, vq0 = resonatorSpec(readoutModel=readoutModel, onRes=[0], freqRange=freqRange,
amplitude=amp, duration=duration, qubitState='ground')
```
The result returns the measured signal $V_I$ and $V_Q$. We plot $V_Q$ (or $V_I$) with respect to the drive frequency.
```
idx0 = find_peaks(vq0[0], height=max(vq0[0]))[0] # find the index of the transition frequency
w0 = freqRange[idx0][0] # transition frequency
print(f'The resonator transition frequency with qubit in ground state is {(w0 / (2 * pi)).round(3)} GHz')
plt.plot(freqRange / (2 * pi), np.array(vq0[0]))
plt.plot()
plt.xlabel('$\omega_d$ (GHz)')
plt.ylabel('signal (a.u.)')
plt.title('Readout resonator spectrum')
plt.vlines((freqRange / (2 * pi))[idx0], 0, max(vq0[0]), linestyles='dashed')
plt.show()
```
From the simulation result shown above, we can see that the readout cavity transition frequency is around 7.118 GHz when the qubit is in the ground state. Next, we calibrate the readout cavity transition frequency when the qubit is in the excited state using the same procedure.
```
vi1, vq1 = resonatorSpec(readoutModel=readoutModel, onRes=[0], freqRange=freqRange,
amplitude=amp, duration=duration, qubitState='excited')
idx1 = find_peaks(vq1[0], height=max(vq1[0]))[0]
w1 = freqRange[idx1][0]
print(f'The resonator transition frequency with qubit in excited state is {(w1 / (2 * pi)).round(3)} GHz')
plt.plot(freqRange / (2 * pi), np.array(vq1[0]))
plt.plot()
plt.xlabel('$\omega_d$ (GHz)')
plt.ylabel('signal (a.u.)')
plt.title('Readout resonator spectrum')
plt.vlines((freqRange / (2 * pi))[idx1], 0, max(vq1[0]), linestyles='dashed')
plt.show()
```
It can be seen in the spectrum that the readout cavity transition frequency is about 7.112 GHz when the qubit is in the first excited state.
## Calibrating the Dispersive Shift and Coupling Strength
In the previous section, we obtained the calibrated frequencies $f_0$ and $f_1$, so the dispersive shift $\chi$ can be calculated directly by,
$$
\chi = \frac{|f_0 - f_1|}{2}.
$$
```
chi = abs(w0 - w1) / 2
print(f'The dispersive shift is {(chi * 1e3 / (2 * pi)).round(3)} MHz')
```
Combining the expressions of $\chi$ given in the "Introduction" section, we can derive the expression of cavity-qubit coupling strength in terms of other known parameters:
$$
g = \sqrt{\frac{\chi\Delta_{qr}(\Delta_{qr}+\alpha)}{\alpha}}.
$$
Extract the theoretical parameters from `readoutModel` and calculate the coupling strength $g$ given above.
```
# Extract parameters from the model
g = readoutModel.coupling[0]  # theoretical qubit-resonator coupling strength
wq = readoutModel.pulseModel.qubitFreq[0] # qubit bare frequency
alpha = readoutModel.pulseModel.qubitAnharm[0] # qubit anharmonicity
wr = (w0 + w1) / 2 # estimated resonator frequency
detuning = wq - wr # qubit-resonator detuning
# coupling strength calculation
def qrCoupling(chi, detuning, alpha):
g = np.sqrt(abs(chi * detuning * (detuning + alpha) / alpha))
return g
gEst = qrCoupling(chi, detuning, alpha) # Estimated qubit-resonator coupling strength
```
Compare the theoretical value and the estimated value of $g$.
```
print(f'Theoretical coupling strength is {g * 1e3 / (2 * pi)} MHz')
print(f'Estimated coupling strength is {(gEst * 1e3 / (2 * pi)).round(1)} MHz')
```
The coupling strength between the readout cavity and the qubit, obtained by calibrating the dispersive shift and calculating indirectly, is 135 MHz, which is in good agreement with the theoretical value of 134.0 MHz.
## Measuring the decay rate
Once we have the cavity frequency spectrum, we are able to estimate the decay rate $\kappa$ from the linewidth of the Lorentzian function. Here, we use the function `fitLorentzian`, passing the frequency-sweep range and the reflected signal, to fit the spectrum and estimate the linewidth $\kappa$.
```
param, cov = fitLorentzian(freqRange, vq0[0]) # Fit the curve using lorentzian function
kappaEst = abs(param[2]) # Estimated linewidth
plt.plot(freqRange / (2 * pi), lorentzian(freqRange, param[0], param[1], param[2], param[3]), '.')
plt.plot(freqRange / (2 * pi), vq0[0])
plt.xlabel('$\omega_d$ (GHz)')
plt.ylabel('signal (a.u.)')
plt.show()
```
Compare the theoretical value and the estimated value of decay rate (or linewidth).
```
kappa = readoutModel.dissipation
print(f'Theoretical linewidth is {kappa * 1e3 / (2 * pi)} MHz')
print(f'Estimated linewidth is {(kappaEst * 1e3 / (2 * pi)).round(3)} MHz')
```
From the simulation results, we can see that the decay rate $\kappa$ set in the master equation is 2.0 MHz, while the linewidth obtained from the spectrum is 1.987 MHz, indicating that the interaction strength between the readout cavity and the environment can be calibrated indirectly in experiment by sweeping the readout cavity frequency and extracting the linewidth.
## Summary
Users can click on this link [tutorial-readout-cavity-calibration.ipynb](https://github.com/baidu/Quanlse/blob/main/Tutorial/EN/tutorial-readout-cavity-calibration.ipynb) to jump to the corresponding GitHub page for this Jupyter Notebook documentation and run this tutorial. You can try different hardware parameters of the readout cavity and run the codes in this tutorial to simulate the cavity calibration in the superconducting quantum computing experiment.
## Reference
\[1\] [Blais, Alexandre, et al. "Cavity quantum electrodynamics for superconducting electrical circuits: An architecture for quantum computation." *Physical Review A* 69.6 (2004): 062320.](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.69.062320)
\[2\] [Koch, Jens, et al. "Charge-insensitive qubit design derived from the Cooper pair box." *Physical Review A* 76.4 (2007): 042319.](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.76.042319)
\[3\] [Lindblad, Goran. "On the generators of quantum dynamical semigroups." *Communications in Mathematical Physics* 48.2 (1976): 119-130.](https://link.springer.com/article/10.1007/bf01608499)
\[4\] [Bianchetti, R., et al. "Dynamics of dispersive single-qubit readout in circuit quantum electrodynamics." *Physical Review A* 80.4 (2009): 043840.](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.80.043840)
# Lecture 34: VGGNet
```
%matplotlib inline
import tqdm
import copy
import time
import torch
import numpy as np
import torchvision
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import matplotlib.pyplot as plt
from torchvision import transforms,datasets, models
print(torch.__version__) # This code has been updated for PyTorch 1.0.0
```
## Load Data:
```
apply_transform = transforms.Compose([transforms.Resize(224),transforms.ToTensor()])
BatchSize = 4
trainset = datasets.CIFAR10(root='./CIFAR10', train=True, download=True, transform=apply_transform)
trainLoader = torch.utils.data.DataLoader(trainset, batch_size=BatchSize,
shuffle=True, num_workers=4) # Creating dataloader
testset = datasets.CIFAR10(root='./CIFAR10', train=False, download=True, transform=apply_transform)
testLoader = torch.utils.data.DataLoader(testset, batch_size=BatchSize,
shuffle=False, num_workers=4) # Creating dataloader
# Size of train and test datasets
print('No. of samples in train set: '+str(len(trainLoader.dataset)))
print('No. of samples in test set: '+str(len(testLoader.dataset)))
```
## Define network architecture
```
net = models.vgg16()
print(net)
# Counting number of trainable parameters
totalParams = 0
for name,params in net.named_parameters():
print(name,'-->',params.size())
totalParams += np.sum(np.prod(params.size()))
print('Total number of parameters: '+str(totalParams))
# Copying initial weights for visualization
init_weightConv1 = copy.deepcopy(net.features[0].weight.data) # 1st conv layer
init_weightConv2 = copy.deepcopy(net.features[2].weight.data) # 2nd conv layer
# Check availability of GPU
use_gpu = torch.cuda.is_available()
# use_gpu = False # Uncomment in case of GPU memory error
if use_gpu:
print('GPU is available!')
device = "cuda"
else:
print('GPU is not available!')
device = "cpu"
net = net.to(device)
```
## Define loss function and optimizer
```
criterion = nn.NLLLoss() # Negative Log-likelihood
optimizer = optim.Adam(net.parameters(), lr=1e-4) # Adam
```
## Train the network
```
iterations = 5
trainLoss = []
testAcc = []
start = time.time()
for epoch in range(iterations):
epochStart = time.time()
runningLoss = 0
net.train() # For training
for data in tqdm.tqdm_notebook(trainLoader):
inputs,labels = data
inputs, labels = inputs.to(device), labels.to(device)
# Initialize gradients to zero
optimizer.zero_grad()
# Feed-forward input data through the network
outputs = net(inputs)
# Compute loss/error
loss = criterion(F.log_softmax(outputs,dim=1), labels)
# Backpropagate loss and compute gradients
loss.backward()
# Update the network parameters
optimizer.step()
# Accumulate loss per batch
runningLoss += loss.item()
avgTrainLoss = runningLoss/(50000.0/BatchSize)
trainLoss.append(avgTrainLoss)
# Evaluating performance on test set for each epoch
net.eval() # For testing [Affects batch-norm and dropout layers (if any)]
running_correct = 0
with torch.no_grad():
for data in tqdm.tqdm_notebook(testLoader):
inputs,labels = data
inputs = inputs.to(device)
outputs = net(inputs)
_, predicted = torch.max(outputs.data, 1)
if use_gpu:
predicted = predicted.cpu()
running_correct += (predicted == labels).sum()
avgTestAcc = float(running_correct)*100/10000.0
testAcc.append(avgTestAcc)
# Plotting training loss vs Epochs
fig1 = plt.figure(1)
plt.plot(range(epoch+1),trainLoss,'r-',label='train')
if epoch==0:
plt.legend(loc='upper left')
plt.xlabel('Epochs')
plt.ylabel('Training loss')
# Plotting testing accuracy vs Epochs
fig2 = plt.figure(2)
plt.plot(range(epoch+1),testAcc,'g-',label='test')
if epoch==0:
plt.legend(loc='upper left')
plt.xlabel('Epochs')
plt.ylabel('Testing accuracy')
epochEnd = time.time()-epochStart
print('Iteration: {:.0f} /{:.0f} ; Training Loss: {:.6f} ; Testing Acc: {:.3f} ; Time consumed: {:.0f}m {:.0f}s '\
.format(epoch + 1,iterations,avgTrainLoss,avgTestAcc,epochEnd//60,epochEnd%60))
end = time.time()-start
print('Training completed in {:.0f}m {:.0f}s'.format(end//60,end%60))
# Copying trained weights for visualization
trained_weightConv1 = copy.deepcopy(net.features[0].weight.data)
trained_weightConv2 = copy.deepcopy(net.features[2].weight.data)
if use_gpu:
trained_weightConv1 = trained_weightConv1.cpu()
trained_weightConv2 = trained_weightConv2.cpu()
```
## Visualization of weights
```
# functions to show an image
def imshow(img, strlabel):
npimg = img.numpy()
npimg = np.abs(npimg)
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 10
fig_size[1] = 10
plt.rcParams["figure.figsize"] = fig_size
plt.figure()
plt.title(strlabel)
plt.imshow(np.transpose(npimg, (1, 2, 0)))
imshow(torchvision.utils.make_grid(init_weightConv1,nrow=8,normalize=True),'Initial weights: conv1')
imshow(torchvision.utils.make_grid(trained_weightConv1,nrow=8,normalize=True),'Trained weights: conv1')
imshow(torchvision.utils.make_grid(init_weightConv1-trained_weightConv1,nrow=8,normalize=True),'Difference of weights: conv1')
imshow(torchvision.utils.make_grid(init_weightConv2[0].unsqueeze(1),nrow=8,normalize=True),'Initial weights: conv2')
imshow(torchvision.utils.make_grid(trained_weightConv2[0].unsqueeze(1),nrow=8,normalize=True),'Trained weights: conv2')
imshow(torchvision.utils.make_grid(init_weightConv2[0].unsqueeze(1)-trained_weightConv2[0].unsqueeze(1),nrow=8,normalize=True),'Difference of weights: conv2')
```
# TFX on KubeFlow Pipelines Example
This notebook should be run inside a KF Pipelines cluster.
### Install TFX and KFP packages
```
!pip3 install https://storage.googleapis.com/ml-pipeline/tfx/tfx-0.12.0rc0-py2.py3-none-any.whl
!pip3 install https://storage.googleapis.com/ml-pipeline/release/0.1.16/kfp.tar.gz --upgrade
```
### Enable DataFlow API for your GKE cluster
<https://console.developers.google.com/apis/api/dataflow.googleapis.com/overview>
## Get the TFX repo with sample pipeline
```
!git clone https://github.com/tensorflow/tfx
# copy the trainer code to a storage bucket as the TFX pipeline will need that code file in GCS
from tensorflow import gfile
gfile.Copy('tfx/examples/chicago_taxi_pipeline/taxi_utils.py', 'gs://<my bucket>/<path>/taxi_utils.py')
```
## Configure the TFX pipeline example
Run the `%load` command below to pull the pipeline configuration file into the cell, then edit it in place.
```
%load tfx/examples/chicago_taxi_pipeline/taxi_pipeline_kubeflow.py
```
Configure:
- Set `_input_bucket` to the GCS directory where you've copied taxi_utils.py. I.e. gs://<my bucket>/<path>/
- Set `_output_bucket` to the GCS directory where you want the results to be written
- Set GCP project ID (replace my-gcp-project). Note that it should be project ID, not project name.
The dataset in BigQuery has about 100M rows; you can change the query parameters in the WHERE clause to limit the number of rows used.
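As a sketch, the edits described above amount to setting something like the following in the loaded `taxi_pipeline_kubeflow.py`; all values below are placeholders, not working defaults.
```
# Placeholder values only -- substitute your own bucket paths and project ID.
_input_bucket = 'gs://my-bucket/path'       # where taxi_utils.py was copied
_output_bucket = 'gs://my-bucket/output'    # where pipeline results should be written
_project_id = 'my-gcp-project'              # GCP project ID (not the project name)
```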
```
%load tfx/examples/chicago_taxi_pipeline/taxi_pipeline_kubeflow.py
```
## Compile the pipeline and submit a run to the Kubeflow cluster
```
# Get or create a new experiment
import kfp
client = kfp.Client()
experiment_name='TFX Examples'
try:
experiment_id = client.get_experiment(experiment_name=experiment_name).id
except:
experiment_id = client.create_experiment(experiment_name).id
pipeline_filename = 'chicago_taxi_pipeline_kubeflow.tar.gz'
#Submit a pipeline run
run_name = 'Run 1'
run_result = client.run_pipeline(experiment_id, run_name, pipeline_filename, {})
```
### Connect to the ML Metadata Store
```
!pip3 install ml_metadata
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
import os
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.mysql.host = os.getenv('MYSQL_SERVICE_HOST')
connection_config.mysql.port = int(os.getenv('MYSQL_SERVICE_PORT'))
connection_config.mysql.database = 'mlmetadata'
connection_config.mysql.user = 'root'
store = metadata_store.MetadataStore(connection_config)
# Get all output artifacts
store.get_artifacts()
# Get a specific artifact type
# TFX types
# types = ['ModelExportPath', 'ExamplesPath', 'ModelBlessingPath', 'ModelPushPath', 'TransformPath', 'SchemaPath']
store.get_artifacts_by_type('ExamplesPath')
```