**Dataset schema.** Each row contains a `text` column (strings of 87 to 777k characters) together with the following metadata columns: `meta.hexsha` (40-character string), `meta.size` (int64, 682 to 1.05M), `meta.ext` and `meta.lang` (each a single class); for each of the `max_stars`, `max_issues` and `max_forks` groups, a repo path (8–226 chars), repo name (8–109 chars), repo head hexsha (40 chars), a license list (1–5 entries), a nullable count (1 to 23.9k stars, 1 to 15.1k issues, 1 to 6.05k forks) and nullable event min/max datetimes (24-char strings); `meta.avg_line_length` (float64, 15.5–967k), `meta.max_line_length` (int64, 42–993k), `meta.alphanum_fraction` (float64, 0.08–0.97), `meta.converted` (bool), `meta.num_tokens` (int64, 33–431k), `meta.lm_name` (single class), `meta.lm_label` (3 classes), `meta.lm_q1_score` (0.56–0.98), `meta.lm_q2_score` (0.55–0.97), `meta.lm_q1q2_score` (0.5–0.93), `text_lang` (53 classes), `text_lang_conf` (0.03–1) and `label` (0–1).
```python
import numpy as np
import matplotlib.pyplot as plt
```
# Compress Oil Flow Data using Probabilistic PCA
## 3 Phase Data
### Description
This is synthetic data modelling non-intrusive measurements on a pipe-line transporting a mixture of oil, water and gas. The flow in the pipe takes one out of three possible configurations: horizontally stratified, nested annular or homogeneous mixture flow. The data lives in a 12-dimensional measurement space, but for each configuration there are only two degrees of freedom: the fraction of water and the fraction of oil. (The fraction of gas is redundant, since the three fractions must sum to one.) Hence, the data lives on a number of ‘sheets’ which locally are approximately 2-dimensional.
### Details
The files/variables contain:
1. Oilflow.txt contains 1000 data points; each data point is represented by a 12-dimensional vector.
2. Oilflow_label.txt contains the configuration of each data point represented as a one-hot vector: (1, 0, 0) horizontally stratified, (0, 1, 0) nested annular and (0, 0, 1) homogeneous mixture.
source: adapted from https://github.com/anirudhseth/GPLVM-and-PCA/blob/master/Datasets
Let's first import the data and configurations into x and x_label respectively. We also create lists of colors for later visualization (different colors indicate different configurations).
```python
x = np.loadtxt('./data/Oilflow.txt')[..., np.newaxis]
x_label = np.loadtxt('./data/Oilflow_label.txt')
x_label = np.argmax(x_label, axis=1)
color = []
color_edge = []
shape = []
for i in range(x.shape[0]):
if x_label[i] == 0:
color += ['pink']
color_edge += ['orange']
elif x_label[i] == 1:
color += ['green']
color_edge += ['black']
elif x_label[i] == 2:
color += ['blue']
color_edge += ['purple']
```
Denote the dimension of each data point by d_dim (12) and define the dimension of the latent space, z_dim, as 2. Denote the number of data points by n_dim.
```python
d_dim = x.shape[-2]
z_dim = 2
n_dim = x.shape[0]
```
Assume $p(\mathbf{z}_i) = \mathcal{N}(\mathbf{0}, \mathbf{I})$ and $ \mathbf{x}_i = \mathbf{W}\mathbf{z}_i + \boldsymbol{\mu} + \boldsymbol{\epsilon}_i$ where $\boldsymbol{\epsilon}_i \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$.
We are going to use the EM algorithm to optimize the parameters $\mathbf{W}, \boldsymbol{\mu}$ and $\sigma^2$.
Denote $\mathbf{W}$ as W, $\boldsymbol{\mu}$ as mu and $\sigma^2$ as sigma_2.
We iterate EM via the following update equations.
### E-step
We assign $q(\mathbf{z}_n) = p(\mathbf{z}_n|\mathbf{x},\mathbf{W}_{old},\boldsymbol{\mu}_{old},\sigma^2_{old})$
We evaluate
\begin{align}
\mathbb{E}_{q(\mathbf{z}_n)}[\mathbf{z}_n] = \mathbf{M}^{-1}\mathbf{W}^T_{old}(\mathbf{x}_n-\boldsymbol{\mu}_M)
\end{align} where $\boldsymbol{\mu}_M = \frac{1}{N}\sum_{n=1}^N \mathbf{x}_n = \overline{\mathbf{x}}$
and
\begin{align}
\mathbb{E}_{q(\mathbf{z}_n)}[\mathbf{z}_n\mathbf{z}_n^T] &= Cov(\mathbf{z}_n,\mathbf{z}_n) + \mathbb{E}_{q(\mathbf{z}_n)}[\mathbf{z}_n]\mathbb{E}_{q(\mathbf{z}_n)}[\mathbf{z}_n]^T\\
&=\sigma^2\mathbf{M}^{-1}+\mathbb{E}_{q(\mathbf{z}_n)}[\mathbf{z}_n]\mathbb{E}_{q(\mathbf{z}_n)}[\mathbf{z}_n]^T
\end{align}
where $\mathbf{M} =\mathbf{W}_{old}^T\mathbf{W}_{old}+\sigma_{old}^2\mathbf{I}$
```python
def E_step(W, sigma_2, mu):
# a @ b = np.matmul(a,b); a * b is element-wise multiplication
W_T = W.transpose((0, -1, -2))
M = W_T @ W + sigma_2 * np.eye(z_dim)
M_inv = np.linalg.inv(M)
    # compute E[z_n] as post_mean and E[z_n z_n^T] as post_mean_dot
post_mean = (M_inv @ W_T) @ (x-mu)
post_mean_T = post_mean.transpose((0, -1, -2))
post_cov = sigma_2 * M_inv
post_mean_dot = post_cov + post_mean @ post_mean_T
return post_mean, post_mean_dot
```
### M-step
Given the $\mathbb{E}_{q(\mathbf{z}_n)}[\mathbf{z}_n]$ and $\mathbb{E}_{q(\mathbf{z}_n)}[\mathbf{z}_n\mathbf{z}_n^T]$ computed in the E-step, we update the model parameters $\mathbf{W}$ and $\sigma^2$ (The $\boldsymbol{\mu}_M$ is computed as mu$=\frac{1}{N}\sum_{n=1}^N \mathbf{x}_n$).
\begin{align}
\boldsymbol{\mu}_M &= \overline{\mathbf{x}} =\frac{1}{N}\sum_{n=1}^N \mathbf{x}_n \\
\mathbf{W}_{new}&=\Big[\sum_{n=1}^{N}(\mathbf{x}_n-\boldsymbol{\mu}_M)\mathbb{E}_{q(\mathbf{z}_n)}[\mathbf{z}_n]^T \Big]\Big[\sum_{n=1}^{N}\mathbb{E}_{q(\mathbf{z}_n)}[\mathbf{z}_n\mathbf{z}_n^T] \Big]^{-1}\\
\sigma^2_{new} &= \frac{1}{ND}\sum_{n=1}^{N} \Big[||\mathbf{x}_n-\boldsymbol{\mu}_M||^2 + trace\big[\mathbb{E}_{q(\mathbf{z}_n)}[\mathbf{z}_n\mathbf{z}_n^T]\mathbf{W}^T\mathbf{W}\big] - 2\mathbb{E}_{q(\mathbf{z}_n)} [\mathbf{z}_n]^T\mathbf{W}^T(\mathbf{x}_n-\boldsymbol{\mu}_M)\Big]
\end{align}
```python
def M_step(x, W, sigma_2, mu, post_mean, post_mean_dot):
post_mean_T = post_mean.transpose((0, -1, -2))
W = ((x-mu) @ post_mean_T).sum(axis=0, keepdims=True) @ np.linalg.inv(post_mean_dot.sum(axis=0, keepdims=True))
W_T = W.transpose((0, -1, -2))
sigma_2 = (((x - mu) ** 2).sum(axis=-2)
+ np.trace(post_mean_dot @ W_T @ W, axis1=-2, axis2=-1)[..., np.newaxis]
- (2 * post_mean_T @ W_T @ (x - mu)).squeeze(-1)
).sum(axis=0) / (d_dim * n_dim)
return W, sigma_2
```
Putting everything together, we can learn the model parameters and visualize the learnt 2-dimensional latent space.
```python
# initialize parameters
# x = wz + mu
W = np.random.rand(1, d_dim, z_dim)
W_T = W.transpose((0, -1, -2))
mu = np.mean(x, axis=0)
# sigma square
sigma_2 = np.exp2(np.random.rand(1))
# stop precision
epsilon = 1.0
# maximum iterations
max_itr = 100
# iteration count
itr = 0
# plotting frequency
plot_freq = 1
while epsilon > 0.001 and itr < max_itr:
# E-step
post_mean, post_mean_dot = E_step(W, sigma_2, mu)
# M-step
# save old W and sigma_2 for computing stop criterion
_W = W.copy()
_sigma_2 = sigma_2.copy()
W, sigma_2 = M_step(x, W, sigma_2, mu, post_mean, post_mean_dot)
epsilon = np.sqrt(np.mean((W-_W)**2) + np.mean((sigma_2-_sigma_2)**2))
itr += 1
    # evaluate log-likelihood of original data log p(x) as log_px
    W_T = W.transpose((0, -1, -2))  # refresh the transpose after the M-step update of W
    C = W @ W_T + sigma_2 * np.eye(d_dim)
log_px = -0.5 * n_dim * np.log(np.linalg.det(C)) - 0.5 * ((x - mu).transpose((0, -1, -2)) @ np.linalg.inv(C) @ (x-mu)).sum(axis=0).squeeze(-1)
    print(f'iteration:{itr}, log-likelihood:{log_px.item()}, epsilon:{epsilon}')
# plot the z space
if itr % plot_freq == 0:
plt.scatter(post_mean[:, 0, 0], post_mean[:, 1, 0], c=color,
linewidths=2,
marker="s",
edgecolor=color_edge,
s=30)
plt.xlabel("1-st dimension of the latent space")
plt.ylabel("2-nd dimension of the latent space")
plt.pause(3)
```
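As an optional cross-check (an added sketch, not part of the original notebook), maximum-likelihood PPCA also has a closed-form solution based on the eigendecomposition of the sample covariance; the noise-variance estimate below should be close to the `sigma_2` found by EM, while the learned `W` only agrees up to a rotation of the latent space.
```python
# Closed-form ML estimate for PPCA, used here purely as a sanity check of the EM result.
S = np.cov(x.squeeze(-1).T)                      # (d_dim, d_dim) sample covariance
eigval, eigvec = np.linalg.eigh(S)               # eigenvalues in ascending order
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # re-sort in decreasing order
sigma_2_ml = eigval[z_dim:].mean()               # average of the discarded eigenvalues
W_ml = eigvec[:, :z_dim] * np.sqrt(eigval[:z_dim] - sigma_2_ml)
print(sigma_2_ml)                                # compare with the EM estimate of sigma_2
```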
*Row metadata: `L7/probabilistic_pca.ipynb` from `crslab/CS5340-notebooks` (head `ddc403ad5664315ed74a602db6d9a9401f4cbea9`), license ["MIT"]; hexsha `7df03cf81d8efe714f15d20610cb481a61a7a931`, size 348,368, ext `ipynb`, lang Jupyter Notebook; star/issue counts null, max_forks_count 1 (fork events 2020-01-21T13:54:28.000Z); avg/max line length 590.454237 / 26,472, alphanum_fraction 0.947047, converted true, num_tokens 2,331; lm_name `Qwen/Qwen-72B`, lm_label "1. YES 2. YES", lm_q1/q2/q1q2 scores 0.872347 / 0.828939 / 0.723123; text_lang `__label__eng_Latn` (conf 0.626419), label 0.518388.*
# Müller Brown I
Throughout the tutorials, we are going to demonstrate how to use the totally accurate pathway simulator (TAPS). The tutorials aim at users who want to understand the basic code of TAPS, are presumably familiar with scientific Python programming such as `numpy`, and have basic knowledge of chemical reaction pathways. The structure of TAPS begins with the `Paths` class. A `Paths` object contains other classes such as `Cartesian`, a coordinate representation with a kinetic energy calculator, `Model`, a potential and static energy calculator, `PathFinder`, a pathway optimization algorithm, and `Database`, where calculated image data are stored. The reason for this hierarchy is to efficiently link each class when they need each other. For example, a static calculation such as a gradient, potential or Hessian requires the coordinate information and sometimes the data of a previous calculation. When the calculation is carried out, the `paths` object is handed over to the `Model` object so that the `Model` can use the `Cartesian` or `Database` objects when needed. Calculating the kinetic energy works similarly: one might need only the coordinate information, but sometimes previous data are necessary, for example when estimating the inertia of an atomic system given only two angles. In that case, the inertia is calculated using the data in the `Database` object.
The Müller Brown (MB) potential is a great example to start with. MB is a 2D model potential with a few parameters, commonly used as a test bench. This tutorial will construct a pathway on the MB potential and calculate the potential and forces at each image. Our goal here is to gain a basic understanding of TAPS by working through
1. Construction of a pathway
2. Potential calculation
3. Direct action optimization (DAO)
examples.
The initial and final points of the Müller Brown potential are (-0.55822365, 1.44172582) and (0.6234994, 0.02803776), respectively. We are going to make a simple linear pathway connecting the two ends using numpy.
```python
import numpy as np
N = 300
x = np.linspace(-0.55822365, 0.6234994, N)
y = np.linspace(1.44172582, 0.02803776, N)
coords = np.array([x, y])
coords.shape
```
(2, 300)
## Construction of a pathway
`Cartesian` contains `coords`, the coordinate representation of the consecutive images, `epoch`, the time spent for the transition between the two states, and `unit`. `coords` should be a $(D\times N)$ array or a $(3\times A \times N)$ array, where $D$ is the dimension, $N$ is the number of consecutive images (including both ends) and $A$ is the number of atoms. $N$ is the last index because of the row-major ordering of `python`.
Creating a `Cartesian` object is done by simply putting the `numpy` array into the `coords` argument.
```python
from taps.coords import Cartesian
coords = Cartesian(coords=coords)
print(coords.shape, coords.D, coords.N, coords.epoch)
```
(2, 300) 2 300 3
<!---`Cartesian` is an array like object that can be treat as an `numpy` array.
We are considering descrete example of a pathway, where atomic path are considered as a consecutive images. `coords` is coordinate having a shape $(D\times N)$ matrix where $D$ stands for dimension and $N$ stands for the number of consecutive images. Beginning....
-->
## Potential calculation
MB potential is given by
$$ V\left(x,y\right) = \sum_{\mu=1}^{4}{A_\mu e^{a_\mu \left(x-x_\mu^0\right)^2 + b_\mu \left(x-x_\mu^0\right) \left(y-y_\mu^0\right) + c_\mu\left(y-y_\mu^0\right)^2}}$$
The `Model` object in TAPS has a few pre-defined toy models on which you can test your own algorithm. If you want to know the parameters or other info about a specific model, type "?", for example
```python
from taps.models import MullerBrown
?MullerBrown
model = MullerBrown()
print(model.A)
```
[-2.  -1.  -1.7   0.15]
Init signature: MullerBrown(results=None, label=None, prj=None, _cache=None, unit='eV')
Docstring:
Muller Brown Potential
.. math::
    \begin{equation}
    V\left(x,y\right) =
    \sum_{\mu=1}^{4}{A_\mu e^{a_\mu \left(x-x_\mu^0\right)^2
    + b_\mu \left(x-x_\mu^0\right) \left(y-y_\mu^0\right)
    + c_\mu\left(y-y_\mu^0\right)^2}}
    \end{equation}
* Initial position = (-0.55822365, 1.44172582)
* Final position = (0.6234994, 0.02803776)
Parameters
----------
A = np.array([-200, -100, -170, 15])
a = np.array([-1, -1, -6.5, 0.7])
b = np.array([0, 0, 11, 0.6])
c = np.array([-10, -10, -6.5, 0.7])
x0 = np.array([1, 0, -0.5, -1])
y0 = np.array([0, 0.5, 1.5, 1])
potential_unit = 'unitless'
Example
-------
>>> import numpy as np
>>> N = 300
>>> x = np.linspace(-0.55822365, 0.6234994, N)
>>> y = np.linspace(1.44172582, 0.02803776, N)
>>> paths.coords = np.array([x, y])
File: ~/anaconda3/envs/py37/lib/python3.7/site-packages/taps/models/mullerbrown.py
Type: type
Subclasses:
The `Paths` class contains `coords`, which holds the `Cartesian` object, and `model`, where the `Model` object is stored. To calculate properties along the pathway, we need a wrapper that connects both `Model` and `Cartesian`; `Paths` is the class that conveniently passes these objects around. Calculating the potential, gradients and Hessian can be done by scripting
```python
paths.get_potential()
paths.get_gradients()
paths.get_hessian()
```
By default, it calculates properties over all consecutive images except the two end points. If one wants to include both ends, one can use the keyword `index`. `index` takes a list of step numbers and calculates only at those steps.
```python
from taps.paths import Paths
paths = Paths(coords=coords, model=model)
print(paths.get_potential(index=np.s_[5:10]))
print(paths.get_gradients(index=[1, 2, 3]).shape)
```
[-1.44794167 -1.43962909 -1.42985987 -1.41866061 -1.40606162]
(2, 3)
## Visualization
In the case of a 2D model, the calculation is assumed to be very light. Thus, the visualization in the package tries to show the properties not only along the pathway but also the potential energy surface around it.
The `Plotter` object visualizes the coordinates automatically with the PES around them. It is not critical for the reaction calculation, but it gives you insight into the surroundings. By default, a 3D pathway such as an atomic system does not do a PES map calculation; it only gives you the potential, kinetic and total energy along the pathway. Viewing the `paths` is simply,
```python
from taps.visualize import view
view(paths)
```
It showed something, but since the MB potential increases exponentially outside its boundary, automatic resizing or map leveling does not help in understanding the view. To visualize it correctly you would need to tune all the parameters of the `plotter`. Fortunately, in this example, we can use a pre-defined parameter set that focuses on the important properties by just passing the keyword `viewer`.
```python
view(paths, viewer='MullerBrown')
```
Manipulating the `Cartesian` can be done by manually changing the numbers in the `coords` array. However, manipulating a pathway manually is a much more difficult process than changing a single coordinate of some atomic system. If one wants to test a random pathway rather than a linear pathway, one can fluctuate the pathway by adjusting the sine components of the `coords`. TAPS has a built-in fluctuation function that does just that. You can type
```python
paths.fluctuate()
```
and it will randomize the pathway. The keyword `fluctuation` adjusts the amplitude of the fluctuation.
```python
paths.fluctuate(fluctuation=1)
```
```python
view(paths, viewer='MullerBrown')
```
Randomizing a pathway not only gives a more pleasing look but also makes the pathway less biased: it is a simple way to test that a pathway optimization algorithm works on any pathway.
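For intuition, here is a minimal numpy sketch (independent of TAPS and added only for illustration) of what fluctuating a linear path by random sine modes might look like; the `amplitude` argument plays the same role as the `fluctuation` keyword above.
```python
import numpy as np

def fluctuate_path(coords, amplitude=0.1, n_modes=5, rng=None):
    """Perturb a (D, N) path by random sine modes that vanish at both end points."""
    rng = rng or np.random.default_rng()
    D, N = coords.shape
    t = np.linspace(0.0, 1.0, N)
    perturbed = np.array(coords, dtype=float)
    for k in range(1, n_modes + 1):
        # sin(k*pi*t) is zero at t = 0 and t = 1, so the end points stay fixed
        perturbed += amplitude * rng.normal(size=(D, 1)) * np.sin(k * np.pi * t)
    return perturbed
```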
## Direct Action Optimization
The `PathFinder` class performs pathway optimization. Here we use the direct action optimization (DAO) method to minimize the action of the pathway. `DAO` optimizes the pathway by lowering the value of its action using `scipy`'s `minimize` package. Altogether, this requires the parameters needed for `DAO` and the parameters for `scipy`.
To see the parameters required for `DAO`, you can type "?":
```python
from taps.pathfinder import DAO
?DAO
```
Init signature:
DAO(
    action_kwargs=None,
    search_kwargs=None,
    use_grad=True,
    logfile=None,
    **kwargs,
)
Docstring:
Direct action optimizer (DAO)
Parameters
----------
use_grad: bool, default True
    Whether to use the gradient form of the action.
search_kwargs: Dict,
    Optimization keyword for scipy.minimize
action_kwargs: Dict of dict,
    {'Onsager Machlup': {'gamma': 2},
     'Total Energy restraint': ... }
File: ~/anaconda3/envs/py37/lib/python3.7/site-packages/taps/pathfinder/dao.py
Type: type
Subclasses:
We set the target energy `Et` to -0.45, `muE` to 1., tol to 5e-2 and `gam` to 1., with the Onsager Machlup action and an energy conservation restraint. The Onsager Machlup action is
$$ S_\mathrm{OM} = \frac{\Delta V}{2} + \frac{1}{4} \sum_{n=0}^{N}\left[\frac{dt}{2\gamma}\left(\left|\nabla V\left( \mathbf{x}^{\left(n+1\right)} \right)\right|^2+\left|\nabla V\left( \mathbf{x}^{\left(n\right)} \right)\right|^2\right) - \left(\nabla V\left( \mathbf{x}^{\left(n+1\right)} \right) - \nabla V \left( \mathbf{x}^{\left(n\right)} \right)\right) \cdot \mathbf{v}^{\left(n\right)}+\frac{\gamma}{dt}\mathbf{v}^{\left(n\right)\mathbf{T}} \cdot \mathbf{v}^{\left(n\right)} \right] $$
with additional energy conservation restraint
$$\Theta_\mathrm{OM}\left(\left\{\mathbf{X}_\ast\right\},E_\mathrm{t}\right)=S_\mathrm{OM}+\mu_\mathrm{E}\sum_{n=0}^{N-1}\left(E^{\left(n\right)}-E_\mathrm{t}\right)^2$$
```python
from taps.pathfinder import DAO
action_kwargs = {
'Onsager Machlup':{
'gam': 1.,
},
'Energy Restraint':{
'muE': 1.,
'Et': -0.45
}
}
search_kwargs = {"method":"L-BFGS-B"}
finder = DAO(action_kwargs=action_kwargs,
search_kwargs=search_kwargs)
paths.finder = finder
paths.coords.epoch=6
paths.search()
```
=================================
Parameters
=================================
Onsager Machlup
gam : 1.0
Energy Restraint
muE : 1.0
Et : -0.45
Iter nfev njev S dS_max
Converge : 11877 12169 12169 1.8258 0.0128
Converge : 11879 12175 12175 1.8258 0.0017
=================================
Results
=================================
Onsager Machlup : 1.2553677166473254
Energy Restraint : 0.5704382610974073
Total S : 1.8258059777447326
```python
view(paths, viewer='MullerBrown')
```
*Row metadata: `notebooks/MullerBrown_I.ipynb` from `schinavro/taps` (head `c03b4e23ed299824c1b062225b837a0b7cfff216`), license ["MIT"]; hexsha `8b9468453a6228e778bcb76d10078e360d10b2e6`, size 251,827, ext `ipynb`, lang Jupyter Notebook; star/issue/fork counts null; avg/max line length 427.550085 / 42,944, alphanum_fraction 0.936814, converted true, num_tokens 3,565; lm_name `Qwen/Qwen-72B`, lm_label "1. YES 2. YES", lm_q1/q2/q1q2 scores 0.849971 / 0.774583 / 0.658374; text_lang `__label__eng_Latn` (conf 0.967854), label 0.367953.*
# 2022-01-26 Newton methods
* Office hours: Monday 9-10pm, Tuesday 2-3pm, Thursday 2-3pm
* This week will stay virtual. Plan is to start in-person the following Monday (Jan 31)
## Last time
* Discuss rootfinding as a modeling tool
* Limitations of bisection
* Convergence classes
* Intro to Newton methods
## Today
* Newton's method via Taylor series
* Convergence theory for fixed point methods
* Derive Newton's method via convergence theory
* Newton methods in computing culture
* Breaking Newton's method
```julia
using Plots
default(linewidth=4, legendfontsize=12)
f(x) = cos(x) - x
hasroot(f, a, b) = f(a) * f(b) < 0
function bisect_iter(f, a, b, tol)
hist = Float64[]
while abs(b - a) > tol
mid = (a + b) / 2
push!(hist, mid)
if hasroot(f, a, mid)
b = mid
else
a = mid
end
end
hist
end
```
bisect_iter (generic function with 1 method)
# Convergence classes
A convergent rootfinding algorithm produces a sequence of approximations $x_k$ such that $$\lim_{k \to \infty} x_k \to x_*$$ where $f(x_*) = 0$. For analysis, it is convenient to define the errors $e_k = x_k - x_*$. We say that an iterative algorithm is **$q$-linearly convergent** if $$\lim_{k \to \infty} |e_{k+1}| / |e_k| = \rho < 1.$$ (The $q$ is for "quotient".) A smaller convergence factor $\rho$ represents faster convergence. A slightly weaker condition ($r$-linear convergence or just **linear convergence**) is that
$$ |e_k| \le \epsilon_k $$
for all sufficiently large $k$ when the sequence $\{\epsilon_k\}$ converges $q$-linearly to 0.
```julia
hist = bisect_iter(f, -1, 3, 1e-10)
r = hist[end] # What are we trusting?
hist = hist[1:end-1]
scatter( abs.(hist .- r), yscale=:log10)
ks = 1:length(hist)
ρ = 0.5
plot!(ks, 4 * (ρ .^ ks))
```
# Newton-Raphson Method
Much of numerical analysis reduces to [Taylor series](https://en.wikipedia.org/wiki/Taylor_series), the approximation
$$ f(x) = f(x_0) + f'(x_0) (x-x_0) + f''(x_0) (x - x_0)^2 / 2 + \underbrace{\dotsb}_{O((x-x_0)^3)} $$
centered on some reference point $x_0$.
In numerical computation, it is exceedingly rare to look beyond the first-order approximation
$$ \tilde f_{x_0}(x) = f(x_0) + f'(x_0)(x - x_0) . $$
Since $\tilde f_{x_0}(x)$ is a linear function, we can explicitly compute the unique solution of $\tilde f_{x_0}(x) = 0$ as
$$ x = x_0 - \frac{f(x_0)}{f'(x_0)} . $$
This is Newton's Method (aka Newton-Raphson or Newton-Raphson-Simpson) for finding the roots of differentiable functions.
# An implementation
```julia
function newton(f, fp, x0; tol=1e-8, verbose=false)
x = x0
for k in 1:100 # max number of iterations
fx = f(x)
fpx = fp(x)
if verbose
println("[$k] x=$x f(x)=$fx f'(x)=$fpx")
end
if abs(fx) < tol
return x, fx, k
end
x = x - fx / fpx
end
end
f(x) = cos(x) - x
fp(x) = -sin(x) - 1
newton(f, fp, 1; tol=1e-15, verbose=true)
```
[1] x=1 f(x)=-0.45969769413186023 f'(x)=-1.8414709848078965
[2] x=0.7503638678402439 f(x)=-0.018923073822117442 f'(x)=-1.6819049529414878
[3] x=0.7391128909113617 f(x)=-4.6455898990771516e-5 f'(x)=-1.6736325442243012
[4] x=0.739085133385284 f(x)=-2.847205804457076e-10 f'(x)=-1.6736120293089505
[5] x=0.7390851332151607 f(x)=0.0 f'(x)=-1.6736120291832148
(0.7390851332151607, 0.0, 5)
# That's really fast!
* 10 digits of accuracy in 4 iterations.
* How is this convergence test different from the one we used for bisection?
* How can this break down?
$$ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} $$
```julia
newton(f, fp, -pi/2+.1; verbose=true)
```
[1] x=-1.4707963267948965 f(x)=1.5706297434417247 f'(x)=-0.0049958347219742905
[2] x=312.9170549232224 f(x)=-312.59435002533314 f'(x)=-0.05350037037604283
[3] x=-5529.927542752894 f(x)=5530.676391917825 f'(x)=-0.33725953180603474
[4] x=10868.945936970244 f(x)=-10868.376227850798 f'(x)=-0.17815359146505727
[5] x=-50136.70732252356 f(x)=50135.70777741902 f'(x)=-1.0301593101044748
[6] x=-1468.7903453577164 f(x)=1468.8859787856973 f'(x)=-1.9954166200403913
[7] x=-732.6603742863299 f(x)=731.8761094362295 f'(x)=-1.6204261800544972
[8] x=-281.0038172368656 f(x)=280.8358913898124 f'(x)=-1.9857996296872256
[9] x=-139.58174866993488 f(x)=139.79912372811944 f'(x)=-0.02391184615360531
[10] x=5706.856156210999 f(x)=-5707.008659764705 f'(x)=-1.9883029222393844
[11] x=2836.5648158265976 f(x)=-2837.5220962814674 f'(x)=-1.2891610809292957
[12] x=635.503879177757 f(x)=-634.8839651181479 f'(x)=-1.784669713127157
[13] x=279.7607629875442 f(x)=-280.7481464325292 f'(x)=-0.8416524942745961
[14] x=-53.80700796583557 f(x)=52.88592082634005 f'(x)=-1.3893564966145653
[15] x=-15.74196061822768 f(x)=14.742538472479499 f'(x)=-1.0339908015219368
[16] x=-1.4840596284124175 f(x)=1.5706876106501757 f'(x)=-0.0037592697076601622
[17] x=416.333158287511 f(x)=-416.4052274406877 f'(x)=-1.99739963763799
[18] x=207.8594910282616 f(x)=-206.9888910802111 f'(x)=-1.4919915959184906
[19] x=69.12620885264326 f(x)=-68.1262712417355 f'(x)=-1.0111702413616044
[20] x=1.7525179991632598 f(x)=-1.933241162917233 f'(x)=-1.9835340045381016
[21] x=0.7778731690296721 f(x)=-0.0654654835715871 f'(x)=-1.7017658368004631
[22] x=0.7394040200105153 f(x)=-0.0005337303513481828 f'(x)=-1.6738476794194503
[23] x=0.7390851556610822 f(x)=-3.756576461011463e-8 f'(x)=-1.6736120457726615
[24] x=0.7390851332151608 f(x)=-2.220446049250313e-16 f'(x)=-1.6736120291832148
(0.7390851332151608, -2.220446049250313e-16, 24)
# Convergence of fixed-point (by mean value theorem)
Consider the iteration
$$x_{k+1} = g(x_k)$$
where $g$ is a continuously differentiable function.
Suppose that there exists a fixed point $x_* = g(x_*)$. By the [mean value theorem](https://en.wikipedia.org/wiki/Mean_value_theorem), we have that
$$ x_{k+1} - x_* = g(x_k) - g(x_*) = g'(c_k) (x_k - x_*) $$
for some $c_k$ between $x_k$ and $x_*$.
Taking absolute values, $$|e_{k+1}| = |g'(c_k)| |e_k|,$$ which converges to zero if $|g'(c_k)| < 1$.
# Convergence of fixed-point (by Taylor series)
Consider the iteration
$$x_{k+1} = g(x_k)$$
where $g$ is a continuously differentiable function.
Suppose that there exists a fixed point $x_* = g(x_*)$. There exists a Taylor series at $x_*$,
$$ g(x_k) = g(x_*) + g'(x_*)(x_k - x_*) + O((x_k-x_*)^2) $$
and thus
\begin{align}
x_{k+1} - x_* &= g(x_k) - g(x_*) \\
&= g'(x_*) (x_k - x_*) + O((x_k - x_*)^2).
\end{align}
In terms of the error $e_k = x_k - x_*$,
$$ \left\lvert \frac{e_{k+1}}{e_k} \right\rvert = \lvert g'(x_*) \rvert + O(e_k).$$
## Poll: Is this convergence A=q-linear, B=r-linear, C=neither?
Recall the definition of q-linear convergence
$$ \lim_{k\to\infty} \left\lvert \frac{e_{k+1}}{e_k} \right\rvert = \rho < 1. $$
# Aside: [Big $O$ ("big oh") notation](https://en.wikipedia.org/wiki/Big_O_notation)
## Limit $n\to\infty$
We'd say an algorithm costs $O(n^2)$ if its running time on input of size $n$ is less than $c n^2$ for some constant $c$ and sufficiently large $n$.
Sometimes we write $\operatorname{cost}(\texttt{algorithm}, n) = O(n^2)$ or (preferably) $\operatorname{cost}(\texttt{algorithm}) \in O(n^2)$.
Note that $O(\log n) \subset O(n) \subset O(n\log n) \subset O(n^2) \subset \dotsb$ so it's correct to say "binary search is in $O(n^2)$", even though a sharper statement is also true.
We say the algorithm is in $\Theta(n^2)$ ("big theta") if
$$ c_1 n^2 < \operatorname{cost}(\texttt{algorithm}) < c_2 n^2 $$
for some positive constants $c_1,c_2$ and sufficiently large $n$.
## Limit $h \to 0$
In numerical analysis, we often have a small real number, and now the definitions take the limit as the small number goes to zero. So we say a term in an expression is in $O(h^2)$ if
$$ \lim_{h\to 0} \frac{\operatorname{term}(h)}{h^2} < \infty . $$
Big $O$ terms can be manipulated as
\begin{align}
h O(h^k) &= O(h^{k+1}) \\
O(h^k)/h &= O(h^{k-1}) \\
c O(h^k) &= O(h^k) \\
O(h^k) - O(h^k) &= ?
\end{align}
# Example of a fixed point iteration
We wanted to solve $\cos x - x = 0$, which occurs when $g(x) = \cos x$ is a fixed point.
```julia
xstar, _ = newton(f, fp, 1.)
g(x) = cos(x)
gp(x) = -sin(x)
@show xstar
@show gp(xstar)
plot([x->x, g], xlims=(-2, 3))
scatter!([xstar], [xstar],
label="\$x_*\$")
```
xstar = 0.739085133385284
gp(xstar) = -0.6736120293089505
```julia
function fixed_point(g, x, n)
xs = [x]
for k in 1:n
x = g(x)
append!(xs, x)
end
xs
end
xs = fixed_point(g, 2., 15)
plot!(xs, g.(xs), seriestype=:path, marker=:auto)
```
# Verifying fixed point convergence theory
$$ \left\lvert \frac{e_{k+1}}{e_k} \right\rvert \to \lvert g'(x_*) \rvert $$
```julia
@show gp(xstar)
es = xs .- xstar
es[2:end] ./ es[1:end-1]
```
gp(xstar) = -0.6736120293089505
15-element Vector{Float64}:
-0.9161855415615605
-0.15197657010596488
-0.734870205299266
-0.624132525531327
-0.7026257933893496
-0.6523498121376077
-0.6870971782336925
-0.664168570025122
-0.6798044680427148
-0.6693659427636027
-0.6764378047956165
-0.6716930541785153
-0.6748976495459512
-0.6727427617641084
-0.6741962236114177
```julia
scatter(abs.(es), yscale=:log10, label="fixed point")
plot!(k -> abs(gp(xstar))^k, label="\$|g'|^k\$")
```
# Plotting Newton convergence
```julia
function newton_hist(f, fp, x0; tol=1e-12)
x = x0
hist = []
for k in 1:100 # max number of iterations
fx = f(x)
fpx = fp(x)
push!(hist, [x fx fpx])
if abs(fx) < tol
return vcat(hist...)
end
x = x - fx / fpx
end
end
```
newton_hist (generic function with 1 method)
```julia
xs = newton_hist(f, fp, 1.97)
@show x_star = xs[end,1]
plot(xs[1:end-1,1] .- x_star, yscale=:log10, marker=:auto)
```
x_star = xs[end, 1] = 0.7390851332151607
## Poll: Is this convergence A=q-linear, B=r-linear, C=neither?
# Formulations are not unique (constants)
If $x = g(x)$ then
$$x = \underbrace{x + c(g(x) - x)}_{g_2}$$
for any constant $c \ne 0$. Can we choose $c$ to make $\lvert g_2'(x_*) \rvert$ small?
```julia
c = .5
g2(x) = x + c * (cos(x) - x)
g2p(x) = 1 + c * (-sin(x) - 1)
@show g2p(xstar)
plot([x->x, g, g2], ylims=(-5, 5), label=["x" "g" "g2"])
```
g2p(xstar) = 0.16319398534552476
```julia
xs = fixed_point(g2, 1., 15)
xs .- xstar
```
16-element Vector{Float64}:
0.26091486661471597
0.03106601954878585
0.004893162344945079
0.0007941171212053622
0.00012947850276123773
2.112687301181193e-5
3.4475537732392425e-6
5.62475483634195e-7
9.16501970982253e-8
1.4814399151852342e-8
2.2752605355336186e-9
2.2894852680366284e-10
-1.0499723313017739e-10
-1.594951948291623e-10
-1.683889694348295e-10
-1.6984036399492197e-10
# Formulations are not unique (functions)
If $x = g(x)$ then
$$x = \underbrace{x + h(x) \big(g(x) - x\big)}_{g_3(x)}$$
for any smooth $h(x) \ne 0$. Can we choose $h(x)$ to make $\lvert g_3'(x) \rvert$ small?
```julia
h(x) = -1 / (gp(x) - 1)
g3(x) = x + h(x) * (g(x) - x)
plot([x-> x, cos, g2, g3], ylims=(-5, 5))
```
* We don't know $g'(x_*)$ in advance because we don't know $x_*$ yet.
* This method converges very fast
* We actually just derived Newton's method.
# A fresh derivation of Newton's method
* A rootfinding problem $f(x) = 0$ can be converted to a fixed point problem $$x = x + f(x) =: g(x)$$ but there is no guarantee that $g'(x_*) = 1 + f'(x_*)$ will have magnitude less than 1.
* Problem-specific algebraic manipulation can be used to make $|g'(x_*)|$ small.
* $x = x + h(x) f(x)$ is also a valid formulation for any $h(x)$ bounded away from $0$.
* Can we choose $h(x)$ such that $$ g'(x) = 1 + h'(x) f(x) + h(x) f'(x) = 0$$ when $f(x) = 0$?
In other words,
$$ x_{k+1} = x_k + \underbrace{\frac{-1}{f'(x_k)}}_{h(x_k)} f(x_k) . $$
# Quadratic convergence!
$$ \left\lvert \frac{e_{k+1}}{e_k} \right\rvert \to \lvert g'(x_*) \rvert $$
* What does it mean that $g'(x_*) = 0$?
* It turns out that Newton's method has _locally quadratic_ convergence to simple roots,
$$\lim_{k \to \infty} \frac{|e_{k+1}|}{|e_k|^2} < \infty.$$
* "The number of correct digits doubles each iteration."
* Now that we know how to make a good guess accurate, the effort lies in getting a good guess.
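A one-line justification, added here for completeness and following the same Taylor argument as above: Newton's method is the fixed-point iteration with $g(x) = x - f(x)/f'(x)$, so $g'(x) = f(x) f''(x) / f'(x)^2$, which vanishes at a simple root $x_*$. Expanding $g$ to second order around $x_*$ gives
$$ e_{k+1} = g(x_k) - g(x_*) = \underbrace{g'(x_*)}_{=0} e_k + \tfrac 1 2 g''(x_*) e_k^2 + O(e_k^3), \qquad \text{hence} \qquad \lim_{k\to\infty} \frac{|e_{k+1}|}{|e_k|^2} = \tfrac 1 2 |g''(x_*)| < \infty . $$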
# Culture: fast inverse square root
The following code appeared literally (including comments) in the Quake III Arena source code (late 1990s).
```C
float Q_rsqrt( float number )
{
long i;
float x2, y;
const float threehalfs = 1.5F;
x2 = number * 0.5F;
y = number;
i = * ( long * ) &y; // evil floating point bit level hacking
i = 0x5f3759df - ( i >> 1 ); // what the fuck?
y = * ( float * ) &i;
y = y * ( threehalfs - ( x2 * y * y ) ); // 1st iteration
// y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed
return y;
}
```
We now have [vector instructions](https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=rsqrt&expand=2989,1224,4470) for approximate inverse square root.
More at https://en.wikipedia.org/wiki/Fast_inverse_square_root
# How does it work?
Let's look at the last line
```c
y = y * ( threehalfs - ( x2 * y * y ) ); // 1st iteration
```
We want a function $f(y)$ such that $f(1/\sqrt{x}) = 0$. One such function is
$$ f(y) = 1/y^2 - x, \quad f'(y) = -2/y^3.$$
There are others, e.g.,
$$f_1(y) = y^2 - 1/x,\quad f'(y) = 2 y,$$
but this would require a division.
Newton's method is
\begin{align}
y_{k+1} &= y_k - \frac{f(y_k)}{f'(y_k)} \\
&= y_k - \frac{1/y_k^2 - x}{-2/y_k^3} \\
&= y_k + \frac 1 2 (y_k - x y_k^3) \\
&= y_k \left(\frac 3 2 - \frac 1 2 x y_k^2\right)
\end{align}
# Rootfinding outlook
* Newton methods are immensely successful
* Convergence theory is local; we need good initial guesses (activity)
* Computing the derivative $f'(x)$ is *intrusive*
* Avoided by secant methods (approximate the derivative; activity)
* Algorithmic or numerical differentiation (future topics)
* Bisection is robust when conditions are met
* Line search (activity)
* When does Newton diverge?
* More topics
* Find *all* the roots
* Use Newton-type methods with bounds
* Times when Newton converges slowly
*Row metadata: `slides/2022-01-26-newton.ipynb` from `cu-numcomp/spring22` (head `f4c1f9287bff2c10645809e65c21829064493a66`), license ["MIT"]; hexsha `945cd8373304534408f501d1d3c2d09a44153cf3`, size 204,756, ext `ipynb`, lang Jupyter Notebook; star/issue counts null, max_forks_count 2 (fork events 2022-02-09T21:05:12.000Z to 2022-03-11T20:34:46.000Z); avg/max line length 107.709627 / 8,494, alphanum_fraction 0.646926, converted true, num_tokens 5,552; lm_name `Qwen/Qwen-72B`, lm_label "1. YES 2. YES", lm_q1/q2/q1q2 scores 0.888759 / 0.867036 / 0.770586; text_lang `__label__eng_Latn` (conf 0.702443), label 0.628661.*
# Dimensionality Reduction
## The Problem
There is an interesting tradeoff between model performance and a feature's dimensionality:
>*If the amount of available training data is fixed, then overfitting occurs if we keep adding dimensions. On the other hand, if we keep adding dimensions, the amount of **training data needs to grow exponentially fast to maintain the same coverage** and to avoid overfitting* ([Computer Vision for Dummies](http://www.visiondummy.com/2014/04/curse-dimensionality-affect-classification/)).
### Multi-Collinearity
In many cases, there is a high degree of correlation between many of the features in a dataset. For instance, suppose that you have two features that measure essentially the same underlying quantity; one of them adds a dimension without adding much new information. The highly correlated n-grams in the example below behave this way.
## Sparsity
- High dimensionality increases the sparsity of your features (**what NLP techniques have we used that illustrate this point?**)
- The density of the training samples decreases when dimensionality increases:
- Distance measures (Euclidean, for instance) start losing their effectiveness, because there isn't much difference between the max and min distances in higher dimensions.
- Many models that rely upon assumptions of Gaussian distributions (like OLS linear regression), Gaussian mixture models, Gaussian processes, etc. become less and less effective since their distributions become flatter and "fatter tailed".
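To make the distance-concentration point above concrete, here is a small illustrative sketch (added, not from the original notebook; the sample size and dimensions are arbitrary): the relative gap between the farthest and nearest neighbour of a reference point shrinks as the dimensionality grows.
```python
import numpy as np

rng = np.random.default_rng(42)
for d in [2, 10, 100, 1000]:
    points = rng.uniform(size=(500, d))
    dists = np.linalg.norm(points[1:] - points[0], axis=1)  # distances to one reference point
    contrast = (dists.max() - dists.min()) / dists.min()    # relative spread of distances
    print(f"d={d:5d}  relative contrast={contrast:.3f}")
```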
What is the amount of data needed to maintain **20% coverage** of the feature space? For 1 dimension, it is **20% of the entire population's dataset**. For a dimensionality of $D$:
$$
X^{D} = .20
$$
$$
(X^{D})^{\frac{1}{D}} = .20^{\frac{1}{D}}
$$
$$
X = \sqrt[D]{.20}
$$
You can approximate this as
```python
def coverage_requirement(requirement, D):
return requirement ** (1 / D)
x = []
y = []
for d in range(1,20):
y.append(coverage_requirement(0.10, d))
x.append(d)
import matplotlib.pyplot as plt
plt.plot(x,y)
plt.xlabel("Number of Dimensions")
plt.ylabel("Appromximate % of Population Dataset")
plt.title("% of Dataset Needed to Maintain 10% Coverage of Feature Space")
plt.show()
```
```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
reviews = pd.read_csv("mcdonalds-yelp-negative-reviews.csv", encoding='latin-1')
reviews = open("poor_amazon_toy_reviews.txt", encoding='latin-1')
#text = reviews["review"].values
text = reviews.readlines()
vectorizer = CountVectorizer(ngram_range=(3,3), min_df=0.01, max_df=0.75, max_features=200)
# tokenize and build vocab
vectorizer.fit(text)
vector = vectorizer.transform(text)
features = vector.toarray()
features_df = pd.DataFrame(features, columns=vectorizer.get_feature_names())
correlations = features_df.corr()
correlations_stacked = correlations.stack().reset_index()
#set column names
correlations_stacked.columns = ['Bi-Gram 1','Bi-Gram 2','Correlation']
correlations_stacked = correlations_stacked[correlations_stacked["Correlation"] < 1]
correlations_stacked = correlations_stacked.sort_values(by=['Correlation'], ascending=False)
correlations_stacked.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Bi-Gram 1</th>
<th>Bi-Gram 2</th>
<th>Correlation</th>
</tr>
</thead>
<tbody>
<tr>
<th>43</th>
<td>don waste your</td>
<td>waste your money</td>
<td>0.777888</td>
</tr>
<tr>
<th>197</th>
<td>waste your money</td>
<td>don waste your</td>
<td>0.777888</td>
</tr>
<tr>
<th>82</th>
<td>of the box</td>
<td>out of the</td>
<td>0.609369</td>
</tr>
<tr>
<th>110</th>
<td>out of the</td>
<td>of the box</td>
<td>0.609369</td>
</tr>
<tr>
<th>123</th>
<td>this for my</td>
<td>my year old</td>
<td>0.078176</td>
</tr>
</tbody>
</table>
</div>
```python
import seaborn as sns
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (15,15)
sns.heatmap(correlations)
```
# Principal Component Analysis
If you have an original matrix $Z$, you can decompose this matrix into two smaller matrices $X$ and $Q$.
## Important Points:
- Multiplying a vector by a matrix typically changes the direction of the vector. For instance:
<figure>
<figcaption><a href="https://lazyprogrammer.me/tutorial-principal-components-analysis-pca">Lazy Programmer-
Tutorial to PCA</a></figcaption>
</figure>
However, there are eigenvalues λ and eigenvectors $v$ such that
$$
\Sigma_X v = \lambda v
$$
Multiplying the eigenvectors $v$ with the eigenvalue $\lambda$ does not change the direction of the eigenvector.
Multiplying the eigenvector $v$ by the covariance matrix $\Sigma_X$ also does not change the direction of the eigenvector.
If our data $X$ is of shape $N \times D$, it turns out that we have $D$ eigenvalues and $D$ eigenvectors. This means we can arrange the eigenvalues $\lambda$ in decreasing order so that
$$
\lambda_3 > \lambda_2 > \lambda_5
$$
In this case, $\lambda_3$ is the largest eigenvalue, followed by $\lambda_2$, and then $\lambda_5$.
We can also rearrange the eigenvectors in the same way: $v_3$ will be the first column, $v_2$ will be the second column, and $v_5$ will be the third column.
We'll end up with two matrices $V$ and $\Lambda$:
<figure>
<figcaption><a href="https://lazyprogrammer.me/tutorial-principal-components-analysis-pca">Lazy Programmer-
Tutorial to PCA</a></figcaption>
</figure>
```python
# what is the shape of our features?
features.shape
```
(12700, 15)
```python
from sklearn.decomposition import PCA
pca = PCA(n_components=4)
Z = pca.fit_transform(features)
# what is the shape of Z?
Z.shape
```
(12700, 4)
```python
# what will happen if we take the correlation matrix and covariance matrix of our new reduced features?
import numpy as np
covariances = pd.DataFrame(np.cov(Z.transpose()))
plt.rcParams["figure.figsize"] = (5,5)
sns.heatmap(covariances)
```
```python
pca = PCA(n_components=2)
Z_two_dimensions = pca.fit_transform(features)
Z_two_dimensions
```
array([[ 0.19584061, -0.05193457],
[-0.03890385, -0.02632155],
[-0.03890385, -0.02632155],
...,
[-0.03890385, -0.02632155],
[-0.03890385, -0.02632155],
[-0.03890385, -0.02632155]])
```python
import matplotlib.pyplot as plt
plt.scatter(Z_two_dimensions[:,0], Z_two_dimensions[:, 1])
reduced_features_df = pd.DataFrame(Z_two_dimensions, columns=["x1", "x2"])
reduced_features_df["text"] = text
reduced_features_df.to_csv("reduced_features.csv")
```
# Singular Value Decomposition
Given an input matrix $A$, we want to try to represent it instead as three smaller matrices $U$, $\sum$, and $V$. Instead of **$n$ original terms**, we want to represent each document as **$r$ concepts** (also referred to as **latent dimensions**, or **latent factors**):
<figure>
<figcaption><i>
<a href="https://www.youtube.com/watch?v=P5mlg91as1c">Mining of Massive Datasets - Dimensionality Reduction: Singular Value Decomposition</a> by Leskovec, Rajaraman, and Ullman (Stanford University)</i></figcaption>
</figure>
Here, **$A$ is your matrix of word vectors** - you could use any of the word vectorization techniques we have learned so far, including one-hot encoding, word counts, and TF-IDF.
- $\sum$ will be a **diagonal matrix** with values that are positive and sorted in decreasing order. Its values indicate the **variance (information encoded on that new dimension)** - therefore, the higher the value, the stronger that dimension is in capturing data from $A$, the original features. For our purposes, we can think of the rank of this $\sum$ matrix as the number of desired dimensions. For instance, if we want to reduce $A$ from shape $1020 \times 300$ to $1020 \times 10$, we will want to reduce the rank of $\sum$ from 300 to 10.
- $U^T U = I$ and $V^T V = I$
## Measuring the Quality of the Reconstruction
A popular metric used for measuring the quality of the reconstruction is the [Frobenius Norm](https://en.wikipedia.org/wiki/Matrix_norm#Frobenius_norm). When you explain your methodology for reducing dimensions, usually managers / stakeholders will want to some way to compare multiple dimensionality techniques' ability to quantify its ability to retain information but trim dimensions:
$$
\begin{equation}
||A_{old}-A_{new}||_{F} = \sqrt{\sum_{ij}{(A^{old}_{ij}- A^{new}_{ij}}})^2
\end{equation}
$$
## Heuristic Step for How Many Dimensions to Keep
1. Sum the $\sum$ matrix's diagonal values:
$$
\begin{equation}
\sum_{i}^{m}\sigma_{i}
\end{equation}
$$
2. Define your threshold of "information" (variance) $\alpha$ to keep: usually 80% to 90%.
3. Define your cutoff point $C$: $$
\begin{equation}
C = \sum_{i}^{m}\sigma_{i} \alpha
\end{equation}
$$
4. Beginning with your largest singular value, sum your singular values $\sigma_{i}$ until it is greater than C. Retain only those dimensions.
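A minimal sketch of this heuristic, assuming `s` holds the singular values in decreasing order (as returned by `scipy.linalg.svd` in the cells below):
```python
import numpy as np

def dimensions_to_keep(s, alpha=0.9):
    """Smallest number of leading singular values whose running sum reaches C = alpha * sum(s)."""
    C = alpha * np.sum(s)
    running = np.cumsum(s)
    return int(np.searchsorted(running, C) + 1)

# e.g. with s = [152.87, 21.24, 10.85, 4.11] (the singular values computed further below),
# dimensions_to_keep(s, alpha=0.9) returns 2.
```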
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import svd
x = np.linspace(1,20, 20) # create the first dimension
x = np.concatenate((x,x))
y = x + np.random.normal(0,1, 40) # create the second dimension
z = x + np.random.normal(0,2, 40) # create the third dimension
a = x + np.random.normal(0,4, 40) # create the fourth dimension
plt.scatter(x,y) # plot just the first two dimensions
plt.show()
```
```python
A = np.stack([x,y,z,a]).T
A
```
array([[ 1. , 0.77950175, 0.7677888 , -0.56338469],
[ 2. , 2.37168063, 2.68471589, 5.07651543],
[ 3. , 4.23624726, 5.41206404, 6.0099617 ],
[ 4. , 4.46378345, 4.44011107, 9.75428485],
[ 5. , 6.50663634, 4.66200827, 4.11785887],
[ 6. , 5.94101546, 10.02615748, 9.45905519],
[ 7. , 6.08581691, 0.47947543, 9.33775111],
[ 8. , 8.50611795, 4.76759185, 2.58313411],
[ 9. , 8.85808728, 11.924234 , 3.16806057],
[10. , 9.81238686, 11.75431427, 9.3688049 ],
[11. , 8.91639482, 14.3247862 , 12.86762692],
[12. , 10.82815483, 9.81102502, 17.17389241],
[13. , 11.97544391, 12.74739555, 13.85953691],
[14. , 14.44142881, 13.73807306, 12.38912932],
[15. , 14.6098057 , 16.54791873, 13.84012328],
[16. , 15.33512682, 14.80703823, 20.31090617],
[17. , 15.95991959, 19.17082756, 11.31018836],
[18. , 17.41056707, 20.24597257, 20.76009469],
[19. , 19.07420736, 21.06709273, 11.94927062],
[20. , 19.68485709, 15.5035568 , 24.61855472],
[ 1. , 0.55202803, 1.53319735, 6.05005041],
[ 2. , 3.69366161, 4.41677271, -1.49068716],
[ 3. , 3.21751898, 4.63878563, 5.32230317],
[ 4. , 4.24055608, 1.04304654, -0.28281646],
[ 5. , 5.88922848, 3.28261049, 10.7436995 ],
[ 6. , 6.30668612, 4.44229131, 3.6406705 ],
[ 7. , 7.56914998, 7.3755215 , 8.3623776 ],
[ 8. , 10.54384416, 13.32846563, 10.57344076],
[ 9. , 10.04319803, 9.45062888, 14.89627194],
[10. , 10.66212523, 10.99819923, 12.36232314],
[11. , 11.14953649, 8.04430148, 10.37330748],
[12. , 12.63366622, 12.89333678, 15.37158399],
[13. , 14.08764033, 14.53184093, 9.81763298],
[14. , 13.41368703, 13.51218414, 11.62458517],
[15. , 14.64728374, 14.92915007, 14.00188219],
[16. , 17.24241005, 16.48392888, 11.60676132],
[17. , 18.13595467, 17.78644608, 13.5988243 ],
[18. , 19.60109224, 20.26131041, 17.23827819],
[19. , 18.24049712, 21.70792017, 16.70115288],
[20. , 19.38794542, 19.53899593, 22.5304716 ]])
```python
D = 1
U, s, V = svd(A)
print(f"s is {s}\n")
print(f"U is {U}\n")
print(f"V is {V}")
```
s is [152.86974397 21.23629493 10.8533152 4.10757824]
U is [[-0.0065041 0.05469218 -0.03365372 ... -0.22239603 -0.25909183
-0.25717117]
[-0.03966881 -0.1058881 0.07421117 ... -0.26639667 0.15604784
0.0815931 ]
[-0.06113708 -0.05823958 0.14883772 ... -0.10575752 -0.16214804
0.03279713]
...
[-0.24577592 0.08393435 0.02374991 ... 0.87896565 -0.02911555
-0.03335981]
[-0.24769877 0.12976656 0.13040048 ... -0.03325364 0.88036094
-0.06124731]
[-0.26632272 -0.1296538 -0.01417942 ... -0.02760683 -0.0678463
0.90535574]]
V is [[-0.49363198 -0.4960859 -0.51307612 -0.49696996]
[ 0.16544881 0.22035026 0.45036767 -0.84925934]
[-0.45544321 -0.47763446 0.73066685 0.17482211]
[ 0.72216733 -0.69080378 0.00691485 -0.0348808 ]]
```python
s[D:] = 0  # keep only the D largest singular values
S = np.zeros((A.shape[0], A.shape[1]))
S[:A.shape[1], :A.shape[1]] = np.diag(s)
A_reconstructed = U.dot(S.dot(V))
print(np.sum((A_reconstructed - A) ** 2) ** (1/2))  # Frobenius norm of the reconstruction error
# coordinates of the data in the reduced (rank-D) representation
U.dot(S)
```
# Exercise (20 minutes)
**1. Use the Amazon Fire Dataset**:
- Vectorize the documents with either word embeddings or TF-IDF
- Reduce the # of dimensions to 2
- Visualize the results on a scatter plot and identify clusters of similar product reviews
**2. Challenge:** In the example we have just done, we modelled the relationship between `x` and `y` as a linear relationship. Your manager does not think SVD works as well with non-linear relationships, and is skeptical because many of the social media marketing data she sees has non-linear relationships (ie. the number of times a post was seen versus the number of reactions it got). Investigate what the impact of non-linear relationships on SVD performance is in terms of:
- **# of dimensions needed to capture 90% of the variance**
- **Frobenius Norm** reconstruction error
If SVD is not significantly affected by non-linearity, **explain to your manager why**.
If SVD is significantly affected, **evaluate whether or not SVD can still be used as a dimensionality reduction technique**.
```python
# define a matrix
A = np.array([[1, 2], [3, 4], [5, 6]])
print(A)
# Singular-value decomposition
U, s, VT = svd(A)
# create m x n Sigma matrix
Sigma = np.zeros((A.shape[0], A.shape[1]))
# populate Sigma with n x n diagonal matrix
Sigma[:A.shape[1], :A.shape[1]] = np.diag(s)
Sigma
```
[[1 2]
[3 4]
[5 6]]
array([[9.52551809, 0. ],
[0. , 0.51430058],
[0. , 0. ]])
*Row metadata: `week4/Dimensionality Reduction.ipynb` from `Emmayyyyy/dso-560-nlp-and-text-analytics` (head `76bde7d0ed7e760b5de455251a523e92a10116fd`), license ["MIT"]; hexsha `ff0a33d64f4abb206b1f886f71bac42a2df6dd7c`, size 48,851, ext `ipynb`, lang Jupyter Notebook; star/issue/fork counts null; avg/max line length 74.241641 / 9,384, alphanum_fraction 0.759636, converted true, num_tokens 4,703; lm_name `Qwen/Qwen-72B`, lm_label "1. YES 2. YES", lm_q1/q2/q1q2 scores 0.812867 / 0.743168 / 0.604097; text_lang `__label__eng_Latn` (conf 0.895367), label 0.24185.*
### Agnostic smooth minimization
**Purpose of this demo**: Motivate optimization in general (hyper-parameter selection, non-convexity)
+ Disclaimer: I'm not an expert in Python - I use Python/Matlab as tools to validate algorithms and theorems.
+ Thus, my implementations are not the most efficient ones, and there might be bugs.
**Problem definition:**.
\begin{align}
f(x_1, x_2) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2
\end{align}
\begin{equation*}
\begin{aligned}
& \underset{x \in \mathbb{R}^2}{\text{min}}
& & f(x_1, x_2)
\end{aligned}
\end{equation*}
+ Any properties you might extract by just looking at the function?
+ Is it differentiable?
\begin{align}
\frac{\partial f(x_1, x_2)}{\partial x_1} = 2(x_1 + 2x_2 - 7) + 4(2x_1 + x_2 - 5)
\end{align}
\begin{align}
\frac{\partial f(x_1, x_2)}{\partial x_2} = 4(x_1 + 2x_2 - 7) + 2(2x_1 + x_2 - 5)
\end{align}
and as a vector:
\begin{align}
\nabla f(x_1, x_2) = \begin{bmatrix} 2(x_1 + 2x_2 - 7) + 4(2x_1 + x_2 - 5) \\ 4(x_1 + 2x_2 - 7) + 2(2x_1 + x_2 - 5) \end{bmatrix}
\end{align}
+ Is it negative-valued, positive-valued, or both?
**3D plot**
```python
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import random
from matplotlib import rc
#rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
## for Palatino and other serif fonts use:
rc('font',**{'family':'serif','serif':['Palatino']})
rc('text', usetex=True)
from numpy import linalg as la
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Make data.
X = np.arange(-10, 10, 0.25)
Y = np.arange(-10, 10, 0.25)
X, Y = np.meshgrid(X, Y)
Z = (X + 2*Y - 7)**2 + (2*X + Y - 5)**2
ax.plot_wireframe(X, Y, Z, rstride=4, cstride=4)
ax.view_init(10, 90)
```
```python
# Returns the value of the objecive function
def f(x):
return (x[0] + 2*x[1] - 7)**2 + (2*x[0] + x[1] - 5)**2
```
```python
def GD_Booth(x_new, eta, iters, epsilon, verbose, x_star):
p = 2
x_list, f_list = [la.norm(x_new - x_star, 2)], [f(x_new)]
for i in range(iters):
x_old = x_new
# Compute gradient
grad = np.zeros(p)
grad[0] = 2*(x_old[0] + 2*x_old[1] - 7) + 4*(2*x_old[0] + x_old[1] - 5)
grad[1] = 4*(x_old[0] + 2*x_old[1] - 7) + 2*(2*x_old[0] + x_old[1] - 5)
# Perform gradient step
x_new = x_old - eta * grad
if (la.norm(x_new - x_old, 2) / la.norm(x_new, 2)) < epsilon:
break
# Keep track of solutions and objective values
x_list.append(la.norm(x_new - x_star, 2))
f_list.append(f(x_new))
if verbose:
print("iter# = "+ str(i) + ", ||x_new - x_star||_2 = " + str(la.norm(x_new - x_star, 2)) + ", f(x_new) = " + str(f(x_new)))
print("Number of steps:", len(f_list))
return x_new, x_list, f_list
```
```python
# Run algorithm
epsilon = 1e-6 # Precision parameter
iters = 100
eta = 0.1
x_init = np.random.randn(2) # Initial estimate
# print(x_init)
#x_init[0] = 1
#x_init[1] = 1
print(x_init)
x_star = [1, 3]
x_GD, x_list, f_list = GD_Booth(x_init, eta, iters, epsilon, True, x_star)
# Plot
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
xs = range(len(x_list))
plt.plot(xs, x_list, '-o', color = '#3399FF', linewidth = 4, alpha = 0.7, markerfacecolor = 'b')
plt.yscale('log')
plt.xlabel('Iterations')
plt.ylabel(r"$\|x^\star - \widehat{x}\|_2$")
# Make room for the ridiculously large title.
plt.subplots_adjust(top=0.8)
plt.show()
```
**Problem definition: Weirder 2d non-convex problem**.
\begin{align}
f(x_1, x_2) = \sum_{i = 1}^2 \tfrac{x_i^2}{4000} - \prod_{i = 1}^2\cos\left( \tfrac{x_i}{\sqrt{i}} \right) + 1
\end{align}
\begin{equation*}
\begin{aligned}
& \underset{x \in \mathbb{R}^2}{\text{min}}
& & f(x_1, x_2)
\end{aligned}
\end{equation*}
+ Any properties you might extract by just looking at the function?
+ Is it differentiable?
\begin{align}
\frac{\partial f(x_1, x_2)}{\partial x_1} = \tfrac{x_1}{2000} + \sin\left(x_1\right) \cdot \cos\left(\tfrac{x_2}{\sqrt{2}}\right)
\end{align}
\begin{align}
\frac{\partial f(x_1, x_2)}{\partial x_2} = \tfrac{x_2}{2000} + \cos\left(x_1\right) \cdot \frac{\sin\left(\tfrac{x_2}{\sqrt{2}}\right)}{\sqrt{2}}
\end{align}
and as a vector:
\begin{align}
\nabla f(x_1, x_2) = \begin{bmatrix} \tfrac{x_1}{2000} + \sin\left(x_1\right) \cdot \cos\left(\tfrac{x_2}{\sqrt{2}}\right) \\
\tfrac{x_2}{2000} + \cos\left(x_1\right) \cdot \frac{\sin\left(\tfrac{x_2}{\sqrt{2}}\right)}{\sqrt{2}} \end{bmatrix}
\end{align}
**3D plot**
```python
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Make data.
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
Z = (X**2)/4000 + (Y**2)/4000 - np.cos(X)*np.cos(Y/np.sqrt(2)) + 1
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
```
```python
# Returns the value of the objecive function
def f(x):
return (x[0]**2)/4000 + (x[1]**2)/4000 - np.cos(x[0])*np.cos(x[1]/np.sqrt(2)) + 1
```
```python
def GD_Griewank(x_new, eta, iters, epsilon, verbose, x_star):
p = 2
x_list, f_list = [la.norm(x_new - x_star, 2)], [f(x_new)]
for i in range(iters):
x_old = x_new
# Compute gradient
grad = np.zeros(p)
grad[0] = x_old[0]/2000 + np.sin(x_old[0]) * np.cos(x_old[1]/np.sqrt(2))
grad[1] = x_old[1]/2000 + np.cos(x_old[0]) * np.sin(x_old[1]/np.sqrt(2))/np.sqrt(2)
# Perform gradient step
x_new = x_old - eta * grad
if (la.norm(x_new - x_old, 2) / la.norm(x_new, 2)) < epsilon:
break
# Keep track of solutions and objective values
x_list.append(la.norm(x_new - x_star, 2))
f_list.append(f(x_new))
if verbose:
print("iter# = "+ str(i) + ", ||x_new - x_star||_2 = " + str(la.norm(x_new - x_star, 2)) + ", f(x_new) = " + str(f(x_new)))
print("Number of steps:", len(f_list))
return x_new, x_list, f_list
```
```python
# Run algorithm
epsilon = 1e-6 # Precision parameter
iters = 1000
eta = 0.1
# x_init = np.random.randn(2) # Initial estimate
# print(x_init)
x_init[0] = -3
x_init[1] = 3
# print(x_init)
x_star = [0, 0]
x_GD, x_list, f_list = GD_Griewank(x_init, eta, iters, epsilon, True, x_star)
# Plot
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
xs = range(len(x_list))
plt.plot(xs, x_list, '-o', color = '#3399FF', linewidth = 4, alpha = 0.7, markerfacecolor = 'b')
plt.yscale('log')
plt.xlabel('Iterations')
plt.ylabel(r"$\|x^\star - \widehat{x}\|_2$")
# Make room for the ridiculously large title.
plt.subplots_adjust(top=0.8)
plt.show()
```
### Non-convex Lipschitz continuous gradient function
\begin{align}
f(x) = x^2 + 3\sin^2(x)
\end{align}
Then, its gradient and Hessian are calculated as:
\begin{align}
f'(x) = 2x + 6\sin(x) \cdot \cos(x), ~~f''(x) = 2 + 6\cos^2(x) - 6\sin^2(x)
\end{align}
```python
fig = plt.figure()
ax = fig.add_subplot(111)
# Make data.
x = np.arange(-5, 5, 0.25)
f = x**2 + 3*(np.sin(x))**2
ax.plot(x, f, color='green', marker='o', linestyle='dashed', linewidth=2, markersize=2)
```
```python
fig = plt.figure()
ax = fig.add_subplot(111)
# Make data.
x = np.arange(-5, 5, 0.25)
hessian_f = 2 + 6*(np.cos(x))**2 - 6*(np.sin(x))**2
ax.plot(x, hessian_f, color='green', marker='o', linestyle='dashed', linewidth=2, markersize=2)
```
Which means that we can upper bound:
\begin{align}
\|\nabla^2 f(x)\|_2 \leq 8 := L
\end{align}
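As a quick illustration (an added sketch, not part of the original notes), gradient descent with the conservative step size $\eta = 1/L = 1/8$ behaves well on this non-convex function; starting from $x_0 = 4$ it settles at the stationary point $x = 0$.
```python
# Gradient descent on f(x) = x^2 + 3*sin(x)^2 with step size 1/L, where L = 8 from the bound above.
import numpy as np

def grad_f(x):
    return 2 * x + 6 * np.sin(x) * np.cos(x)

eta = 1.0 / 8.0   # 1/L
x = 4.0           # initial point
for k in range(200):
    x = x - eta * grad_f(x)

print(x, grad_f(x))  # x is (numerically) a stationary point
```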
*Row metadata: `schedule/images/Chapter 2.ipynb` from `akyrillidis/comp414-514` (head `ed1a58cda99cb4cb14b62276eebfb4082276e9f9`), license ["MIT"]; hexsha `642ecd424e160f07a1dd437f708083df43a6c111`, size 160,660, ext `ipynb`, lang Jupyter Notebook; max_stars_count 2 (star events 2019-08-26T02:20:04.000Z to 2020-05-29T11:10:52.000Z), issue count null, max_forks_count 1 (fork events 2020-12-07T05:45:48.000Z); avg/max line length 216.231494 / 50,692, alphanum_fraction 0.874462, converted true, num_tokens 2,868; lm_name `Qwen/Qwen-72B`, lm_label "1. YES 2. YES", lm_q1/q2/q1q2 scores 0.951142 / 0.962673 / 0.915639; text_lang `__label__eng_Latn` (conf 0.331654), label 0.96567.*
# Properties of Ellipsoids
<ins>**Определение:**</ins> Множество $E \subseteq ℝ^n$ — **эллипсоид**, если существуют $a \in ℝ^n$ и положительно определенная матрица $A \in ℝ^{n \times n}$, такие что
$$E = E(A, a) := \{x \in ℝ^n \mid (x - a)^T A^{-1} (x - a) \le 1\} \tag 1$$
$$E(A, a) := \{x \in ℝ^n \mid \|x - a\|_A \le 1\}, \tag 2$$
то есть $E(A, a)$ — это единичный шар с центром в $a$ в линейном пространстве $ℝ^n$ с нормой $\|\cdot\|_A$. В частности, шар единичного радиуса $S(0, 1)$ с центром в нуле — это эллипсоид $E(I, 0)$. Эллипсоид $E$ одназначно определяет $a$ (центр) и $A$.
<ins>**Факт:**</ins> Для любой положительно определенной матрицы $A$ существует единственная положительно определенная матрица $A^{1/2}$, такая что $A = A^{1/2} A^{1/2} = (A^{1/2})^T A^{1/2}$.
<ins>**Следствие:**</ins>
$$E(A, a) = A^{1/2} S(0, 1) + a \tag 3$$
<ins>**Доказательство:**</ins>
$$A^{1/2} S(0, 1) + a = \{A^{1/2}x + a \mid x \in S(0, 1)\}\\
= \{x \mid A^{-1/2} (x - a) \in S(0, 1)\}\\
= \{x \mid (A^{-1/2} (x - a))^T A^{-1/2} (x - a) \le 1\}\\
= \{x \mid (x - a)^T (A^{-1/2})^T A^{-1/2} (x - a) \le 1\}\\
= \{x \mid (x - a)^T A^{-1} (x - a) \le 1\} = E(A, a)$$
<ins>**Замечание:**</ins> Каждый эллипсоид — это образ единичного шара при биективном афинном преобразовании.
<!-- <ins>**Fact:**</ins> Let $\Lambda$ and $\lambda$ be the largest and smallest eigenvalues of the matrix $A$, respectively; then $\sqrt\Lambda$ and $\sqrt\lambda$ are the lengths of the major and minor semi-axes of the ellipsoid $E(A, a)$, $S(a, \sqrt\Lambda)$ is the smallest ball containing $E$, and $S(a, \sqrt\lambda)$ is the largest ball contained in $E$. -->
<ins>**Fact:**</ins> The volume of an ellipsoid, $vol\big(E(A, a)\big)$, depends only on the matrix $A$ and the dimension of the space:
$$vol\big(E(A, a)\big) = \sqrt {\det A} \cdot V_n, \tag 4$$
where $V_n$ is the volume of the unit ball $S(0, 1)$ in $ℝ^n$:
$$V_n = \frac{\pi^{n/2}}{\Gamma(n/2 + 1)} \sim \frac{1}{\sqrt {\pi n}} \bigg(\frac{2e\pi}{n} \bigg)^{n/2}. \tag 5$$
<ins>**Remark:**</ins> Let $T : x \longmapsto Dx + d$ be a bijective affine transformation; then $vol\bigg(T\big(E(A, a)\big)\bigg) = \det D \sqrt{\det A} \cdot V_n$. In particular,
$$\frac{vol\big(E(A, a)\big)}{vol\big(E(B, b)\big)} = \frac{vol\bigg(T\big(E(A, a)\big)\bigg)}{vol\bigg(T\big(E(B, b)\big)\bigg)},$$
that is, the ratio of the volumes of two ellipsoids is invariant under bijective affine transformations.
<ins>**Observation:**</ins> For $c \neq 0$, $\max {c^T x}$ over $x \in S(a, 1)$ is attained at $x^* = a + \frac{c}{\|c\|}$.
<ins>**Lemma:**</ins> Let $E(A, a) \subseteq ℝ^n$ be an ellipsoid, $c \in ℝ^n \setminus \{0\}$, and
$$b := \frac{1}{\sqrt {c^T A c}} A c, \tag 7\\
z_{max} := a + b,\\
z_{min} := a - b,$$
then $z_{max}$ maximizes and $z_{min}$ minimizes $c^T x$ over $E$.
<ins>**Proof:**</ins> Let $Q := A^{1/2}$. From $(3)$:
$$Q^{-1} E(A, a) = S(0, 1) + Q^{-1} a = S(Q^{-1} a, 1),$$
hence
$$\max\{c^T x \mid x \in E(A, a)\} = \max\{c^T Q Q^{-1} x \mid Q^{-1} x \in Q^{-1} E(A, a)\}\\
= \max\{c^T Q y \mid y \in S(Q^{-1} a, 1)\}\\
= c^T Q \bigg( Q^{-1} a + \frac{(c^T Q)^T}{\|c^T Q\|}\bigg)\\
= c^T a + c^T Q \frac{1}{\|Q c\|} Q c\\
= c^T a + c^T \frac{1}{\sqrt{c^T A c}} A c\\
= c^T a + \sqrt{c^T A c}.$$
Substituting $(7)$ yields the claim:
$$c^T z_{max} = c^T \bigg(a + \frac{1}{\sqrt{c^T A c}}A c\bigg) = c^T a + \sqrt{c^T A c} = c^T a + \|c\|_{A^{-1}} = \max\{c^T x \mid x \in E(A, a)\}, \tag 8\\
c^T z_{min} = c^T \bigg(a - \frac{1}{\sqrt{c^T A c}} A c\bigg) = c^T a - \sqrt{c^T A c} = c^T a - \|c\|_{A^{-1}} = \min\{c^T x \mid x \in E(A, a)\}.$$
<ins>**Remark:**</ins> The line through $z_{max}$ and $z_{min}$ passes through the center $a$ of the ellipsoid $E(A, a)$ and has direction $b$.
<ins>**Theorem:**</ins> For every convex body $K \subseteq ℝ^n$ there exists a unique ellipsoid $E = E(A, a)$ of minimal volume containing $K$. Moreover, $K$ contains the ellipsoid $E(n^{-2}A, a)$, obtained from $E$ by a homothety about the center of $E$ with ratio $1/n$.
<ins>**Definition:**</ins> The ellipsoid of minimal volume containing a convex body $K$ is called the **Löwner–John** ellipsoid of $K$, denoted $LJ(K)$.
<ins>**Fact:**</ins> Let $E(A, a) \subseteq ℝ^n$ be an ellipsoid, $c \in ℝ^n \setminus \{0\}$, and
$$E'(A, a, c) := E(A, a) \cap \{x \in ℝ^n \mid c^T x \le c^T a\}, \tag 9$$
that is, $E'(A, a, c)$ is the half of the ellipsoid $E(A, a)$ obtained by cutting $E(A, a)$ through its center $a$ with the hyperplane $\{x \in ℝ^n \mid c^T x = c^T a\}$.
Then $LJ\big(E'(A, a, c)\big)$ is the ellipsoid $E(A', a')$ given by the formulas:
$$a' := a - \frac{1}{n + 1} b, \tag {10}$$
$$A' := \frac{n^2}{n^2 - 1}\left(A - \frac{2}{n + 1} b b^T \right), \tag {11}$$
where $b$ is the vector defined in $(7)$.
<ins>**Remark:**</ins> The center $a'$ of the ellipsoid $E(A', a') = LJ\big(E'(A, a, c)\big)$ lies on the line through $z_{max}$ and $z_{min}$ from $(7)$. In particular, $a'$ can be obtained from $a$ by taking a step towards $z_{min}$ of length $\frac{1}{n + 1}\|z_{min} - a\|$. Moreover, the boundary of $E(A', a')$ touches $E'(A, a, c)$ at the point $z_{min}$ and on the set $\{x \mid \|x - a\|_A = 1\} \cap \{x \mid c^T x = c^T a\}$. In $ℝ^2$ the latter set consists of two points, and in $ℝ^3$ it is an ellipse.
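For concreteness, the update $(7)$, $(10)$, $(11)$ is easy to write down directly. The following NumPy sketch is an addition for illustration; the matrix $A$, center $a$ and direction $c$ below are arbitrary assumptions chosen only for demonstration:
```python
import numpy as np

def central_cut_update(A, a, c):
    """One Löwner–John update for the half-ellipsoid E'(A, a, c), formulas (7), (10), (11)."""
    n = len(a)
    b = A @ c / np.sqrt(c @ A @ c)                                     # (7)
    a_new = a - b / (n + 1)                                            # (10)
    A_new = n**2 / (n**2 - 1) * (A - 2.0 / (n + 1) * np.outer(b, b))   # (11)
    return A_new, a_new

A = np.diag([4.0, 1.0]); a = np.zeros(2); c = np.array([1.0, 0.0])
A1, a1 = central_cut_update(A, a, c)
print(a1)                                              # the center moves towards z_min
print(np.sqrt(np.linalg.det(A1) / np.linalg.det(A)))   # volume ratio, strictly below 1
```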
<ins>**Corollary:**</ins> Let $E(A, a) \subseteq ℝ^n$ be an ellipsoid, $c \in ℝ^n \setminus \{0\}, \gamma \in ℝ$. It follows from $(8)$ that the hyperplane
$$H := \{x \in ℝ^n \mid c^T x = \gamma\}$$
has a nonempty intersection with $E(A, a)$ if and only if $c^T z_{min} \le \gamma \le c^T z_{max}$, which is equivalent to
$$|c^T a - \gamma| \le \sqrt {c^T A c}.$$
For convenience, denote
$$\alpha := \frac{c^T a - \gamma}{\sqrt {c^T A c}}. \tag {12}$$
Then $H$ intersects $E(A, a)$ if and only if $-1 \le \alpha \le 1$. The number $\alpha$ can be interpreted as the signed distance from the center $a$ to the boundary of the half-space $\{x \in ℝ^n \mid c^T x \le \gamma\}$ in the space $ℝ^n$ with the norm $\|\cdot\|_{A^{-1}}$ (the distance is nonpositive if $a$ is contained in the half-space).
<ins>**Fact:**</ins> Let $E(A, a) \subseteq ℝ^n$ be an ellipsoid, $c \in ℝ^n \setminus \{0\}, \gamma \in ℝ$, and
$$E'(A, a, c, \gamma) := E(A, a) \cap \{x \in ℝ^n \mid c^T x \le \gamma\}, \tag {13}$$
that is, $E'(A, a, c, \gamma)$ is the part of the ellipsoid $E(A, a)$ obtained by cutting $E(A, a)$ with the hyperplane $H = \{x \in ℝ^n \mid c^T x = \gamma\}$.
Then $LJ\big(E'(A, a, c, \gamma)\big)$ is the ellipsoid $E(A', a')$ defined as follows:
\begin{equation}
\begin{cases}
E(A, a),\ -1 \le \alpha \le -1/n\\
\begin{cases}
a' := a - \frac{1 + n \alpha}{n + 1} b, \tag {14}\\
A' := \frac{n^2}{n^2 - 1}(1 - \alpha^2)\big(A - \frac{2 (1 + n \alpha)}{(n + 1) (1 + \alpha)} b b^T \big)
\end{cases}, -1/n \le \alpha < 1
\end{cases},
\end{equation}
where $b$ is the vector defined in $(7)$.
<ins>**Remark:**</ins> If $\gamma = c^T a$, then $E'(A, a, c, \gamma) = E'(A, a, c)$ and the formulas $(14)$ reduce to $(10), (11)$.
<ins>**Definition:**</ins> A cut of the ellipsoid $E(A, a)$ by a hyperplane $H$ through its center $a$ is called **central**. If $0 < \alpha < 1$, then $E'(A, a, c, \gamma)$ is strictly contained in $E'(A, a, c)$, i.e. a larger piece is cut off from $E(A, a)$, so the cut $c^T x \le \gamma$ is called **deep**. If $-1/n < \alpha < 0$, then more than the half $E'(A, a, c)$ of $E(A, a)$ remains, so the cut $c^T x \le \gamma$ is called **shallow**; yet even in this case
$$vol\bigg(LJ\big(E'(A, a, c, \gamma)\big)\bigg) < vol\big(E(A, a)\big).$$
# The Ellipsoid Method
<ins>**Problem:**</ins> Given a system of inequalities $c_i^T x \le \gamma_i, i = 1, \dots, m$, in $n$ variables, determine whether the set
$$P := \{x \in ℝ^n \mid c_i^T x \le \gamma_i, i = 1, \dots, m\} = \{x \mid C x \le d\} \tag {15}$$
is empty, and if $P \neq \emptyset$, find a point contained in $P$.
It is known that either $P = \emptyset$ or $P$ is a bounded full-dimensional set.
<ins>**Corollary:**</ins> There exists $R > 0$ such that $P \subseteq S(0, R)$.
<ins>**Corollary:**</ins> There exists $r > 0$ such that either $P = \emptyset$ or $P \supseteq S(c, r)$ for some point $c$.
<ins>**Definition:**</ins> $P \subseteq S(0, R) = E(R^2 I, 0)$. Define $E_0 = E(A_0, a_0)$, where $A_0 = R^2 I, a_0 = 0$. Suppose that at the current step we have an ellipsoid $E_k := E(A_k, a_k) \supseteq P$. We can substitute $a_k$ into the system of inequalities $(15)$ and check whether all of them hold. If they do, a solution of the problem has been found. Otherwise at least one of the inequalities is violated, say $c^T x \le \gamma$. Then $c^T a_k > \gamma$. The hyperplane $\{x \mid c^T x = c^T a_k\}$ passing through the center $a_k$ of the ellipsoid $E_k$ divides $E_k$ into two halves. By construction, $P$ is contained in the half
$$E'(A_k, a_k, c) := \{x \in E(A_k, a_k) \mid c^T x \le c^T a_k\}. \tag {16}$$
We then define the next ellipsoid $E_{k+1}$ in the sequence as
$$E_{k+1} = LJ\big(E'(A_k, a_k, c)\big) \supseteq E'(A_k, a_k, c) \supseteq P, $$
given by the formulas $(10)$ and $(11)$.
<ins>**Lemma:**</ins>
$$\frac{vol(E_{k+1})}{vol(E_{k})} = \bigg(\big(\frac{n}{n+1}\big)^{n+1}\big(\frac{n}{n-1}\big)^{n-1}\bigg)^{1/2} < e^{-1/(2n)} < 1$$
<ins>**Proof:**</ins> Let $E = E_k$ be an ellipsoid and $E' = E_{k+1}$ the ellipsoid obtained from $E$ by formulas $(7), (10), (11)$. To estimate the ratio of the volumes, first suppose that the initial ellipsoid is $F := E(I, 0)$, i.e. the unit ball centered at the origin, and that the vector $c$ used to compute $b$ in $(7)$ equals $(-1, 0, \dots, 0)^T$. In this case, formulas $(7), (10), (11)$ give:
$$b = (-1, 0, \dots, 0)^T,\\
a' = a - \frac{1}{n + 1} b = \big(\frac{1}{n+1}, 0, \dots, 0\big)^T,\\
A' = \frac{n^2}{n^2 - 1} \big(I - \frac{2}{n + 1}(-1, 0, \dots, 0)^T(-1, 0, \dots, 0)\big)\\
= diag\bigg(\big(\frac{n^2}{(n + 1)^2}, \frac{n^2}{n^2 - 1}, \dots, \frac{n^2}{n^2 - 1}\big)^T\bigg).$$
From this and $(4)$, for the volumes of $F$ and $F' := E(A', a')$:
$$\frac{vol(F')}{vol(F)} = \frac{\sqrt {\det A'} V_n}{\sqrt {\det A} V_n} = \sqrt {\det A'} = \big(\frac{n^{2n}}{(n+1)^n (n-1)^n} \frac{n-1}{n+1}\big)^{1/2}\\
= \bigg(\big(\frac{n}{n+1}\big)^{n+1}\big(\frac{n}{n-1}\big)^{n-1}\bigg)^{1/2}.$$
Taking logarithms, we obtain:
$$\frac{vol(F')}{vol(F)} < e^{-1/(2n)} \Leftrightarrow e^{1/n} < \big(\frac{n + 1}{n}\big)^{n+1}\big(\frac{n-1}{n}\big)^{n-1}\\
\Leftrightarrow \frac{1}{n} < (n+1) \ln {\big(1 + \frac{1}{n}\big)} + (n-1) \ln {\big(1 - \frac{1}{n}\big)}.$$
Expanding the logarithm into the series $\ln x = \sum_{k=1}^\infty (-1)^{k+1} (x-1)^k / k, 0 < x \le 2$, we get:
$$(n+1) \ln {\big(1 + \frac{1}{n}\big)} + (n-1) \ln {\big(1 - \frac{1}{n}\big)}=\\
= \sum_{k=1}^\infty (-1)^{k+1} \frac{n+1}{k n^k} - \sum_{k=1}^\infty \frac{n-1}{k n^k}\\
= \sum_{k=1}^\infty (-1)^{k+1} \frac{2}{k n^k} - \sum_{k=1}^\infty \frac{2(n-1)}{2k n^{2k}}\\
= \sum_{k=1}^\infty \frac{2}{(2k-1) n^{2k-1}} - \sum_{k=1}^\infty \frac{2}{2k n^{2k}} - \sum_{k=1}^\infty \frac{1}{k n^{2k-1}} + \sum_{k=1}^\infty \frac{1}{k n^{2k}}\\
= \sum_{k=1}^\infty \frac{1}{(2k-1) k} \frac{1}{n^{2k-1}}\\
> \frac{1}{n},$$
which proves the claim for this particular choice of $F$ and $c$.
Now let $c \in ℝ^n \setminus \{0\}$ be an arbitrary vector. From $(3)$:
$$E = A^{1/2}E(I, 0) + a = A^{1/2}F + a.$$
Clearly, there exists an orthogonal matrix $Q$ mapping $A^{1/2} c$ to $(-\|Q A^{1/2} c\|, 0, \dots, 0)$, that is,
$$(-1, 0, \dots, 0)^T = \frac{1}{\|Q A^{1/2} c\|}Q A^{1/2} c.$$
Then
$$T(x) := A^{1/2} Q^T x + a$$
is a bijective affine transformation, with $T^{-1}(x) = QA^{-1/2}(x-a)$.
Now
$$T(F) = \{T(y) \mid y^T y \le 1\}\\
= \{x \mid (T^{-1}x)^T (T^{-1}x) \le 1\}\\
= \{x \mid (x - a)^T A^{-1/2} Q^T Q A^{-1/2} (x-a) \le 1\}\\
= \{x \mid (x - a)^T A^{-1} (x-a) \le 1\}\\
= E,$$
and analogously for the ellipsoid $F'$ one obtains $T(F') = E'$.
Using the invariance of the ratio of ellipsoid volumes under bijective affine transformations, we have:
$$\frac{vol(E')}{vol(E)} = \frac{vol\big(T(F')\big)}{vol\big(T(F)\big)} = \frac{vol(F')}{vol(F)} = \bigg(\big(\frac{n}{n+1}\big)^{n+1}\big(\frac{n}{n-1}\big)^{n-1}\bigg)^{1/2} < e^{-1/(2n)}.$$
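A quick numerical check of the bound (added here for illustration, not part of the original notes) confirms that the exact ratio indeed stays below $e^{-1/(2n)}$:
```python
import numpy as np

# Compare the exact volume-shrinkage ratio with the bound e^{-1/(2n)} for several dimensions.
for n in [2, 3, 5, 10, 50]:
    ratio = ((n/(n+1))**(n+1) * (n/(n-1))**(n-1))**0.5
    print(n, ratio, np.exp(-1/(2*n)), ratio < np.exp(-1/(2*n)))
```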
<ins>**Algorithm:**</ins>
Initialization: $k := 0, N := \min \{x \mid x > 2n^2 \ln {\frac{R}{r}}, x \in ℤ \}, A_0 := R^2 I, a_0 = 0, E_0 = E(A_0, a_0)$.
Step of the algorithm:
* If $k = N$, STOP! (Declare $P$ empty)
* If $a_k \in P$, STOP! (A feasible point has been found)
* If $a_k \notin P$, choose an inequality $c^T x \le \gamma$ of the system $Cx \le d$ that is violated at $a_k$, and update $b, a_{k+1}, A_{k+1}$ from $a_k, A_k$ by formulas $(7), (10), (11)$.
<ins>**Theorem:**</ins> The algorithm solves the stated problem.
<ins>**Proof:**</ins> After every $2n$ steps of the algorithm the volume of the ellipsoid decreases by at least a factor of $e$, hence
$$vol(E_N) = e^{-\frac{N}{2n}} vol(E_0) < e^{-n \ln {\frac{R}{r}}} vol\big(E(R^2 I)\big)\\
= \frac{r^n}{R^n} R^n vol\big(E(I, 0)\big) = vol\big(E(r^2 I, 0)\big) = vol\big(S(c, r)\big),$$
that is, $P \subseteq E_N$ has volume strictly smaller than that of $S(c, r)$, which contradicts the second corollary above whenever $P \neq \emptyset$. Hence $P = \emptyset$.
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import clear_output, display
from scipy.linalg import svd
from matplotlib.patches import Ellipse
import time
import math
EPS = 1e-7
def ellipsoid_method(A, b, r=EPS, R=15):
clear_output(wait=True)
fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(1, 1, 1)
plt.ion()
rng = np.linspace(-10, 10, 1001)
pts = np.array([[x, y]
for x in rng
for y in rng])
mask = np.array([[x >= 0.0 and y >= 0.0 for x in rng] for y in rng])
for a_, b_ in zip(A, b):
mask &= (np.squeeze(a_.reshape((1, -1)) @ pts.T) < b_).reshape(len(rng), len(rng))
ax.imshow(mask.T.astype(int), extent=(pts[:, 0].min(), pts[:, 0].max(),
pts[:, 1].min(), pts[:, 1].max()),
origin='lower', cmap='Greys', alpha=0.3)
time.sleep(2)
display(fig)
n = A.shape[1]
c = np.zeros((n,))
Q = R * R * np.eye(n, dtype=float)
for iter in range(2 * n * (n + 1) * round(math.log(R / r) + 1)):
clear_output(wait=True)
S, V, _ = np.linalg.svd(Q.copy())
angle = np.degrees(np.arctan2(S[1, 0], S[0, 0]))
width, height = 2 * np.sqrt(V)
ax.add_patch(Ellipse(c.copy(), width, height, angle, facecolor='none', edgecolor='b'))
ax.plot(c[0], c[1], 'bo')
time.sleep(2)
display(fig)
mask = np.concatenate((np.squeeze(A @ c) <= b, c >= 0), axis=0)
if np.count_nonzero(np.logical_not(mask)) == 0:
plt.close()
return c
a = A[np.argmax(np.logical_not(mask)), :]
a = a.reshape((-1, 1))
if a.T @ c >= 0:
a = -a
a_ = np.squeeze(np.array([-a.copy()[1], a.copy()[0]]))
l, r = 0, 30
Q_inv = np.linalg.inv(Q)
while r - l > 1e-7:
m = (l + r) / 2
x = (m * a_).reshape((-1, 1))
if x.T @ Q_inv @ x <= 1.0:
l = m
else:
r = m
p1, p2 = c.copy() - l * a_, c.copy() + l * a_
clear_output(wait=True)
ax.plot([p1[0], p2[0]].copy(), [p1[1], p2[1]].copy(), 'r')
time.sleep(2)
display(fig)
c += np.squeeze((Q @ a) / (math.sqrt(a.T @ Q @ a) * (n + 1)))
Q = (n ** 2 / (n ** 2 - 1)) * (Q - (2 / ((n + 1) * a.T @ Q @ a)) * (Q @ a @ a.T @ Q))
return None
ellipsoid_method(np.array([[1, 0.8],
[0, 1],
[-1.2, 0.1],
[0.1, -1],
[-1, -2.5]]),
np.array([7, 3, -0.2, -0.3, 3]))
```
# References
* https://www.mpi-inf.mpg.de/fileadmin/inf/d1/ellipsoid-lovasz.pdf
* https://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15859-f11/www/notes/lecture08.pdf
# Kalman Filter Implementation in Python
Reference: **Probabilistic Robotics**
```python
#Import packages and library
import numpy as np
import math
import matplotlib.pyplot as plt
from scipy.stats import norm
%matplotlib inline
```
## State Vector
Constant Velocity Model for Ego Motion
$$x_k= \begin{bmatrix} x \\ y \\ \dot x \\ \dot y \end{bmatrix} = \begin{bmatrix} \text{Position X} \\ \text{Position Y} \\ \text{Velocity in X} \\ \text{Velocity in Y} \end{bmatrix}$$
## Observation Model
$$y = \textbf{H} \cdot x$$
which is
$$y = \begin{bmatrix}0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix} \cdot x$$ means: You observe the velocity directly in the correct unit
## Initial State $x_0$
$$x_{0} = \begin{bmatrix}0 \\ 0 \\ 0 \\ 0\end{bmatrix}$$
```python
x = np.matrix([[0,0,0,0]]).T
```
```python
x, x.shape
```
(matrix([[0],
[0],
[0],
[0]]), (4, 1))
## Initial Uncertainty $P_0$
$$P_{0} = \begin{bmatrix}\sigma^2_x & 0 & 0 & 0 \\ 0 & \sigma^2_y & 0 & 0 \\ 0 & 0 & \sigma^2_{\dot x} & 0 \\ 0 & 0 & 0 & \sigma^2_{\dot y} \end{bmatrix}$$
with $\sigma$ as the standard deviation
```python
σ = 1000.0
P = np.diag([σ, σ, σ, σ])
P, P.shape
```
(array([[1000., 0., 0., 0.],
[ 0., 1000., 0., 0.],
[ 0., 0., 1000., 0.],
[ 0., 0., 0., 1000.]]), (4, 4))
```python
fig = plt.figure(figsize=(6, 6))
im = plt.imshow(P, interpolation="none", cmap=plt.get_cmap('binary'))
plt.title('Initial Covariance Matrix $P$')
# set the locations and labels of the yticks (one per state variable)
plt.yticks(np.arange(4), ('$x$', '$y$', '$\dot x$', '$\dot y$'), fontsize=22)
# set the locations and labels of the xticks (one per state variable)
plt.xticks(np.arange(4), ('$x$', '$y$', '$\dot x$', '$\dot y$'), fontsize=22)
plt.xlim([-0.5,3.5])
plt.ylim([3.5, -0.5])
from mpl_toolkits.axes_grid1 import make_axes_locatable
divider = make_axes_locatable(plt.gca())
cax = divider.append_axes("right", "5%", pad="3%")
plt.colorbar(im, cax=cax);
```
## Dynamic Matrix $A$
It is calculated from the dynamics of the Egomotion.
$$x_{k+1} = x_{k} + \dot x_{k} \cdot \Delta t$$
$$y_{k+1} = y_{k} + \dot y_{k} \cdot \Delta t$$
$$\dot x_{k+1} = \dot x_{k}$$
$$\dot y_{k+1} = \dot y_{k}$$
```python
δt = 0.1 # Time Step between Filter Steps
A = np.matrix([[1.0, 0.0, δt, 0.0],
[0.0, 1.0, 0.0, δt],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0]])
A, A.shape
```
(matrix([[1. , 0. , 0.1, 0. ],
[0. , 1. , 0. , 0.1],
[0. , 0. , 1. , 0. ],
[0. , 0. , 0. , 1. ]]), (4, 4))
## Measurement Matrix $H$
We directly measure the Velocity $\dot x$ and $\dot y$
$$H = \begin{bmatrix}0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}$$
```python
H = np.matrix([[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0]])
H, H.shape
```
(matrix([[0., 0., 1., 0.],
[0., 0., 0., 1.]]), (2, 4))
## Measurement Noise Covariance $R$
Tells the Kalman Filter how 'bad' the sensor readings are.
$$R = \begin{bmatrix}\sigma^2_{\dot x} & 0 \\ 0 & \sigma^2_{\dot y} \end{bmatrix}$$
```python
rm = 100.0
R = np.matrix([[rm, 0],
[0, rm]])
R, R.shape
```
(matrix([[100., 0.],
[ 0., 100.]]), (2, 2))
```python
# Plot between -20 and 20 with .001 steps.
xpdf = np.arange(-20, 20, 0.001)
plt.subplot(121)
plt.plot(xpdf, norm.pdf(xpdf, 0, np.sqrt(R[0,0])))  # scale = standard deviation, i.e. sqrt of the variance in R
plt.title('$\dot x$')
plt.subplot(122)
plt.plot(xpdf, norm.pdf(xpdf, 0, np.sqrt(R[1,1])))
plt.title('$\dot y$')
plt.tight_layout()
```
### Process Noise Covariance $Q$
The Position of the car can be influenced by a force (e.g. wind), which leads to an acceleration disturbance (noise). This process noise has to be modeled with the process noise covariance matrix Q.
$$Q = \begin{bmatrix}\sigma_{x}^2 & \sigma_{xy} & \sigma_{x \dot x} & \sigma_{x \dot y} \\ \sigma_{yx} & \sigma_{y}^2 & \sigma_{y \dot x} & \sigma_{y \dot y} \\ \sigma_{\dot x x} & \sigma_{\dot x y} & \sigma_{\dot x}^2 & \sigma_{\dot x \dot y} \\ \sigma_{\dot y x} & \sigma_{\dot y y} & \sigma_{\dot y \dot x} & \sigma_{\dot y}^2 \end{bmatrix}$$
One can calculate Q as
$$Q = G\cdot G^T \cdot \sigma_v^2$$
with $G = \begin{bmatrix}0.5δt^2 & 0.5δt^2 & δt & δt\end{bmatrix}^T$ and $\sigma_v$ as the acceleration process noise, which can be assumed for a vehicle to be $8.8m/s^2$, according to: Schubert, R., Adam, C., Obst, M., Mattern, N., Leonhardt, V., & Wanielik, G. (2011). [Empirical evaluation of vehicular models for ego motion estimation](http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5940526). 2011 IEEE Intelligent Vehicles Symposium (IV), 534–539. doi:10.1109/IVS.2011.5940526
```python
sv = 8.8
G = np.matrix([[0.5*δt**2],
[0.5*δt**2],
[δt],
[δt]])
Q = G*G.T*sv**2
Q
```
matrix([[0.001936, 0.001936, 0.03872 , 0.03872 ],
[0.001936, 0.001936, 0.03872 , 0.03872 ],
[0.03872 , 0.03872 , 0.7744 , 0.7744 ],
[0.03872 , 0.03872 , 0.7744 , 0.7744 ]])
```python
from sympy import Symbol, Matrix
from sympy.interactive import printing
printing.init_printing()
dt = Symbol('δt')
Qs = Matrix([[0.5*dt**2],[0.5*dt**2],[dt],[dt]])
Qs = Qs*Qs.T
Qs
```
$$\left[\begin{matrix}0.25 δt^{4} & 0.25 δt^{4} & 0.5 δt^{3} & 0.5 δt^{3}\\0.25 δt^{4} & 0.25 δt^{4} & 0.5 δt^{3} & 0.5 δt^{3}\\0.5 δt^{3} & 0.5 δt^{3} & δt^{2} & δt^{2}\\0.5 δt^{3} & 0.5 δt^{3} & δt^{2} & δt^{2}\end{matrix}\right]$$
```python
fig = plt.figure(figsize=(6, 6))
im = plt.imshow(Q, interpolation="none", cmap=plt.get_cmap('binary'))
plt.title('Process Noise Covariance Matrix $Q$')
# set the locations and labels of the yticks (one per state variable)
plt.yticks(np.arange(4), ('$x$', '$y$', '$\dot x$', '$\dot y$'), fontsize=22)
# set the locations and labels of the xticks (one per state variable)
plt.xticks(np.arange(4), ('$x$', '$y$', '$\dot x$', '$\dot y$'), fontsize=22)
plt.xlim([-0.5,3.5])
plt.ylim([3.5, -0.5])
from mpl_toolkits.axes_grid1 import make_axes_locatable
divider = make_axes_locatable(plt.gca())
cax = divider.append_axes("right", "5%", pad="3%")
plt.colorbar(im, cax=cax);
```
## Identity Matrix $I$
```python
I = np.eye(4)
I, I.shape
```
(array([[1., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.]]), (4, 4))
## Measurements
For example, we are using some random generated measurement values
```python
m = 100 # Measurements
vx= 20 # in X
vy= 10 # in Y
mx = np.array(vx+np.random.randn(m))
my = np.array(vy+np.random.randn(m))
measurements = np.vstack((mx,my))
print(measurements.shape)
print('Standard Deviation of Velocity Measurements=%.2f' % np.std(mx))
print('You assumed %.2f in R.' % R[0,0])
```
(2, 100)
Standard Deviation of Velocity Measurements=1.13
You assumed 100.00 in R.
```python
fig = plt.figure(figsize=(16,5))
plt.step(range(m),mx, label='$\dot x$')
plt.step(range(m),my, label='$\dot y$')
plt.ylabel(r'Velocity $m/s$')
plt.title('Measurements')
plt.legend(loc='best',prop={'size':18})
```
```python
# Preallocation for Plotting
xt = []
yt = []
dxt= []
dyt= []
Zx = []
Zy = []
Px = []
Py = []
Pdx= []
Pdy= []
Rdx= []
Rdy= []
Kx = []
Ky = []
Kdx= []
Kdy= []
def savestates(x, Z, P, R, K):
xt.append(float(x[0]))
yt.append(float(x[1]))
dxt.append(float(x[2]))
dyt.append(float(x[3]))
Zx.append(float(Z[0]))
Zy.append(float(Z[1]))
Px.append(float(P[0,0]))
Py.append(float(P[1,1]))
Pdx.append(float(P[2,2]))
Pdy.append(float(P[3,3]))
Rdx.append(float(R[0,0]))
Rdy.append(float(R[1,1]))
Kx.append(float(K[0,0]))
Ky.append(float(K[1,0]))
Kdx.append(float(K[2,0]))
Kdy.append(float(K[3,0]))
```
```python
for n in range(len(measurements[0])):
# Time Update (Prediction)
# ========================
# Project the state ahead
x = A*x
# Project the error covariance ahead
P = A*P*A.T + Q
# Measurement Update (Correction)
# ===============================
# Compute the Kalman Gain
S = H*P*H.T + R
K = (P*H.T) * np.linalg.pinv(S)
# Update the estimate via z
Z = measurements[:,n].reshape(2,1)
y = Z - (H*x) # Innovation or Residual
x = x + (K*y)
# Update the error covariance
P = (I - (K*H))*P
# Save states (for Plotting)
savestates(x, Z, P, R, K)
```
## Kalman Gains $K$
```python
def plot_K():
fig = plt.figure(figsize=(16,9))
plt.plot(range(len(measurements[0])),Kx, label='Kalman Gain for $x$')
plt.plot(range(len(measurements[0])),Ky, label='Kalman Gain for $y$')
plt.plot(range(len(measurements[0])),Kdx, label='Kalman Gain for $\dot x$')
plt.plot(range(len(measurements[0])),Kdy, label='Kalman Gain for $\dot y$')
plt.xlabel('Filter Step')
plt.ylabel('')
    plt.title('Kalman Gain (the lower, the more the measurements fulfill the prediction)')
plt.legend(loc='best',prop={'size':22})
plot_K()
```
## Uncertainty Matrix $P$
```python
def plot_P():
fig = plt.figure(figsize=(16,9))
plt.plot(range(len(measurements[0])),Px, label='$x$')
plt.plot(range(len(measurements[0])),Py, label='$y$')
plt.plot(range(len(measurements[0])),Pdx, label='$\dot x$')
plt.plot(range(len(measurements[0])),Pdy, label='$\dot y$')
plt.xlabel('Filter Step')
plt.ylabel('')
plt.title('Uncertainty (Elements from Matrix $P$)')
plt.legend(loc='best',prop={'size':22})
plot_P()
```
## State Estimate $x$
```python
def plot_x():
fig = plt.figure(figsize=(16,9))
plt.step(range(len(measurements[0])),dxt, label='$\dot x$')
plt.step(range(len(measurements[0])),dyt, label='$\dot y$')
plt.axhline(vx, color='#999999', label='$\dot x_{real}$')
plt.axhline(vy, color='#999999', label='$\dot y_{real}$')
plt.xlabel('Filter Step')
plt.title('Estimate (Elements from State Vector $x$)')
plt.legend(loc='best',prop={'size':22})
plt.ylim([0, 30])
plt.ylabel('Velocity')
plot_x()
```
## Position x/y
```python
def plot_xy():
fig = plt.figure(figsize=(16,16))
plt.scatter(xt,yt, s=20, label='State', c='k')
plt.scatter(xt[0],yt[0], s=100, label='Start', c='g')
plt.scatter(xt[-1],yt[-1], s=100, label='Goal', c='r')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Position')
plt.legend(loc='best')
plt.axis('equal')
plot_xy()
```
```python
```
# The Secant Method
J.J
---
The method is given by the iterations
\begin{equation}
x_{i+1} = x_{i} - \frac{f(x_{i})}{d_{i}},
\end{equation}
with
\begin{equation}
d_{i} = \frac{f(x_{i})-f(x_{i-1})}{x_{i} - x_{i-1}}.
\end{equation}
Example: $f(x) = x^{2} + 3x + 1$
```python
import numpy as np
import matplotlib.pyplot as plt
```
```python
def f(x):
f = x**2 + 3*x + 1
return f
```
```python
X = np.linspace(-3, 1, 100)
plt.plot(X, f(X))
```
```python
e = 0.0001 # tolerance
maxit = 1000 # maximum number of iterations
```
```python
def Secante(x0, x1, func = f, error = e, iterations = maxit):
it = 0
    while it < iterations:
        it += 1
        x = x1 - ((x1 - x0)/(func(x1) - func(x0)))*func(x1)
        if abs(func(x)) > error:
x0 = x1
x1 = x
else:
break
return x
```
```python
sol1 = Secante(-3.,-2.)
sol2 = Secante(-1.,0.)
print(sol1)
print(sol2)
```
-2.618034055727554
-0.38197424892703863
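As a sanity check (added here, not part of the original notebook), the exact roots of $x^{2} + 3x + 1$ are $x = \frac{-3 \pm \sqrt{5}}{2}$, which match the values found above:
```python
import numpy as np

# Exact roots of x^2 + 3x + 1 for comparison with the secant-method results
print((-3 - np.sqrt(5))/2)  # ≈ -2.6180
print((-3 + np.sqrt(5))/2)  # ≈ -0.3820
```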
```python
X = np.linspace(-3, 1, 100)
plt.plot(X, f(X))
plt.plot(sol1,f(sol1),'ro')
plt.plot(sol2,f(sol2),'ro')
plt.show()
```
```python
```
```python
import matplotlib.pyplot as plt
import numpy as np
import sympy
```
# Math Magic: Final Week Review
# Review: A bit of modern math philosophy
### What is the most elementary, simple, basic, primitive kind of math operation?
### What do you call it when you do a bunch of ^that kind of operation in a row?
### What do you call it when you do a bunch of ^that kind of operation in a row?
### Believe it or not, you can keep going down that road. If you do a bunch of ^that in a row, it is called `tetration`!
### A `hyperoperation`
- The name for any `normal` operation (like the ones above).
- This recursive repeated arithmetic operation can be described by:
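A standard way to write that recursion (filled in here, since the original cell left the formula out) is:
\begin{equation}
H_n(a, b) =
\begin{cases}
b + 1 & \text{if } n = 0\\
a & \text{if } n = 1 \text{ and } b = 0\\
0 & \text{if } n = 2 \text{ and } b = 0\\
1 & \text{if } n \ge 3 \text{ and } b = 0\\
H_{n-1}\big(a,\, H_n(a, b - 1)\big) & \text{otherwise}
\end{cases}
\end{equation}
so that $H_1$ is addition, $H_2$ is multiplication, $H_3$ is exponentiation, and $H_4$ is tetration.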
# Anyway, that's not even review. Back to the review!
# 1. What is the sine of theta? $sin(\theta) = ?$
# 2. What is the cosine of theta? $cos(\theta) = ?$
# 3. What is the tangent of theta? $tan(\theta) = ?$
# 4. What is the arcsine (inverse sine) of y / r ? $arcsin(\frac{y}{r}) = ?$
# 5. What is the arccosine (inverse cosine) of x / r ? $arccos(\frac{x}{r}) = ?$
# 6. What is the arctangent (inverse tangent) of y / x? $arctan(\frac{y}{x}) = ?$
# 1. What the heck is this demonic drawing above?
# 2. How many $\pi$ radians is 0 degrees?
# 3. How many $\pi$ radians is 90 degrees?
# 4. How many $\pi$ radians is 270 degrees?
# 5. Calculate in python the conversion between any *degrees* angle to *radians*
```python
# YOUR CODE HERE! Try converting degrees to radians
degrees = 42.42 # convert me to radians!
degrees * (2*np.pi/360)
# HINT: use np.pi, it is a highly accurate pi
```
0.7403686686959946
# Cowcooloose review
```
/; ;\
__ \____//
/{_\_/ `'\____
\___ ---(=)--(=)--}
_____________________________/ :--'
,-,'`@@@@@@@@ @@@@@@ \_ `__\
;:( @@@@@@@@@ @@@ \___(o'o)
:: ) @@@@ @@@@@@ ,'@@( `===='
:: : @@@@@: @@@@ `@@@:
:: \ @@@@@: @@@@@@@) ( '@@@'
;; /\ /`, @@@@@@@@@\\ :@@@@@)
::/ ) {_----------------: :~`, ;
;;'`; : ) : / `; ;
;;;; : : ; : ; ; :
`'`' / : : : : : :
)_ \__; ";" :_ ; \_\ `,','
:__\ \ * `,'* \ \ : \ * 8`;'* *
`^' \ :/ `^' `-^-' \v/ : \/ -Bill Ames-
```
### 1. What do we mean by the phrase "rate of change"?
### 2. What do we call the "rate of change" of position?
### 3. What do we call the "rate of change" of velocity/speed?
### 4. What 1-word definition do we use for the phrase "rate of change"?
### 5. What does this thing represent?
## $\frac{df(x)}{dx} = f'(x)$
# Recall that...
## $x(t) = position$
## $\frac{dx(t)}{dt} = x'(t) = velocity$
## $\frac{dx^2(t)}{dt} = x''(t) = acceleration$
### 6. The function $f(x) = e^x$ is special for what reason?
# GUESS WHAT
# I THINK
# WE WILL LEAVE IT AT THAT
# Thank you for a very fun semester!
- I hope you learned something useful!
```python
```
# PC lab 4: Logistic regression for classification
## Introduction
In a binary classification setting, we are interested in assigning an observation $\mathbf{x}$ to one of two possible classes, denoted by $y$. For example, maybe we would like to tell if a patient has a particular disease (y = 1) or not (y = 0), given certain symptoms $\mathbf{x}$. Generally speaking, we want to predict the probability that the class label $y = 1$, conditional on the data that we have observed, $\mathbf{x}$. This probability is also called the *class posterior* or the *class-membership probability*, which we can denote as follows:
\begin{equation}
Pr(Y=1|X) = P(X) = p(y= 1|\mathbf{x})
\end{equation}
The book uses the statistical notation on the left, but the notation with the feature vector $\mathbf{x}$ is more common in machine learning literature. In any case, both notations mean exactly the same. In this PC lab, we will cover one of the most popular classifiers: logistic regression.
Just like linear regression, logistic regression (LR) is a linear model. However, LR does not model the mean of a continuous outcome, but the logarithm of the [odds](https://en.wikipedia.org/wiki/Odds) of the probability $P(X)$:
\begin{equation}
log \frac{P(X)}{1-P(X)} = w_{0}x_{0} + w_{1}x_{1} + ... + w_{p}x_{p} = \mathbf{w^Tx}
\end{equation}
However, we are really interested in the probability $p$ and not in the odds of p. Therefore, it is common to apply the inverse log-odds transformation on both sides of the equation. This transformation is the **logistic function $\phi(z)$**, hence the name of logistic regression:
\begin{equation}
\phi(z) = \frac{1}{1 + e^{-z}} = \frac{e^{z}}{1+e^{z}}
\end{equation}
Verify for yourself that applying $\phi(z)$ on the log-odds yields $p$.
In other words, we can make predictions for $p$ with logistic regression as follows:
\begin{equation}
p(\mathbf{x}) = \phi(w_{0}x_{0} + w_{1}x_{1} + ... + w_{p}x_{p})
\end{equation}
If we want to classify a data point $\mathbf{x}$, we can calculate $p$ with LR and simply assign it to class 1 if $p$ exceeds a certain probability threshold. A typical threshold is 0.5.
<div class="alert alert-success">
<b>EXERCISE: What would happen to our predictions when we would choose a lower threshold, let's say 0.2? How would this affect the accuracy of our predictions? Can you think of a situation where we would want to do this? </b>
</div>
Let's stop for a moment to have a look at what the logistic transformation does:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-white')
x = np.arange(-8,8,0.01) # Generate a range of x values
y = 1/(1+np.exp(-x)) # Calculate the logistic transformation of these x'es
# Plot them
fig, ax = plt.subplots()
ax.scatter(x,y, marker='.');
ax.set_xlabel('x');
ax.set_ylabel('y');
```
As shown, $\phi$ monotonically maps any number from the real domain to a number in [0,1]. Indeed, this is a desirable property if we want to predict a probability!
## Training a LR model
### Loss function: the cross-entropy loss
Now that we have the logistic regression model to predict the probability of belonging to a certain class, all that remains is the question of how to find the weights of the model on a given set of training data. As always, this is the problem of minimizing a loss function to find an optimal set of weights. Where we used the mean squared error (MSE) for linear regression, we will use the **cross-entropy** loss function for LR. Minimizing the binary cross-entropy loss is equivalent to minimizing the negative log-likelihood of the data under a binomial distribution:
\begin{equation}
l_{log} = \frac{1}{n}\sum\limits_{i=1}^{n}-y_{i}log(p(\mathbf{x}_i))-(1-y_i)log(1-p(\mathbf{x}_i))
\end{equation}
Where $y_i$ is the class of data point $i$ and $p(\mathbf{x}_i)$ is the class-membership probability predicted by logistic regression for the observation $\mathbf{x}_i$. If we look at the cross-entropy loss **for a single data point** $l_{log}^{i}$, we can break it down in two parts:
\begin{equation}
l_{log}^{i} =
\begin{cases}
-log(p(\mathbf{x}_i)) & \text{if} \ y_i = 1\\
-log(1-p(\mathbf{x}_i)) & \text{if} \ y_i = 0
\end{cases}
\end{equation}
It should be clear that the cross-entropy loss will be larger for smaller values of $p(\mathbf{x}_i)$ if $y_i = 1$, and vice versa. Let's visualize the cross-entropy loss for these two cases:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-white')
p = np.arange(0.01,0.99,0.01) # Generate a range of predicted probabilities between zero and 1
l_0 = -np.log(p) # cross-entropy loss if y = 1
l_1 = -np.log(1-p)
# Plot them
fig, ax = plt.subplots(figsize=(10,6))
ax.scatter(p,l_0, marker='.');
ax.scatter(p, l_1, marker='.');
ax.set_xlabel('Predicted class-membership probability $p$');
ax.set_ylabel('Cross-entropy loss');
ax.legend(['Cross-entropy loss when y is 1', 'Cross-entropy loss when y is 0']);
```
<div class="alert alert-success">
<b>EXERCISE: Make sure you understand the cross-entropy loss. Verify that it correctly penalizes wrong predictions in both cases. Suppose that we have no information about the data at all, what would be the best guess for p to minimize the cross-entropy loss?</b>
</div>
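As a small numerical illustration (an addition, not part of the exercise solution), one can evaluate the cross-entropy of a constant prediction $p$ on a toy label vector; the minimum sits at the fraction of positive labels:
```python
import numpy as np

y_toy = np.array([0, 0, 0, 1, 1, 1, 1, 1, 1, 1])   # assumed toy labels: 70% positives
p_grid = np.arange(0.01, 1.0, 0.01)
ce = [-np.mean(y_toy*np.log(p) + (1 - y_toy)*np.log(1 - p)) for p in p_grid]
print(p_grid[np.argmin(ce)])                        # ≈ 0.7, the fraction of positives
```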
### Finding the weights with gradient descent
For linear regression, the solutions to the normal equations provide a convenient analytical solution to obtain the optimal set of model weights $\mathbf{w}$ on a set of training data. There is no such solution to find the optimal weights for a logistic regression model, so instead an optimization algorithm such as **gradient descent** is used to train a LR model.
Gradient descent is an iterative optimization algorithm that searches for the optimum of an objective function by making small changes to a set of optimization variables. Gradient descent (and more complex optimization algorithms, but we offer a separate course for that) are widely used in machine learning to find the optimal set of model weights that minimize a certain loss function. Especially when there is no analytical solution for the weights available like for linear regression.
Generally, gradient descent uses the **gradient** of the loss function with respect to the model weights to perform updates to those weights in each iteration. At iteration $k+1$, the algoritm computes the gradient of a loss function $J(\mathbf{w})$ evaluated in the training data. Then, it performs an update to the current parameter values that is relative to the gradient multiplied with the learning rate $\gamma$, which is a constant:
\begin{equation}
\mathbf{w}_{k+1} = \mathbf{w}_{k} - \gamma\nabla{J(\mathbf{w}_{k})}
\end{equation}
Initially, the weights are often initialized with random draws from some distribution. The algorithm continues to do updates, until it converges or until some stopping criterion is reached.
In order to perform gradient descent to find the weights of a logistic regression model, we need to compute the gradient of the loss function with respect to the model parameters. Recall that, for a single data point, the cross-entropy loss function was as follows:
\begin{equation}
l_{log}^{i}(\mathbf{w}) = -y_{i}log(p(\mathbf{x}_i))-(1-y_i)log(1-p(\mathbf{x}_i))
\end{equation}
Where $p(\mathbf{x}_i)$ is nothing else than the weighted sum of the inputs squashed through the sigmoid function:
\begin{equation}
p(\mathbf{x}_i) = \phi(w_{0}x_{0i} + w_{1}x_{1i} + ... + w_{p}x_{pi})
\end{equation}
Before going on, let's first calculate the partial derivative of the sigmoid function:
\begin{equation}
\frac{\partial}{\partial z} \phi(z) = \frac{\partial}{\partial z} \frac{1}{1+e^{-z}} = \frac{e^{-z}}{(1+e^{-z})^2}
\end{equation}
We can rewrite this as follows:
\begin{equation}
\frac{e^{-z}}{(1+e^{-z})^2} = \frac{1 +e^{-z} -1}{(1+e^{-z})^2} = \frac{1}{1+e^{-z}} \Big( 1 - \frac{1}{1+e^{-z}}\Big) = \phi(z)(1 - \phi(z))
\end{equation}
With this result and by applying the chain rule, we can compute the partial derivative of the loss function with respect to the weight $w_j$. We will use the symbol $z$ to denote the weighted sum of the features (i.e., the input for the logistic function) and drop the superscript $i$ for clarity:
\begin{equation}
\frac{\partial l_{log}(\mathbf{w})}{\partial w_j} = \frac{\partial}{\partial w_j} \Big(-ylog(\phi(z))-(1-y)log(1-\phi(z)) \Big) \\ = \Big( \frac{-y}{\phi(z)} + \frac{1-y}{1-\phi(z)} \Big)\frac{\partial}{\partial w_j}\phi(z) \\ = \Big( \frac{-y}{\phi(z)} + \frac{1-y}{1-\phi(z)} \Big) \phi(z)(1-\phi(z))\frac{\partial}{\partial w_j}z
\end{equation}
Since $z = w_{0}x_{0} + w_{1}x_{1} + ... + w_{p}x_{p}$, $\frac{\partial}{\partial w_j}z$ is nothing more than $x_j$, so we can rewrite the above as:
\begin{equation}
\frac{\partial l_{log}(\mathbf{w})}{\partial w_j} = \Big( -y(1-\phi(z) + (1-y)\phi(z))\Big)x_j \\ = \big( -y + \phi(z) \big)x_j = \big( \phi(z) - y \big)x_j
\end{equation}
With this partial derivative of the loss w.r.t $w_j$, we can write the update rule of the gradient descent algorithm for the $j^{th}$ weight:
\begin{equation}
w_{j,k+1} = w_{j,k} - \gamma(\phi(z_k)-y)x_{j}
\end{equation}
In other words, the algorithm will each time perform an update to the weight $w_{j}$ that is in proportion to the difference between the predicted probability of class membership in the previous iteration and the actual class. Makes sense! The entire gradient is simply the vector that contains the partial derivatives with respect to the entire weight vector $\mathbf{w}$, and in reality gradient descent acts on $\mathbf{w}$ and not on an individual weight $w_j$. Also, the gradient is typically not calculated for one data point, but evaluated over the entire training data set.
In practice, software packages such as [scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegressionscikit-learn) do this optimization under the hood, so there is no need to implement it manually each time we want to use logistic regression.
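For illustration only (the lab itself relies on scikit-learn), a bare-bones NumPy version of this gradient descent update could look as follows; the learning rate and number of iterations are arbitrary assumptions, and the intercept is omitted for brevity:
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_regression_gd(X, y, lr=0.1, n_iter=1000):
    """Batch gradient descent for logistic regression (no intercept, for brevity)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ w)              # predicted class-membership probabilities
        grad = X.T @ (p - y) / len(y)   # average of (phi(z) - y) * x_j over the training set
        w -= lr * grad
    return w
```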
## Application: predicting the status of a breast cancer tumor
In the first application of logistic regression, we will use the [Breast Cancer Wisconsin (Diagnostic) Data Set](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29). The dataset contains information on the disease status of 569 breast cancer patients: they were either diagnosed with a malign (status M) or with a benign (status B) tumor.
For each patient, the dataset also contains 30 features that represent statistics of the cell nuclei present in images taken after [fine needle aspirate tissue samples](https://en.wikipedia.org/wiki/Fine-needle_aspiration). These 30 features are the mean, standard deviation and the maximum of 10 measurements on the cell nuclei:
- radius
- texture
- perimeter
- area
- smoothness
- compactness
- concavity
- concave points
- symmetry
- fractal dimension
**Based on these feature of the cell nuclei, we would like to predict whether a patient has a malign or a benign breast cancer tumor.** Let's read in the data:
```python
import pandas as pd
import numpy as np
data = pd.read_csv('./wdbc.data', header=None, index_col=0, names=['Patient ID', 'status'] + list(np.arange(1,31,1)))
status = data['status']
data.head()
```
First, let's look at the distribution of the disease status:
```python
pd.value_counts(data['status']).plot(kind='bar');
```
There are about 350 benign cases and roughly 200 malign cases. This is a fairly balanced dataset.
<div class="alert alert-success">
<b>EXERCISE: Suppose that the dataset was unbalanced, with 525 B cases and only 25 M cases. Can you think of any problems this could give if we evaluated the accuracy of our logistic regression predictions? We will come back to this problem in one of the next labs.</b>
</div>
In order to perform LR, we will encode the disease status as a binary variable.
```python
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder().fit(status)
encoder.classes_ # 'B' will become class 0, 'M' will become class 1
```
```python
y = encoder.transform(status)
x = data.drop('status', axis=1).values # Drop the disease status from the dataframe, convert to numpy array
y
```
<div class="alert alert-success">
<b>EXERCISE: Using scikit-learn, split the data into an 80% training and a 20% test set. Fit a logistic regression model and evaluate training and testing accuracy. You should be able to achieve a fairly high accuracy! </b>
</div>
Use [this method](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) for train-test splitting and [this implementation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) to perform logistic regression. You can use the [score method](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.score) to evaluate the accuracy of your model. This method computes the accuracy as follows:
\begin{equation}
score = \frac{\text{Number of correctly classified instances}}{\text{Total number of instances}}
\end{equation}
```python
# ** solution **
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=42)
LRmodel = LogisticRegression()
LRmodel.fit(X_train, y_train)
LRmodel.score(X_train, y_train)
```
```python
LRmodel.score(X_test, y_test)
# ** solution **
```
To get an idea of which features are considered important by the LR model, we can visualize the weights it has learned in a bar plot:
```python
fig, ax = plt.subplots(figsize=(10,5))
pd.Series(LRmodel.coef_.flatten()).plot(ax=ax, kind='bar')
```
<div class="alert alert-success">
<b>Use your LR model to predict the class probabilities and the classes for the training data. Use the [```predict_proba()```](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.predict_proba) method to generate the predicted probabilities. Use the code below to plot the two against each other. Which data points are most likely to be misclassified?</b>
</div>
```python
# ** solution **
predicted_class_probabilities = LRmodel.predict_proba(X_train)[:,1]
predicted_classes = LRmodel.predict(X_train)
#** solution **
misclassified = predicted_classes != y_train
colors = ['#b2182b' if wrong else '#2166ac' for wrong in misclassified ]
fig, ax = plt.subplots(figsize=(8,6))
ax.scatter(predicted_class_probabilities, predicted_classes, marker='.', s=100, color=colors)
ax.set_xlabel('Predicted class probabilies').set_fontsize(20)
ax.set_ylabel('Predicted classes').set_fontsize(20)
ax.legend(['Correctly classified'])
```
Clearly, the misclassified points are those points where the predicted probability of class membership is rather close to 0.5.
# Multiclass classification
## One-versus-one classification
One-versus-one (OvO) classification is another approach to a multiclass classification problem. For a K-class problem, the strategy consists of training $\frac{K(K-1)}{2}$ classifiers, each of which must learn to distinguish between one particular pair of classes. Once the classifiers are trained, a voting scheme is applied to make a prediction for an unseen data point: each classifier has to decide between its two possible classes. The final predicted class is the class that gets the largest number of votes.
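A minimal sketch of this voting scheme (added for illustration; scikit-learn also offers `sklearn.multiclass.OneVsOneClassifier`) could look like this:
```python
from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression

def one_vs_one_predict(X_train, y_train, X_test):
    """Train K(K-1)/2 pairwise classifiers and predict by majority vote."""
    classes = np.unique(y_train)
    votes = np.zeros((len(X_test), len(classes)), dtype=int)
    for i, j in combinations(range(len(classes)), 2):
        mask = np.isin(y_train, [classes[i], classes[j]])
        clf = LogisticRegression().fit(X_train[mask], y_train[mask] == classes[j])
        pred = clf.predict(X_test)                        # True -> vote for class j, False -> class i
        votes[np.arange(len(X_test)), np.where(pred, j, i)] += 1
    return classes[np.argmax(votes, axis=1)]
```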
## One-versus-all classification
In one-versus-all (OvA) classification, a single classifier is trained per class, with the samples of that class as positive samples and all other samples as negatives. The strategy proceeds as follows for a K-class classification problem:
**Inputs:**
* a classification algorithm L (learner)
* feature matrix $\mathbf{X}$
* label vector y where $y_i \in {1,...,K}$
**Procedure:**
for each k in {1,...,K}:
* construct a new label vector z where $z_i$ is 1 if $y_i$ = k and 0 otherwise
* train L on $\mathbf{X}$ with the labels z to obtain a classifier $f_k$. The classifier should return class probabilities and not hard labels.
**Returns**
A list of trained classifiers $f_k$ for each k in {1,...,K}
To make predictions for a new sample $\mathbf{x}$, the $K$ classifiers are applied to $\mathbf{x}$ and the final predicted label is the label that is predicted with the highest confidence (probability):
$\hat{y} = \underset{k \in {1,...,K}}{\mathrm{argmax}} \, f_k(\mathbf{x})$
Let's simulate a toy dataset with three classes and two features, and split it in training and test data:
```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
X, y = make_blobs(n_samples=1000, centers= [[-2.5, 0], [0, 1], [3.5, -1]], random_state=42)
#train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```
```python
# Make the plot
fig, ax = plt.subplots(figsize=(15,10))
colors=['#66c2a5', '#fc8d62', '#8da0cb']
for i, color in enumerate(colors):
idx_train = np.where(y_train==i)
idx_test = np.where(y_test==i)
plt.scatter(X_train[idx_train,0], X_train[idx_train,1], c=color, edgecolor='black', s=30)
plt.scatter(X_test[idx_test,0], X_test[idx_test, 1],c='white', edgecolor=color, s=70)
ax.legend(['Class 1 - train',
'Class 1 - test',
'Class 2 - train',
'Class 2 - test',
'Class 3 - train',
'Class 3 - test']);
ax.set_xlabel('Feature 1');
ax.set_ylabel('Feature 2');
ax.set_title('Toy dataset for multiclass classification').set_fontsize(20);
```
```python
i = 0
```
```python
z_train = np.zeros(len(y_train))
z_train[np.where(y_train==i)] = 1
```
```python
z_train
```
<div class="alert alert-success">
<b>Implement a one-versus-all loop to tackle this classification problem. Train a list of classifiers on the training data. Make predictions on the test data. You can use the code below to get started. </b>
</div>
```python
# ***solution***
L1 = LogisticRegression()
L2 = LogisticRegression()
L3 = LogisticRegression()
L = [L1, L2, L3]
# Train the list of classifiers in one-v-all fashion
for i,l in enumerate(L):
z_train = (y_train==i)
l.fit(X_train, z_train)
# Make predictions on the test data
predictions = []
for l in L:
predictions.append(l.predict_proba(X_test)[:,1])
predicted_classes = np.array([np.argmax([pred[i] for pred in predictions]) for i in range(len(X_test))])
# ***solution***
```
<div class="alert alert-success">
<b>Run the code below to visualize your predictions. </b>
</div>
```python
classification_accuracy=np.round(np.mean(y_test == predicted_classes)*100,2)
```
```python
# Visualize the predictions
fig, ax = plt.subplots(figsize=(15,10))
colors=['#66c2a5', '#fc8d62', '#8da0cb']
for i, color in enumerate(colors):
idx_train = np.where(y_train==i)
idx_test = np.where(y_test==i)
plt.scatter(X_train[idx_train,0], X_train[idx_train,1], c=color, edgecolor='black', s=30)
plt.scatter(X_test[idx_test,0], X_test[idx_test, 1],c='white', edgecolor=color, s=70)
ax.legend(['Class 1 - train',
'Class 1 - test',
'Class 2 - train',
'Class 2 - test',
'Class 3 - train',
'Class 3 - test']);
# add predictions
for i, color in enumerate(colors):
idx_predicted = np.where(predicted_classes==i)
plt.scatter(X_test[idx_predicted,0], X_test[idx_predicted,1], c=color, marker='s', s=2)
ax.set_xlabel('Feature 1');
ax.set_ylabel('Feature 2');
ax.set_title('Toy dataset for multiclass classification - classification accuracy: {}%'.format(classification_accuracy)).set_fontsize(20);
```
```python
```
```python
```
```python
import numpy as np
import scipy
import qiskit
from qiskit import QuantumRegister, QuantumCircuit, ClassicalRegister, Aer
from qiskit.circuit import ParameterVector, Parameter
from qiskit import algorithms
from qiskit.ignis.verification.tomography import state_tomography_circuits, TomographyFitter, StateTomographyFitter
from qiskit.quantum_info import Statevector, state_fidelity
from qiskit.opflow import I, X, Y, Z
from qiskit.quantum_info.operators import Operator
from qiskit.algorithms.optimizers import SPSA
from qiskit import IBMQ, transpile
from qiskit.providers.aer import AerSimulator
from qiskit.test.mock import FakeGuadalupe
from qiskit_ionq import IonQProvider
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import pandas as pd
```
```python
IBMQ.load_account() # Load account from disk
IBMQ.providers() # List all available providers
```
[<AccountProvider for IBMQ(hub='ibm-q', group='open', project='main')>,
<AccountProvider for IBMQ(hub='ibm-q-community', group='qhack-hackathon', project='7-qubit')>,
<AccountProvider for IBMQ(hub='ibm-q-community', group='qhack-hackathon', project='16-qubit')>]
## Table of contents
- [Experiment 1: Two qubits, one layer](#exp1)
- [Experiment 2: Three qubits, two layers](#exp2)
- [Experiment 3: Generalization](#exp3)
- [Experiment 4: Model capacity](#exp4)
# Simulating collective neutrino oscillation using QAOA algorithm
As stated in the readme, our project aims to demonstrate the time evolution of the set of amplitudes governed by the Schrödinger equation:
\begin{equation}
|\phi(t)\rangle=e^{-iHt}|\phi_{0}\rangle
\end{equation}
The Hamiltonian in this equation governs neutrino flavor evolution in an environment with a high density of neutrinos and includes both the vacuum and the forward-scattering interaction contributions. Let's have a closer look at this Hamiltonian.
The Hamiltonian that characterizes the system of $N$ interacting neutrinos (each represented by a qubit) is given by
\begin{equation}
H = \sum_{k=1}^N \overrightarrow{b} \cdot \overrightarrow{\sigma_k} + \sum_{p<q}^N J_{pq} \overrightarrow{\sigma_p} \cdot \overrightarrow{\sigma_q}
\end{equation}
with the external field $\overrightarrow{b} = (b^x,b^y,b^z) = \left(\sqrt{1-0.925^2}, 0, -0.925\right)$ and the pair coupling matrix $J_{pq} = 1 - \cos(\theta_{pq})$, where $\theta_{pq} = \arccos(0.9) \frac{|p-q|}{N-1}$.
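To make the reference (exact) evolution concrete, the Hamiltonian can also be built as a dense matrix with plain NumPy/SciPy. The sketch below is an addition for illustration and is not part of the original workflow; it does not use the Qiskit operator classes, and the example system size and evolution time are arbitrary assumptions.
```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def embed(op, k, N):
    """Place a single-qubit operator on site k of an N-qubit register."""
    mats = [np.eye(2, dtype=complex)] * N
    mats[k] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def neutrino_hamiltonian(N):
    b = np.array([np.sqrt(1 - 0.925**2), 0.0, -0.925])
    H = np.zeros((2**N, 2**N), dtype=complex)
    for k in range(N):                       # one-body (external field) terms
        for comp in range(3):
            H += b[comp] * embed(paulis[comp], k, N)
    for p in range(N):                       # two-body forward-scattering terms
        for q in range(p + 1, N):
            J = 1 - np.cos(np.arccos(0.9) * abs(p - q) / (N - 1))
            for comp in range(3):
                H += J * embed(paulis[comp], p, N) @ embed(paulis[comp], q, N)
    return H

# exact evolution of |0...0>, usable as the ideal reference state
H = neutrino_hamiltonian(2)
psi0 = np.zeros(2**2, dtype=complex); psi0[0] = 1.0
psi_t = expm(-1j * H * 0.5) @ psi0   # t = 0.5 is an arbitrary example time
```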
We choose to trotterize $H$ by viewing it in the form of $H = A_1 + A_2 + A_3 + B_1 + B_2 + B_3$, where
\begin{align}
A_1 &= \sum_{k=1}^N b^x \sigma_k^x,
&A_2 &= \sum_{k=1}^N b^y \sigma_k^y,
&A_3 &= \sum_{k=1}^N b^z \sigma_k^z, \\
B_1 &= \sum_{p<q}^N J_{pq} \sigma_p^x \sigma_q^x,
&B_2 &= \sum_{p<q}^N J_{pq} \sigma_p^y \sigma_q^y,
&B_3 &= \sum_{p<q}^N J_{pq} \sigma_p^z \sigma_q^z.
\end{align}
The time evolution can be approximated by
\begin{align}
e^{-iHt} &\approx (e^{-iA_1t/n}e^{-iA_2t/n}e^{-iA_3t/n}e^{-iB_1t/n}e^{-iB_2t/n}e^{-iB_3t/n})^n \\
&= \prod_{i=1}^n e^{-iA_1t/n}e^{-iA_2t/n}e^{-iA_3t/n}e^{-iB_1t/n}e^{-iB_2t/n}e^{-iB_3t/n}
\end{align}
We introduce parameters $\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma}, \boldsymbol{\delta}, \boldsymbol{\epsilon}, \boldsymbol{\kappa} \in \mathbb{R}^n$ and trotterize the unitary operator as follows
\begin{align}
e^{-iHt} &= \prod_{i=1}^n e^{-iA_1\alpha_i t}e^{-iA_2\beta_i t}e^{-iA_3\gamma_i t}e^{-iB_1 \delta_i t}e^{-iB_2 \epsilon_i t}e^{-iB_3 \kappa_i t} \end{align}
Let's see how to implement these components. We show that the first term can be written as a product of rotations about the $x$-axis.
\begin{align}
e^{-iA_1\alpha_i t} &= e^{-i\alpha_i t\sum_{k=1}^N b^x \sigma_k^x} \\
&\equiv e^{-i\alpha_i t\sum_{k=1}^N \sigma_k^x} \\
&= \prod_{k=1}^N e^{-i\alpha_i t \sigma_k^x} \\
&= \prod_{k=1}^N \text{RX}_k(2\alpha_i t)
\end{align}
In the second line, $b^x$ has been absorbed into the learnable parameters $\alpha_i$. Moreover, the term can be written as a product of smaller exponential terms as in the third line because $\sigma_x$'s commute with each other. This is due to the fact that $[A,B] \equiv AB-BA = 0$ implies $e^{A+B} = e^{A}e^{B}$.
\begin{align}
e^{-iA_2\beta_i t} &= \prod_{k=1}^N \text{RY}_k(2\beta_i t) \\
e^{-iA_3\gamma_i t} &= \prod_{k=1}^N \text{RZ}_k(2\gamma_i t)
\end{align}
The same derivation can be made for two-qubit interactions. In particular,
\begin{align}
e^{-iB_1\delta_i} &= \prod_{p<q}^N e^{-i\delta_i J_{pq} \sigma_p^x \sigma_q^x} \\
&= \prod_{p<q}^N \text{RXX}_k(2 J_{pq} \delta_i t) \\
e^{-iB_2\epsilon_i} &= \prod_{p<q}^N \text{RYY}_k(2 J_{pq} \epsilon_i t) \\
e^{-iB_3\kappa_i} &= \prod_{p<q}^N \text{RZZ}_k(2 J_{pq} \kappa_i t)
\end{align}
With this decomposition, the time evolution can be approximated by a sequence of single- and two-qubit rotation gates. The sequence of gates contains $6n+1$ parameters, $6n$ of which are learnable, while the remaining one is the elapsed time of the evolution.
```python
def circuit(num_qubits:int, num_layers:int):
"""
Construct the variational form
"""
varform = QuantumCircuit(num_qubits)
t = Parameter('t')
alpha = ParameterVector('alpha', num_layers)
beta = ParameterVector('beta', num_layers)
gamma = ParameterVector('gamma', num_layers)
delta = ParameterVector('delta', num_layers)
eps = ParameterVector('eps', num_layers)
kappa = ParameterVector('kappa', num_layers)
const = np.arccos(0.9)
angles = np.zeros((num_qubits,num_qubits))
for p in range(num_qubits):
for q in range(num_qubits):
angles[p,q] = const * abs(p-q) / (num_qubits-1)
J = 1 - np.cos(angles)
for n in range(num_layers):
for k in range(num_qubits):
varform.rx(2*alpha[n]*t, k)
for k in range(num_qubits):
varform.ry(2*beta[n]*t, k)
for k in range(num_qubits):
varform.rz(2*gamma[n]*t, k)
## 2-body interactions
for p in range(num_qubits):
for q in range(num_qubits):
if p < q:
varform.rxx(2*delta[n]*J[p,q]*t, p, q)
for p in range(num_qubits):
for q in range(num_qubits):
if p < q:
varform.ryy(2*eps[n]*J[p,q]*t, p, q)
for p in range(num_qubits):
for q in range(num_qubits):
if p < q:
varform.rzz(2*kappa[n]*J[p,q]*t, p, q)
varform.barrier()
return varform, t
```
```python
varform, time_param = circuit(num_qubits=2, num_layers=1)
```
```python
print(varform.num_parameters)
print(varform.parameters)
print(varform.draw())
```
7
ParameterView([ParameterVectorElement(alpha[0]), ParameterVectorElement(beta[0]), ParameterVectorElement(delta[0]), ParameterVectorElement(eps[0]), ParameterVectorElement(gamma[0]), ParameterVectorElement(kappa[0]), Parameter(t)])
┌──────────────────┐┌─────────────────┐┌──────────────────┐»
q_0: ┤ Rx(2*alpha[0]*t) ├┤ Ry(2*beta[0]*t) ├┤ Rz(2*gamma[0]*t) ├»
├──────────────────┤├─────────────────┤├──────────────────┤»
q_1: ┤ Rx(2*alpha[0]*t) ├┤ Ry(2*beta[0]*t) ├┤ Rz(2*gamma[0]*t) ├»
└──────────────────┘└─────────────────┘└──────────────────┘»
« ┌──────────────────────┐┌────────────────────┐ ░
«q_0: ┤0 ├┤0 ├─■────────────────────░─
« │ Rxx(0.2*delta[0]*t) ││ Ryy(0.2*eps[0]*t) │ │ZZ(0.2*kappa[0]*t) ░
«q_1: ┤1 ├┤1 ├─■────────────────────░─
« └──────────────────────┘└────────────────────┘ ░
To avoid confusion, denote the unitary produced by the variational form as $U(\boldsymbol{\theta},t)$, where $\boldsymbol{\theta}$ represents all learnable parameters, in contrast to the true time evolution $e^{-iHt}$. Our goal is to simulate $e^{-iHt}$ over a period of time. We choose the following objective function to capture how consistently $U(\boldsymbol{\theta},t)$ approximates $e^{-iHt}$ across that period.
\begin{align}
\max_\boldsymbol{\theta} & f(\boldsymbol{\theta}), & \text{where } & f(\boldsymbol{\theta}) = \int_0^T \left| \langle 0|e^{iHt} U(\boldsymbol{\theta},t) |0\rangle \right|^2 dt
\end{align}
The function is the time-averaged fidelity (up to the constant factor $1/T$) between the ideal final states and the final states obtained with $U(\boldsymbol{\theta},t)$. We observe through experiments that the variational form generalizes relatively well, so a small number of time marks suffices. In practice, running quantum circuits on a real device outputs measurement statistics; the density matrix $\rho(\boldsymbol{\theta},t)$ of the final state can be reconstructed by applying state tomography. The optimization problem we actually solve is
\begin{align}
\max_\boldsymbol{\theta} & \frac{1}{S} \sum_{s=1}^S \left| \langle 0|e^{iHt_s} U(\boldsymbol{\theta},t_s) |0\rangle \right|^2, & \text{for statevector simulator, or} \\
\max_\boldsymbol{\theta} & \frac{1}{S} \sum_{s=1}^S \left| \langle 0|e^{iHt_s} \rho(\boldsymbol{\theta},t_s) e^{-iHt_s} |0\rangle \right|^2, & \text{for physical devices.}
\end{align}
for a set of evenly spaced time marks $\{t_s\}_{s=1}^S$ from $0$ to $T$.
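Before defining the helpers, here is a stripped-down sketch (our own, for illustration) of the noiseless, discretized objective above. It assumes the `circuit` function defined earlier and an `exact_states` helper like the one implemented in the next cells; the full cost function used for optimization appears further below.
```python
# Illustrative sketch of the discretized noiseless objective (average fidelity over time marks).
# Assumes `circuit` from above and an `exact_states` helper like the one defined in the next cells.
import numpy as np
from qiskit.quantum_info import Statevector, state_fidelity

def average_fidelity(varform, time_param, theta, time_marks):
    init_state = Statevector.from_label('0' * varform.num_qubits)
    ideal = exact_states(varform.num_qubits, time_marks, init_state)
    fids = []
    for t, target in zip(time_marks, ideal):
        bound = varform.bind_parameters({time_param: t}).bind_parameters(theta)
        fids.append(state_fidelity(init_state.evolve(bound), target))
    return np.mean(fids)

# Example (hypothetical random parameters):
# varform, time_param = circuit(num_qubits=2, num_layers=1)
# print(average_fidelity(varform, time_param, np.random.rand(varform.num_parameters - 1), [0.5, 1.0]))
```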
```python
def exact_hamiltonian(num_qubits:int):
"""
Compute the exact Hamiltonian
"""
def pair_coupling_matrix():
J = np.zeros(shape=(num_qubits, num_qubits))
for p in range(num_qubits):
for q in range(num_qubits):
J[p,q] = 1 - np.cos(np.arccos(0.9)*(np.abs(p-q)/(num_qubits-1)))
return J
def sigma_k(k):
sigma_X = ( (I^k) ^ X ^ (I^(num_qubits-k-1)) ).to_matrix()
sigma_Y = ( (I^k) ^ Y ^ (I^(num_qubits-k-1)) ).to_matrix()
sigma_Z = ( (I^k) ^ Z ^ (I^(num_qubits-k-1)) ).to_matrix()
return np.array([sigma_X, sigma_Y, sigma_Z])
dim = 2**num_qubits
b_vector = np.array([np.sqrt(1.0-0.925**2), 0.0, -0.925])
sigma_vectors = [sigma_k(qubit) for qubit in range(num_qubits)] # num_qubits x 3 x (dim x dim)
J = pair_coupling_matrix()
first_term = np.zeros(shape=(dim,dim), dtype=np.complex128)
for i in range(num_qubits):
sigma_vec_mul = (b_vector[:, None, None] * sigma_vectors[i])
first_term += np.sum(sigma_vec_mul, axis=0)
second_term = np.zeros(shape=(dim,dim), dtype=np.complex128)
for p in range(num_qubits):
for q in range(num_qubits):
if p < q:
for pauli_idx in range(3):
second_term += J[p,q] * (sigma_vectors[p][pauli_idx] @ sigma_vectors[q][pauli_idx])
return first_term + second_term
```
```python
def exact_unitaries(num_qubits:int, ts:np.ndarray):
"""
Compute the exact unitary operators exp(-iHt) for various t
"""
H = exact_hamiltonian(num_qubits)
operators = []
for t in ts:
U = scipy.linalg.expm(-1j * t * H)
operators.append(Operator(U))
#print(operators[-1].is_unitary())
return operators
def exact_states(num_qubits:int, ts:np.ndarray , init_state:Statevector=None):
"""
Compute the exact output states by applying exp(-iHt) on an initial state for various t
"""
init_state = init_state or Statevector.from_label('0'*num_qubits)
unitaries = exact_unitaries(num_qubits, ts)
output_states = [init_state.evolve(U) for U in unitaries]
return output_states
```
We perform training experiments with both the noiseless (Qiskit Statevector) simulator and a noisy simulator. Due to high traffic, it is not feasible to use IBM's and IonQ's hardware or online simulators, since the optimization routine requires many queries to the cost function, each of which involves running quantum circuits. Instead, we use FakeGuadalupe as our noisy simulator. The optimization algorithm is BFGS for noiseless simulations and SPSA for noisy ones.
```python
def optimize_simulation(varform:QuantumCircuit, time_param:float, num_time_marks:int,
time_max:float=1., noisy_backend:bool=False, verbose=False):
def cost_fn(x):
full_circuits = [circ.bind_parameters(x) for circ in bind_circuits]
if noisy_backend:
tomo_states = []
for circ in full_circuits:
qst = state_tomography_circuits(circ, circ.qubits)
job_res = qiskit.execute(qst, backend, shots=512).result()
#job_res = [qiskit.execute(circ, backend, shots=512).result() for circ in qst]
#job_res = meas_filter.apply(job_res) # apply measurement error mitigation
tomo_fitter = StateTomographyFitter(job_res, qst)
rho_fit = tomo_fitter.fit(method='lstsq')
tomo_states.append(rho_fit)
fids = []
for i in range(len(tomo_states)):
fids.append(state_fidelity(tomo_states[i], ideal_states[i]))
sum_fid = sum(fids)
if verbose:
print("Fidelity sum: ", sum_fid)
return -sum_fid / len(tomo_states)
else: # noiseless backend
circuit_states = [init_state.evolve(full_circ) for full_circ in full_circuits]
fids = []
for i in range(len(circuit_states)):
fids.append(state_fidelity(circuit_states[i], ideal_states[i]))
sum_fid = sum(fids)
if verbose:
print("Fidelity sum: ", sum_fid)
return -sum_fid / len(circuit_states)
if noisy_backend:
backend = FakeGuadalupe()
#backend = IonQProvider().get_backend("ionq_simulator")
# else:
# backend = Aer.get_backend('qasm_simulator')
start = 0.
end = time_max
ts = np.linspace(end,start,num_time_marks, endpoint=False)[::-1]
if verbose:
print("Time marks used: ", ts)
bind_circuits = [varform.bind_parameters({time_param: t}) for t in ts]
init_state = Statevector.from_label('0'*varform.num_qubits)
ideal_states = exact_states(varform.num_qubits, ts, init_state)
if noisy_backend:
init_params = np.random.rand(varform.num_parameters-1)
opt = SPSA(maxiter=500)
point, value, nfev = opt.optimize(num_vars=varform.num_parameters-1, objective_function=cost_fn,initial_point= init_params)
return point, value
else:
init_params = np.random.rand(varform.num_parameters-1)
options = {'disp':True}
res = scipy.optimize.minimize(cost_fn, x0=init_params, method='BFGS', tol=10e-4, options=options)
return res.x, res.fun
```
```python
def simulation(varform, param, ts):
init_state = Statevector.from_label('0'*varform.num_qubits)
ideal_states = exact_states(varform.num_qubits, ts, init_state)
bind_circuits = [varform.bind_parameters({time_param: t}) for t in ts]
full_circuits = [circ.bind_parameters(param) for circ in bind_circuits]
circuit_states = [init_state.evolve(circ) for circ in full_circuits]
backend = FakeGuadalupe()
tomo_states = []
for circ in full_circuits:
qst = state_tomography_circuits(circ, circ.qubits)
job_res = qiskit.execute(qst, backend, shots=512).result()
tomo_fitter = StateTomographyFitter(job_res, qst)
rho_fit = tomo_fitter.fit(method='lstsq')
tomo_states.append(rho_fit)
noiseless_fids = []
noisy_fids = []
device_fids = []
for i in range(len(full_circuits)):
noiseless_fids.append(state_fidelity(circuit_states[i], ideal_states[i]))
device_fids.append(state_fidelity(circuit_states[i], tomo_states[i]))
noisy_fids.append(state_fidelity(tomo_states[i], ideal_states[i]))
return noiseless_fids, noisy_fids, device_fids
def plot(ts, nless_sim, nsy_sim, tomo_fid, min_y, title):
plt.figure(figsize=(8, 6), dpi=80)
plt.plot(ts, nless_sim, label='Noiseless simulation')
plt.plot(ts, nsy_sim, label='Noisy simulation')
plt.plot(ts, tomo_fid, label='Tomography fidelity', linestyle='dashdot')
plt.xlabel('Time')
plt.ylabel('Fidelity')
plt.ylim(min_y)
plt.title(title)
plt.legend()
```
<div id='exp1'/>
# Experiment 1: Two qubits, one layer
We train the circuit on the fidelity at two time marks $t \in \{0.5,1.0\}$. The time interval for simulation is set to $[0,1]$.
```python
varform, time_param = circuit(num_qubits=2, num_layers=1)
print("Number of learnable parameters: ", varform.num_parameters-1)
print(varform.parameters[:-1])
```
Number of learnable parameters: 6
[ParameterVectorElement(alpha[0]), ParameterVectorElement(beta[0]), ParameterVectorElement(delta[0]), ParameterVectorElement(eps[0]), ParameterVectorElement(gamma[0]), ParameterVectorElement(kappa[0])]
```python
# param_opt_noiseless, value_opt_noiseless = optimize_simulation(varform, time_param, 2, noisy_backend=False)
# param_opt_noisy, value_opt_noisy = optimize_simulation(varform, time_param, 2, noisy_backend=True)
# pd.DataFrame(param_opt_noiseless).to_csv("opt_2q_1l_noiseless.csv", header=None, index=None)
# pd.DataFrame(param_opt_noisy).to_csv("opt_2q_1l_noisy.csv", header=None, index=None)
```
```python
param_opt_noiseless = pd.read_csv("opt_2q_1l_noiseless.csv", header=None).to_numpy().flatten()
param_opt_noisy = pd.read_csv("opt_2q_1l_noisy.csv", header=None).to_numpy().flatten()
```
```python
ts = np.linspace(0,1,20)
nless_train_nless_sim, nless_train_nsy_sim, nless_train_tomo_fid = simulation(varform, param_opt_noiseless, ts)
nsy_train_nless_sim, nsy_train_nsy_sim, nsy_train_tomo_fid = simulation(varform, param_opt_noisy, ts)
```
/Users/erio/opt/anaconda3/envs/QML/lib/python3.8/site-packages/qiskit/ignis/verification/tomography/basis/circuits.py:468: DeprecationWarning: The QuantumCircuit.__iadd__() method is being deprecated. Use the compose() (potentially with the inplace=True argument) and tensor() methods which are more flexible w.r.t circuit register compatibility.
prep += circuit
/Users/erio/opt/anaconda3/envs/QML/lib/python3.8/site-packages/qiskit/circuit/quantumcircuit.py:942: DeprecationWarning: The QuantumCircuit.extend() method is being deprecated. Use the compose() (potentially with the inplace=True argument) and tensor() methods which are more flexible w.r.t circuit register compatibility.
return self.extend(rhs)
/Users/erio/opt/anaconda3/envs/QML/lib/python3.8/site-packages/qiskit/ignis/verification/tomography/basis/circuits.py:478: DeprecationWarning: The QuantumCircuit.__add__() method is being deprecated.Use the compose() method which is more flexible w.r.t circuit register compatibility.
circ = prep + meas
/Users/erio/opt/anaconda3/envs/QML/lib/python3.8/site-packages/qiskit/circuit/quantumcircuit.py:933: DeprecationWarning: The QuantumCircuit.combine() method is being deprecated. Use the compose() method which is more flexible w.r.t circuit register compatibility.
return self.combine(rhs)
```python
plot(ts,nless_train_nless_sim, nless_train_nsy_sim, nless_train_tomo_fid, 0.8, "Neutrino evolution simulation - Noiseless training")
plot(ts,nsy_train_nless_sim, nsy_train_nsy_sim, nsy_train_tomo_fid, 0.8, "Neutrino evolution simulation - Noisy training")
```
<div id='exp2'/>
# Experiment 2: Three qubits, two layers
We train the circuit on the fidelity at four time marks $t \in \{0.5,1.0,1.5,2.0\}$. The time interval for simulation is set to $[0,2]$.
```python
varform, time_param = circuit(num_qubits=3, num_layers=2)
print("Number of learnable parameters: ", varform.num_parameters-1)
print(varform.parameters[:-1])
```
Number of learnable parameters: 12
[ParameterVectorElement(alpha[0]), ParameterVectorElement(alpha[1]), ParameterVectorElement(beta[0]), ParameterVectorElement(beta[1]), ParameterVectorElement(delta[0]), ParameterVectorElement(delta[1]), ParameterVectorElement(eps[0]), ParameterVectorElement(eps[1]), ParameterVectorElement(gamma[0]), ParameterVectorElement(gamma[1]), ParameterVectorElement(kappa[0]), ParameterVectorElement(kappa[1])]
```python
# param_opt_noiseless, value_opt_noiseless = optimize_simulation(varform, time_param, 4, time_max=2, noisy_backend=False)
# param_opt_noisy, value_opt_noisy = optimize_simulation(varform, time_param, 4, time_max=2, noisy_backend=True)
# pd.DataFrame(param_opt_noiseless).to_csv("opt_3q_2l_noiseless.csv", header=None, index=None)
# pd.DataFrame(param_opt_noisy).to_csv("opt_3q_2l_noisy.csv", header=None, index=None)
```
```python
param_opt_noiseless = pd.read_csv("opt_3q_2l_noiseless.csv", header=None).to_numpy().flatten()
param_opt_noisy = pd.read_csv("opt_3q_2l_noisy.csv", header=None).to_numpy().flatten()
```
```python
ts = np.linspace(0,2,20)
nless_train_nless_sim, nless_train_nsy_sim, nless_train_tomo_fid = simulation(varform, param_opt_noiseless, ts)
nsy_train_nless_sim, nsy_train_nsy_sim, nsy_train_tomo_fid = simulation(varform, param_opt_noisy, ts)
```
```python
plot(ts,nless_train_nless_sim, nless_train_nsy_sim, nless_train_tomo_fid, 0.4, "Neutrino evolution simulation - Noiseless training")
plot(ts,nsy_train_nless_sim, nsy_train_nsy_sim, nsy_train_tomo_fid, 0.4, "Neutrino evolution simulation - Noisy training")
```
<div id='exp3'/>
# Experiment 3: Generalization
We show that getting the model to learn the optimal parameters for a few time marks suffices. In particular, we let it optimize the average fidelity at $2S$ time marks $t \in \{0.5,1.0,\dots,S-0.5,S\}$ in the time interval $[0,S]$.
The optimized model can simulate the exact time evolution over the entire period with high precision.
```python
varform, time_param = circuit(num_qubits=4, num_layers=4)
print("Number of learnable parameters: ", varform.num_parameters-1)
print(varform.parameters[:-1])
```
Number of learnable parameters: 24
[ParameterVectorElement(alpha[0]), ParameterVectorElement(alpha[1]), ParameterVectorElement(alpha[2]), ParameterVectorElement(alpha[3]), ParameterVectorElement(beta[0]), ParameterVectorElement(beta[1]), ParameterVectorElement(beta[2]), ParameterVectorElement(beta[3]), ParameterVectorElement(delta[0]), ParameterVectorElement(delta[1]), ParameterVectorElement(delta[2]), ParameterVectorElement(delta[3]), ParameterVectorElement(eps[0]), ParameterVectorElement(eps[1]), ParameterVectorElement(eps[2]), ParameterVectorElement(eps[3]), ParameterVectorElement(gamma[0]), ParameterVectorElement(gamma[1]), ParameterVectorElement(gamma[2]), ParameterVectorElement(gamma[3]), ParameterVectorElement(kappa[0]), ParameterVectorElement(kappa[1]), ParameterVectorElement(kappa[2]), ParameterVectorElement(kappa[3])]
```python
# param_opt_noiseless, value_opt_noiseless = optimize_simulation(varform, time_param, 6, time_max=3., noisy_backend=False, verbose=True)
# pd.DataFrame(param_opt_noiseless).to_csv("opt_4q_4l_noiseless.csv", header=None, index=None)
```
```python
param_opt_noiseless = pd.read_csv("opt_4q_4l_noiseless.csv", header=None).to_numpy().flatten() # avg fid = 0.9999949
ts = np.linspace(0,3.0,20)
nless_train_nless_sim, nless_train_nsy_sim, nless_train_tomo_fid = simulation(varform, param_opt_noiseless, ts)
```
```python
plot(ts,nless_train_nless_sim, nless_train_nsy_sim, nless_train_tomo_fid, 0., "Neutrino evolution simulation - Noiseless training")
```
<div id='exp4'/>
# Experiment 4: Model capacity
In this experiment, we investigate the variational form's ability to approximate the exact time evolution over a long period. Only noiseless training is performed due to the time constraints of the hackathon.
We fix a configuration with $4$ qubits and $4$ layers and look for the longest time interval over which the variational form still performs well. The number of time marks is twice the length of the time interval, i.e. if the time interval is $[0,3]$, then there are 6 time marks $t \in \{0.5,1.0,\dots,2.5,3.0\}$.
```python
varform, time_param = circuit(num_qubits=4, num_layers=4)
print("Number of learnable parameters: ", varform.num_parameters-1)
print(varform.parameters[:-1])
```
Number of learnable parameters: 24
[ParameterVectorElement(alpha[0]), ParameterVectorElement(alpha[1]), ParameterVectorElement(alpha[2]), ParameterVectorElement(alpha[3]), ParameterVectorElement(beta[0]), ParameterVectorElement(beta[1]), ParameterVectorElement(beta[2]), ParameterVectorElement(beta[3]), ParameterVectorElement(delta[0]), ParameterVectorElement(delta[1]), ParameterVectorElement(delta[2]), ParameterVectorElement(delta[3]), ParameterVectorElement(eps[0]), ParameterVectorElement(eps[1]), ParameterVectorElement(eps[2]), ParameterVectorElement(eps[3]), ParameterVectorElement(gamma[0]), ParameterVectorElement(gamma[1]), ParameterVectorElement(gamma[2]), ParameterVectorElement(gamma[3]), ParameterVectorElement(kappa[0]), ParameterVectorElement(kappa[1]), ParameterVectorElement(kappa[2]), ParameterVectorElement(kappa[3])]
```python
# max_time_length = 6
# param_opt_list = []
# value_opt_list = []
# for n in range(1,max_time_length+1):
# print(f'Time interval length = {n}, Num. time marks = {2*n}')
# best_param_opt = 0
# best_value_opt = 0
# for _ in range(3):
# param_opt_noiseless, value_opt_noiseless = optimize_simulation(varform, time_param, 2*n, time_max=n, noisy_backend=False)
# if value_opt_noiseless < best_value_opt:
# best_value_opt = value_opt_noiseless
# best_param_opt = param_opt_noiseless
# param_opt_list.append(best_param_opt)
# value_opt_list.append(best_value_opt)
# pd.DataFrame(np.array(value_opt_list)).to_csv("opt_val_time_length.csv", header=None, index=None)
```
```python
value_opt_list = pd.read_csv("opt_val_time_length.csv", header=None).to_numpy().flatten()
plt.figure(figsize=(8, 6), dpi=80)
plt.plot(list(range(1,len(value_opt_list)+1)), -np.around(value_opt_list,decimals=3))
plt.xlabel('Length of time interval')
plt.ylabel('Highest average fidelity (Best of 3)')
plt.ylim(0.6)
plt.title("Capacity of the variational model over long time lengths")
plt.show()
```
# Assignment 1: Auto Correct
Welcome to the first assignment of Course 2. This assignment will give you a chance to brush up on your python and probability skills. In doing so, you will implement an auto-correct system that is very effective and useful.
## Outline
- [0. Overview](#0)
- [0.1 Edit Distance](#0-1)
- [1. Data Preprocessing](#1)
- [1.1 Exercise 1](#ex-1)
- [1.2 Exercise 2](#ex-2)
- [1.3 Exercise 3](#ex-3)
- [2. String Manipulation](#2)
- [2.1 Exercise 4](#ex-4)
- [2.2 Exercise 5](#ex-5)
- [2.3 Exercise 6](#ex-6)
- [2.4 Exercise 7](#ex-7)
- [3. Combining the edits](#3)
- [3.1 Exercise 8](#ex-8)
- [3.2 Exercise 9](#ex-9)
- [3.3 Exercise 10](#ex-10)
- [4. Minimum Edit Distance](#4)
- [4.1 Exercise 11](#ex-11)
- [5. Backtrace (Optional)](#5)
<a name='0'></a>
## 0. Overview
You use autocorrect every day on your cell phone and computer. In this assignment, you will explore what really goes on behind the scenes. Of course, the model you are about to implement is not identical to the one used in your phone, but it is still quite good.
By completing this assignment you will learn how to:
- Get a word count given a corpus
- Get a word probability in the corpus
- Manipulate strings
- Filter strings
- Implement Minimum edit distance to compare strings and to help find the optimal path for the edits.
- Understand how dynamic programming works
Similar systems are used everywhere.
- For example, if you type in the word **"I am lerningg"**, chances are very high that you meant to write **"learning"**, as shown in **Figure 1**.
<div style="width:image width px; font-size:100%; text-align:center;"> Figure 1 </div>
<a name='0-1'></a>
#### 0.1 Edit Distance
In this assignment, you will implement models that correct words that are 1 and 2 edit distances away.
- We say two words are n edit distance away from each other when we need n edits to change one word into another.
An edit could consist of one of the following options:
- Delete (remove a letter): ‘hat’ => ‘at, ha, ht’
- Switch (swap 2 adjacent letters): ‘eta’ => ‘eat, tea,...’
- Replace (change 1 letter to another): ‘jat’ => ‘hat, rat, cat, mat, ...’
- Insert (add a letter): ‘te’ => ‘the, ten, ate, ...’
You will be using the four methods above to implement an Auto-correct.
- To do so, you will need to compute probabilities that a certain word is correct given an input.
This auto-correct you are about to implement was first created by [Peter Norvig](https://en.wikipedia.org/wiki/Peter_Norvig) in 2007.
- His [original article](https://norvig.com/spell-correct.html) may be a useful reference for this assignment.
The goal of our spell check model is to compute the following probability:
$$P(c|w) = \frac{P(w|c)\times P(c)}{P(w)} \tag{Eqn-1}$$
The equation above is [Bayes Rule](https://en.wikipedia.org/wiki/Bayes%27_theorem).
- Equation 1 says that the probability of a word being correct, $P(c|w)$, is equal to the probability of having a certain word $w$ given that it is correct, $P(w|c)$, multiplied by the probability of being correct in general, $P(c)$, divided by the probability of that word $w$ appearing in general, $P(w)$. A small numeric sketch of this ranking rule is shown after this list.
- To compute equation 1, you will first import a data set and then create all the probabilities that you need using that data set.
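To make Equation 1 concrete, here is a tiny illustration with made-up numbers (they are **not** taken from any corpus): suppose the typo $w$ = *lerning* has two candidate corrections. Because $P(w)$ is the same for every candidate, ranking by $P(w|c)\,P(c)$ is enough.
```python
# Toy Bayes-rule ranking with purely hypothetical probabilities.
candidates = {
    # candidate c: (P(w|c), P(c))
    "learning": (0.010, 3e-4),
    "leaning":  (0.002, 1e-4),
}
scores = {c: p_w_given_c * p_c for c, (p_w_given_c, p_c) in candidates.items()}
print(scores)                       # {'learning': 3e-06, 'leaning': 2e-07}
print(max(scores, key=scores.get))  # 'learning'
```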
<a name='1'></a>
# Part 1: Data Preprocessing
```python
import re
import pandas as pd
import numpy as np
from collections import Counter
%matplotlib inline
%config InlineBackend.figure_format='svg'
```
As in any other machine learning task, the first thing you have to do is process your data set.
- Many courses load in pre-processed data for you.
- However, in the real world, when you build these NLP systems, you load the datasets and process them.
- So let's get some real world practice in pre-processing the data!
Your first task is to read in a file called **'shakespeare.txt'** which is found in your file directory. To look at this file you can go to `File ==> Open `.
<a name='ex-1'></a>
### Exercise 1
Implement the function `process_data` which
1) Reads in a corpus (text file)
2) Changes everything to lowercase
3) Returns a list of words.
#### Options and Hints
- If you would like more of a real-life practice, don't open the 'Hints' below (yet) and try searching the web to derive your answer.
- If you want a little help, click on the green "General Hints" section by clicking on it with your mouse.
- If you get stuck or are not getting the expected results, click on the green 'Detailed Hints' section to get hints for each step that you'll take to complete this function.
```python
def process_data(file_name):
"""
Input:
A file_name which is found in your current directory. You just have to read it in.
Output:
words: a list containing all the words in the corpus (text file you read) in lower case.
"""
words=[]
with open(file_name) as f:
file_name_data=f.read()
file_name_data=file_name_data.lower()
words=re.findall('\w+',file_name_data)
return words
```
Note, in the following cell, 'words' is converted to a python `set`. This eliminates any duplicate entries.
```python
word_l = process_data('shakespeare.txt')
vocab = set(word_l) # this will be your new vocabulary
print(f"The first ten words in the text are: \n{word_l[0:10]}")
print(f"There are {len(vocab)} unique words in the vocabulary.")
```
The first ten words in the text are:
['o', 'for', 'a', 'muse', 'of', 'fire', 'that', 'would', 'ascend', 'the']
There are 6116 unique words in the vocabulary.
<a name='ex-2'></a>
### Exercise 2
Implement a `get_count` function that returns a dictionary
- The dictionary's keys are words
- The value for each word is the number of times that word appears in the corpus.
For example, given the following sentence: **"I am happy because I am learning"**, your dictionary should return the following:
<table style="width:20%">
<tr>
<td> <b>Key </b> </td>
<td> <b>Value </b> </td>
</tr>
<tr>
<td> I </td>
<td> 2</td>
</tr>
<tr>
<td>am</td>
<td>2</td>
</tr>
<tr>
<td>happy</td>
<td>1</td>
</tr>
<tr>
<td>because</td>
<td>1</td>
</tr>
<tr>
<td>learning</td>
<td>1</td>
</tr>
</table>
**Instructions**:
Implement a `get_count` which returns a dictionary where the key is a word and the value is the number of times the word appears in the list.
```python
def get_count(word_l):
'''
Input:
word_l: a set of words representing the corpus.
Output:
word_count_dict: The wordcount dictionary where key is the word and value is its frequency.
'''
word_count_dict=dict()
word_count_dict=Counter(word_l)
return word_count_dict
```
```python
word_count_dict = get_count(word_l)
print(f"There are {len(word_count_dict)} key values pairs")
print(f"The count for the word 'thee' is {word_count_dict.get('thee',0)}")
```
There are 6116 key values pairs
The count for the word 'thee' is 240
<a name='ex-3'></a>
### Exercise 3
Given the dictionary of word counts, compute the probability that each word will appear if randomly selected from the corpus of words.
$$P(w_i) = \frac{C(w_i)}{M} \tag{Eqn-2}$$
where
$C(w_i)$ is the total number of times $w_i$ appears in the corpus.
$M$ is the total number of words in the corpus.
For example, the probability of the word 'am' in the sentence **'I am happy because I am learning'** is:
$$P(am) = \frac{C(am)}{M} = \frac{2}{7} \tag{Eqn-3}.$$
**Instructions:** Implement `get_probs` function which gives you the probability
that a word occurs in a sample. This returns a dictionary where the keys are words, and the value for each word is its probability in the corpus of words.
```python
def get_probs(word_count_dict):
'''
Input:
word_count_dict: The wordcount dictionary where key is the word and value is its frequency.
Output:
probs: A dictionary where keys are the words and the values are the probability that a word will occur.
'''
probs={}
m=sum(word_count_dict.values())
for key,value in word_count_dict.items():
probs[key]=value/m
return probs
```
```python
probs = get_probs(word_count_dict)
print(f"Length of probs is {len(probs)}")
print(f"P('thee') is {probs['thee']:.4f}")
```
Length of probs is 6116
P('thee') is 0.0045
<a name='2'></a>
# Part 2: String Manipulations
Now, that you have computed $P(w_i)$ for all the words in the corpus, you will write a few functions to manipulate strings so that you can edit the erroneous strings and return the right spellings of the words. In this section, you will implement four functions:
* `delete_letter`: given a word, it returns all the possible strings that have **one character removed**.
* `switch_letter`: given a word, it returns all the possible strings that have **two adjacent letters switched**.
* `replace_letter`: given a word, it returns all the possible strings that have **one character replaced by another different letter**.
* `insert_letter`: given a word, it returns all the possible strings that have an **additional character inserted**.
#### List comprehensions
String and list manipulation in python will often make use of a python feature called [list comprehensions](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions). The routines below will be described as using list comprehensions, but if you would rather implement them in another way, you are free to do so as long as the result is the same. Further, the following section will provide detailed instructions on how to use list comprehensions and how to implement the desired functions. If you are a python expert, feel free to skip the python hints and move to implementing the routines directly.
Python List Comprehensions embed a looping structure inside of a list declaration, collapsing many lines of code into a single line. If you are not familiar with them, they seem slightly out of order relative to for loops.
<div style="width:image width px; font-size:100%; text-align:center;"> Figure 2 </div>
The diagram above shows that the components of a list comprehension are the same components you would find in a typical for loop that appends to a list, but in a different order. With that in mind, we'll continue the specifics of this assignment. We will be very descriptive for the first function, `deletes()`, and less so in later functions as you become familiar with list comprehensions.
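As a generic illustration of the syntax (not the graded solution), the explicit loop and the one-line comprehension below build the same list of (Left, Right) splits for a sample word:
```python
# Generic example of a list comprehension vs. an explicit loop (not the graded solution).
word = "nice"

splits_loop = []
for i in range(len(word) + 1):
    splits_loop.append((word[:i], word[i:]))

splits_comp = [(word[:i], word[i:]) for i in range(len(word) + 1)]

print(splits_loop == splits_comp)  # True
print(splits_comp)                 # [('', 'nice'), ('n', 'ice'), ('ni', 'ce'), ('nic', 'e'), ('nice', '')]
```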
<a name='ex-4'></a>
### Exercise 4
**Instructions for delete_letter():** Implement a `delete_letter()` function that, given a word, returns a list of strings with one character deleted.
For example, given the word **nice**, it would return the set: {'ice', 'nce', 'nic', 'nie'}.
**Step 1:** Create a list of 'splits'. This is all the ways you can split a word into Left and Right: For example,
'nice is split into : `[('', 'nice'), ('n', 'ice'), ('ni', 'ce'), ('nic', 'e'), ('nice', '')]`
This is common to all four functions (delete, replace, switch, insert).
<div style="width:image width px; font-size:100%; text-align:center;"> Figure 3 </div>
**Step 2:** This is specific to `delete_letter`. Here, we are generating all words that result from deleting one character.
This can be done in a single line with a list comprehension. You can make use of this type of syntax:
`[f(a,b) for a, b in splits if condition]`
For our 'nice' example you get:
['ice', 'nce', 'nie', 'nic']
<div style="width:image width px; font-size:100%; text-align:center;"> Figure 4 </div>
#### Levels of assistance
Try this exercise with these levels of assistance.
- We hope that this will make it both a meaningful experience but also not a frustrating experience.
- Start with level 1, then move onto level 2, and 3 as needed.
- Level 1. Try to think this through and implement this yourself.
- Level 2. Click on the "Level 2 Hints" section for some hints to get started.
- Level 3. If you would prefer more guidance, please click on the "Level 3 Hints" cell for step by step instructions.
- If you are still stuck, look at the images in the "list comprehensions" section above.
```python
def delete_letter(word,verbose=False):
'''
Input:
word: the string/word for which you will generate all possible words
in the vocabulary which have 1 missing character
Output:
delete_l: a list of all possible strings obtained by deleting 1 character from word
'''
delete_l=[]
split_l=[]
for c in range(len(word)):
split_l.append((word[:c],word[c:]))
for a,b in split_l:
delete_l.append(a+b[1:])
if verbose:
print(f"input word: {word}, \nsplit_l = {split_l}, \ndelete_l = {delete_l}")
return delete_l
```
```python
delete_word_l = delete_letter(word="cans",
verbose=True)
```
input word: cans,
split_l = [('', 'cans'), ('c', 'ans'), ('ca', 'ns'), ('can', 's')],
delete_l = ['ans', 'cns', 'cas', 'can']
#### Note 1
You might get a slightly different result with split_l.
- Notice how it has the extra tuple `('cans', '')`.
- This will be fine as long as you have checked the size of the right-side substring in tuple (L,R).
- Can you explain why this will give you the same result for the list of deletion strings (delete_l)?
```Python
input word cans,
split_l = [('', 'cans'), ('c', 'ans'), ('ca', 'ns'), ('can', 's'), ('cans', '')],
delete_l = ['ans', 'cns', 'cas', 'can']
```
#### Note 2
If you end up getting the same word as your input word, like this:
```Python
input word cans,
split_l = [('', 'cans'), ('c', 'ans'), ('ca', 'ns'), ('can', 's'), ('cans', '')],
delete_l = ['ans', 'cns', 'cas', 'can', 'cans']
```
- Check how you set the `range`.
- See if you check the length of the string on the right-side of the split.
```python
print(f"Number of outputs of delete_letter('at') is {len(delete_letter('at'))}")
```
Number of outputs of delete_letter('at') is 2
<a name='ex-5'></a>
### Exercise 5
**Instructions for switch_letter()**: Now implement a function that switches two letters in a word. It takes in a word and returns a list of all the possible switches of two letters **that are adjacent to each other**.
- For example, given the word 'eta', it returns {'eat', 'tea'}, but does not return 'ate'.
**Step 1:** is the same as in delete_letter()
**Step 2:** A list comprehension or for loop which forms strings by swapping adjacent letters. This is of the form:
`[f(L,R) for L, R in splits if condition]` where 'condition' will test the length of R in a given iteration. See below.
<div style="width:image width px; font-size:100%; text-align:center;"> Figure 5 </div>
#### Levels of difficulty
Try this exercise with these levels of difficulty.
- Level 1. Try to think this through and implement this yourself.
- Level 2. Click on the "Level 2 Hints" section for some hints to get started.
- Level 3. If you would prefer more guidance, please click on the "Level 3 Hints" cell for step by step instructions.
```python
def switch_letter(word,verbose=True):
'''
Input:
word: input string
Output:
switches: a list of all possible strings with one adjacent charater switched
'''
switch_l = []
split_l = []
len_word=len(word)
for c in range(len_word):
split_l.append((word[:c],word[c:]))
switch_l=[a+b[1]+b[0]+b[2:] for a,b in split_l if len(b) >=2 ]
if verbose: print(f"Input word = {word} \nsplit_l = {split_l} \nswitch_l = {switch_l}")
return switch_l
```
```python
switch_word_l = switch_letter(word="eta",
verbose=True)
```
Input word = eta
split_l = [('', 'eta'), ('e', 'ta'), ('et', 'a')]
switch_l = ['tea', 'eat']
#### Note 1
You may get this:
```Python
Input word = eta
split_l = [('', 'eta'), ('e', 'ta'), ('et', 'a'), ('eta', '')]
switch_l = ['tea', 'eat']
```
- Notice how it has the extra tuple `('eta', '')`.
- This is also correct.
- Can you think of why this is the case?
#### Note 2
If you get an error
```Python
IndexError: string index out of range
```
- Please see if you have checked the length of the strings when switching characters.
```python
print(f"Number of outputs of switch_letter('at') is {len(switch_letter('at'))}")
```
Input word = at
split_l = [('', 'at'), ('a', 't')]
switch_l = ['ta']
Number of outputs of switch_letter('at') is 1
<a name='ex-6'></a>
### Exercise 6
**Instructions for replace_letter()**: Now implement a function that takes in a word and returns a list of strings with one **replaced letter** from the original word.
**Step 1:** is the same as in `delete_letter()`
**Step 2:** A list comprehension or for loop which forms strings by replacing letters. This can be of the form:
`[f(a,b,c) for a, b in splits if condition for c in string]` Note the use of the second for loop.
It is expected in this routine that one or more of the replacements will include the original word. For example, replacing the first letter of 'ear' with 'e' will return 'ear'.
**Step 3:** Remove the original input letter from the output.
```python
def replace_letter(word,verbose=False):
'''
Input:
word: the input string/word
Output:
replaces: a list of all possible strings where we replaced one letter from the original word.
'''
letters = 'abcdefghijklmnopqrstuvwxyz'
replace_l = []
split_l = []
for c in range(len(word)):
split_l.append((word[:c],word[c:]))
replace_l = [a + l + (b[1:] if len(b)> 1 else '') for a,b in split_l if b for l in letters]
replace_set=set(replace_l)
replace_set.remove(word)
replace_l = sorted(list(replace_set))
if verbose: print(f"Input word = {word} \nsplit_l = {split_l} \nreplace_l {replace_l}")
return replace_l
```
```python
replace_l = replace_letter(word='can',
verbose=True)
```
Input word = can
split_l = [('', 'can'), ('c', 'an'), ('ca', 'n')]
replace_l ['aan', 'ban', 'caa', 'cab', 'cac', 'cad', 'cae', 'caf', 'cag', 'cah', 'cai', 'caj', 'cak', 'cal', 'cam', 'cao', 'cap', 'caq', 'car', 'cas', 'cat', 'cau', 'cav', 'caw', 'cax', 'cay', 'caz', 'cbn', 'ccn', 'cdn', 'cen', 'cfn', 'cgn', 'chn', 'cin', 'cjn', 'ckn', 'cln', 'cmn', 'cnn', 'con', 'cpn', 'cqn', 'crn', 'csn', 'ctn', 'cun', 'cvn', 'cwn', 'cxn', 'cyn', 'czn', 'dan', 'ean', 'fan', 'gan', 'han', 'ian', 'jan', 'kan', 'lan', 'man', 'nan', 'oan', 'pan', 'qan', 'ran', 'san', 'tan', 'uan', 'van', 'wan', 'xan', 'yan', 'zan']
#### Note 1
If you get something like this:
```Python
Input word = can
split_l = [('', 'can'), ('c', 'an'), ('ca', 'n'), ('can', '')]
replace_l ['aan', 'ban', 'caa', 'cab', 'cac', 'cad', 'cae', 'caf', 'cag', 'cah', 'cai', 'caj', 'cak', 'cal', 'cam', 'cao', 'cap', 'caq', 'car', 'cas', 'cat', 'cau', 'cav', 'caw', 'cax', 'cay', 'caz', 'cbn', 'ccn', 'cdn', 'cen', 'cfn', 'cgn', 'chn', 'cin', 'cjn', 'ckn', 'cln', 'cmn', 'cnn', 'con', 'cpn', 'cqn', 'crn', 'csn', 'ctn', 'cun', 'cvn', 'cwn', 'cxn', 'cyn', 'czn', 'dan', 'ean', 'fan', 'gan', 'han', 'ian', 'jan', 'kan', 'lan', 'man', 'nan', 'oan', 'pan', 'qan', 'ran', 'san', 'tan', 'uan', 'van', 'wan', 'xan', 'yan', 'zan']
```
- Notice how split_l has an extra tuple `('can', '')`, but the output is still the same, so this is okay.
#### Note 2
If you get something like this:
```Python
Input word = can
split_l = [('', 'can'), ('c', 'an'), ('ca', 'n'), ('can', '')]
replace_l ['aan', 'ban', 'caa', 'cab', 'cac', 'cad', 'cae', 'caf', 'cag', 'cah', 'cai', 'caj', 'cak', 'cal', 'cam', 'cana', 'canb', 'canc', 'cand', 'cane', 'canf', 'cang', 'canh', 'cani', 'canj', 'cank', 'canl', 'canm', 'cann', 'cano', 'canp', 'canq', 'canr', 'cans', 'cant', 'canu', 'canv', 'canw', 'canx', 'cany', 'canz', 'cao', 'cap', 'caq', 'car', 'cas', 'cat', 'cau', 'cav', 'caw', 'cax', 'cay', 'caz', 'cbn', 'ccn', 'cdn', 'cen', 'cfn', 'cgn', 'chn', 'cin', 'cjn', 'ckn', 'cln', 'cmn', 'cnn', 'con', 'cpn', 'cqn', 'crn', 'csn', 'ctn', 'cun', 'cvn', 'cwn', 'cxn', 'cyn', 'czn', 'dan', 'ean', 'fan', 'gan', 'han', 'ian', 'jan', 'kan', 'lan', 'man', 'nan', 'oan', 'pan', 'qan', 'ran', 'san', 'tan', 'uan', 'van', 'wan', 'xan', 'yan', 'zan']
```
- Notice how there are strings that are 1 letter longer than the original word, such as `cana`.
- Please check for the case when there is an empty string `''`, and if so, do not use that empty string when setting replace_l.
<a name='ex-7'></a>
### Exercise 7
**Instructions for insert_letter()**: Now implement a function that takes in a word and returns a list with a letter inserted at every offset.
**Step 1:** is the same as in `delete_letter()`
**Step 2:** This can be a list comprehension of the form:
`[f(a,b,c) for a, b in splits if condition for c in string]`
```python
def insert_letter(word,verbose=False):
'''
Input:
word: the input string/word
Output:
inserts: a set of all possible strings with one new letter inserted at every offset
'''
letters = 'abcdefghijklmnopqrstuvwxyz'
insert_l = []
split_l = []
for c in range(len(word)+1):
split_l.append((word[:c],word[c:]))
insert_l=[a+l+b for a,b in split_l for l in letters]
if verbose: print(f"Input word {word} \nsplit_l = {split_l} \ninsert_l = {insert_l}")
return insert_l
```
```python
insert_l = insert_letter('at', True)
print(f"Number of strings output by insert_letter('at') is {len(insert_l)}")
```
Input word at
split_l = [('', 'at'), ('a', 't'), ('at', '')]
insert_l = ['aat', 'bat', 'cat', 'dat', 'eat', 'fat', 'gat', 'hat', 'iat', 'jat', 'kat', 'lat', 'mat', 'nat', 'oat', 'pat', 'qat', 'rat', 'sat', 'tat', 'uat', 'vat', 'wat', 'xat', 'yat', 'zat', 'aat', 'abt', 'act', 'adt', 'aet', 'aft', 'agt', 'aht', 'ait', 'ajt', 'akt', 'alt', 'amt', 'ant', 'aot', 'apt', 'aqt', 'art', 'ast', 'att', 'aut', 'avt', 'awt', 'axt', 'ayt', 'azt', 'ata', 'atb', 'atc', 'atd', 'ate', 'atf', 'atg', 'ath', 'ati', 'atj', 'atk', 'atl', 'atm', 'atn', 'ato', 'atp', 'atq', 'atr', 'ats', 'att', 'atu', 'atv', 'atw', 'atx', 'aty', 'atz']
Number of strings output by insert_letter('at') is 78
#### Note 1
If you get a split_l like this:
```Python
Input word at
split_l = [('', 'at'), ('a', 't')]
insert_l = ['aat', 'bat', 'cat', 'dat', 'eat', 'fat', 'gat', 'hat', 'iat', 'jat', 'kat', 'lat', 'mat', 'nat', 'oat', 'pat', 'qat', 'rat', 'sat', 'tat', 'uat', 'vat', 'wat', 'xat', 'yat', 'zat', 'aat', 'abt', 'act', 'adt', 'aet', 'aft', 'agt', 'aht', 'ait', 'ajt', 'akt', 'alt', 'amt', 'ant', 'aot', 'apt', 'aqt', 'art', 'ast', 'att', 'aut', 'avt', 'awt', 'axt', 'ayt', 'azt']
Number of strings output by insert_letter('at') is 52
```
- Notice that split_l is missing the extra tuple ('at', ''). For insertion, we actually **WANT** this tuple.
- The function is not creating all the desired output strings.
- Check the range that you use for the for loop.
#### Note 2
If you see this:
```Python
Input word at
split_l = [('', 'at'), ('a', 't'), ('at', '')]
insert_l = ['aat', 'bat', 'cat', 'dat', 'eat', 'fat', 'gat', 'hat', 'iat', 'jat', 'kat', 'lat', 'mat', 'nat', 'oat', 'pat', 'qat', 'rat', 'sat', 'tat', 'uat', 'vat', 'wat', 'xat', 'yat', 'zat', 'aat', 'abt', 'act', 'adt', 'aet', 'aft', 'agt', 'aht', 'ait', 'ajt', 'akt', 'alt', 'amt', 'ant', 'aot', 'apt', 'aqt', 'art', 'ast', 'att', 'aut', 'avt', 'awt', 'axt', 'ayt', 'azt']
Number of strings output by insert_letter('at') is 52
```
- Even though you may have fixed the split_l so that it contains the tuple `('at', '')`, notice that you're still missing some output strings.
- Notice that it's missing strings such as 'ata', 'atb', 'atc' all the way to 'atz'.
- To fix this, make sure that when you set insert_l, you allow the use of the empty string `''`.
```python
print(f"Number of outputs of insert_letter('at') is {len(insert_letter('at'))}")
```
Number of outputs of insert_letter('at') is 78
<a name='3'></a>
# Part 3: Combining the edits
Now that you have implemented the string manipulations, you will create two functions that, given a string, will return all the possible single and double edits on that string. These will be `edit_one_letter()` and `edit_two_letters()`.
<a name='3-1'></a>
## 3.1 Edit one letter
<a name='ex-8'></a>
### Exercise 8
**Instructions**: Implement the `edit_one_letter` function to get all the possible edits that are one edit away from a word. The edits consist of the replace, insert, delete, and optionally the switch operation. You should use the previous functions you have already implemented to complete this function. The 'switch' function is a less common edit function, so its use will be selected by an "allow_switches" input argument.
Note that those functions return *lists* while this function should return a *python set*. Utilizing a set eliminates any duplicate entries.
```python
def edit_one_letter(word,allow_switches=True):
"""
Input:
word: the string/word for which we will generate all possible wordsthat are one edit away.
Output:
edit_one_set: a set of words with one possible edit. Please return a set. and not a list.
"""
edit_one_set=set()
edit_one_set.update(delete_letter(word))
if allow_switches:
edit_one_set.update(switch_letter(word))
edit_one_set.update(replace_letter(word))
edit_one_set.update(insert_letter(word))
return edit_one_set
```
```python
tmp_word = "at"
tmp_edit_one_set = edit_one_letter(tmp_word)
# turn this into a list to sort it, in order to view it
tmp_edit_one_l = sorted(list(tmp_edit_one_set))
print(f"input word {tmp_word} \nedit_one_l \n{tmp_edit_one_l}\n")
print(f"The type of the returned object should be a set {type(tmp_edit_one_set)}")
print(f"Number of outputs from edit_one_letter('at') is {len(edit_one_letter('at'))}")
```
Input word = at
split_l = [('', 'at'), ('a', 't')]
switch_l = ['ta']
input word at
edit_one_l
['a', 'aa', 'aat', 'ab', 'abt', 'ac', 'act', 'ad', 'adt', 'ae', 'aet', 'af', 'aft', 'ag', 'agt', 'ah', 'aht', 'ai', 'ait', 'aj', 'ajt', 'ak', 'akt', 'al', 'alt', 'am', 'amt', 'an', 'ant', 'ao', 'aot', 'ap', 'apt', 'aq', 'aqt', 'ar', 'art', 'as', 'ast', 'ata', 'atb', 'atc', 'atd', 'ate', 'atf', 'atg', 'ath', 'ati', 'atj', 'atk', 'atl', 'atm', 'atn', 'ato', 'atp', 'atq', 'atr', 'ats', 'att', 'atu', 'atv', 'atw', 'atx', 'aty', 'atz', 'au', 'aut', 'av', 'avt', 'aw', 'awt', 'ax', 'axt', 'ay', 'ayt', 'az', 'azt', 'bat', 'bt', 'cat', 'ct', 'dat', 'dt', 'eat', 'et', 'fat', 'ft', 'gat', 'gt', 'hat', 'ht', 'iat', 'it', 'jat', 'jt', 'kat', 'kt', 'lat', 'lt', 'mat', 'mt', 'nat', 'nt', 'oat', 'ot', 'pat', 'pt', 'qat', 'qt', 'rat', 'rt', 'sat', 'st', 't', 'ta', 'tat', 'tt', 'uat', 'ut', 'vat', 'vt', 'wat', 'wt', 'xat', 'xt', 'yat', 'yt', 'zat', 'zt']
The type of the returned object should be a set <class 'set'>
Input word = at
split_l = [('', 'at'), ('a', 't')]
switch_l = ['ta']
Number of outputs from edit_one_letter('at') is 129
<a name='3-2'></a>
## Part 3.2 Edit two letters
<a name='ex-9'></a>
### Exercise 9
Now you can generalize this to implement to get two edits on a word. To do so, you would have to get all the possible edits on a single word and then for each modified word, you would have to modify it again.
**Instructions**: Implement the `edit_two_letters` function that returns a set of words that are two edits away. Note that creating additional edits based on the `edit_one_letter` function may 'restore' some one_edits to zero or one edits. That is allowed here; it is accounted for in `get_corrections`.
```python
def edit_two_letters(word,allow_switches=True):
'''
Input:
word: the input string/word
Output:
edit_two_set: a set of strings with all possible two edits
'''
edit_two_set = set()
edit_one = edit_one_letter(word,allow_switches=allow_switches)
for w in edit_one:
if w:
edit_two = edit_one_letter(w,allow_switches=allow_switches)
edit_two_set.update(edit_two)
return edit_two_set
```
```python
tmp_edit_two_set = edit_two_letters("a")
tmp_edit_two_l = sorted(list(tmp_edit_two_set))
print(f"Number of strings with edit distance of two: {len(tmp_edit_two_l)}")
print(f"First 10 strings {tmp_edit_two_l[:10]}")
print(f"Last 10 strings {tmp_edit_two_l[-10:]}")
print(f"The data type of the returned object should be a set {type(tmp_edit_two_set)}")
print(f"Number of strings that are 2 edit distances from 'at' is {len(edit_two_letters('at'))}")
```
Input word = a
split_l = [('', 'a')]
switch_l = []
Input word = r
split_l = [('', 'r')]
switch_l = []
Input word = da
split_l = [('', 'da'), ('d', 'a')]
switch_l = ['ad']
Input word = ea
split_l = [('', 'ea'), ('e', 'a')]
switch_l = ['ae']
Input word = l
split_l = [('', 'l')]
switch_l = []
Input word = aj
split_l = [('', 'aj'), ('a', 'j')]
switch_l = ['ja']
Input word = ap
split_l = [('', 'ap'), ('a', 'p')]
switch_l = ['pa']
Input word = b
split_l = [('', 'b')]
switch_l = []
Input word = pa
split_l = [('', 'pa'), ('p', 'a')]
switch_l = ['ap']
Input word = y
split_l = [('', 'y')]
switch_l = []
Input word = ka
split_l = [('', 'ka'), ('k', 'a')]
switch_l = ['ak']
Input word = la
split_l = [('', 'la'), ('l', 'a')]
switch_l = ['al']
Input word = ba
split_l = [('', 'ba'), ('b', 'a')]
switch_l = ['ab']
Input word = g
split_l = [('', 'g')]
switch_l = []
Input word = ya
split_l = [('', 'ya'), ('y', 'a')]
switch_l = ['ay']
Input word = z
split_l = [('', 'z')]
switch_l = []
Input word = ay
split_l = [('', 'ay'), ('a', 'y')]
switch_l = ['ya']
Input word = am
split_l = [('', 'am'), ('a', 'm')]
switch_l = ['ma']
Input word = qa
split_l = [('', 'qa'), ('q', 'a')]
switch_l = ['aq']
Input word = za
split_l = [('', 'za'), ('z', 'a')]
switch_l = ['az']
Input word = p
split_l = [('', 'p')]
switch_l = []
Input word = e
split_l = [('', 'e')]
switch_l = []
Input word = x
split_l = [('', 'x')]
switch_l = []
Input word = f
split_l = [('', 'f')]
switch_l = []
Input word = aq
split_l = [('', 'aq'), ('a', 'q')]
switch_l = ['qa']
Input word = as
split_l = [('', 'as'), ('a', 's')]
switch_l = ['sa']
Input word = al
split_l = [('', 'al'), ('a', 'l')]
switch_l = ['la']
Input word = oa
split_l = [('', 'oa'), ('o', 'a')]
switch_l = ['ao']
Input word = ac
split_l = [('', 'ac'), ('a', 'c')]
switch_l = ['ca']
Input word = va
split_l = [('', 'va'), ('v', 'a')]
switch_l = ['av']
Input word = ad
split_l = [('', 'ad'), ('a', 'd')]
switch_l = ['da']
Input word = k
split_l = [('', 'k')]
switch_l = []
Input word = h
split_l = [('', 'h')]
switch_l = []
Input word = ia
split_l = [('', 'ia'), ('i', 'a')]
switch_l = ['ai']
Input word = af
split_l = [('', 'af'), ('a', 'f')]
switch_l = ['fa']
Input word = ca
split_l = [('', 'ca'), ('c', 'a')]
switch_l = ['ac']
Input word = c
split_l = [('', 'c')]
switch_l = []
Input word = n
split_l = [('', 'n')]
switch_l = []
Input word = ae
split_l = [('', 'ae'), ('a', 'e')]
switch_l = ['ea']
Input word = v
split_l = [('', 'v')]
switch_l = []
Input word = ax
split_l = [('', 'ax'), ('a', 'x')]
switch_l = ['xa']
Input word = at
split_l = [('', 'at'), ('a', 't')]
switch_l = ['ta']
Input word = wa
split_l = [('', 'wa'), ('w', 'a')]
switch_l = ['aw']
Input word = ta
split_l = [('', 'ta'), ('t', 'a')]
switch_l = ['at']
Input word = ab
split_l = [('', 'ab'), ('a', 'b')]
switch_l = ['ba']
Input word = an
split_l = [('', 'an'), ('a', 'n')]
switch_l = ['na']
Input word = av
split_l = [('', 'av'), ('a', 'v')]
switch_l = ['va']
Input word = ak
split_l = [('', 'ak'), ('a', 'k')]
switch_l = ['ka']
Input word = w
split_l = [('', 'w')]
switch_l = []
Input word = ha
split_l = [('', 'ha'), ('h', 'a')]
switch_l = ['ah']
Input word = ga
split_l = [('', 'ga'), ('g', 'a')]
switch_l = ['ag']
Input word = aw
split_l = [('', 'aw'), ('a', 'w')]
switch_l = ['wa']
Input word = ao
split_l = [('', 'ao'), ('a', 'o')]
switch_l = ['oa']
Input word = j
split_l = [('', 'j')]
switch_l = []
Input word = t
split_l = [('', 't')]
switch_l = []
Input word = fa
split_l = [('', 'fa'), ('f', 'a')]
switch_l = ['af']
Input word = ra
split_l = [('', 'ra'), ('r', 'a')]
switch_l = ['ar']
Input word = xa
split_l = [('', 'xa'), ('x', 'a')]
switch_l = ['ax']
Input word = m
split_l = [('', 'm')]
switch_l = []
Input word = ja
split_l = [('', 'ja'), ('j', 'a')]
switch_l = ['aj']
Input word = ah
split_l = [('', 'ah'), ('a', 'h')]
switch_l = ['ha']
Input word = ua
split_l = [('', 'ua'), ('u', 'a')]
switch_l = ['au']
Input word = q
split_l = [('', 'q')]
switch_l = []
Input word = au
split_l = [('', 'au'), ('a', 'u')]
switch_l = ['ua']
Input word = az
split_l = [('', 'az'), ('a', 'z')]
switch_l = ['za']
Input word = i
split_l = [('', 'i')]
switch_l = []
Input word = sa
split_l = [('', 'sa'), ('s', 'a')]
switch_l = ['as']
Input word = aa
split_l = [('', 'aa'), ('a', 'a')]
switch_l = ['aa']
Input word = u
split_l = [('', 'u')]
switch_l = []
Input word = ar
split_l = [('', 'ar'), ('a', 'r')]
switch_l = ['ra']
Input word = ai
split_l = [('', 'ai'), ('a', 'i')]
switch_l = ['ia']
Input word = d
split_l = [('', 'd')]
switch_l = []
Input word = na
split_l = [('', 'na'), ('n', 'a')]
switch_l = ['an']
Input word = s
split_l = [('', 's')]
switch_l = []
Input word = ag
split_l = [('', 'ag'), ('a', 'g')]
switch_l = ['ga']
Input word = o
split_l = [('', 'o')]
switch_l = []
Input word = ma
split_l = [('', 'ma'), ('m', 'a')]
switch_l = ['am']
Number of strings with edit distance of two: 2654
First 10 strings ['', 'a', 'aa', 'aaa', 'aab', 'aac', 'aad', 'aae', 'aaf', 'aag']
Last 10 strings ['zv', 'zva', 'zw', 'zwa', 'zx', 'zxa', 'zy', 'zya', 'zz', 'zza']
The data type of the returned object should be a set <class 'set'>
Input word = at
split_l = [('', 'at'), ('a', 't')]
switch_l = ['ta']
Input word = axt
split_l = [('', 'axt'), ('a', 'xt'), ('ax', 't')]
switch_l = ['xat', 'atx']
Input word = gat
split_l = [('', 'gat'), ('g', 'at'), ('ga', 't')]
switch_l = ['agt', 'gta']
Input word = bat
split_l = [('', 'bat'), ('b', 'at'), ('ba', 't')]
switch_l = ['abt', 'bta']
Input word = dt
split_l = [('', 'dt'), ('d', 't')]
switch_l = ['td']
Input word = aat
split_l = [('', 'aat'), ('a', 'at'), ('aa', 't')]
switch_l = ['aat', 'ata']
Input word = apt
split_l = [('', 'apt'), ('a', 'pt'), ('ap', 't')]
switch_l = ['pat', 'atp']
Input word = atv
split_l = [('', 'atv'), ('a', 'tv'), ('at', 'v')]
switch_l = ['tav', 'avt']
Input word = aj
split_l = [('', 'aj'), ('a', 'j')]
switch_l = ['ja']
Input word = ap
split_l = [('', 'ap'), ('a', 'p')]
switch_l = ['pa']
Input word = ft
split_l = [('', 'ft'), ('f', 't')]
switch_l = ['tf']
Input word = azt
split_l = [('', 'azt'), ('a', 'zt'), ('az', 't')]
switch_l = ['zat', 'atz']
Input word = mat
split_l = [('', 'mat'), ('m', 'at'), ('ma', 't')]
switch_l = ['amt', 'mta']
Input word = fat
split_l = [('', 'fat'), ('f', 'at'), ('fa', 't')]
switch_l = ['aft', 'fta']
Input word = kat
split_l = [('', 'kat'), ('k', 'at'), ('ka', 't')]
switch_l = ['akt', 'kta']
Input word = ats
split_l = [('', 'ats'), ('a', 'ts'), ('at', 's')]
switch_l = ['tas', 'ast']
Input word = uat
split_l = [('', 'uat'), ('u', 'at'), ('ua', 't')]
switch_l = ['aut', 'uta']
Input word = cat
split_l = [('', 'cat'), ('c', 'at'), ('ca', 't')]
switch_l = ['act', 'cta']
Input word = vt
split_l = [('', 'vt'), ('v', 't')]
switch_l = ['tv']
Input word = ait
split_l = [('', 'ait'), ('a', 'it'), ('ai', 't')]
switch_l = ['iat', 'ati']
Input word = yat
split_l = [('', 'yat'), ('y', 'at'), ('ya', 't')]
switch_l = ['ayt', 'yta']
Input word = ay
split_l = [('', 'ay'), ('a', 'y')]
switch_l = ['ya']
Input word = amt
split_l = [('', 'amt'), ('a', 'mt'), ('am', 't')]
switch_l = ['mat', 'atm']
Input word = atk
split_l = [('', 'atk'), ('a', 'tk'), ('at', 'k')]
switch_l = ['tak', 'akt']
Input word = avt
split_l = [('', 'avt'), ('a', 'vt'), ('av', 't')]
switch_l = ['vat', 'atv']
Input word = am
split_l = [('', 'am'), ('a', 'm')]
switch_l = ['ma']
Input word = oat
split_l = [('', 'oat'), ('o', 'at'), ('oa', 't')]
switch_l = ['aot', 'ota']
Input word = sat
split_l = [('', 'sat'), ('s', 'at'), ('sa', 't')]
switch_l = ['ast', 'sta']
Input word = wt
split_l = [('', 'wt'), ('w', 't')]
switch_l = ['tw']
Input word = wat
split_l = [('', 'wat'), ('w', 'at'), ('wa', 't')]
switch_l = ['awt', 'wta']
Input word = ayt
split_l = [('', 'ayt'), ('a', 'yt'), ('ay', 't')]
switch_l = ['yat', 'aty']
Input word = gt
split_l = [('', 'gt'), ('g', 't')]
switch_l = ['tg']
Input word = atq
split_l = [('', 'atq'), ('a', 'tq'), ('at', 'q')]
switch_l = ['taq', 'aqt']
Input word = atx
split_l = [('', 'atx'), ('a', 'tx'), ('at', 'x')]
switch_l = ['tax', 'axt']
Input word = atf
split_l = [('', 'atf'), ('a', 'tf'), ('at', 'f')]
switch_l = ['taf', 'aft']
Input word = atn
split_l = [('', 'atn'), ('a', 'tn'), ('at', 'n')]
switch_l = ['tan', 'ant']
Input word = alt
split_l = [('', 'alt'), ('a', 'lt'), ('al', 't')]
switch_l = ['lat', 'atl']
Input word = bt
split_l = [('', 'bt'), ('b', 't')]
switch_l = ['tb']
Input word = ato
split_l = [('', 'ato'), ('a', 'to'), ('at', 'o')]
switch_l = ['tao', 'aot']
Input word = atu
split_l = [('', 'atu'), ('a', 'tu'), ('at', 'u')]
switch_l = ['tau', 'aut']
Input word = jat
split_l = [('', 'jat'), ('j', 'at'), ('ja', 't')]
switch_l = ['ajt', 'jta']
Input word = mt
split_l = [('', 'mt'), ('m', 't')]
switch_l = ['tm']
Input word = aq
split_l = [('', 'aq'), ('a', 'q')]
switch_l = ['qa']
Input word = et
split_l = [('', 'et'), ('e', 't')]
switch_l = ['te']
Input word = atb
split_l = [('', 'atb'), ('a', 'tb'), ('at', 'b')]
switch_l = ['tab', 'abt']
Input word = as
split_l = [('', 'as'), ('a', 's')]
switch_l = ['sa']
Input word = ct
split_l = [('', 'ct'), ('c', 't')]
switch_l = ['tc']
Input word = ath
split_l = [('', 'ath'), ('a', 'th'), ('at', 'h')]
switch_l = ['tah', 'aht']
Input word = al
split_l = [('', 'al'), ('a', 'l')]
switch_l = ['la']
Input word = it
split_l = [('', 'it'), ('i', 't')]
switch_l = ['ti']
Input word = ant
split_l = [('', 'ant'), ('a', 'nt'), ('an', 't')]
switch_l = ['nat', 'atn']
Input word = atj
split_l = [('', 'atj'), ('a', 'tj'), ('at', 'j')]
switch_l = ['taj', 'ajt']
Input word = nat
split_l = [('', 'nat'), ('n', 'at'), ('na', 't')]
switch_l = ['ant', 'nta']
Input word = aht
split_l = [('', 'aht'), ('a', 'ht'), ('ah', 't')]
switch_l = ['hat', 'ath']
Input word = ac
split_l = [('', 'ac'), ('a', 'c')]
switch_l = ['ca']
Input word = ut
split_l = [('', 'ut'), ('u', 't')]
switch_l = ['tu']
Input word = ad
split_l = [('', 'ad'), ('a', 'd')]
switch_l = ['da']
Input word = pt
split_l = [('', 'pt'), ('p', 't')]
switch_l = ['tp']
Input word = aot
split_l = [('', 'aot'), ('a', 'ot'), ('ao', 't')]
switch_l = ['oat', 'ato']
Input word = lt
split_l = [('', 'lt'), ('l', 't')]
switch_l = ['tl']
Input word = pat
split_l = [('', 'pat'), ('p', 'at'), ('pa', 't')]
switch_l = ['apt', 'pta']
Input word = hat
split_l = [('', 'hat'), ('h', 'at'), ('ha', 't')]
switch_l = ['aht', 'hta']
Input word = atw
split_l = [('', 'atw'), ('a', 'tw'), ('at', 'w')]
switch_l = ['taw', 'awt']
Input word = af
split_l = [('', 'af'), ('a', 'f')]
switch_l = ['fa']
Input word = akt
split_l = [('', 'akt'), ('a', 'kt'), ('ak', 't')]
switch_l = ['kat', 'atk']
Input word = ae
split_l = [('', 'ae'), ('a', 'e')]
switch_l = ['ea']
Input word = ax
split_l = [('', 'ax'), ('a', 'x')]
switch_l = ['xa']
Input word = eat
split_l = [('', 'eat'), ('e', 'at'), ('ea', 't')]
switch_l = ['aet', 'eta']
Input word = aft
split_l = [('', 'aft'), ('a', 'ft'), ('af', 't')]
switch_l = ['fat', 'atf']
Input word = lat
split_l = [('', 'lat'), ('l', 'at'), ('la', 't')]
switch_l = ['alt', 'lta']
Input word = xt
split_l = [('', 'xt'), ('x', 't')]
switch_l = ['tx']
Input word = atr
split_l = [('', 'atr'), ('a', 'tr'), ('at', 'r')]
switch_l = ['tar', 'art']
Input word = aqt
split_l = [('', 'aqt'), ('a', 'qt'), ('aq', 't')]
switch_l = ['qat', 'atq']
Input word = act
split_l = [('', 'act'), ('a', 'ct'), ('ac', 't')]
switch_l = ['cat', 'atc']
Input word = ta
split_l = [('', 'ta'), ('t', 'a')]
switch_l = ['at']
Input word = aty
split_l = [('', 'aty'), ('a', 'ty'), ('at', 'y')]
switch_l = ['tay', 'ayt']
Input word = an
split_l = [('', 'an'), ('a', 'n')]
switch_l = ['na']
Input word = ab
split_l = [('', 'ab'), ('a', 'b')]
switch_l = ['ba']
Input word = av
split_l = [('', 'av'), ('a', 'v')]
switch_l = ['va']
Input word = ak
split_l = [('', 'ak'), ('a', 'k')]
switch_l = ['ka']
Input word = rat
split_l = [('', 'rat'), ('r', 'at'), ('ra', 't')]
switch_l = ['art', 'rta']
Input word = abt
split_l = [('', 'abt'), ('a', 'bt'), ('ab', 't')]
switch_l = ['bat', 'atb']
Input word = aet
split_l = [('', 'aet'), ('a', 'et'), ('ae', 't')]
switch_l = ['eat', 'ate']
Input word = zt
split_l = [('', 'zt'), ('z', 't')]
switch_l = ['tz']
Input word = aw
split_l = [('', 'aw'), ('a', 'w')]
switch_l = ['wa']
Input word = ao
split_l = [('', 'ao'), ('a', 'o')]
switch_l = ['oa']
Input word = jt
split_l = [('', 'jt'), ('j', 't')]
switch_l = ['tj']
Input word = agt
split_l = [('', 'agt'), ('a', 'gt'), ('ag', 't')]
switch_l = ['gat', 'atg']
Input word = tat
split_l = [('', 'tat'), ('t', 'at'), ('ta', 't')]
switch_l = ['att', 'tta']
Input word = ajt
split_l = [('', 'ajt'), ('a', 'jt'), ('aj', 't')]
switch_l = ['jat', 'atj']
Input word = atp
split_l = [('', 'atp'), ('a', 'tp'), ('at', 'p')]
switch_l = ['tap', 'apt']
Input word = t
split_l = [('', 't')]
switch_l = []
Input word = adt
split_l = [('', 'adt'), ('a', 'dt'), ('ad', 't')]
switch_l = ['dat', 'atd']
Input word = rt
split_l = [('', 'rt'), ('r', 't')]
switch_l = ['tr']
Input word = atz
split_l = [('', 'atz'), ('a', 'tz'), ('at', 'z')]
switch_l = ['taz', 'azt']
Input word = vat
split_l = [('', 'vat'), ('v', 'at'), ('va', 't')]
switch_l = ['avt', 'vta']
Input word = ah
split_l = [('', 'ah'), ('a', 'h')]
switch_l = ['ha']
Input word = atg
split_l = [('', 'atg'), ('a', 'tg'), ('at', 'g')]
switch_l = ['tag', 'agt']
Input word = au
split_l = [('', 'au'), ('a', 'u')]
switch_l = ['ua']
Input word = ati
split_l = [('', 'ati'), ('a', 'ti'), ('at', 'i')]
switch_l = ['tai', 'ait']
Input word = yt
split_l = [('', 'yt'), ('y', 't')]
switch_l = ['ty']
Input word = az
split_l = [('', 'az'), ('a', 'z')]
switch_l = ['za']
Input word = kt
split_l = [('', 'kt'), ('k', 't')]
switch_l = ['tk']
Input word = st
split_l = [('', 'st'), ('s', 't')]
switch_l = ['ts']
Input word = iat
split_l = [('', 'iat'), ('i', 'at'), ('ia', 't')]
switch_l = ['ait', 'ita']
Input word = ate
split_l = [('', 'ate'), ('a', 'te'), ('at', 'e')]
switch_l = ['tae', 'aet']
Input word = art
split_l = [('', 'art'), ('a', 'rt'), ('ar', 't')]
switch_l = ['rat', 'atr']
Input word = atm
split_l = [('', 'atm'), ('a', 'tm'), ('at', 'm')]
switch_l = ['tam', 'amt']
Input word = zat
split_l = [('', 'zat'), ('z', 'at'), ('za', 't')]
switch_l = ['azt', 'zta']
Input word = atc
split_l = [('', 'atc'), ('a', 'tc'), ('at', 'c')]
switch_l = ['tac', 'act']
Input word = aa
split_l = [('', 'aa'), ('a', 'a')]
switch_l = ['aa']
Input word = ht
split_l = [('', 'ht'), ('h', 't')]
switch_l = ['th']
Input word = ot
split_l = [('', 'ot'), ('o', 't')]
switch_l = ['to']
Input word = ar
split_l = [('', 'ar'), ('a', 'r')]
switch_l = ['ra']
Input word = dat
split_l = [('', 'dat'), ('d', 'at'), ('da', 't')]
switch_l = ['adt', 'dta']
Input word = tt
split_l = [('', 'tt'), ('t', 't')]
switch_l = ['tt']
Input word = ai
split_l = [('', 'ai'), ('a', 'i')]
switch_l = ['ia']
Input word = att
split_l = [('', 'att'), ('a', 'tt'), ('at', 't')]
switch_l = ['tat', 'att']
Input word = ag
split_l = [('', 'ag'), ('a', 'g')]
switch_l = ['ga']
Input word = ata
split_l = [('', 'ata'), ('a', 'ta'), ('at', 'a')]
switch_l = ['taa', 'aat']
Input word = aut
split_l = [('', 'aut'), ('a', 'ut'), ('au', 't')]
switch_l = ['uat', 'atu']
Input word = atd
split_l = [('', 'atd'), ('a', 'td'), ('at', 'd')]
switch_l = ['tad', 'adt']
Input word = a
split_l = [('', 'a')]
switch_l = []
Input word = nt
split_l = [('', 'nt'), ('n', 't')]
switch_l = ['tn']
Input word = ast
split_l = [('', 'ast'), ('a', 'st'), ('as', 't')]
switch_l = ['sat', 'ats']
Input word = xat
split_l = [('', 'xat'), ('x', 'at'), ('xa', 't')]
switch_l = ['axt', 'xta']
Input word = awt
split_l = [('', 'awt'), ('a', 'wt'), ('aw', 't')]
switch_l = ['wat', 'atw']
Input word = qt
split_l = [('', 'qt'), ('q', 't')]
switch_l = ['tq']
Input word = atl
split_l = [('', 'atl'), ('a', 'tl'), ('at', 'l')]
switch_l = ['tal', 'alt']
Input word = qat
split_l = [('', 'qat'), ('q', 'at'), ('qa', 't')]
switch_l = ['aqt', 'qta']
Number of strings that are 2 edit distances from 'at' is 7154
<a name='3-3'></a>
## Part 3-3: suggest spelling suggestions
Now you will use your `edit_two_letters` function to get a set of all the possible two-edit variants of your word. You will then use those strings to find the most probable word you meant to type, i.e. your spelling suggestion.
<a name='ex-10'></a>
### Exercise 10
**Instructions**: Implement `get_corrections`, which returns a list of zero to n possible suggestion tuples of the form (word, probability_of_word).
**Step 1:** Generate suggestions for a supplied word: You'll use the edit functions you have developed. The 'suggestion algorithm' should follow this logic:
* If the word is in the vocabulary, suggest the word.
* Otherwise, if there are suggestions from `edit_one_letter` that are in the vocabulary, use those.
* Otherwise, if there are suggestions from `edit_two_letters` that are in the vocabulary, use those.
* Otherwise, suggest the input word.
* The idea is that words generated from fewer edits are more likely than words with more edits.
Note:
- Edits of one or two letters may 'restore' strings to either zero or one edit. This algorithm accounts for this by preferentially selecting lower distance edits first.
#### Short circuit
In Python, logical operations such as `and` and `or` have two useful properties. They can operate on lists and they have ['short-circuit' behavior](https://docs.python.org/3/library/stdtypes.html). Try these:
```python
# example of logical operation on lists or sets
print( [] and ["a","b"] )
print( [] or ["a","b"] )
#example of Short circuit behavior
val1 = ["Most","Likely"] or ["Less","so"] or ["least","of","all"] # selects first, does not evaluate remainder
print(val1)
val2 = [] or [] or ["least","of","all"] # continues evaluation until there is a non-empty list
print(val2)
```
[]
['a', 'b']
['Most', 'Likely']
['least', 'of', 'all']
The logical `or` could be used to implement the suggestion algorithm very compactly. Alternately, if/then constructs could be used.
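For comparison, here is a minimal sketch of the same Step 1 logic written with explicit if/else branches (it assumes the `edit_one_letter` and `edit_two_letters` functions defined earlier in this notebook; `suggestion_candidates` is just an illustrative name):
```python
def suggestion_candidates(word, vocab):
    # Prefer the word itself, then one-edit candidates, then two-edit candidates,
    # and finally fall back to the input word.
    if word in vocab:
        return [word]
    one_edit = edit_one_letter(word).intersection(vocab)
    if one_edit:
        return list(one_edit)
    two_edits = edit_two_letters(word).intersection(vocab)
    if two_edits:
        return list(two_edits)
    return [word]
```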
**Step 2**: Create a 'best_words' dictionary where the 'key' is a suggestion and the 'value' is the probability of that word in your vocabulary. If the word is not in the vocabulary, assign it a probability of 0.
**Step 3**: Select the n best suggestions. There may be fewer than n.
```python
def get_corrections(word, probs, vocab, n=2, verbose=False):
    '''
    Input: 
        word: a user entered string to check for suggestions
        probs: a dictionary that maps each word to its probability in the corpus
        vocab: a set containing all the vocabulary
        n: number of possible word corrections you want returned
    Output: 
        n_best: a list of (word, probability) pairs for the most probable corrected words
    '''
    # Step 1: prefer the word itself, then in-vocabulary words one edit away,
    # then two edits away, and finally fall back to the input word (short-circuit `or`).
    suggestions = list((word in vocab and [word])
                       or edit_one_letter(word).intersection(vocab)
                       or edit_two_letters(word).intersection(vocab)
                       or [word])
    # Step 2: pair each suggestion with its probability (0 if it never occurred in the corpus).
    # Note: n is not used to truncate here; all suggestions are returned.
    n_best = [(s, probs.get(s, 0)) for s in reversed(suggestions)]
    if verbose: print("suggestions = ", suggestions)
    return n_best
```
```python
my_word = 'dys'
tmp_corrections = get_corrections(my_word, probs, vocab, 2, verbose=False)
for i, word_prob in enumerate(tmp_corrections):
print(f"word {i}: {word_prob[0]}, probability {word_prob[1]:.6f}")
# CODE REVIEW COMMENT: using "tmp_corrections" instead of "cors"; "cors" is not defined
print(f"data type of corrections {type(tmp_corrections)}")
```
Input word = dys
split_l = [('', 'dys'), ('d', 'ys'), ('dy', 's')]
switch_l = ['yds', 'dsy']
word 0: dye, probability 0.000019
word 1: days, probability 0.000410
data type of corrections <class 'list'>
<a name='4'></a>
# Part 4: Minimum Edit distance
Now that you have implemented your auto-correct, how do you evaluate the similarity between two strings? For example: 'waht' and 'what'.
Also, how do you efficiently find the minimum number of edits needed to go from the word 'waht' to the word 'what'?
You will implement a dynamic programming algorithm that computes the minimum number of edits required to convert one string into another.
<a name='4-1'></a>
### Part 4.1 Dynamic Programming
Dynamic programming breaks a problem down into subproblems whose solutions can be combined to form the final solution. Here, given a source string source[0..i] and a target string target[0..j], we compute the edit distance for every pair of prefixes (i, j). To do this efficiently, we use a table that stores the distances already computed for shorter prefixes and build the distances for larger prefixes from them.
You have to create a matrix and update each element in the matrix as follows:
$$\text{Initialization}$$
\begin{align}
D[0,0] &= 0 \\
D[i,0] &= D[i-1,0] + del\_cost(source[i]) \tag{4}\\
D[0,j] &= D[0,j-1] + ins\_cost(target[j]) \\
\end{align}
$$\text{Per Cell Operations}$$
\begin{align}
\\
D[i,j] =min
\begin{cases}
D[i-1,j] + del\_cost\\
D[i,j-1] + ins\_cost\\
D[i-1,j-1] + \left\{\begin{matrix}
rep\_cost; & if src[i]\neq tar[j]\\
0 ; & if src[i]=tar[j]
\end{matrix}\right.
\end{cases}
\tag{5}
\end{align}
So converting the source word **play** to the target word **stay**, using an insert cost of 1, a delete cost of 1, and a replace cost of 2, would give you the following table:
<table style="width:20%">
<tr>
<td> <b> </b> </td>
<td> <b># </b> </td>
<td> <b>s </b> </td>
<td> <b>t </b> </td>
<td> <b>a </b> </td>
<td> <b>y </b> </td>
</tr>
<tr>
<td> <b> # </b></td>
<td> 0</td>
<td> 1</td>
<td> 2</td>
<td> 3</td>
<td> 4</td>
</tr>
<tr>
<td> <b> p </b></td>
<td> 1</td>
<td> 2</td>
<td> 3</td>
<td> 4</td>
<td> 5</td>
</tr>
<tr>
<td> <b> l </b></td>
<td>2</td>
<td>3</td>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<td> <b> a </b></td>
<td>3</td>
<td>4</td>
<td>5</td>
<td>4</td>
<td>5</td>
</tr>
<tr>
<td> <b> y </b></td>
<td>4</td>
<td>5</td>
<td>6</td>
<td>5</td>
<td>4</td>
</tr>
</table>
The operations used in this algorithm are 'insert', 'delete', and 'replace'. These correspond to the functions that you defined earlier: insert_letter(), delete_letter() and replace_letter(). switch_letter() is not used here.
The diagram below describes how to initialize the table. Each entry in D[i,j] represents the minimum cost of converting string source[0:i] to string target[0:j]. The first column is initialized to represent the cumulative cost of deleting the source characters to convert string "EER" to "". The first row is initialized to represent the cumulative cost of inserting the target characters to convert from "" to "NEAR".
<div style="width:image width px; font-size:100%; text-align:center;"> Figure 6 Initializing Distance Matrix</div>
Filling in the remainder of the table uses the 'Per Cell Operations' of equation (5) above. Note that the diagram below shows, in light grey, some of the three sub-calculations for each cell; only their minimum is actually stored in the table by the `min_edit_distance()` function.
<div style="width:image width px; font-size:100%; text-align:center;"> Figure 7 Filling Distance Matrix</div>
Note that the formula for $D[i,j]$ shown in the image is equivalent to:
\begin{align}
\\
D[i,j] =min
\begin{cases}
D[i-1,j] + del\_cost\\
D[i,j-1] + ins\_cost\\
D[i-1,j-1] + \left\{\begin{matrix}
rep\_cost; & if src[i]\neq tar[j]\\
0 ; & if src[i]=tar[j]
\end{matrix}\right.
\end{cases}
\tag{5}
\end{align}
The variable `sub_cost` (substitution cost) is the same as `rep_cost` (replacement cost); we will stick with the term "replace" whenever possible.
Below are some examples of cells where replacement is used. This also shows the minimum path from the lower right final position where "EER" has been replaced by "NEAR" back to the start. This provides a starting point for the optional 'backtrace' algorithm below.
<div style="width:image width px; font-size:100%; text-align:center;"> Figure 8 Examples Distance Matrix</div>
<a name='ex-11'></a>
### Exercise 11
Again, the word "substitution" appears in the figure, but think of this as "replacement".
**Instructions**: Implement the function below to get the minimum amount of edits required given a source string and a target string.
```python
def min_edit_distance(source, target, ins_cost = 1, del_cost = 1, rep_cost = 2):
    '''
    Input: 
        source: a string corresponding to the string you are starting with
        target: a string corresponding to the string you want to end with
        ins_cost: an integer setting the insert cost
        del_cost: an integer setting the delete cost
        rep_cost: an integer setting the replace cost
    Output:
        D: a matrix of len(source)+1 by len(target)+1 containing minimum edit distances
        med: the minimum edit distance (med) required to convert the source string to the target
    '''
    m = len(source)
    n = len(target)
    # Initialize the cost matrix with zeros and dimensions (m+1, n+1)
    D = np.zeros((m+1, n+1), dtype=int)
    # First column: cumulative cost of deleting the source prefix
    for row in range(1, m+1):
        D[row, 0] = D[row-1, 0] + del_cost
    # First row: cumulative cost of inserting the target prefix
    for col in range(1, n+1):
        D[0, col] = D[0, col-1] + ins_cost
    # Fill the rest of the table using the per-cell recurrence (equation 5)
    for row in range(1, m+1):
        for col in range(1, n+1):
            r_cost = rep_cost
            if source[row-1] == target[col-1]:
                r_cost = 0
            D[row, col] = min([D[row-1, col] + del_cost,
                               D[row, col-1] + ins_cost,
                               D[row-1, col-1] + r_cost])
    med = D[m, n]
    return D, med
```
```python
# Testing
source = 'play'
target = 'stay'
matrix, min_edits = min_edit_distance(source, target)
print("minimum edits: ",min_edits, "\n")
idx = list('#' + source)
cols = list('#' + target)
df = pd.DataFrame(matrix, index=idx, columns= cols)
print(df)
```
minimum edits: 4
# s t a y
# 0 1 2 3 4
p 1 2 3 4 5
l 2 3 4 5 6
a 3 4 5 4 5
y 4 5 6 5 4
```python
# Testing
source = 'eer'
target = 'near'
matrix, min_edits = min_edit_distance(source, target)
print("minimum edits: ",min_edits, "\n")
idx = list(source)
idx.insert(0, '#')
cols = list(target)
cols.insert(0, '#')
df = pd.DataFrame(matrix, index=idx, columns= cols)
print(df)
```
minimum edits: 3
# n e a r
# 0 1 2 3 4
e 1 2 1 2 3
e 2 3 2 3 4
r 3 4 3 4 3
We can now test several of our routines at once:
```python
source = "eer"
targets = edit_one_letter(source,allow_switches = False) #disable switches since min_edit_distance does not include them
for t in targets:
_, min_edits = min_edit_distance(source, t,1,1,1) # set ins, del, sub costs all to one
if min_edits != 1: print(source, t, min_edits)
```
**Expected Results:** (empty)
The 'replace()' routine runs over all letters a-z, so a pair of single-letter edits can reproduce the original word, which is why 'eer' itself appears at distance 0 in the two-edit test below.
```python
source = "eer"
targets = edit_two_letters(source,allow_switches = False) #disable switches since min_edit_distance does not include them
for t in targets:
_, min_edits = min_edit_distance(source, t,1,1,1) # set ins, del, sub costs all to one
if min_edits != 2 and min_edits != 1: print(source, t, min_edits)
```
eer eer 0
# Submission
Make sure you submit your assignment before you modify anything below
<a name='5'></a>
# Part 5: Optional - Backtrace
Once you have computed your matrix using minimum edit distance, how would you find the shortest path from the top left corner to the bottom right corner?
Note that you could use a backtrace algorithm. Try to find the shortest path given the matrix that your `min_edit_distance` function returned.
You can use these [lecture slides on minimum edit distance](https://web.stanford.edu/class/cs124/lec/med.pdf) by Dan Jurafsky to learn about the algorithm for backtrace.
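As an illustration, here is a minimal backtrace sketch (one possible implementation, not part of the graded assignment) that walks from the bottom-right cell of the matrix `D` returned by `min_edit_distance` back to the origin, recording which operation produced each cell:
```python
def backtrace(source, target, D, ins_cost=1, del_cost=1, rep_cost=2):
    # Walk from D[m, n] back to D[0, 0], recording the operation taken at each step.
    i, j = len(source), len(target)
    path = []
    while i > 0 or j > 0:
        if i > 0 and j > 0 and source[i-1] == target[j-1] and D[i, j] == D[i-1, j-1]:
            path.append(('keep', source[i-1]))
            i, j = i - 1, j - 1
        elif i > 0 and j > 0 and D[i, j] == D[i-1, j-1] + rep_cost:
            path.append(('replace', source[i-1], target[j-1]))
            i, j = i - 1, j - 1
        elif i > 0 and D[i, j] == D[i-1, j] + del_cost:
            path.append(('delete', source[i-1]))
            i = i - 1
        else:
            path.append(('insert', target[j-1]))
            j = j - 1
    return list(reversed(path))

# Example: matrix, _ = min_edit_distance('eer', 'near'); backtrace('eer', 'near', matrix)
```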
#### References
- Dan Jurafsky - Speech and Language Processing - Textbook
- This auto-correct explanation was first done by Peter Norvig in 2007
```python
```
|
0ac2fca316630eaa38cf3d30ad35c221955c70cd
| 82,873 |
ipynb
|
Jupyter Notebook
|
NLP/Learn_by_deeplearning.ai/Course 2 - Probabilistic Models/Labs/Week 1/C2-W1-assignment-Auto Correct.ipynb
|
tsuirak/skills
|
22280be0870627c5dd84e069ec271aeeb6797831
|
[
"MIT"
] | 362 |
2020-10-08T07:34:25.000Z
|
2022-03-30T05:11:30.000Z
|
NLP/Learn_by_deeplearning.ai/Course 2 - Probabilistic Models/Labs/Week 1/C2-W1-assignment-Auto Correct.ipynb
|
abcd1758323829/skills
|
195fad43e99de5efe6491817ad2b79e12665cc2a
|
[
"MIT"
] | 7 |
2020-07-07T16:10:23.000Z
|
2021-06-04T08:17:55.000Z
|
NLP/Learn_by_deeplearning.ai/Course 2 - Probabilistic Models/Labs/Week 1/C2-W1-assignment-Auto Correct.ipynb
|
abcd1758323829/skills
|
195fad43e99de5efe6491817ad2b79e12665cc2a
|
[
"MIT"
] | 238 |
2020-10-08T12:01:31.000Z
|
2022-03-25T08:10:42.000Z
| 36.604682 | 859 | 0.485417 | true | 18,782 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.782662 | 0.893309 | 0.69916 |
__label__eng_Latn
| 0.922515 | 0.462714 |
# Chapter 5 Simplifications
```python
from sympy import *
x, y, z = symbols('x, y, z')
init_printing(use_unicode=True)
```
## 5.1 Simplification
Any `sympy` expression can be put into a simpler form with `simplify()`!:
```python
simplify(sin(x)**2 + cos(x)**2)
```
```python
simplify((x**3 + x**2 - x - 1) / (x**2 + 2*x + 1))
```
```python
simplify(gamma(x) / gamma(x-2)) # gamma function (a special function)
```
#### Caveat 1
```python
simplify(x**2 + 2*x + 1)
```
---> **It does not factorize!!!** For factorization, use the `factor()` function:
```python
factor(x**2 + 2*x + 1)
```
#### Caveat 2
`simplify()` is slow!
#### Remedy
- `simplify()` can only bring an expression into a "reasonably" simple form, so if you want to simplify an expression reliably, you should use the function suited to that specific purpose!
- Check how `simplify` behaves in an interactive shell first, and then simplify with the **dedicated functions** described below (a small timing sketch follows this list).
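As a rough illustration of the speed difference (a sketch; the exact timings depend on your machine and SymPy version):
```python
import timeit
from sympy import simplify, factor, symbols

x = symbols('x')
expr = x**2 + 2*x + 1
# simplify() tries many strategies, while factor() does one specific job,
# so factor() is typically much faster on this kind of expression.
print(timeit.timeit(lambda: simplify(expr), number=100))
print(timeit.timeit(lambda: factor(expr), number=100))
```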
## 5.2 Polynomials / rational expressions
### 5.2.1 The `expand` function
Expands a polynomial and, where possible, cancels terms.
```python
expand((x + 1)**2)
```
```python
expand((x + 2)*(x - 3))
```
Sometimes "expanding an expression" actually "makes it simpler".
```python
expand((x + 1)*(x - 2) - (x - 1)*x) # the terms cancel each other
```
### 5.2.2 The `factor` function
Factorizes an expression as far as possible.
```python
factor(x**3 - x**2 + x - 1)
```
```python
factor(x**2*z + 4*x*y*z + 4*y**2*z)
```
```python
factor_list(x**2*z + 4*x*y*z + 4*y**2*z) # (variable or constant, exponent)
```
#### Expressions no more complicated than trigonometric ones can be handled with `factor` and `expand`
```python
expand((cos(x) + sin(x))**2)
```
```python
factor(cos(x)**2 + 2*cos(x)*sin(x) + sin(x)**2)
```
### 5.2.3 The `collect` function
Collects terms in a particular variable, or extracts the coefficient of a particular power.
```python
expr = x*y + x -3 + 2*x**2 - z*x**2 + x**3
```
```python
expr
```
```python
collected_expr = collect(expr, x) # collect terms in x
```
```python
collected_expr
```
Furthermore, you can extract the coefficient of a particular power with the coeff method, as shown below.
```python
collected_expr.coeff(x, 2) # extract only the coefficient of x**2
```
### 5.2.4 The `cancel` function
Simplifies rational expressions.
```python
cancel((x**2 + 2*x + 1) / (x**2 + x))
```
```python
expr = 1/x + (2*x/2 - 2) /(x - 4)
```
```python
expr
```
```python
cancel(expr) # put everything over a common denominator
```
```python
factor(expr) # factor performs a similar operation
```
```python
expr = (x*y**2 - 2*x*y*z + x*z**2 + y**2 - 2*y*z + z**2) / (x**2 - 1)
```
```python
expr
```
```python
cancel(expr)
```
```python
factor(expr) # factor performs a similar transformation
```
**Note**
When you simply want terms to cancel so the expression becomes simpler, `cancel()` is more efficient than `factor()`.
### 5.2.5 The `apart` function
Decomposes a rational expression (a fraction) into partial fractions.
```python
x = symbols('x')
expr = (4*x**3 + 21*x**2 + 10*x + 12) / (x**4 + 5*x**3 + 5*x**2 + 4*x)
```
```python
expr
```
```python
apart(expr)
```
## 5.3 Trigonometric functions
**Note**: inverse trigonometric functions get an "a" prefix: acos, asin, atan, etc...
```python
acos(x)
```
```python
cos(acos(x))
```
```python
asin(1)
```
### 5.3.1 The `trigsimp` function
Simplifies a trigonometric expression as far as possible using trigonometric identities.
```python
trigsimp(sin(x)**2 + cos(x)**2)
```
```python
trigsimp(sin(x)**4 - 2*cos(x)**2*sin(x)**2 + cos(x)**4)
```
```python
trigsimp(sin(x)*tan(x)/sec(x))
```
```python
trigsimp(cosh(x)**2-sinh(x)**2)
```
### 5.3.2 The `expand_trig` function
Expands trigonometric expressions. `trigsimp` and `expand_trig` are exactly inverse operations.
```python
expand_trig(sin(x + y))
```
```python
expand_trig(tan(2*x))
```
## 5.4 Powers
```python
x, y = symbols('x y', positive=True) # assume the variables are positive
```
```python
a, b = symbols('a, b', real = True) # assume the variables are real
```
```python
z, t, c = symbols('z t c')
```
**Note**: `sqrt(x)`, `x**Rational(1,2)`, `x**0.5`, and `x**(1/2)` are all the same.
```python
sqrt(x)
```
```python
x**Rational(1,2)
```
```python
x**(0.5)
```
```python
x**(1/2)
```
### 5.4.1 The `powsimp` function
Simplifies powers, but only when the exponents are variables (`SymPy` symbols).
```python
powsimp(x**a*x**b) # cannot be simplified any further
```
```python
powsimp(x**a*y**a)
```
If you want the simplification to be carried out regardless of the assumptions on the variables, change
```python
powsimp(t**c*z**c)
```
to
```python
powsimp(t**c*z**c, force=True)
```
With `force=True`, the transformation is forced even if `t` or `z` might be negative.
```python
(z*t)**2 # when the exponent is an integer or a rational number, here 2
```
```python
sqrt(x*y) # same
```
**Caution** `powsimp` cannot be applied to expressions like these:
```python
powsimp(z**2*t**2) # the exponents are integers
```
```python
sqrt(x*y)
```
---> `powsimp` can only simplify when the exponents are variables.
### 5.4.2 The `expand_power_exp` and `expand_power_base` functions
Expand powers; the inverse operation of the `powsimp` function.
```python
expand_power_exp(x**(a + b))
```
```python
expand_power_base((x*y)**a)
```
**Caution** Like `powsimp()`, these return the original expression when they cannot transform it:
```python
expand_power_base((z*t)**c)
```
If the condition that `t*z` is positive had been attached with `symbols`, the expression could be expanded, but since that is not guaranteed here it is not expanded. To force the expansion, use
```python
expand_power_base((z*t)**c, force=True)
```
instead. Also, when the exponents are plain numbers, as in
```python
x**2*x**3
```
```python
expand_power_exp(x**5)
```
the expressions above cannot be transformed.
### 5.4.3 The `powdenest` function
Expands a power of a power (nested exponents).
```python
(x**a)**b # remove the parentheses and expand
```
```python
powdenest((x**a)**b)
```
```python
powdenest((z**a)**b)
```
```python
powdenest((z**a)**b, force=True)
```
## 5.5 Exponential and logarithmic functions
```python
ln(x) # ln(x) and log(x) are the same
```
```python
log(x)
```
```python
x, y = symbols('x y', positive=True)
```
```python
n = symbols('n', real=True)
```
### 5.5.1 The `expand_log` function
Expands logarithms.
```python
expand_log(log(x*y))
```
```python
expand_log(log(x/y))
```
```python
expand_log(log(x**2))
```
```python
expand_log(log(x**n))
```
```python
expand_log(log(z*t))
```
**Caution** As before, logarithms of variables that are not known to be positive cannot be expanded; in that case, add the `force=True` option.
```python
expand_log(log(z**2))
```
```python
expand_log(log(z**2), force=True)
```
### 5.5.2 The `logcombine` function
Combines logarithms into a simpler form.
```python
logcombine(log(x) + log(y)) # combine the logarithms
```
```python
logcombine(n*log(x))
```
```python
logcombine(n*log(z))
```
```python
logcombine(n*log(z), force=True)
```
## 5.6 Special functions
```python
x, y, z = symbols('x y z')
```
```python
k, m, n = symbols('k m n')
```
### 5.6.1 Factorial
```python
factorial(n)
```
```python
factorial(10)
```
### 5.6.2 Combinations
```python
binomial(n, k) #nCk
```
```python
combsimp(factorial(n) / factorial(n - 3)) # simplify
```
```python
combsimp(binomial(n + 1, k + 1) / binomial(n, k))
```
### 5.6.3 The gamma function
```python
gamma(z)
```
```python
combsimp(gamma(x)*gamma(1 - x)) # also works on gamma functions
```
### 5.6.4 Generalized hypergeometric functions
```python
hyper([1, 2], [3], z)
```
### 5.6.5 Rewriting one function in terms of another
```python
tan(x).rewrite(sin) # rewrite tan in terms of sin
```
```python
factorial(x).rewrite(gamma) # rewrite the factorial in terms of the gamma function
```
### 5.6.6 Rewriting special functions using known identities
```python
expand_func(gamma(x + 3))
```
Next: on to [Chapter 6 Calculus](https://hiroyuki827.github.io/SymPy_tutorial/Chapter6_Calculus.html)!
|
3d192210ddfccc9d60e6751ab3a29291c138bf02
| 107,278 |
ipynb
|
Jupyter Notebook
|
Chapter5_Simplification.ipynb
|
hiroyuki827/SymPy_tutorial
|
8423ceab49482dc83c90c4cb1d388cad100ced84
|
[
"BSD-3-Clause"
] | 9 |
2018-01-02T16:53:11.000Z
|
2021-05-05T13:48:49.000Z
|
Chapter5_Simplification.ipynb
|
hiroyuki827/SymPy_tutorial
|
8423ceab49482dc83c90c4cb1d388cad100ced84
|
[
"BSD-3-Clause"
] | 1 |
2018-06-12T03:51:09.000Z
|
2018-06-13T08:15:45.000Z
|
Chapter5_Simplification.ipynb
|
hiroyuki827/SymPy_tutorial
|
8423ceab49482dc83c90c4cb1d388cad100ced84
|
[
"BSD-3-Clause"
] | null | null | null | 41.165771 | 1,936 | 0.721658 | true | 3,068 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.932453 | 0.839734 | 0.783013 |
__label__yue_Hant
| 0.470601 | 0.657533 |
# 1D Simple Harmonic Oscillator
## Imports
```python
from IPython.display import display, display_pretty
```
```python
from sympy import init_printing
init_printing(use_latex=True)
```
```python
from sympy import *
from sympy.physics.quantum import *
from sympy.physics.quantum.sho1d import *
from sympy.physics.quantum.tests.test_sho1d import *
```
## Printing Of Operators
Create a raising and lowering operator and make sure they print correctly
```python
ad = RaisingOp('a')
ad
```
```python
a = LoweringOp('a')
a
```
```python
print(latex(ad))
print(latex(a))
```
a^{\dag}
a
```python
display_pretty(ad)
display_pretty(a)
```
†
a
a
```python
print(srepr(ad))
print(srepr(a))
```
RaisingOp(Symbol('a'))
LoweringOp(Symbol('a'))
```python
print(repr(ad))
print(repr(a))
```
RaisingOp(a)
a
## Printing of States
Create a simple harmonic state and check its printing
```python
k = SHOKet('k')
k
```
```python
b = SHOBra('b')
b
```
```python
print(pretty(k))
print(pretty(b))
```
❘k⟩
⟨b❘
```python
print(latex(k))
print(latex(b))
```
{\left|k\right\rangle }
{\left\langle b\right|}
```python
print(srepr(k))
print(srepr(b))
```
SHOKet(Symbol('k'))
SHOBra(Symbol('b'))
## Properties
Take the dagger of the raising and lowering operators. They should return each other:
```python
Dagger(ad)
```
```python
Dagger(a)
```
Check commutators of the raising and lowering operators
```python
Commutator(ad,a).doit()
```
```python
Commutator(a,ad).doit()
```
Take a look at the dual states of the bra and ket
```python
k.dual
```
```python
b.dual
```
Taking the inner product of the bra and ket will return the Kronecker delta function
```python
InnerProduct(b,k).doit()
```
Take a look at how the raising and lowering operators act on states. We use qapply to apply an operator to a state
```python
qapply(ad*k)
```
```python
qapply(a*k)
```
But the states may have an explicit energy level. Let's look at the ground and first excited states
```python
kg = SHOKet(0)
kf = SHOKet(1)
```
```python
qapply(ad*kg)
```
```python
qapply(ad*kf)
```
```python
qapply(a*kg)
```
```python
qapply(a*kf)
```
## Number operator and Hamiltonian
Let's look at the number operator and Hamiltonian operator:
```python
k = SHOKet('k')
ad = RaisingOp('a')
a = LoweringOp('a')
N = NumberOp('N')
H = Hamiltonian('H')
```
The number operator is simply expressed as `ad*a`:
```python
N.rewrite('a').doit()
```
The number operator expressed in terms of the position and momentum operators:
```python
N.rewrite('xp').doit()
```
It can also be expressed in terms of the Hamiltonian operator:
```python
N.rewrite('H').doit()
```
The Hamiltonian operator can be expressed in terms of the raising and lowering operators, position and momentum operators, and the number operator:
```python
H.rewrite('a').doit()
```
```python
H.rewrite('xp').doit()
```
```python
H.rewrite('N').doit()
```
The raising and lowering operators can also be expressed in terms of the position and momentum operators
```python
ad.rewrite('xp').doit()
```
```python
a.rewrite('xp').doit()
```
### Properties
Let's take a look at how the number operator and Hamiltonian act on states:
```python
qapply(N*k)
```
Applying the number operator to a state with an explicit energy level returns that level times the ket:
```python
ks = SHOKet(2)
qapply(N*ks)
```
```python
qapply(H*k)
```
Let's see how the operators commute with each other:
```python
Commutator(N,ad).doit()
```
```python
Commutator(N,a).doit()
```
```python
Commutator(N,H).doit()
```
## Representation
We can express the operators in the number operator basis. There are several ways to represent a matrix in Python; we will use three different formats.
Sympy:
```python
represent(ad, basis=N, ndim=4, format='sympy')
```
Numpy:
```python
represent(ad, basis=N, ndim=5, format='numpy')
```
array([[ 0. , 0. , 0. , 0. , 0. ],
[ 1. , 0. , 0. , 0. , 0. ],
[ 0. , 1.41421356, 0. , 0. , 0. ],
[ 0. , 0. , 1.73205081, 0. , 0. ],
[ 0. , 0. , 0. , 2. , 0. ]])
`scipy.sparse`:
```python
sparse_rep = represent(ad, basis=N, ndim=4, format='scipy.sparse', spmatrix='lil')
sparse_rep
```
<4x4 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in Compressed Sparse Row format>
```python
print(sparse_rep)
```
(1, 0) 1.0
(2, 1) 1.41421356237
(3, 2) 1.73205080757
The same can be done for the other operators
```python
represent(a, basis=N, ndim=4, format='sympy')
```
```python
represent(N, basis=N, ndim=4, format='sympy')
```
```python
represent(H, basis=N, ndim=4, format='sympy')
```
Bras and kets can also be represented:
```python
k0 = SHOKet(0)
k1 = SHOKet(1)
b0 = SHOBra(0)
b1 = SHOBra(1)
```
```python
represent(k0, basis=N, ndim=5, format='sympy')
```
```python
represent(k1, basis=N, ndim=5, format='sympy')
```
```python
represent(b0, basis=N, ndim=5, format='sympy')
```
```python
represent(b1, basis=N, ndim=5, format='sympy')
```
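As a quick consistency check (a sketch using the representations above; `ad_matrix`, `k0_vector`, and `k1_vector` are just illustrative names), multiplying the matrix of the raising operator by the column vector for $\left|0\right\rangle$ should reproduce the vector for $\left|1\right\rangle$, since $a^{\dag}\left|0\right\rangle = \left|1\right\rangle$:
```python
ad_matrix = represent(ad, basis=N, ndim=5, format='sympy')
k0_vector = represent(k0, basis=N, ndim=5, format='sympy')
k1_vector = represent(k1, basis=N, ndim=5, format='sympy')
ad_matrix * k0_vector == k1_vector  # expected: True
```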
|
2a40f7b2003b28851d633659ef1bbe219ec0dc86
| 62,985 |
ipynb
|
Jupyter Notebook
|
notebooks/sho1d.ipynb
|
gvvynplaine/quantum_notebooks
|
58783823596465fe2d6c494c2cc3a53ae69a9752
|
[
"BSD-3-Clause"
] | 42 |
2017-10-17T22:44:27.000Z
|
2022-03-28T06:26:46.000Z
|
notebooks/sho1d.ipynb
|
gvvynplaine/quantum_notebooks
|
58783823596465fe2d6c494c2cc3a53ae69a9752
|
[
"BSD-3-Clause"
] | 2 |
2017-10-09T05:16:41.000Z
|
2018-09-22T03:08:29.000Z
|
notebooks/sho1d.ipynb
|
gvvynplaine/quantum_notebooks
|
58783823596465fe2d6c494c2cc3a53ae69a9752
|
[
"BSD-3-Clause"
] | 12 |
2017-10-09T04:22:19.000Z
|
2022-03-28T06:25:21.000Z
| 38.382084 | 2,564 | 0.698865 | true | 1,623 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.909907 | 0.884039 | 0.804394 |
__label__eng_Latn
| 0.912717 | 0.707208 |
```python
from scipy.stats import binom, poisson, norm, lognorm
import matplotlib.pyplot as plt
from iminuit import Minuit # The actual fitting tool, better than scipy's
from scipy import stats
import sympy as sy
import numpy as np
import math
import sys
sys.path.append(r'/home/saim/External_Functions')
from ExternalFunctions import Chi2Regression, BinnedLH, UnbinnedLH
from ExternalFunctions import nice_string_output, add_text_to_ax
```
```python
# Plotting stuff
plt.rcParams['font.size'] = 18
plt.style.use(['science', 'notebook', 'grid'])
pink = '#e377c2'
blue = '#1f77b4'
golden = '#ff7f0e'
green = '#2ca02c'
red = '#d62728'
purple = '#9467bd'
light_blue = '#17becf'
```
```python
r = np.random
r.seed(42)
```
```python
def chi2_eval(fitted_object, Npoints, Nparams):
Chi2_value = fitted_object.fval
Ndof = Npoints - Nparams # Number of degrees of freedom
Chi2_prob = stats.chi2.sf(Chi2_value, Ndof)
return Chi2_value, Ndof, Chi2_prob
```
```python
# Turn histogram data into x, y, and sigma_y values; empty bins are dropped (they cannot enter the Chi2 fit):
def hist_data(data, Nbins, mini, maxi):
counts, bin_edges = np.histogram(data,
bins = Nbins,
range = (mini, maxi),
density = False)
bin_centers = (bin_edges[1:] + bin_edges[:-1]) / 2
x = bin_centers[counts > 0]
y = counts[counts > 0]
sy = np.sqrt(y)
return x, y, sy
```
```python
def draw_chi2fit(Nparams, x_values, x_min, x_max, PDF,
fitted_dist, Nbins, x_bin, y_bin, sigma):
# Produce the points for drawing the fit:
x_axis = np.linspace(x_min, x_max, Nbins)
y_axis = PDF(x_axis, *fitted_dist.values[:]
)
# Produce figure with histogram (with error bars) and fit overlayed:
fig, ax = plt.subplots(figsize=(14, 6))
ax.errorbar(x_bin, y_bin, sigma, fmt = '.', color = '#1f77b4', label = 'Data')
ax.plot(x_axis, y_axis, '-', color = golden, label = 'Fit')
ax.set(xlabel = "Value",
ylabel = "Frequency",
title = "")
ax.legend(loc = 'lower right',
fontsize=14);
# Fitting results
chi2_value = fitted_dist.fval
Ndof = Nbins - fitted_dist.nfit
chi2_prob = stats.chi2.sf(chi2_value, Ndof)
# Define figure text
d = {'Entries': len(x_values),
'Chi2': chi2_value,
'ndf': Ndof,
'Prob': chi2_prob,
}
for name in fitted_dist.parameters:
d[name] = [fitted_dist.values[name], fitted_dist.errors[name]]
text = nice_string_output(d, extra_spacing = 2, decimals = 3)
add_text_to_ax(0.69, 0.95, text, ax, fontsize = 15)
fig.tight_layout()
```
## Gaussian chi2 fit
```python
Npointz = 10000 # Number of random points produced
x_all = r.normal(loc = 0.2,
scale = 1.1,
size = Npointz)
Nbinz = 100
xmin, xmax = np.min(x_all), np.max(x_all)
binwidth_gauss = np.ptp(x_all) / Nbinz
#binwidth = (xmax - xmin) / Nbins
# Fitting function which is NOT normalised but has normalisation constants "N" in,
# and includes the bin width:
def func_gauss_norm(x, N, mu, sigma) :
norm = binwidth_gauss * N / np.sqrt(2.0 * np.pi) / sigma
z = (x - mu) / sigma
return norm * np.exp(-0.5 * (z**2))
def func_gaussian_alt(x, N, mu, sigma) :
return binwidth_gauss * N * norm.pdf(x, mu, sigma)
```
```python
x1, y1, sy1 = hist_data(x_all, Nbinz, xmin, xmax)
```
```python
# Fitting
chi2_gaussian = Chi2Regression(func_gauss_norm, x1, y1, sy1) # Fitting object
chi2_gaussian.errordef = Minuit.LEAST_SQUARES
minuit_gaussian = Minuit(chi2_gaussian,
N = Npointz,
mu = 0,
sigma = 0.05)
minuit_gaussian.migrad() # Perform the actual fit
```
<table>
<tr>
<td colspan="2" style="text-align:left" title="Minimum value of function"> FCN = 75.35 </td>
<td colspan="3" style="text-align:center" title="No. of function evaluations in last call and total number"> Nfcn = 196 (196 total) </td>
</tr>
<tr>
<td colspan="2" style="text-align:left" title="Estimated distance to minimum and goal"> EDM = 2.32e-07 (Goal: 0.0002) </td>
<td colspan="3" style="text-align:center" title="No. of gradient evaluations in last call and total number"> </td>
</tr>
<tr>
<td style="text-align:center;background-color:#92CCA6;color:black"> Valid Minimum </td>
<td style="text-align:center;background-color:#92CCA6;color:black"> Valid Parameters </td>
<td colspan="3" style="text-align:center;background-color:#92CCA6;color:black"> No Parameters at limit </td>
</tr>
<tr>
<td colspan="2" style="text-align:center;background-color:#92CCA6;color:black"> Below EDM threshold (goal x 10) </td>
<td colspan="3" style="text-align:center;background-color:#92CCA6;color:black"> Below call limit </td>
</tr>
<tr>
<td style="text-align:center;background-color:#92CCA6;color:black"> Hesse ok </td>
<td style="text-align:center;background-color:#92CCA6;color:black"> Has Covariance </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Is covariance matrix accurate?"> Accurate </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Is covariance matrix positive definite?"> Pos. def. </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Was positive definiteness enforced by Minuit?"> Not forced </td>
</tr>
</table><table>
<tr>
<td></td>
<th title="Variable name"> Name </th>
<th title="Value of parameter"> Value </th>
<th title="Hesse error"> Hesse Error </th>
<th title="Minos lower error"> Minos Error- </th>
<th title="Minos upper error"> Minos Error+ </th>
<th title="Lower limit of the parameter"> Limit- </th>
<th title="Upper limit of the parameter"> Limit+ </th>
<th title="Is the parameter fixed in the fit"> Fixed </th>
</tr>
<tr>
<th> 0 </th>
<td> N </td>
<td> 9.93e3 </td>
<td> 0.10e3 </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr>
<th> 1 </th>
<td> mu </td>
<td> 0.199 </td>
<td> 0.011 </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr>
<th> 2 </th>
<td> sigma </td>
<td> 1.094 </td>
<td> 0.008 </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
</table>
```python
draw_chi2fit(3, x_all, xmin, xmax, func_gauss_norm,
minuit_gaussian, Nbinz, x1, y1, sy1)
```
## Linear chi2 fit
```python
# Fitting function
def func_linear(x, alpha0, alpha1):
return alpha0 + alpha1*x
```
```python
# Parameters
alpha0 = 3.6
alpha1 = 0.3
sigma_y = 0.5
```
```python
lin_Npoints = 50 # Number of random points produced
lin_x = np.arange(lin_Npoints) # Generate points in array
#exLin = np.zeros_like(lin_x)
lin_y = alpha0 + alpha1 * lin_x + r.normal(0, sigma_y, lin_Npoints) # linear function + gaussian errors
error_lin_y = sigma_y * np.ones_like(lin_x)
```
```python
# Fitting
chi2_linear = Chi2Regression(func_linear, lin_x, lin_y, error_lin_y) # Fitting object
chi2_linear.errordef = Minuit.LEAST_SQUARES
# Give fitting function, its parameters their starting fitting values
minuit_linear = Minuit(chi2_linear,
alpha0 = 2,
alpha1 = 0.1)
minuit_linear.migrad() # perform the actual fit
```
<table>
<tr>
<td colspan="2" style="text-align:left" title="Minimum value of function"> FCN = 37.08 </td>
<td colspan="3" style="text-align:center" title="No. of function evaluations in last call and total number"> Nfcn = 32 (32 total) </td>
</tr>
<tr>
<td colspan="2" style="text-align:left" title="Estimated distance to minimum and goal"> EDM = 1.82e-21 (Goal: 0.0002) </td>
<td colspan="3" style="text-align:center" title="No. of gradient evaluations in last call and total number"> </td>
</tr>
<tr>
<td style="text-align:center;background-color:#92CCA6;color:black"> Valid Minimum </td>
<td style="text-align:center;background-color:#92CCA6;color:black"> Valid Parameters </td>
<td colspan="3" style="text-align:center;background-color:#92CCA6;color:black"> No Parameters at limit </td>
</tr>
<tr>
<td colspan="2" style="text-align:center;background-color:#92CCA6;color:black"> Below EDM threshold (goal x 10) </td>
<td colspan="3" style="text-align:center;background-color:#92CCA6;color:black"> Below call limit </td>
</tr>
<tr>
<td style="text-align:center;background-color:#92CCA6;color:black"> Hesse ok </td>
<td style="text-align:center;background-color:#92CCA6;color:black"> Has Covariance </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Is covariance matrix accurate?"> Accurate </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Is covariance matrix positive definite?"> Pos. def. </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Was positive definiteness enforced by Minuit?"> Not forced </td>
</tr>
</table><table>
<tr>
<td></td>
<th title="Variable name"> Name </th>
<th title="Value of parameter"> Value </th>
<th title="Hesse error"> Hesse Error </th>
<th title="Minos lower error"> Minos Error- </th>
<th title="Minos upper error"> Minos Error+ </th>
<th title="Lower limit of the parameter"> Limit- </th>
<th title="Upper limit of the parameter"> Limit+ </th>
<th title="Is the parameter fixed in the fit"> Fixed </th>
</tr>
<tr>
<th> 0 </th>
<td> alpha0 </td>
<td> 3.63 </td>
<td> 0.14 </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr>
<th> 1 </th>
<td> alpha1 </td>
<td> 0.301 </td>
<td> 0.005 </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
</table>
```python
chi2_linear, Ndof_linear, pval_linear = chi2_eval(minuit_linear, len(lin_x), 2)
```
```python
figLin, axLin = plt.subplots(figsize=(16, 8))
axLin.errorbar(lin_x,
lin_y,
error_lin_y,
fmt = 'ro',
ecolor = 'k',
elinewidth = 1,
capsize = 1,
capthick = 1)
axLin.plot(lin_x,
func_linear(lin_x, *minuit_linear.values[:]),
'-r',
color = blue)
d = {'Intercept':[minuit_linear.values['alpha0'], minuit_linear.errors['alpha0']],
'Slope': [minuit_linear.values['alpha1'], minuit_linear.errors['alpha1']],
'Chi2': chi2_linear,
'ndf': Ndof_linear,
'Prob': pval_linear,
}
text = nice_string_output(d, extra_spacing=2, decimals=3)
add_text_to_ax(0.04, 0.95, text, axLin, fontsize=20)
figLin.tight_layout()
```
## Monte Carlo Simulation and Fitting
```python
N_points = 10000
N_bins = 100
# Inverse-transform (inverse CDF) sampling of an exponential with mean 0.8, summed 4 times
exp_inv = sum(-0.8*np.log(r.uniform(size = N_points)) for i in range(4))
# Same distribution sampled directly from numpy; note that x is ignored and
# N_points random values are drawn on each call
def exp_func(x):
    return sum(r.exponential(0.8, N_points) for i in range(4))
xmin_exp = 0
xmax_exp = 20
x_axis_exp = np.linspace(start = xmin_exp,
stop = xmax_exp,
num = 10000)
y_axis_exp = exp_func(x_axis_exp)
```
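As a sanity check on the generated sample (a sketch; the exact numbers vary with the random seed), the sum of four independent exponentials with mean 0.8 follows a Gamma distribution with shape 4 and scale 0.8, so the sample mean and variance should be close to 3.2 and 2.56:
```python
from scipy import stats

gamma_ref = stats.gamma(a=4, scale=0.8)
print(f"sample mean {exp_inv.mean():.2f} vs expected {gamma_ref.mean():.2f}")
print(f"sample var  {exp_inv.var(ddof=1):.2f} vs expected {gamma_ref.var():.2f}")
```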
```python
# Init plot object
fig, ax = plt.subplots(figsize=(15, 9))
# Plot generated data
ax.hist(exp_inv,
bins = N_bins,
range = (xmin_exp, xmax_exp),
color = blue,
histtype = 'step'
)
# Plot labels
ax.set(xlabel = "x - following f(x)",
ylabel = "Frequency",
xlim = (xmin_exp -1.0 , xmax_exp+1.0))
# Define figure text
textstr = '\n'.join((
r'$\mathrm{Entries}=%.2f$' % (len(exp_inv), ),
r'$\mathrm{Mean}=%.2f$' % (exp_inv.mean(), ),
r'$\mathrm{Std}=%.2f$' % (exp_inv.std(ddof=1), )))
# Plot figure text
props = dict(boxstyle = 'round',
facecolor = 'white',
edgecolor = 'black',
alpha=0.5)
# place a text box in upper left in axes coords
ax.text(0.86,
0.95,
textstr,
transform = ax.transAxes,
fontsize = 14,
verticalalignment='top',
bbox = props)
fig.tight_layout()
```
```python
# Binning the data
x3, y3, sigma_y3 = hist_data(exp_inv, 100, 0, 20)
```
```python
# Fitting
chi2_MC_Gauss = Chi2Regression(func_gauss_norm, x3, y3, sigma_y3) # Fitting object
chi2_MC_Gauss.errordef = Minuit.LEAST_SQUARES
minuit_MC_Gauss = Minuit(chi2_MC_Gauss,
N = N_points,
mu = 3,
sigma = 1.6)
minuit_MC_Gauss.migrad() # Perform the actual fit
```
<table>
<tr>
<td colspan="2" style="text-align:left" title="Minimum value of function"> FCN = 1148 </td>
<td colspan="3" style="text-align:center" title="No. of function evaluations in last call and total number"> Nfcn = 65 (65 total) </td>
</tr>
<tr>
<td colspan="2" style="text-align:left" title="Estimated distance to minimum and goal"> EDM = 2.98e-05 (Goal: 0.0002) </td>
<td colspan="3" style="text-align:center" title="No. of gradient evaluations in last call and total number"> </td>
</tr>
<tr>
<td style="text-align:center;background-color:#92CCA6;color:black"> Valid Minimum </td>
<td style="text-align:center;background-color:#92CCA6;color:black"> Valid Parameters </td>
<td colspan="3" style="text-align:center;background-color:#92CCA6;color:black"> No Parameters at limit </td>
</tr>
<tr>
<td colspan="2" style="text-align:center;background-color:#92CCA6;color:black"> Below EDM threshold (goal x 10) </td>
<td colspan="3" style="text-align:center;background-color:#92CCA6;color:black"> Below call limit </td>
</tr>
<tr>
<td style="text-align:center;background-color:#92CCA6;color:black"> Hesse ok </td>
<td style="text-align:center;background-color:#92CCA6;color:black"> Has Covariance </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Is covariance matrix accurate?"> Accurate </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Is covariance matrix positive definite?"> Pos. def. </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Was positive definiteness enforced by Minuit?"> Not forced </td>
</tr>
</table><table>
<tr>
<td></td>
<th title="Variable name"> Name </th>
<th title="Value of parameter"> Value </th>
<th title="Hesse error"> Hesse Error </th>
<th title="Minos lower error"> Minos Error- </th>
<th title="Minos upper error"> Minos Error+ </th>
<th title="Lower limit of the parameter"> Limit- </th>
<th title="Upper limit of the parameter"> Limit+ </th>
<th title="Is the parameter fixed in the fit"> Fixed </th>
</tr>
<tr>
<th> 0 </th>
<td> N </td>
<td> 20.82e3 </td>
<td> 0.22e3 </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr>
<th> 1 </th>
<td> mu </td>
<td> 3.126 </td>
<td> 0.019 </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr>
<th> 2 </th>
<td> sigma </td>
<td> 1.349 </td>
<td> 0.015 </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
</table>
```python
draw_chi2fit(3, exp_inv, xmin_exp, xmax_exp,
func_gaussian_alt, minuit_MC_Gauss, N_bins, x3, y3, sigma_y3)
```
## Exponential Fit
```python
N_exp = 10000 # Number of random points produced
x_exp = r.exponential(np.e, N_exp)
exp_bins = 100
binwidth_exp = np.ptp(x_exp) / exp_bins
exp_min, exp_max = np.min(x_exp), np.max(x_exp)
```
```python
def exp_pdf(x, N, tau):
return N * binwidth_exp / tau * np.exp(-x/tau)
```
```python
# Binning data
x4, y4, sy4 = hist_data(x_exp, exp_bins, exp_min, exp_max)
```
```python
# Fitting
chi2_exp = Chi2Regression(exp_pdf, x4, y4, sy4) # Fitting object
chi2_exp.errordef = Minuit.LEAST_SQUARES
minuit_exp = Minuit(chi2_exp,
N = 10000,
tau = 2)
minuit_exp.migrad() # Perform the actual fit
```
<table>
<tr>
<td colspan="2" style="text-align:left" title="Minimum value of function"> FCN = 85.13 </td>
<td colspan="3" style="text-align:center" title="No. of function evaluations in last call and total number"> Nfcn = 38 (38 total) </td>
</tr>
<tr>
<td colspan="2" style="text-align:left" title="Estimated distance to minimum and goal"> EDM = 7.49e-07 (Goal: 0.0002) </td>
<td colspan="3" style="text-align:center" title="No. of gradient evaluations in last call and total number"> </td>
</tr>
<tr>
<td style="text-align:center;background-color:#92CCA6;color:black"> Valid Minimum </td>
<td style="text-align:center;background-color:#92CCA6;color:black"> Valid Parameters </td>
<td colspan="3" style="text-align:center;background-color:#92CCA6;color:black"> No Parameters at limit </td>
</tr>
<tr>
<td colspan="2" style="text-align:center;background-color:#92CCA6;color:black"> Below EDM threshold (goal x 10) </td>
<td colspan="3" style="text-align:center;background-color:#92CCA6;color:black"> Below call limit </td>
</tr>
<tr>
<td style="text-align:center;background-color:#92CCA6;color:black"> Hesse ok </td>
<td style="text-align:center;background-color:#92CCA6;color:black"> Has Covariance </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Is covariance matrix accurate?"> Accurate </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Is covariance matrix positive definite?"> Pos. def. </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Was positive definiteness enforced by Minuit?"> Not forced </td>
</tr>
</table><table>
<tr>
<td></td>
<th title="Variable name"> Name </th>
<th title="Value of parameter"> Value </th>
<th title="Hesse error"> Hesse Error </th>
<th title="Minos lower error"> Minos Error- </th>
<th title="Minos upper error"> Minos Error+ </th>
<th title="Lower limit of the parameter"> Limit- </th>
<th title="Upper limit of the parameter"> Limit+ </th>
<th title="Is the parameter fixed in the fit"> Fixed </th>
</tr>
<tr>
<th> 0 </th>
<td> N </td>
<td> 9.93e3 </td>
<td> 0.10e3 </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr>
<th> 1 </th>
<td> tau </td>
<td> 2.693 </td>
<td> 0.027 </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
</table>
```python
draw_chi2fit(2, x_exp, exp_min, exp_max, exp_pdf,
minuit_exp, exp_bins, x4, y4, sy4)
```
## Power Law Fit
```python
N_pow = 10000 # Number of random points produced
x_pow = r.power(a = 15,
size = N_pow)
pow_bins = 100
binwidth_pow = np.ptp(x_pow) / pow_bins
pow_min, pow_max = np.min(x_pow), np.max(x_pow)
```
```python
def power_pdf(x, N, a, b):
    # Note: N and a only enter through the ratio N/a, so they are degenerate in the fit;
    # this is why their individual uncertainties come out very large below.
    return N * binwidth_pow / a * np.power(x, b)
```
```python
# Binning data
x5, y5, sy5 = hist_data(x_pow, pow_bins, pow_min, pow_max)
```
```python
# Fitting
chi2_pow = Chi2Regression(power_pdf, x5, y5, sy5) # Fitting object
chi2_pow.errordef = Minuit.LEAST_SQUARES
minuit_pow = Minuit(chi2_pow,
N = 10000,
a = 4,
b = 1)
minuit_pow.migrad() # Perform the actual fit
```
<table>
<tr>
<td colspan="2" style="text-align:left" title="Minimum value of function"> FCN = 59.57 </td>
<td colspan="3" style="text-align:center" title="No. of function evaluations in last call and total number"> Nfcn = 129 (129 total) </td>
</tr>
<tr>
<td colspan="2" style="text-align:left" title="Estimated distance to minimum and goal"> EDM = 1.17e-05 (Goal: 0.0002) </td>
<td colspan="3" style="text-align:center" title="No. of gradient evaluations in last call and total number"> </td>
</tr>
<tr>
<td style="text-align:center;background-color:#92CCA6;color:black"> Valid Minimum </td>
<td style="text-align:center;background-color:#92CCA6;color:black"> Valid Parameters </td>
<td colspan="3" style="text-align:center;background-color:#92CCA6;color:black"> No Parameters at limit </td>
</tr>
<tr>
<td colspan="2" style="text-align:center;background-color:#92CCA6;color:black"> Below EDM threshold (goal x 10) </td>
<td colspan="3" style="text-align:center;background-color:#92CCA6;color:black"> Below call limit </td>
</tr>
<tr>
<td style="text-align:center;background-color:#92CCA6;color:black"> Hesse ok </td>
<td style="text-align:center;background-color:#92CCA6;color:black"> Has Covariance </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Is covariance matrix accurate?"> Accurate </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Is covariance matrix positive definite?"> Pos. def. </td>
<td style="text-align:center;background-color:#92CCA6;color:black" title="Was positive definiteness enforced by Minuit?"> Not forced </td>
</tr>
</table><table>
<tr>
<td></td>
<th title="Variable name"> Name </th>
<th title="Value of parameter"> Value </th>
<th title="Hesse error"> Hesse Error </th>
<th title="Minos lower error"> Minos Error- </th>
<th title="Minos upper error"> Minos Error+ </th>
<th title="Lower limit of the parameter"> Limit- </th>
<th title="Upper limit of the parameter"> Limit+ </th>
<th title="Is the parameter fixed in the fit"> Fixed </th>
</tr>
<tr>
<th> 0 </th>
<td> N </td>
<td> 0.11e6 </td>
<td> 0.12e6 </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr>
<th> 1 </th>
<td> a </td>
<td> 0.7 </td>
<td> 0.8 </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr>
<th> 2 </th>
<td> b </td>
<td> 14.24 </td>
<td> 0.16 </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
</table>
```python
draw_chi2fit(3, x_pow, pow_min, pow_max, power_pdf,
minuit_pow, pow_bins, x5, y5, sy5)
```
```python
stats.anderson(x_exp, dist='expon')
```
AndersonResult(statistic=0.8263378287992964, critical_values=array([0.922, 1.078, 1.341, 1.606, 1.957]), significance_level=array([15. , 10. , 5. , 2.5, 1. ]))
```python
# The null hypothesis (exponential distribution) is not rejected at any of the listed
# significance levels, since the test statistic is below all critical values
```
|
037268fc8981222b0a590293e4a37dc3c57d6ef2
| 412,589 |
ipynb
|
Jupyter Notebook
|
Chi2_fitting.ipynb
|
SaimNazir/Statistics
|
8337f94a8c77f03657b46e85d7ecb8b1d78f747b
|
[
"MIT"
] | null | null | null |
Chi2_fitting.ipynb
|
SaimNazir/Statistics
|
8337f94a8c77f03657b46e85d7ecb8b1d78f747b
|
[
"MIT"
] | null | null | null |
Chi2_fitting.ipynb
|
SaimNazir/Statistics
|
8337f94a8c77f03657b46e85d7ecb8b1d78f747b
|
[
"MIT"
] | null | null | null | 340.139324 | 75,232 | 0.902399 | true | 7,171 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.875787 | 0.845942 | 0.740865 |
__label__eng_Latn
| 0.409005 | 0.55961 |
# Day 1 Quiz
<div align='right'><b> 류회성(Hoesung Ryu)</b> </div>
<div align='right'> (skainof23@gmail.com) </div>
> A short quiz on the basics of Python, NumPy, and Pandas.
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li>1. Python</li><li>2. NumPy</li><li>3. Pandas</li><li>4. Reference</li></ul></div>
```python
import os
import sys
import warnings
warnings.filterwarnings(action='ignore')
```
---
<div class="alert alert-success" data-title="">
<h2><i class="fa fa-tasks" aria-hidden="true"></i> Python
</h2>
</div>
**Q1.** You are given a list $x=[1,2,3,4,5,6]$.
Print the third item of list x.
```python
x = [1,2,3,4,5,6]
```
```python
x[2]
```
3
**Q2.** Print the list $x=[1,2,3,4,5,6]$ in reverse order.
```python
x[::-1]
```
[6, 5, 4, 3, 2, 1]
**Q3.** Given a list, iterate over it and display the numbers that are divisible by $5$
- List = [12, 15, 32, 42, 55, 75, 122, 132, 150, 180, 200]
```python
list_ = [12, 15, 32, 42, 55, 75, 122, 132, 150, 180, 200]
for item in list_:
if(item % 5 == 0):
print(item)
```
15
55
75
150
180
200
**Q4.** Print the following pattern:
```
1
1 2
1 2 3
1 2 3 4
1 2 3 4 5
```
```python
lastNumber = 6
for row in range(1, lastNumber):
for column in range(1, row + 1):
print(column, end=' ')
print("")
```
1
1 2
1 2 3
1 2 3 4
1 2 3 4 5
**Q5.** Print the following pattern:
```
5 4 3 2 1
4 3 2 1
3 2 1
2 1
1
```
```python
n = 5
k = 5
for i in range(0,n+1):
for j in range(k-i,0,-1):
print(j,end=' ')
print()
```
5 4 3 2 1
4 3 2 1
3 2 1
2 1
1
**Q6.** The Fibonacci sequence is defined as below:
$$F_n = F_{n-1} + F_{n-2}$$
where $n$ denotes the $n^\text{th}$ item of the Fibonacci sequence. You are given the first three numbers of the Fibonacci sequence as F = [0, 1, 1]. Create a for loop to determine the next 20 numbers of the Fibonacci sequence. Print F with the final 23 numbers.
_Hint: use F.append() to add a new Fibonacci value to the end of the list F._
```python
F = [0, 1, 1]
for n in range(3, 23):
f_n = F[n - 1] + F[n - 2]
F.append(f_n)
print(F)
```
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711]
**Q7.** Given the list $x = [2.0,3.0,5.0,7.0,9.0]$, create a list $Y$ by evaluating $Y(x)$ below for each float in $x$. Print the list $Y$.
$$Y(x) = \frac{(3.0x)^2}{(99x - x^3)} - \frac{1}{x}$$
```python
# Q7
x = [2.0, 3.0, 5.0, 7.0, 9.0]
Y = []
for v in x:
new_val = (3.0 * v) ** 2 / (99 * v - v ** 3) - 1 / v
Y.append(new_val)
# one-liner with list-comprehension:
Y = [(3.0 * v) ** 2 / (99 * v - v ** 3) - 1 / v
for v in x]
print(Y)
```
[-0.31052631578947365, -0.033333333333333326, 0.4081081081081081, 1.1171428571428572, 4.388888888888889]
---
<div class="alert alert-success" data-title="">
<h2><i class="fa fa-tasks" aria-hidden="true"></i> NumPy
</h2>
</div>
```python
import numpy as np
```
**Q1.** Create a null vector of size 10 (★☆☆)
```python
Z = np.zeros(10)
print(Z)
```
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
**Q2.** Reverse the list below (the first element becomes the last) (★☆☆):
```python
x = [1,2,3,4,5]
```
```python
X = [1,2,3,4,5]
```
```python
X[::-1]
```
[5, 4, 3, 2, 1]
**Q3.** Create a vector of size 10 with values ranging from 0 to 1, both excluded (★★☆)
(hint: np.linspace)
```python
Z = np.linspace(0,1,11,endpoint=False)[1:]
print(Z)
```
[0.09090909 0.18181818 0.27272727 0.36363636 0.45454545 0.54545455
0.63636364 0.72727273 0.81818182 0.90909091]
**Q4.** Create a random vector of size 10 and sort it (★★☆)
```python
Z = np.random.random(10)
Z.sort()
print(Z)
```
[0.02889321 0.39602243 0.40008064 0.48197894 0.52795913 0.55275259
0.75936614 0.82002682 0.91805959 0.93233571]
**Q5.** Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)
```python
Z = np.random.random((10,10))
Zmin, Zmax = Z.min(), Z.max()
print(Zmin, Zmax)
```
0.006461933927761954 0.9744015893640688
**Q6.** Create a 3x3x3 array with random values (★☆☆)
```python
Z = np.random.random((3,3,3))
print(Z)
```
[[[0.6896078 0.59597608 0.43508025]
[0.83440615 0.03484723 0.15020937]
[0.60759487 0.83454451 0.72601657]]
[[0.64333068 0.62188361 0.5527411 ]
[0.37278046 0.30690588 0.80008019]
[0.51429488 0.71915612 0.99470295]]
[[0.20987646 0.12009793 0.39928661]
[0.46249193 0.27473153 0.7594266 ]
[0.16051055 0.95553955 0.72917551]]]
**Q7.** Solve the equation $F=M \times a$ for $a$ (hint: np.linalg.solve)
```python
M = np.array([[2,3],[-2,9]])
F = np.array([12.9,12.3])
```
```python
M = np.array([[2,3],[-2,9]])
F = np.array([12.9,12.3])
```
```python
a = np.linalg.solve(M,F)
print(a)
```
[3.3 2.1]
**Q8.** Dot product (matrix multiplication)
Compute the matrix product of the two arrays t1 and t2.
```python
t1 = np.array([[1,2],[2,3]])
t2 = np.array([[3,5],[4,6]])
```
```python
t1 = np.array([[1,2],[2,3]])
t2 = np.array([[3,5],[4,6]])
```
```python
t1 @ t2
```
array([[11, 17],
[18, 28]])
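For 2-D arrays the `@` operator, `np.dot`, and the `.dot` method all compute the same matrix product; a short sketch:
```python
# Equivalent spellings of the same matrix multiplication
print(np.dot(t1, t2))
print(t1.dot(t2))
```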
**Q9.** Consider the following four $x,y$ data points.
|$x$ |$y$|
|--|--|
|0.0 | 1.1 |
|0.7 | 2.99|
|1.7 | 5.69|
|2.1 |6.77 |
where an equation of a line is defined as
$$y = mx + b$$
The matrix equation for fitting a line is defined as
\begin{equation}
\begin{bmatrix}
x_1 & 1.0 \\
x_2 & 1.0 \\
\vdots & \vdots \\
x_n & 1.0 \\
\end{bmatrix}\begin{bmatrix}
m \\
b
\end{bmatrix}
= \begin{bmatrix}
y_1 \\
y_2 \\
\vdots \\
y_n \\
\end{bmatrix}
\end{equation}
where the first data point is $(x_1, y_1)$, the second is $(x_2, y_2)$, and so forth.
```python
x = np.array([0.0, 0.7, 1.7, 2.1])
y = np.array([1.1, 2.99, 5.69, 6.77])
A = np.vstack([x, np.ones(len(x))]).T
# or
A = np.array([x, np.ones(len(x))]).T
c, residuals, rank, sing = np.linalg.lstsq(A, y, rcond=None)
print(c)
print(residuals)
print(rank)
```
[2.7 1.1]
[1.42981039e-30]
2
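For a straight-line fit, `np.polyfit` with degree 1 should return the same slope and intercept; a hedged cross-check:
```python
# Degree-1 polynomial fit returns the coefficients [m, b]
m, b = np.polyfit(x, y, 1)
print(m, b)  # expected: 2.7 1.1
```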
---
<div class="alert alert-success" data-title="">
<h2><i class="fa fa-tasks" aria-hidden="true"></i> Pandas
</h2>
</div>
**Q1.** Import pandas under the alias `pd`.
```python
import pandas as pd
```
**Q2.** Print the version of pandas that has been imported.
```python
pd.__version__
```
'1.0.3'
**Q3.** Print out all the version information of the libraries that are required by the pandas library.
```python
pd.show_versions()
```
INSTALLED VERSIONS
------------------
commit : None
python : 3.6.10.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : AMD64 Family 23 Model 1 Stepping 1, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None
pandas : 1.0.3
numpy : 1.16.6
pytz : 2020.1
dateutil : 2.8.1
pip : 20.0.2
setuptools : 47.1.1.post20200604
Cython : 0.29.21
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.5.1
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.13.0
pandas_datareader: None
bs4 : 4.6.0
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.5.1
matplotlib : 3.1.3
numexpr : 2.7.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
xlsxwriter : None
numba : None
Note: remember to import numpy using:
```python
import numpy as np
```
Consider the following Python dictionary `data` and Python list `labels`:
``` python
data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
```
(This is just some meaningless data I made up with the theme of animals and trips to a vet.)
**Q4.** Create a DataFrame `df` from this dictionary `data` which has the index `labels`.
```python
import numpy as np
data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
df = pd.DataFrame(data, index=labels)
```
**Q5.** Display a summary of the basic information about this DataFrame and its data (*hint: there is a single method that can be called on the DataFrame*).
```python
df.info()
# ...or...
df.describe()
```
<class 'pandas.core.frame.DataFrame'>
Index: 10 entries, a to j
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 animal 10 non-null object
1 age 8 non-null float64
2 visits 10 non-null int64
3 priority 10 non-null object
dtypes: float64(1), int64(1), object(2)
memory usage: 400.0+ bytes
|       | age      | visits    |
|-------|----------|-----------|
| count | 8.000000 | 10.000000 |
| mean  | 3.437500 | 1.900000  |
| std   | 2.007797 | 0.875595  |
| min   | 0.500000 | 1.000000  |
| 25%   | 2.375000 | 1.000000  |
| 50%   | 3.000000 | 2.000000  |
| 75%   | 4.625000 | 2.750000  |
| max   | 7.000000 | 3.000000  |
**Q6.** Return the first 3 rows of the DataFrame `df`.
```python
df.iloc[:3]
# or equivalently
df.head(3)
```
|   | animal | age | visits | priority |
|---|--------|-----|--------|----------|
| a | cat    | 2.5 | 1      | yes      |
| b | cat    | 3.0 | 3      | yes      |
| c | snake  | 0.5 | 2      | no       |
**Q7.** Select just the 'animal' and 'age' columns from the DataFrame `df`.
```python
df.loc[:, ['animal', 'age']]
# or
df[['animal', 'age']]
```
|   | animal | age |
|---|--------|-----|
| a | cat    | 2.5 |
| b | cat    | 3.0 |
| c | snake  | 0.5 |
| d | dog    | NaN |
| e | dog    | 5.0 |
| f | cat    | 2.0 |
| g | snake  | 4.5 |
| h | cat    | NaN |
| i | dog    | 7.0 |
| j | dog    | 3.0 |
**Q8.** Select the data in rows `[3, 4, 8]` *and* in columns `['animal', 'age']`.
```python
df.loc[df.index[[3, 4, 8]], ['animal', 'age']]
```
|   | animal | age |
|---|--------|-----|
| d | dog    | NaN |
| e | dog    | 5.0 |
| i | dog    | 7.0 |
**Q9.** Select only the rows where the number of visits is greater than 3.
```python
df[df['visits'] > 3]
```
Empty DataFrame
Columns: [animal, age, visits, priority]
Index: []
**Q10.** Select the rows where the age is missing, i.e. it is `NaN`.
```python
df[df['age'].isnull()]
```
|   | animal | age | visits | priority |
|---|--------|-----|--------|----------|
| d | dog    | NaN | 3      | yes      |
| h | cat    | NaN | 1      | yes      |
**Q11.** Select the rows where the animal is a cat *and* the age is less than 3.
```python
df[(df['animal'] == 'cat') & (df['age'] < 3)]
```
|   | animal | age | visits | priority |
|---|--------|-----|--------|----------|
| a | cat    | 2.5 | 1      | yes      |
| f | cat    | 2.0 | 3      | no       |
**Q12.** Select the rows where the age is between 2 and 4 (inclusive).
```python
df[df['age'].between(2, 4)]
```
|   | animal | age | visits | priority |
|---|--------|-----|--------|----------|
| a | cat    | 2.5 | 1      | yes      |
| b | cat    | 3.0 | 3      | yes      |
| f | cat    | 2.0 | 3      | no       |
| j | dog    | 3.0 | 1      | no       |
**Q13.** Change the age in row 'f' to 1.5.
```python
df.loc['f', 'age'] = 1.5
df
```
|   | animal | age | visits | priority |
|---|--------|-----|--------|----------|
| a | cat    | 2.5 | 1      | yes      |
| b | cat    | 3.0 | 3      | yes      |
| c | snake  | 0.5 | 2      | no       |
| d | dog    | NaN | 3      | yes      |
| e | dog    | 5.0 | 2      | no       |
| f | cat    | 1.5 | 3      | no       |
| g | snake  | 4.5 | 1      | no       |
| h | cat    | NaN | 1      | yes      |
| i | dog    | 7.0 | 2      | no       |
| j | dog    | 3.0 | 1      | no       |
**Q14.** Calculate the sum of all visits in `df` (i.e. the total number of visits).
```python
df['visits'].sum()
```
19
**Q15.** Calculate the mean age for each different animal in `df`.
```python
df.groupby('animal')['age'].mean()
```
animal
cat 2.333333
dog 5.000000
snake 2.500000
Name: age, dtype: float64
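An equivalent formulation (sketch, not part of the original answer) uses `pivot_table`:
```python
# Same per-animal mean age via a pivot table
df.pivot_table(values='age', index='animal', aggfunc='mean')
```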
**Q16.** Append a new row 'k' to `df` with your choice of values for each column. Then delete that row to return the original DataFrame.
```python
df.loc['k'] = ['dog', 5.5, 2, 'no']  # values must follow the column order: animal, age, visits, priority
# and then deleting the new row...
df = df.drop('k')
df
```
|   | animal | age | visits | priority |
|---|--------|-----|--------|----------|
| a | cat    | 2.5 | 1      | yes      |
| b | cat    | 3   | 3      | yes      |
| c | snake  | 0.5 | 2      | no       |
| d | dog    | NaN | 3      | yes      |
| e | dog    | 5   | 2      | no       |
| f | cat    | 1.5 | 3      | no       |
| g | snake  | 4.5 | 1      | no       |
| h | cat    | NaN | 1      | yes      |
| i | dog    | 7   | 2      | no       |
| j | dog    | 3   | 1      | no       |
**Q17.** Count the number of each type of animal in `df`.
```python
df['animal'].value_counts()
```
dog 4
cat 4
snake 2
Name: animal, dtype: int64
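`groupby(...).size()` gives the same counts, just ordered by the group key rather than by frequency; a sketch:
```python
# Same counts via groupby
df.groupby('animal').size()
```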
**Q18.** Sort `df` first by the values in the 'age' column in *descending* order, then by the values in the 'visits' column in *ascending* order (so row `i` should be first, and row `d` should be last).
```python
df.sort_values(by=['age', 'visits'], ascending=[False, True])
```
|   | animal | age | visits | priority |
|---|--------|-----|--------|----------|
| i | dog    | 7   | 2      | no       |
| e | dog    | 5   | 2      | no       |
| g | snake  | 4.5 | 1      | no       |
| j | dog    | 3   | 1      | no       |
| b | cat    | 3   | 3      | yes      |
| a | cat    | 2.5 | 1      | yes      |
| f | cat    | 1.5 | 3      | no       |
| c | snake  | 0.5 | 2      | no       |
| h | cat    | NaN | 1      | yes      |
| d | dog    | NaN | 3      | yes      |
**Q19.** The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of boolean values: 'yes' should be `True` and 'no' should be `False`.
```python
df['priority'] = df['priority'].map({'yes': True, 'no': False})
df
```
|   | animal | age | visits | priority |
|---|--------|-----|--------|----------|
| a | cat    | 2.5 | 1      | True     |
| b | cat    | 3   | 3      | True     |
| c | snake  | 0.5 | 2      | False    |
| d | dog    | NaN | 3      | True     |
| e | dog    | 5   | 2      | False    |
| f | cat    | 1.5 | 3      | False    |
| g | snake  | 4.5 | 1      | False    |
| h | cat    | NaN | 1      | True     |
| i | dog    | 7   | 2      | False    |
| j | dog    | 3   | 1      | False    |
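If the column still held the original 'yes'/'no' strings, a plain comparison would do the same job in one step; a sketch under that assumption:
```python
# Equivalent one-liner, assuming 'priority' still contains 'yes'/'no' strings
df['priority'] = df['priority'] == 'yes'
```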
**Q20.** In the 'animal' column, change the 'snake' entries to 'python'.
```python
df['animal'] = df['animal'].replace('snake', 'python')
```
```python
df
```
|   | animal | age | visits | priority |
|---|--------|-----|--------|----------|
| a | cat    | 2.5 | 1      | True     |
| b | cat    | 3   | 3      | True     |
| c | python | 0.5 | 2      | False    |
| d | dog    | NaN | 3      | True     |
| e | dog    | 5   | 2      | False    |
| f | cat    | 1.5 | 3      | False    |
| g | python | 4.5 | 1      | False    |
| h | cat    | NaN | 1      | True     |
| i | dog    | 7   | 2      | False    |
| j | dog    | 3   | 1      | False    |
## Reference
- https://rfriend.tistory.com/346
```python
a,*args,c=range(10)
```
```python
a=[]
a[0:2]=range(3)
```
```python
a
```
[0, 1, 2]
```python
args
```
[1, 2, 3, 4, 5, 6, 7, 8]
```python
from pathlib import Path
dataset = 'wiki_images'
datasets_root = Path('/path/to/datasets/')
train_path = datasets_root / dataset / 'train'
test_path = datasets_root / dataset / 'test'
```
```python
print(train_path)
```
/path/to/datasets/wiki_images/train
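`Path` objects also expose the usual filename pieces and checks; a small sketch (the file name below is made up, so `exists()` would be `False` on this machine):
```python
sample = train_path / 'cat' / 'img_001.jpg'  # hypothetical file under train_path
print(sample.suffix)    # '.jpg'
print(sample.stem)      # 'img_001'
print(sample.exists())  # False unless the path really exists
```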
```python
s="Hello World"
for c in s:
print(c)
```
H
e
l
l
o
W
o
r
l
d
```python
z=[]
y=list()
z,y
z==y
```
True
```python
```
```python
t="The quick brown fox jumps over the lazy dog"
print('The Phrase "{}" has {} words'.format(t, len(t.split())))
for w in t.split():
print(w)
l=t.split()
l+=[2]
l+= [9872.6782]
l
print(l, l[-4])
```
The Phrase "The quick brown fox jumps over the lazy dog" has 9 words
The
quick
brown
fox
jumps
over
the
lazy
dog
['The', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog', 2, 9872.6782] lazy
```python
def fib(n):
'''
Calculates the n'th Fibonacci series term for a given 'n'
'''
# Hard code initial terms for 0th and 1st
prev = 1
pprev = 0
for i in range(2,n+1):
fib = pprev + prev
pprev = prev
prev = fib
print('The Fibonnaci term at [index, fib] is [{}, {}]'.format(i,fib))
return(fib)
```
```python
fib(5)
```
The Fibonnaci term at [index, fib] is [2, 1]
The Fibonnaci term at [index, fib] is [3, 2]
The Fibonnaci term at [index, fib] is [4, 3]
The Fibonnaci term at [index, fib] is [5, 5]
5
```python
def fib2(n):
'''
Calculates the n'th Fibonacci series term for a given 'n'
'''
# Hard code initial terms for 0th and 1st
ser=[]
prev = 1
pprev = 0
for i in range(2,n+1):
fib = pprev + prev
pprev = prev
prev = fib
print('The Fibonnaci term at [index, fib] is [{}, {}]'.format(i,fib))
ser += [fib]
return(ser)
```
```python
f=fib2(22)
```
The Fibonnaci term at [index, fib] is [2, 1]
The Fibonnaci term at [index, fib] is [3, 2]
The Fibonnaci term at [index, fib] is [4, 3]
The Fibonnaci term at [index, fib] is [5, 5]
The Fibonnaci term at [index, fib] is [6, 8]
The Fibonnaci term at [index, fib] is [7, 13]
The Fibonnaci term at [index, fib] is [8, 21]
The Fibonnaci term at [index, fib] is [9, 34]
The Fibonnaci term at [index, fib] is [10, 55]
The Fibonnaci term at [index, fib] is [11, 89]
The Fibonnaci term at [index, fib] is [12, 144]
The Fibonnaci term at [index, fib] is [13, 233]
The Fibonnaci term at [index, fib] is [14, 377]
The Fibonnaci term at [index, fib] is [15, 610]
The Fibonnaci term at [index, fib] is [16, 987]
The Fibonnaci term at [index, fib] is [17, 1597]
The Fibonnaci term at [index, fib] is [18, 2584]
The Fibonnaci term at [index, fib] is [19, 4181]
The Fibonnaci term at [index, fib] is [20, 6765]
The Fibonnaci term at [index, fib] is [21, 10946]
The Fibonnaci term at [index, fib] is [22, 17711]
```python
import copy as c
l=t.split()
cop=c.copy(l)
```
```python
d={0:1,2:1,3:1,4:0}
l
```
['The', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog']
```python
d.update({1:1})
```
```python
d['dogspeak']=l
```
```python
# a plain `if` filter is needed here; `e if cond` without `else` is invalid in a comprehension
for i, p in enumerate([e for e in f if sym.isprime(e)]):
    print(i, p)
```
```python
sym.isprime(f[7])
```
False
```python
[ e for e in f if sym.isprime(e)]
```
[2, 3, 5, 13, 89, 233, 1597]
```python
import sympy as sym
def bulk_prime_check(l):
return [sym.isprime(e) for e in l]
```
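A quick usage check on a small hand-picked list (added sketch):
```python
# Each entry is True exactly when the corresponding number is prime
print(bulk_prime_check([2, 4, 7, 9, 13]))  # [True, False, True, False, True]
```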
```python
f = fib2(234)
g = bulk_prime_check(f)  # reuse f rather than recomputing fib2(234) a second time
```
The Fibonnaci term at [index, fib] is [2, 1]
The Fibonnaci term at [index, fib] is [3, 2]
The Fibonnaci term at [index, fib] is [4, 3]
The Fibonnaci term at [index, fib] is [5, 5]
The Fibonnaci term at [index, fib] is [6, 8]
The Fibonnaci term at [index, fib] is [7, 13]
The Fibonnaci term at [index, fib] is [8, 21]
The Fibonnaci term at [index, fib] is [9, 34]
The Fibonnaci term at [index, fib] is [10, 55]
The Fibonnaci term at [index, fib] is [11, 89]
The Fibonnaci term at [index, fib] is [12, 144]
The Fibonnaci term at [index, fib] is [13, 233]
The Fibonnaci term at [index, fib] is [14, 377]
The Fibonnaci term at [index, fib] is [15, 610]
The Fibonnaci term at [index, fib] is [16, 987]
The Fibonnaci term at [index, fib] is [17, 1597]
The Fibonnaci term at [index, fib] is [18, 2584]
The Fibonnaci term at [index, fib] is [19, 4181]
The Fibonnaci term at [index, fib] is [20, 6765]
The Fibonnaci term at [index, fib] is [21, 10946]
The Fibonnaci term at [index, fib] is [22, 17711]
The Fibonnaci term at [index, fib] is [23, 28657]
The Fibonnaci term at [index, fib] is [24, 46368]
The Fibonnaci term at [index, fib] is [25, 75025]
The Fibonnaci term at [index, fib] is [26, 121393]
The Fibonnaci term at [index, fib] is [27, 196418]
The Fibonnaci term at [index, fib] is [28, 317811]
The Fibonnaci term at [index, fib] is [29, 514229]
The Fibonnaci term at [index, fib] is [30, 832040]
The Fibonnaci term at [index, fib] is [31, 1346269]
The Fibonnaci term at [index, fib] is [32, 2178309]
The Fibonnaci term at [index, fib] is [33, 3524578]
The Fibonnaci term at [index, fib] is [34, 5702887]
The Fibonnaci term at [index, fib] is [35, 9227465]
The Fibonnaci term at [index, fib] is [36, 14930352]
The Fibonnaci term at [index, fib] is [37, 24157817]
The Fibonnaci term at [index, fib] is [38, 39088169]
The Fibonnaci term at [index, fib] is [39, 63245986]
The Fibonnaci term at [index, fib] is [40, 102334155]
The Fibonnaci term at [index, fib] is [41, 165580141]
The Fibonnaci term at [index, fib] is [42, 267914296]
The Fibonnaci term at [index, fib] is [43, 433494437]
The Fibonnaci term at [index, fib] is [44, 701408733]
The Fibonnaci term at [index, fib] is [45, 1134903170]
The Fibonnaci term at [index, fib] is [46, 1836311903]
The Fibonnaci term at [index, fib] is [47, 2971215073]
The Fibonnaci term at [index, fib] is [48, 4807526976]
The Fibonnaci term at [index, fib] is [49, 7778742049]
The Fibonnaci term at [index, fib] is [50, 12586269025]
The Fibonnaci term at [index, fib] is [51, 20365011074]
The Fibonnaci term at [index, fib] is [52, 32951280099]
The Fibonnaci term at [index, fib] is [53, 53316291173]
The Fibonnaci term at [index, fib] is [54, 86267571272]
The Fibonnaci term at [index, fib] is [55, 139583862445]
The Fibonnaci term at [index, fib] is [56, 225851433717]
The Fibonnaci term at [index, fib] is [57, 365435296162]
The Fibonnaci term at [index, fib] is [58, 591286729879]
The Fibonnaci term at [index, fib] is [59, 956722026041]
The Fibonnaci term at [index, fib] is [60, 1548008755920]
The Fibonnaci term at [index, fib] is [61, 2504730781961]
The Fibonnaci term at [index, fib] is [62, 4052739537881]
The Fibonnaci term at [index, fib] is [63, 6557470319842]
The Fibonnaci term at [index, fib] is [64, 10610209857723]
The Fibonnaci term at [index, fib] is [65, 17167680177565]
The Fibonnaci term at [index, fib] is [66, 27777890035288]
The Fibonnaci term at [index, fib] is [67, 44945570212853]
The Fibonnaci term at [index, fib] is [68, 72723460248141]
The Fibonnaci term at [index, fib] is [69, 117669030460994]
The Fibonnaci term at [index, fib] is [70, 190392490709135]
The Fibonnaci term at [index, fib] is [71, 308061521170129]
The Fibonnaci term at [index, fib] is [72, 498454011879264]
The Fibonnaci term at [index, fib] is [73, 806515533049393]
The Fibonnaci term at [index, fib] is [74, 1304969544928657]
The Fibonnaci term at [index, fib] is [75, 2111485077978050]
The Fibonnaci term at [index, fib] is [76, 3416454622906707]
The Fibonnaci term at [index, fib] is [77, 5527939700884757]
The Fibonnaci term at [index, fib] is [78, 8944394323791464]
The Fibonnaci term at [index, fib] is [79, 14472334024676221]
The Fibonnaci term at [index, fib] is [80, 23416728348467685]
The Fibonnaci term at [index, fib] is [81, 37889062373143906]
The Fibonnaci term at [index, fib] is [82, 61305790721611591]
The Fibonnaci term at [index, fib] is [83, 99194853094755497]
The Fibonnaci term at [index, fib] is [84, 160500643816367088]
The Fibonnaci term at [index, fib] is [85, 259695496911122585]
The Fibonnaci term at [index, fib] is [86, 420196140727489673]
The Fibonnaci term at [index, fib] is [87, 679891637638612258]
The Fibonnaci term at [index, fib] is [88, 1100087778366101931]
The Fibonnaci term at [index, fib] is [89, 1779979416004714189]
The Fibonnaci term at [index, fib] is [90, 2880067194370816120]
The Fibonnaci term at [index, fib] is [91, 4660046610375530309]
The Fibonnaci term at [index, fib] is [92, 7540113804746346429]
The Fibonnaci term at [index, fib] is [93, 12200160415121876738]
The Fibonnaci term at [index, fib] is [94, 19740274219868223167]
The Fibonnaci term at [index, fib] is [95, 31940434634990099905]
The Fibonnaci term at [index, fib] is [96, 51680708854858323072]
The Fibonnaci term at [index, fib] is [97, 83621143489848422977]
The Fibonnaci term at [index, fib] is [98, 135301852344706746049]
The Fibonnaci term at [index, fib] is [99, 218922995834555169026]
The Fibonnaci term at [index, fib] is [100, 354224848179261915075]
The Fibonnaci term at [index, fib] is [101, 573147844013817084101]
The Fibonnaci term at [index, fib] is [102, 927372692193078999176]
The Fibonnaci term at [index, fib] is [103, 1500520536206896083277]
The Fibonnaci term at [index, fib] is [104, 2427893228399975082453]
The Fibonnaci term at [index, fib] is [105, 3928413764606871165730]
The Fibonnaci term at [index, fib] is [106, 6356306993006846248183]
The Fibonnaci term at [index, fib] is [107, 10284720757613717413913]
The Fibonnaci term at [index, fib] is [108, 16641027750620563662096]
The Fibonnaci term at [index, fib] is [109, 26925748508234281076009]
The Fibonnaci term at [index, fib] is [110, 43566776258854844738105]
The Fibonnaci term at [index, fib] is [111, 70492524767089125814114]
The Fibonnaci term at [index, fib] is [112, 114059301025943970552219]
The Fibonnaci term at [index, fib] is [113, 184551825793033096366333]
The Fibonnaci term at [index, fib] is [114, 298611126818977066918552]
The Fibonnaci term at [index, fib] is [115, 483162952612010163284885]
The Fibonnaci term at [index, fib] is [116, 781774079430987230203437]
The Fibonnaci term at [index, fib] is [117, 1264937032042997393488322]
The Fibonnaci term at [index, fib] is [118, 2046711111473984623691759]
The Fibonnaci term at [index, fib] is [119, 3311648143516982017180081]
The Fibonnaci term at [index, fib] is [120, 5358359254990966640871840]
The Fibonnaci term at [index, fib] is [121, 8670007398507948658051921]
The Fibonnaci term at [index, fib] is [122, 14028366653498915298923761]
The Fibonnaci term at [index, fib] is [123, 22698374052006863956975682]
The Fibonnaci term at [index, fib] is [124, 36726740705505779255899443]
The Fibonnaci term at [index, fib] is [125, 59425114757512643212875125]
The Fibonnaci term at [index, fib] is [126, 96151855463018422468774568]
The Fibonnaci term at [index, fib] is [127, 155576970220531065681649693]
The Fibonnaci term at [index, fib] is [128, 251728825683549488150424261]
The Fibonnaci term at [index, fib] is [129, 407305795904080553832073954]
The Fibonnaci term at [index, fib] is [130, 659034621587630041982498215]
The Fibonnaci term at [index, fib] is [131, 1066340417491710595814572169]
The Fibonnaci term at [index, fib] is [132, 1725375039079340637797070384]
The Fibonnaci term at [index, fib] is [133, 2791715456571051233611642553]
The Fibonnaci term at [index, fib] is [134, 4517090495650391871408712937]
The Fibonnaci term at [index, fib] is [135, 7308805952221443105020355490]
The Fibonnaci term at [index, fib] is [136, 11825896447871834976429068427]
The Fibonnaci term at [index, fib] is [137, 19134702400093278081449423917]
The Fibonnaci term at [index, fib] is [138, 30960598847965113057878492344]
The Fibonnaci term at [index, fib] is [139, 50095301248058391139327916261]
The Fibonnaci term at [index, fib] is [140, 81055900096023504197206408605]
The Fibonnaci term at [index, fib] is [141, 131151201344081895336534324866]
The Fibonnaci term at [index, fib] is [142, 212207101440105399533740733471]
The Fibonnaci term at [index, fib] is [143, 343358302784187294870275058337]
The Fibonnaci term at [index, fib] is [144, 555565404224292694404015791808]
The Fibonnaci term at [index, fib] is [145, 898923707008479989274290850145]
The Fibonnaci term at [index, fib] is [146, 1454489111232772683678306641953]
The Fibonnaci term at [index, fib] is [147, 2353412818241252672952597492098]
The Fibonnaci term at [index, fib] is [148, 3807901929474025356630904134051]
The Fibonnaci term at [index, fib] is [149, 6161314747715278029583501626149]
The Fibonnaci term at [index, fib] is [150, 9969216677189303386214405760200]
The Fibonnaci term at [index, fib] is [151, 16130531424904581415797907386349]
The Fibonnaci term at [index, fib] is [152, 26099748102093884802012313146549]
The Fibonnaci term at [index, fib] is [153, 42230279526998466217810220532898]
The Fibonnaci term at [index, fib] is [154, 68330027629092351019822533679447]
The Fibonnaci term at [index, fib] is [155, 110560307156090817237632754212345]
The Fibonnaci term at [index, fib] is [156, 178890334785183168257455287891792]
The Fibonnaci term at [index, fib] is [157, 289450641941273985495088042104137]
The Fibonnaci term at [index, fib] is [158, 468340976726457153752543329995929]
The Fibonnaci term at [index, fib] is [159, 757791618667731139247631372100066]
The Fibonnaci term at [index, fib] is [160, 1226132595394188293000174702095995]
The Fibonnaci term at [index, fib] is [161, 1983924214061919432247806074196061]
The Fibonnaci term at [index, fib] is [162, 3210056809456107725247980776292056]
The Fibonnaci term at [index, fib] is [163, 5193981023518027157495786850488117]
The Fibonnaci term at [index, fib] is [164, 8404037832974134882743767626780173]
The Fibonnaci term at [index, fib] is [165, 13598018856492162040239554477268290]
The Fibonnaci term at [index, fib] is [166, 22002056689466296922983322104048463]
The Fibonnaci term at [index, fib] is [167, 35600075545958458963222876581316753]
The Fibonnaci term at [index, fib] is [168, 57602132235424755886206198685365216]
The Fibonnaci term at [index, fib] is [169, 93202207781383214849429075266681969]
The Fibonnaci term at [index, fib] is [170, 150804340016807970735635273952047185]
The Fibonnaci term at [index, fib] is [171, 244006547798191185585064349218729154]
The Fibonnaci term at [index, fib] is [172, 394810887814999156320699623170776339]
The Fibonnaci term at [index, fib] is [173, 638817435613190341905763972389505493]
The Fibonnaci term at [index, fib] is [174, 1033628323428189498226463595560281832]
The Fibonnaci term at [index, fib] is [175, 1672445759041379840132227567949787325]
The Fibonnaci term at [index, fib] is [176, 2706074082469569338358691163510069157]
The Fibonnaci term at [index, fib] is [177, 4378519841510949178490918731459856482]
The Fibonnaci term at [index, fib] is [178, 7084593923980518516849609894969925639]
The Fibonnaci term at [index, fib] is [179, 11463113765491467695340528626429782121]
The Fibonnaci term at [index, fib] is [180, 18547707689471986212190138521399707760]
The Fibonnaci term at [index, fib] is [181, 30010821454963453907530667147829489881]
The Fibonnaci term at [index, fib] is [182, 48558529144435440119720805669229197641]
The Fibonnaci term at [index, fib] is [183, 78569350599398894027251472817058687522]
The Fibonnaci term at [index, fib] is [184, 127127879743834334146972278486287885163]
The Fibonnaci term at [index, fib] is [185, 205697230343233228174223751303346572685]
The Fibonnaci term at [index, fib] is [186, 332825110087067562321196029789634457848]
The Fibonnaci term at [index, fib] is [187, 538522340430300790495419781092981030533]
The Fibonnaci term at [index, fib] is [188, 871347450517368352816615810882615488381]
The Fibonnaci term at [index, fib] is [189, 1409869790947669143312035591975596518914]
The Fibonnaci term at [index, fib] is [190, 2281217241465037496128651402858212007295]
The Fibonnaci term at [index, fib] is [191, 3691087032412706639440686994833808526209]
The Fibonnaci term at [index, fib] is [192, 5972304273877744135569338397692020533504]
The Fibonnaci term at [index, fib] is [193, 9663391306290450775010025392525829059713]
The Fibonnaci term at [index, fib] is [194, 15635695580168194910579363790217849593217]
The Fibonnaci term at [index, fib] is [195, 25299086886458645685589389182743678652930]
The Fibonnaci term at [index, fib] is [196, 40934782466626840596168752972961528246147]
The Fibonnaci term at [index, fib] is [197, 66233869353085486281758142155705206899077]
The Fibonnaci term at [index, fib] is [198, 107168651819712326877926895128666735145224]
The Fibonnaci term at [index, fib] is [199, 173402521172797813159685037284371942044301]
The Fibonnaci term at [index, fib] is [200, 280571172992510140037611932413038677189525]
The Fibonnaci term at [index, fib] is [201, 453973694165307953197296969697410619233826]
The Fibonnaci term at [index, fib] is [202, 734544867157818093234908902110449296423351]
The Fibonnaci term at [index, fib] is [203, 1188518561323126046432205871807859915657177]
The Fibonnaci term at [index, fib] is [204, 1923063428480944139667114773918309212080528]
The Fibonnaci term at [index, fib] is [205, 3111581989804070186099320645726169127737705]
The Fibonnaci term at [index, fib] is [206, 5034645418285014325766435419644478339818233]
The Fibonnaci term at [index, fib] is [207, 8146227408089084511865756065370647467555938]
The Fibonnaci term at [index, fib] is [208, 13180872826374098837632191485015125807374171]
The Fibonnaci term at [index, fib] is [209, 21327100234463183349497947550385773274930109]
The Fibonnaci term at [index, fib] is [210, 34507973060837282187130139035400899082304280]
The Fibonnaci term at [index, fib] is [211, 55835073295300465536628086585786672357234389]
The Fibonnaci term at [index, fib] is [212, 90343046356137747723758225621187571439538669]
The Fibonnaci term at [index, fib] is [213, 146178119651438213260386312206974243796773058]
The Fibonnaci term at [index, fib] is [214, 236521166007575960984144537828161815236311727]
The Fibonnaci term at [index, fib] is [215, 382699285659014174244530850035136059033084785]
The Fibonnaci term at [index, fib] is [216, 619220451666590135228675387863297874269396512]
The Fibonnaci term at [index, fib] is [217, 1001919737325604309473206237898433933302481297]
The Fibonnaci term at [index, fib] is [218, 1621140188992194444701881625761731807571877809]
The Fibonnaci term at [index, fib] is [219, 2623059926317798754175087863660165740874359106]
The Fibonnaci term at [index, fib] is [220, 4244200115309993198876969489421897548446236915]
The Fibonnaci term at [index, fib] is [221, 6867260041627791953052057353082063289320596021]
The Fibonnaci term at [index, fib] is [222, 11111460156937785151929026842503960837766832936]
The Fibonnaci term at [index, fib] is [223, 17978720198565577104981084195586024127087428957]
The Fibonnaci term at [index, fib] is [224, 29090180355503362256910111038089984964854261893]
The Fibonnaci term at [index, fib] is [225, 47068900554068939361891195233676009091941690850]
The Fibonnaci term at [index, fib] is [226, 76159080909572301618801306271765994056795952743]
The Fibonnaci term at [index, fib] is [227, 123227981463641240980692501505442003148737643593]
The Fibonnaci term at [index, fib] is [228, 199387062373213542599493807777207997205533596336]
The Fibonnaci term at [index, fib] is [229, 322615043836854783580186309282650000354271239929]
The Fibonnaci term at [index, fib] is [230, 522002106210068326179680117059857997559804836265]
The Fibonnaci term at [index, fib] is [231, 844617150046923109759866426342507997914076076194]
The Fibonnaci term at [index, fib] is [232, 1366619256256991435939546543402365995473880912459]
The Fibonnaci term at [index, fib] is [233, 2211236406303914545699412969744873993387956988653]
The Fibonnaci term at [index, fib] is [234, 3577855662560905981638959513147239988861837901112]
```python
for i, e in enumerate(f):
print('Term {} is {} with Prime : {}'.format(i,e,g[i]))
```
Term 0 is 1 with Prime : False
Term 1 is 2 with Prime : True
Term 2 is 3 with Prime : True
Term 3 is 5 with Prime : True
Term 4 is 8 with Prime : False
Term 5 is 13 with Prime : True
Term 6 is 21 with Prime : False
Term 7 is 34 with Prime : False
Term 8 is 55 with Prime : False
Term 9 is 89 with Prime : True
Term 10 is 144 with Prime : False
Term 11 is 233 with Prime : True
Term 12 is 377 with Prime : False
Term 13 is 610 with Prime : False
Term 14 is 987 with Prime : False
Term 15 is 1597 with Prime : True
Term 16 is 2584 with Prime : False
Term 17 is 4181 with Prime : False
Term 18 is 6765 with Prime : False
Term 19 is 10946 with Prime : False
Term 20 is 17711 with Prime : False
Term 21 is 28657 with Prime : True
Term 22 is 46368 with Prime : False
Term 23 is 75025 with Prime : False
Term 24 is 121393 with Prime : False
Term 25 is 196418 with Prime : False
Term 26 is 317811 with Prime : False
Term 27 is 514229 with Prime : True
Term 28 is 832040 with Prime : False
Term 29 is 1346269 with Prime : False
Term 30 is 2178309 with Prime : False
Term 31 is 3524578 with Prime : False
Term 32 is 5702887 with Prime : False
Term 33 is 9227465 with Prime : False
Term 34 is 14930352 with Prime : False
Term 35 is 24157817 with Prime : False
Term 36 is 39088169 with Prime : False
Term 37 is 63245986 with Prime : False
Term 38 is 102334155 with Prime : False
Term 39 is 165580141 with Prime : False
Term 40 is 267914296 with Prime : False
Term 41 is 433494437 with Prime : True
Term 42 is 701408733 with Prime : False
Term 43 is 1134903170 with Prime : False
Term 44 is 1836311903 with Prime : False
Term 45 is 2971215073 with Prime : True
Term 46 is 4807526976 with Prime : False
Term 47 is 7778742049 with Prime : False
Term 48 is 12586269025 with Prime : False
Term 49 is 20365011074 with Prime : False
Term 50 is 32951280099 with Prime : False
Term 51 is 53316291173 with Prime : False
Term 52 is 86267571272 with Prime : False
Term 53 is 139583862445 with Prime : False
Term 54 is 225851433717 with Prime : False
Term 55 is 365435296162 with Prime : False
Term 56 is 591286729879 with Prime : False
Term 57 is 956722026041 with Prime : False
Term 58 is 1548008755920 with Prime : False
Term 59 is 2504730781961 with Prime : False
Term 60 is 4052739537881 with Prime : False
Term 61 is 6557470319842 with Prime : False
Term 62 is 10610209857723 with Prime : False
Term 63 is 17167680177565 with Prime : False
Term 64 is 27777890035288 with Prime : False
Term 65 is 44945570212853 with Prime : False
Term 66 is 72723460248141 with Prime : False
Term 67 is 117669030460994 with Prime : False
Term 68 is 190392490709135 with Prime : False
Term 69 is 308061521170129 with Prime : False
Term 70 is 498454011879264 with Prime : False
Term 71 is 806515533049393 with Prime : False
Term 72 is 1304969544928657 with Prime : False
Term 73 is 2111485077978050 with Prime : False
Term 74 is 3416454622906707 with Prime : False
Term 75 is 5527939700884757 with Prime : False
Term 76 is 8944394323791464 with Prime : False
Term 77 is 14472334024676221 with Prime : False
Term 78 is 23416728348467685 with Prime : False
Term 79 is 37889062373143906 with Prime : False
Term 80 is 61305790721611591 with Prime : False
Term 81 is 99194853094755497 with Prime : True
Term 82 is 160500643816367088 with Prime : False
Term 83 is 259695496911122585 with Prime : False
Term 84 is 420196140727489673 with Prime : False
Term 85 is 679891637638612258 with Prime : False
Term 86 is 1100087778366101931 with Prime : False
Term 87 is 1779979416004714189 with Prime : False
Term 88 is 2880067194370816120 with Prime : False
Term 89 is 4660046610375530309 with Prime : False
Term 90 is 7540113804746346429 with Prime : False
Term 91 is 12200160415121876738 with Prime : False
Term 92 is 19740274219868223167 with Prime : False
Term 93 is 31940434634990099905 with Prime : False
Term 94 is 51680708854858323072 with Prime : False
Term 95 is 83621143489848422977 with Prime : False
Term 96 is 135301852344706746049 with Prime : False
Term 97 is 218922995834555169026 with Prime : False
Term 98 is 354224848179261915075 with Prime : False
Term 99 is 573147844013817084101 with Prime : False
Term 100 is 927372692193078999176 with Prime : False
Term 101 is 1500520536206896083277 with Prime : False
Term 102 is 2427893228399975082453 with Prime : False
Term 103 is 3928413764606871165730 with Prime : False
Term 104 is 6356306993006846248183 with Prime : False
Term 105 is 10284720757613717413913 with Prime : False
Term 106 is 16641027750620563662096 with Prime : False
Term 107 is 26925748508234281076009 with Prime : False
Term 108 is 43566776258854844738105 with Prime : False
Term 109 is 70492524767089125814114 with Prime : False
Term 110 is 114059301025943970552219 with Prime : False
Term 111 is 184551825793033096366333 with Prime : False
Term 112 is 298611126818977066918552 with Prime : False
Term 113 is 483162952612010163284885 with Prime : False
Term 114 is 781774079430987230203437 with Prime : False
Term 115 is 1264937032042997393488322 with Prime : False
Term 116 is 2046711111473984623691759 with Prime : False
Term 117 is 3311648143516982017180081 with Prime : False
Term 118 is 5358359254990966640871840 with Prime : False
Term 119 is 8670007398507948658051921 with Prime : False
Term 120 is 14028366653498915298923761 with Prime : False
Term 121 is 22698374052006863956975682 with Prime : False
Term 122 is 36726740705505779255899443 with Prime : False
Term 123 is 59425114757512643212875125 with Prime : False
Term 124 is 96151855463018422468774568 with Prime : False
Term 125 is 155576970220531065681649693 with Prime : False
Term 126 is 251728825683549488150424261 with Prime : False
Term 127 is 407305795904080553832073954 with Prime : False
Term 128 is 659034621587630041982498215 with Prime : False
Term 129 is 1066340417491710595814572169 with Prime : True
Term 130 is 1725375039079340637797070384 with Prime : False
Term 131 is 2791715456571051233611642553 with Prime : False
Term 132 is 4517090495650391871408712937 with Prime : False
Term 133 is 7308805952221443105020355490 with Prime : False
Term 134 is 11825896447871834976429068427 with Prime : False
Term 135 is 19134702400093278081449423917 with Prime : True
Term 136 is 30960598847965113057878492344 with Prime : False
Term 137 is 50095301248058391139327916261 with Prime : False
Term 138 is 81055900096023504197206408605 with Prime : False
Term 139 is 131151201344081895336534324866 with Prime : False
Term 140 is 212207101440105399533740733471 with Prime : False
Term 141 is 343358302784187294870275058337 with Prime : False
Term 142 is 555565404224292694404015791808 with Prime : False
Term 143 is 898923707008479989274290850145 with Prime : False
Term 144 is 1454489111232772683678306641953 with Prime : False
Term 145 is 2353412818241252672952597492098 with Prime : False
Term 146 is 3807901929474025356630904134051 with Prime : False
Term 147 is 6161314747715278029583501626149 with Prime : False
Term 148 is 9969216677189303386214405760200 with Prime : False
Term 149 is 16130531424904581415797907386349 with Prime : False
Term 150 is 26099748102093884802012313146549 with Prime : False
Term 151 is 42230279526998466217810220532898 with Prime : False
Term 152 is 68330027629092351019822533679447 with Prime : False
Term 153 is 110560307156090817237632754212345 with Prime : False
Term 154 is 178890334785183168257455287891792 with Prime : False
Term 155 is 289450641941273985495088042104137 with Prime : False
Term 156 is 468340976726457153752543329995929 with Prime : False
Term 157 is 757791618667731139247631372100066 with Prime : False
Term 158 is 1226132595394188293000174702095995 with Prime : False
Term 159 is 1983924214061919432247806074196061 with Prime : False
Term 160 is 3210056809456107725247980776292056 with Prime : False
Term 161 is 5193981023518027157495786850488117 with Prime : False
Term 162 is 8404037832974134882743767626780173 with Prime : False
Term 163 is 13598018856492162040239554477268290 with Prime : False
Term 164 is 22002056689466296922983322104048463 with Prime : False
Term 165 is 35600075545958458963222876581316753 with Prime : False
Term 166 is 57602132235424755886206198685365216 with Prime : False
Term 167 is 93202207781383214849429075266681969 with Prime : False
Term 168 is 150804340016807970735635273952047185 with Prime : False
Term 169 is 244006547798191185585064349218729154 with Prime : False
Term 170 is 394810887814999156320699623170776339 with Prime : False
Term 171 is 638817435613190341905763972389505493 with Prime : False
Term 172 is 1033628323428189498226463595560281832 with Prime : False
Term 173 is 1672445759041379840132227567949787325 with Prime : False
Term 174 is 2706074082469569338358691163510069157 with Prime : False
Term 175 is 4378519841510949178490918731459856482 with Prime : False
Term 176 is 7084593923980518516849609894969925639 with Prime : False
Term 177 is 11463113765491467695340528626429782121 with Prime : False
Term 178 is 18547707689471986212190138521399707760 with Prime : False
Term 179 is 30010821454963453907530667147829489881 with Prime : False
Term 180 is 48558529144435440119720805669229197641 with Prime : False
Term 181 is 78569350599398894027251472817058687522 with Prime : False
Term 182 is 127127879743834334146972278486287885163 with Prime : False
Term 183 is 205697230343233228174223751303346572685 with Prime : False
Term 184 is 332825110087067562321196029789634457848 with Prime : False
Term 185 is 538522340430300790495419781092981030533 with Prime : False
Term 186 is 871347450517368352816615810882615488381 with Prime : False
Term 187 is 1409869790947669143312035591975596518914 with Prime : False
Term 188 is 2281217241465037496128651402858212007295 with Prime : False
Term 189 is 3691087032412706639440686994833808526209 with Prime : False
Term 190 is 5972304273877744135569338397692020533504 with Prime : False
Term 191 is 9663391306290450775010025392525829059713 with Prime : False
Term 192 is 15635695580168194910579363790217849593217 with Prime : False
Term 193 is 25299086886458645685589389182743678652930 with Prime : False
Term 194 is 40934782466626840596168752972961528246147 with Prime : False
Term 195 is 66233869353085486281758142155705206899077 with Prime : False
Term 196 is 107168651819712326877926895128666735145224 with Prime : False
Term 197 is 173402521172797813159685037284371942044301 with Prime : False
Term 198 is 280571172992510140037611932413038677189525 with Prime : False
Term 199 is 453973694165307953197296969697410619233826 with Prime : False
Term 200 is 734544867157818093234908902110449296423351 with Prime : False
Term 201 is 1188518561323126046432205871807859915657177 with Prime : False
Term 202 is 1923063428480944139667114773918309212080528 with Prime : False
Term 203 is 3111581989804070186099320645726169127737705 with Prime : False
Term 204 is 5034645418285014325766435419644478339818233 with Prime : False
Term 205 is 8146227408089084511865756065370647467555938 with Prime : False
Term 206 is 13180872826374098837632191485015125807374171 with Prime : False
Term 207 is 21327100234463183349497947550385773274930109 with Prime : False
Term 208 is 34507973060837282187130139035400899082304280 with Prime : False
Term 209 is 55835073295300465536628086585786672357234389 with Prime : False
Term 210 is 90343046356137747723758225621187571439538669 with Prime : False
Term 211 is 146178119651438213260386312206974243796773058 with Prime : False
Term 212 is 236521166007575960984144537828161815236311727 with Prime : False
Term 213 is 382699285659014174244530850035136059033084785 with Prime : False
Term 214 is 619220451666590135228675387863297874269396512 with Prime : False
Term 215 is 1001919737325604309473206237898433933302481297 with Prime : False
Term 216 is 1621140188992194444701881625761731807571877809 with Prime : False
Term 217 is 2623059926317798754175087863660165740874359106 with Prime : False
Term 218 is 4244200115309993198876969489421897548446236915 with Prime : False
Term 219 is 6867260041627791953052057353082063289320596021 with Prime : False
Term 220 is 11111460156937785151929026842503960837766832936 with Prime : False
Term 221 is 17978720198565577104981084195586024127087428957 with Prime : False
Term 222 is 29090180355503362256910111038089984964854261893 with Prime : False
Term 223 is 47068900554068939361891195233676009091941690850 with Prime : False
Term 224 is 76159080909572301618801306271765994056795952743 with Prime : False
Term 225 is 123227981463641240980692501505442003148737643593 with Prime : False
Term 226 is 199387062373213542599493807777207997205533596336 with Prime : False
Term 227 is 322615043836854783580186309282650000354271239929 with Prime : False
Term 228 is 522002106210068326179680117059857997559804836265 with Prime : False
Term 229 is 844617150046923109759866426342507997914076076194 with Prime : False
Term 230 is 1366619256256991435939546543402365995473880912459 with Prime : False
Term 231 is 2211236406303914545699412969744873993387956988653 with Prime : False
Term 232 is 3577855662560905981638959513147239988861837901112 with Prime : False
```python
prime_dict=dict(zip(f,g))
```
```python
prime_dict[27777890035288]
```
False
```python
a,b, *rest=[1,2,3,4,5,6,7,8,9]
```
```python
a,b,rest
```
(1, 2, [3, 4, 5, 6, 7, 8, 9])
```python
if a == b:
    pass
elif a < b:
    pass
```
```python
import sympy as sym
```
```python
sym.isprime(707317)
```
False
```python
```
|
db40caf3e621a487832b2d514472bd9ff73ba1e1
| 65,745 |
ipynb
|
Jupyter Notebook
|
Misc/Dan pythonification.ipynb
|
TensorMan/training-and-reference
|
68d2dea416e10bfe5b2a9b47b1794ce5c2b65371
|
[
"Apache-2.0"
] | null | null | null |
Misc/Dan pythonification.ipynb
|
TensorMan/training-and-reference
|
68d2dea416e10bfe5b2a9b47b1794ce5c2b65371
|
[
"Apache-2.0"
] | null | null | null |
Misc/Dan pythonification.ipynb
|
TensorMan/training-and-reference
|
68d2dea416e10bfe5b2a9b47b1794ce5c2b65371
|
[
"Apache-2.0"
] | null | null | null | 50.925639 | 357 | 0.655761 | true | 18,515 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.760651 | 0.849971 | 0.646531 |
__label__eng_Latn
| 0.841645 | 0.340439 |
# A particle launched in the gravitational field
A particle is launched in the gravitational field with speed $v_0=|\vec{v}(0)|$ in a direction that makes an angle $\alpha$ with the horizontal (the $x$-axis). We assume that only gravity acts on the particle, so Newton's second law gives us
$$
\begin{equation}
m \vec{a} = -mg \mathbf{k},
\end{equation}
$$
where $m$ is the mass of the particle. We can disregard the mass, since we can divide both sides by $m$.
We can integrate Newton's second law twice to find the position $\vec{r}(t)=x \mathbf{i} + z \mathbf{k}$ of the particle. First use that $\vec{a}=d\vec{v}/dt$, so that we get
$$
\begin{equation}
\frac{d \vec{v}}{dt} = -g\mathbf{k}.
\end{equation}
$$
Integrating, we get
$$
\begin{equation}
\vec{v}(t) - \vec{v}(0) = -g t \mathbf{k}.
\end{equation}
$$
Then use that $\vec{v} = d \vec{r} / dt$ and integrate in the same way from
$$
\begin{equation}
\frac{d \vec{r}}{dt} = \vec{v}(0) - g t \mathbf{k}.
\end{equation}
$$
We end up with
$$
\begin{equation}
\vec{r}(t) - \vec{r}(0) = \vec{v}(0) t - \frac{1}{2}g t^2 \mathbf{k},
\end{equation}
$$
where we can drop $\vec{r}(0)$ since we let the launch start at the origin.
The vector $\vec{v}(0)$ can be decomposed along the $x$- and $z$-axes using the angle $\alpha$. We get
$$
\begin{equation}
\vec{v}(0) = v_0 \cos(\alpha) \mathbf{i} + v_0 \sin(\alpha) \mathbf{k},
\end{equation}
$$
and thus we have found the position vector $\vec{r}$ with components
$$
\begin{align}
x &= v_0 \cos(\alpha) t,\\
z &= v_0 \sin(\alpha) t - \frac{1}{2}gt^2.
\end{align}
$$
so that the position vector becomes
$$
\begin{align}
\vec{r}(t) &= x(t) \mathbf{i} + z(t) \mathbf{k}, \notag\\
&= v_0 \cos(\alpha) t \mathbf{i} + (v_0 \sin(\alpha) t - \frac{1}{2}gt^2) \mathbf{k}.
\end{align}
$$
The velocity vector can likewise be collected into
$$
\begin{equation}
\label{eq:v}
\vec{v}(t) = v_0 \cos(\alpha) \mathbf{i} + (v_0 \sin(\alpha) - gt) \mathbf{k}.
\end{equation}
$$
We will now model this with an interactive plot that shows the particle trajectory and the vectors. To do this we need to make a couple of observations.
1. What is the domain of the particle?
   A. How high is the particle shot?
   B. How long does it take before it comes down again?
We find the answer to A by looking at when the velocity in the $z$-direction equals zero (from the $z$-component of $\vec{v}$ above)
$$
\begin{align*}
v_0 \sin(\alpha) - g t_0 &= 0 \\
t_0 &= \frac{v_0 \sin(\alpha)}{g}.
\end{align*}
$$
The height at $t_0$ is given by $z(t_0)$, which we call $H$
$$
z(t_0) = H = \frac{(v_0 \sin(\alpha))^2}{2 g}
$$
So the domain for $z$ must be at least $[0, H]$ in order to capture the whole particle trajectory.
The time it takes for the particle to come down again (question B) is found by solving $z(t) = 0$
$$
\begin{align*}
v_0 \sin(\alpha) t - \frac{1}{2}gt^2 &= 0 \\
t\left(v_0 \sin(\alpha) - \frac{1}{2}g t \right) &= 0
\end{align*}
$$
which is easily solved to give
$$
t = 0 \lor t = \frac{2 v_0 \sin(\alpha)}{g}
$$
We call the time at which the particle hits the ground again ($z=0$) $T=2 v_0 \sin(\alpha)/g$. We realize that the simulation must run at least over $t \in [0, T]$.
We find the domain for the $x$-axis by inserting the final time into the expression for $x$
$$
x\left(T\right) = v_0 \cos(\alpha) T.
$$
So the $x$-domain becomes $[0, v_0 \cos(\alpha) T]$.
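As a quick check of the hand calculations above we can let sympy solve $z(t)=0$ symbolically. This small snippet is an addition to the original derivation:
```python
import sympy as sp

t = sp.Symbol('t')
v0, alpha, g = sp.symbols('v0, alpha, g', positive=True)
z = v0*sp.sin(alpha)*t - sp.Rational(1, 2)*g*t**2
# the two roots are the launch time t = 0 and the landing time T = 2*v0*sin(alpha)/g
print(sp.solve(sp.Eq(z, 0), t))
```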
Now we are ready to start plotting!
We start by importing the tools we need. We want to make an interactive plot that shows the particle along its trajectory. For this we can use [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/).
```python
from ipywidgets import interact
import matplotlib.pyplot as plt
import numpy as np
```
```python
%matplotlib inline
g = 9.81
v0 = 10
a = np.pi/4
T = 2*v0*np.sin(a)/g
N = 100 # Use an interval with N points
t0 = np.linspace(0, T*1.1, N)
```
```python
# Compute the whole particle trajectory first
x0 = v0*np.cos(a)*t0
z0 = v0*np.sin(a)*t0 - 0.5*g*t0**2
def partikkelbane(t):
# plot the whole particle trajectory
plt.figure()
plt.plot(x0, z0, 'b')
# plot the particle
x = v0*np.cos(a)*t
z = v0*np.sin(a)*t - 0.5*g*t**2
plt.plot(x, z, 'ok')
# plot a velocity vector at t0 and at t
plt.arrow(0, 0, v0*np.cos(a)*T/4, v0*np.sin(a)*T/4, head_width=v0*T/100)
plt.arrow(x, z, v0*np.cos(a)*T/4, (v0*np.sin(a)-g*t)*T/4, head_width=v0*T/100)
# position vector
plt.arrow(0, 0, x, z, length_includes_head=True)
plt.ylim(z0.min(), 2*z0.max())
plt.text(0.25*v0*np.cos(a), 0.3*v0*np.sin(a), r'$\vec{v}(0)$')
plt.text(x+0.25*v0*np.cos(a), z+0.2*(v0*np.sin(a)-g*t), r'$\vec{v}(t)$')
plt.text(0.5*x, 0.4*z, r'$\vec{r}(t)$')
plt.show()
interact(partikkelbane, t=(0, T, T/20))
```
interactive(children=(FloatSlider(value=0.7208020195581524, description='t', max=1.4416040391163047, step=0.07…
<function __main__.partikkelbane(t)>
We can now compute the arc length, i.e. the distance travelled by the particle, by integrating numerically
$$
\begin{equation}
L = \int_{0}^{t_m}|\vec{v}(t)|dt
\end{equation}
$$
Numerical integration can be done in `numpy` using the trapezoidal rule [trapz](https://docs.scipy.org/doc/numpy/reference/generated/numpy.trapz.html). It takes as input the function values $|\vec{v}(t_i)|$ and the points where the function values are sampled, here $t_i=i T/(N-1)$ for $i=0, 1, \ldots, N-1$, and we have used $N=100$ points in the code above.
```python
t0 = np.linspace(0, T, N)
x0 = v0*np.cos(a)*t0
z0 = v0*np.sin(a)*t0 - 0.5*g*t0**2
L = np.trapz(np.sqrt((v0*np.cos(a))**2 + (v0*np.sin(a) - g*t0)**2), t0)
print('The arc length is %2.3f'%(L))
```
The arc length is 11.700
# Sympy
We will now do the same implementation using [sympy](https://sympy.org) for exact symbolic manipulation of vectors. For this we will use `sympy`'s `vector` module, which contains a class for a coordinate system, `CoordSys3D`.
```python
import sympy as sp
from sympy.vector import CoordSys3D
```
We first assume that `v0`, `a` and `t` are all symbolic variables
```python
g = 9.81
v0, a, t = sp.symbols('v0,a,t')
N = CoordSys3D('N')
x = v0*sp.cos(a)*t
z = v0*sp.sin(a)*t - 0.5*g*t**2
# Define the position vector
r = x*N.i + z*N.k
# Differentiate with respect to t to find the velocity
v = sp.diff(r, t)
print(r)
print(v)
```
(t*v0*cos(a))*N.i + (-4.905*t**2 + t*v0*sin(a))*N.k
(v0*cos(a))*N.i + (-9.81*t + v0*sin(a))*N.k
```python
# Put the parameters in a dictionary
v0 = 10
a = np.pi/4
d = {'v0': v0, 'a': a}
T = 2*v0*np.sin(a)/g
# Redefine the variables with these parameters
x = x.subs(d)
z = z.subs(d)
r = r.subs(d)
v = v.subs(d)
# Make x and z callable functions of numpy arrays
x = sp.lambdify(t, x)
z = sp.lambdify(t, z)
# Evaluate x and z over the whole time domain
t0 = np.linspace(0, T*1.1, 100)
x0 = x(t0)
z0 = z(t0)
vx0 = float(v.subs(t, 0).dot(N.i))
vz0 = float(v.subs(t, 0).dot(N.k))
```
```python
%matplotlib inline
def partikkelbane(t):
# plot the whole particle trajectory
plt.figure()
plt.plot(x0, z0, 'b')
# plot the particle
plt.plot(x(t), z(t), 'ok')
# plot a velocity vector at t0 and at t
vx = float(v.subs('t', t).dot(N.i))
vz = float(v.subs('t', t).dot(N.k))
scale = T/4
plt.arrow(0, 0, vx0*scale, vz0*scale, head_width=v0*T/100)
plt.arrow(x(t), z(t), vx*scale, vz*scale, head_width=v0*T/100)
# plot the position vector
plt.arrow(0, 0, x(t), z(t), head_width=v0*T/100, length_includes_head=True)
plt.ylim(z0.min(), 2*z0.max())
plt.text(0.25*vx0, 0.3*vz0, r'$\vec{v}(0)$')
plt.text(x(t)+0.25*vx0, z(t)+0.2*vz, r'$\vec{v}(t)$')
plt.text(0.5*x(t), 0.4*z(t), r'$\vec{r}(t)$')
plt.show()
interact(partikkelbane, t=(0, T*1.1, T*1.1/20))
```
interactive(children=(FloatSlider(value=0.7928822215139677, description='t', max=1.5857644430279354, step=0.07…
<function __main__.partikkelbane(t)>
```python
Ls = sp.Integral(sp.sqrt(v.dot(v)), (t, 0, T)).evalf()
print('Arc length = %2.4f'%(Ls))
```
Arc length = 11.7002
|
0c12c2e11368ac4be0def9bc0774aeb894d21af3
| 13,877 |
ipynb
|
Jupyter Notebook
|
notebooks/partikkelbane.ipynb
|
mikaem/MEK1100-21
|
b2dfb4dc3598f57989dbf1f397179ced9a8e39b9
|
[
"BSD-2-Clause"
] | null | null | null |
notebooks/partikkelbane.ipynb
|
mikaem/MEK1100-21
|
b2dfb4dc3598f57989dbf1f397179ced9a8e39b9
|
[
"BSD-2-Clause"
] | null | null | null |
notebooks/partikkelbane.ipynb
|
mikaem/MEK1100-21
|
b2dfb4dc3598f57989dbf1f397179ced9a8e39b9
|
[
"BSD-2-Clause"
] | null | null | null | 27.809619 | 352 | 0.494415 | true | 3,201 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.884039 | 0.843895 | 0.746036 |
__label__nob_Latn
| 0.795381 | 0.571625 |
# 13 Linear Algebra: Singular Value Decomposition
One can always decompose a matrix $\mathsf{A}$
\begin{gather}
\mathsf{A} = \mathsf{U}\,\text{diag}(w_j)\,\mathsf{V}^{T}\\
\mathsf{U}^T \mathsf{U} = \mathsf{U} \mathsf{U}^T = 1\\
\mathsf{V}^T \mathsf{V} = \mathsf{V} \mathsf{V}^T = 1
\end{gather}
where $\mathsf{U}$ and $\mathsf{V}$ are orthogonal matrices and the $w_j$ are the _singular values_ that are assembled into a diagonal matrix $\mathsf{W}$.
$$
\mathsf{W} = \text{diag}(w_j)
$$
The inverse (if it exists) can be directly calculated from the SVD:
$$
\mathsf{A}^{-1} = \mathsf{V} \text{diag}(1/w_j) \mathsf{U}^T
$$
## Solving ill-conditioned coupled linear equations
```python
import numpy as np
```
### Non-singular matrix
Solve the linear system of equations
$$
\mathsf{A}\mathbf{x} = \mathbf{b}
$$
Using the standard linear solver in numpy:
```python
A = np.array([
[1, 2, 3],
[3, 2, 1],
[-1, -2, -6],
])
b = np.array([0, 1, -1])
```
```python
np.linalg.solve(A, b)
```
array([ 0.83333333, -0.91666667, 0.33333333])
Using the inverse from SVD:
$$
\mathbf{x} = \mathsf{A}^{-1} \mathbf{b}
$$
```python
U, w, VT = np.linalg.svd(A)
print(w)
```
[ 7.74140616 2.96605874 0.52261473]
First check that the SVD really factors $\mathsf{A} = \mathsf{U}\,\text{diag}(w_j)\,\mathsf{V}^{T}$:
```python
U.dot(np.diag(w).dot(VT))
```
array([[ 1., 2., 3.],
[ 3., 2., 1.],
[-1., -2., -6.]])
```python
np.allclose(A, U.dot(np.diag(w).dot(VT)))
```
True
Now calculate the matrix inverse $\mathsf{A}^{-1} = \mathsf{V} \text{diag}(1/w_j) \mathsf{U}^T$:
```python
inv_w = 1/w
print(inv_w)
```
[ 0.1291755 0.33714774 1.91345545]
```python
A_inv = VT.T.dot(np.diag(inv_w)).dot(U.T)
print(A_inv)
```
[[ -8.33333333e-01 5.00000000e-01 -3.33333333e-01]
[ 1.41666667e+00 -2.50000000e-01 6.66666667e-01]
[ -3.33333333e-01 -1.08335035e-16 -3.33333333e-01]]
Check that this is the same that we get from `numpy.linalg.inv()`:
```python
np.allclose(A_inv, np.linalg.inv(A))
```
True
Now, *finally* solve (and check against `numpy.linalg.solve()`):
```python
x = A_inv.dot(b)
print(x)
np.allclose(x, np.linalg.solve(A, b))
```
[ 0.83333333 -0.91666667 0.33333333]
True
```python
A.dot(x)
```
array([ -7.77156117e-16, 1.00000000e+00, -1.00000000e+00])
```python
np.allclose(A.dot(x), b)
```
True
### Singular matrix
If the matrix $\mathsf{A}$ is *singular* (i.e., its rank, the number of linearly independent rows or columns, is less than its dimension), then the linear system of equations does not have a unique solution.
For example, the following matrix has the same row twice:
```python
C = np.array([
[ 0.87119148, 0.9330127, -0.9330127],
[ 1.1160254, 0.04736717, -0.04736717],
[ 1.1160254, 0.04736717, -0.04736717],
])
b1 = np.array([ 2.3674474, -0.24813392, -0.24813392])
b2 = np.array([0, 1, 1])
```
```python
np.linalg.solve(C, b1)
```
This call fails with a `numpy.linalg.LinAlgError` ("Singular matrix"). NOTE: failure is not always that obvious: numerically, a matrix can be *almost* singular.
Try solving the linear system of equations
$$
\mathsf{D}\mathbf{x} = \mathbf{b}_1
$$
with matrix $\mathsf{D}$ below:
```python
D = C.copy()
D[2, :] = C[0] - 3*C[1]
D
```
array([[ 0.87119148, 0.9330127 , -0.9330127 ],
[ 1.1160254 , 0.04736717, -0.04736717],
[-2.47688472, 0.79091119, -0.79091119]])
```python
np.linalg.solve(D, b1)
```
array([ 1.61493184e+00, 2.69013663e+16, 2.69013663e+16])
Note that some of the values are huge, suspiciously close to the inverse of machine precision. This is a sign of a nearly singular matrix.
**Note**: *Just because a function did not throw an exception it does not mean that the answer is correct.* **Always check your output!**
Now back to the example with $\mathsf{C}$:
#### SVD for singular matrices
If a matrix is *singular* or *near singular* then one can *still* apply SVD.
One can then compute the *pseudo inverse*
\begin{align}
\mathsf{A}^{-1} &= \mathsf{V} \text{diag}(\alpha_j) \mathsf{U}^T \\
\alpha_j &= \begin{cases}
\frac{1}{w_j}, &\quad\text{if}\ w_j \neq 0\\
0, &\quad\text{if}\ w_j = 0
\end{cases}
\end{align}
i.e., any singular $w_j = 0$ is being "augmented" by setting
$$
\frac{1}{w_j} \rightarrow 0 \quad\text{if}\quad w_j = 0
$$
in $\text{diag}(1/w_j)$.
Perform the SVD for the singular matrix $\mathsf{C}$:
```python
U, w, VT = np.linalg.svd(C)
print(w)
```
[ 1.99999999e+00 1.00000000e+00 2.46519033e-32]
Note the third value $w_2 \approx 0$: sign of a singular matrix.
Test that the SVD really decomposes $\mathsf{A} = \mathsf{U}\,\text{diag}(w_j)\,\mathsf{V}^{T}$:
```python
U.dot(np.diag(w).dot(VT))
```
array([[ 0.87119148, 0.9330127 , -0.9330127 ],
[ 1.1160254 , 0.04736717, -0.04736717],
[ 1.1160254 , 0.04736717, -0.04736717]])
```python
np.allclose(C, U.dot(np.diag(w).dot(VT)))
```
True
Identify the (numerically) zero **singular values** (say, $|w_i| < 10^{-12}$):
```python
singular_values = np.abs(w) < 1e-12
print(singular_values)
```
[False False True]
#### Pseudo-inverse
Calculate the **pseudo-inverse** from the SVD
\begin{align}
\mathsf{A}^{-1} &= \mathsf{V} \text{diag}(\alpha_j) \mathsf{U}^T \\
\alpha_j &= \begin{cases}
\frac{1}{w_j}, &\quad\text{if}\ w_j \neq 0\\
0, &\quad\text{if}\ w_j = 0
\end{cases}
\end{align}
Augment:
```python
inv_w = 1/w
inv_w[singular_values] = 0
print(inv_w)
```
[ 0.5 1. 0. ]
```python
C_inv = VT.T.dot(np.diag(inv_w)).dot(U.T)
print(C_inv)
```
[[-0.04736717 0.46650635 0.46650635]
[ 0.5580127 -0.21779787 -0.21779787]
[-0.5580127 0.21779787 0.21779787]]
#### Solution for $\mathbf{b}_1$
Now solve the linear problem with SVD:
```python
x1 = C_inv.dot(b1)
print(x1)
```
[-0.34365138 1.4291518 -1.4291518 ]
```python
C.dot(x1)
```
array([ 2.3674474 , -0.24813392, -0.24813392])
```python
np.allclose(C.dot(x1), b1)
```
True
Thus, using the pseudo-inverse $\mathsf{C}^{-1}$ we can obtain solutions to the equation
$$
\mathsf{C} \mathbf{x}_1 = \mathbf{b}_1
$$
However, $\mathbf{x}_1$ is not the only solution: there's a whole line of solutions that are formed by the special solution and a combination of the basis vectors in the *null space* of the matrix:
The (right) *kernel* or *null space* contains all vectors $\mathbf{x^0}$ for which
$$
\mathsf{C} \mathbf{x^0} = 0
$$
(The dimension of the null space corresponds to the number of zero singular values.) You can find a basis that spans the null space. Any linear combination of null space basis vectors also lies in the null space, i.e., it is mapped to zero when $\mathbf{A}$ is applied to it.
Specifically, if $\mathbf{x}_1$ is a special solution and $\lambda_1 \mathbf{x}^0_1 + \lambda_2 \mathbf{x}^0_2 + \dots$ is a vector in the null space then
$$
\mathbf{x} = \mathbf{x}_1 + ( \lambda_1 \mathbf{x}^0_1 + \lambda_2 \mathbf{x}^0_2 + \dots )
$$
is **also a solution** because
$$
\mathsf{C} \mathbf{x} = \mathsf{C} \mathbf{x}_1 + \mathsf{C} ( \lambda_1 \mathbf{x}^0_1 + \lambda_2 \mathbf{x}^0_2 + \dots ) = \mathbf{b}_1 + 0 = \mathbf{b}_1
$$
The $\lambda_i$ are arbitrary real numbers and hence there is an infinite number of solutions.
In SVD:
* The columns $U_{\cdot, i}$ of $\mathsf{U}$ (i.e. `U.T[i]` or `U[:, i]`) corresponding to non-zero $w_i$, i.e. $\{i : w_i \neq 0\}$, form the basis for the _range_ of the matrix $\mathsf{A}$.
* The columns $V_{\cdot, i}$ of $\mathsf{V}$ (i.e. `V.T[i]` or `V[:, i]`) corresponding to zero $w_i$, i.e. $\{i : w_i = 0\}$, form the basis for the _null space_ of the matrix $\mathsf{A}$.
```python
x1
```
array([-0.34365138, 1.4291518 , -1.4291518 ])
The rank space comes from $\mathsf{U}^T$:
```python
U.T
```
array([[ -7.07106782e-01, -4.99999999e-01, -4.99999999e-01],
[ 7.07106780e-01, -5.00000001e-01, -5.00000001e-01],
[ -2.47010760e-16, -7.07106781e-01, 7.07106781e-01]])
The basis vectors for the rank space (``~ bool_array`` applies a logical ``NOT`` operation to the entries in the boolean array so that we can pick out "not singular values"):
```python
U.T[~singular_values]
```
array([[-0.70710678, -0.5 , -0.5 ],
[ 0.70710678, -0.5 , -0.5 ]])
The null space comes from $\mathsf{V}^T$:
```python
VT
```
array([[-0.8660254 , -0.35355339, 0.35355339],
[-0.5 , 0.61237244, -0.61237244],
[-0. , -0.70710678, -0.70710678]])
The basis vector for the null space:
```python
VT[singular_values]
```
array([[-0. , -0.70710678, -0.70710678]])
The component of $\mathbf{x}_1$ along the basis vector of the null space of $\mathsf{C}$ (here a 1D space) – note that this component is zero, i.e., the special solution lives in the rank space:
```python
x1.dot(VT[singular_values][0])
```
2.2204460492503131e-16
We can create a family of solutions by adding vectors in the null space to the special solution $\mathbf{x}_1$, e.g. $\lambda_1 = 2$:
```python
lambda_1 = 2
x1_1 = x1 + lambda_1 * VT[2]
print(x1_1)
np.allclose(C.dot(x1_1), b1)
```
[-0.34365138 0.01493824 -2.84336536]
True
Thus, **all** solutions are
```
x1 + lambda * VT[2]
```
#### Solution for $\mathbf{b}_2$
The solution vector $x_2$ solves
$$
\mathsf{C}\mathbf{x}_2 = \mathbf{b}_2
$$
```python
b2
```
array([0, 1, 1])
```python
x2 = C_inv.dot(b2)
print(x2)
print(C.dot(x2))
np.allclose(C.dot(x2), b2)
```
[ 0.9330127 -0.43559574 0.43559574]
[ -4.44089210e-16 1.00000000e+00 1.00000000e+00]
True
... and the general solution will again be obtained by adding any multiple of the null space basis vector.
#### Null space
The Null space is spanned by the following basis vectors (just one in this example):
```python
null_basis = VT[singular_values]
null_basis
```
array([[-0. , -0.70710678, -0.70710678]])
Show that
$$
\mathsf{C}\mathbf{x}^0 = 0
$$
```python
C.dot(null_basis.T)
```
array([[ 0.00000000e+00],
[ -6.93889390e-18],
[ -6.93889390e-18]])
## SVD for fewer equations than unknowns
$N$ equations for $M$ unknowns with $N < M$:
* no unique solutions (underdetermined)
* $M-N$ dimensional family of solutions
* SVD: at least $M-N$ zero or negligible $w_j$: columns of $\mathsf{V}$ corresponding to singular $w_j$ span the solution space when added to a particular solution.
Same as the above [**Solving ill-conditioned coupled linear equations**](#Solving-ill-conditioned-coupled-linear-equations).
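A minimal sketch of such an underdetermined system (an addition to the notebook; the matrix, right-hand side and variable names are arbitrary), using the SVD-based pseudo-inverse to pick out one particular solution:
```python
A_under = np.array([[1., 2., 3.],
                    [4., 5., 6.]])       # 2 equations, 3 unknowns
b_under = np.array([1., 2.])

x_part = np.linalg.pinv(A_under).dot(b_under)  # particular (minimum-norm) solution via SVD
print(x_part, A_under.dot(x_part))             # A_under @ x_part reproduces b_under
```
Every other solution is obtained by adding a vector from the (here one-dimensional) null space of `A_under`, exactly as in the example above.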
## SVD for more equations than unknowns
$N$ equations for $M$ unknowns with $N > M$:
* no exact solutions in general (overdetermined)
* but: SVD can provide best solution in the least-square sense
$$
\mathbf{x} = \mathsf{V}\, \text{diag}(1/w_j)\, \mathsf{U}^{T}\, \mathbf{b}
$$
where
* $\mathbf{x}$ is a $M$-dimensional vector of the unknowns (parameters of the fit),
* $\mathsf{V}$ is a $M \times M$ matrix
* the $w_j$ form a square $M \times M$ matrix,
* $\mathsf{U}$ is a $N \times M$ matrix (and $\mathsf{U}^T$ is a $M \times N$ matrix), and
* $\mathbf{b}$ is the $N$-dimensional vector of the given values (data)
It can be shown that $\mathbf{x}$ minimizes the residual
$$
\mathbf{r} := |\mathsf{A}\mathbf{x} - \mathbf{b}|.
$$
where the matrix $\mathsf{A}$ will be described below and will contain the evaluation of the fit function for each data point in $\mathbf{b}$.
(For $N \le M$, one can find $\mathbf{x}$ so that $\mathbf{r} = 0$ – see above.)
(In the following, we will switch notation and denote the vector of $M$ unknown parameters of the model as $\mathbf{a}$; this $\mathbf{a}$ corresponds to $\mathbf{x}$ above. $N$ is the number of observations.)
### Linear least-squares fitting
This is the *linear least-squares fitting problem*: Given $N$ data points $(x_i, y_i)$ (where $1 \le i \le N$), fit to a linear model $y(x)$, which can be any linear combination of $M$ functions of $x$.
For example, if we have $M$ functions $x^k$ with parameters $a_k$
$$
y(x) = a_1 + a_2 x + a_3 x^2 + \dots + a_M x^{M-1}
$$
or in general
$$
y(x) = \sum_{k=1}^M a_k X_k(x)
$$
The goal is to determine the $M$ coefficients $a_k$.
Define the **merit function**
$$
\chi^2 = \sum_{i=1}^N \left[ \frac{y_i - \sum_{k=1}^M a_k X_k(x_i)}{\sigma_i}\right]^2
$$
(sum of squared deviations, weighted with standard deviations $\sigma_i$ on the $y_i$).
Best parameters $a_k$ are the ones that *minimize $\chi^2$*.
*Design matrix* $\mathsf{A}$ ($N \times M$, $N \geq M$), vector of measurements $\mathbf{b}$ ($N$-dim) and parameter vector $\mathbf{a}$ ($M$-dim):
\begin{align}
A_{ij} &= \frac{X_j(x_i)}{\sigma_i}\\
b_i &= \frac{y_i}{\sigma_i}\\
\mathbf{a} &= (a_1, a_2, \dots, a_M)
\end{align}
The design matrix $\mathsf{A}$ contains the *predicted* values from the basis functions for all values $x_i$ of the independent variable $x$ for which we have measured data $y_i$.
Minimum occurs when the derivative vanishes:
$$
0 = \frac{\partial\chi^2}{\partial a_k} = \sum_{i=1}^N {\sigma_i}^{-2} \left[ y_i - \sum_{j=1}^M a_j X_j(x_i) \right] X_k(x_i), \quad 1 \leq k \leq M
$$
($M$ coupled equations)
To simplify the notation, define the $M \times M$ matrix
\begin{align}
\alpha_{kj} &= \sum_{i=1}^N \frac{X_k(x_i) X_j(x_i)}{\sigma_i^2}\\
\mathsf{\alpha} &= \mathsf{A}^T \mathsf{A}
\end{align}
and the vector of length $M$
\begin{align}
\beta_{k} &= \sum_{i=1}^N \frac{y_i X_k(x_i)}{\sigma_i^2}\\
\boldsymbol{\beta} &= \mathsf{A}^T \mathbf{b}
\end{align}
Then the $M$ coupled equations can be compactly written as
\begin{align}
\sum_{j=1}^{M} \alpha_{kj} a_j &= \beta_k\\
\mathsf{\alpha}\mathbf{a} = \boldsymbol{\beta}
\end{align}
$\mathsf{\alpha}$ and $\boldsymbol{\beta}$ are known, so we have to solve this matrix equation for the vector of the unknown parameters $\mathbf{a}$.
#### Error estimates for the parameters
The inverse of $\mathsf{\alpha}$ is related to the uncertainties in the parameters:
$$
\mathsf{C} := \mathsf{\alpha}^{-1}
$$
in particular
$$
\sigma^2(a_i) = C_{ii}
$$
(and the off-diagonal elements $C_{ij}$ are the covariances).
#### Solution of the linear least-squares fitting problem with SVD
We need to solve the overdetermined system of $M$ coupled equations
\begin{align}
\sum_{j=1}^{M} \alpha_{kj} a_j &= \beta_k\\
\mathsf{\alpha}\mathbf{a} = \boldsymbol{\beta}
\end{align}
SVD finds $\mathbf{a}$ that minimizes
$$
\chi^2 = |\mathsf{A}\mathbf{a} - \mathbf{b}|
$$
The errors are
$$
\sigma^2(a_j) = \sum_{i=1}^{M} \left(\frac{V_{ji}}{w_i}\right)^2
$$
#### Example
Synthetic data
$$
y(x) = 3\sin x - 2\sin 3x + \sin 4x
$$
with noise $r$ added (uniform in range $-5 < r < 5$).
```python
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.style.use('ggplot')
import numpy as np
```
```python
def signal(x, noise=0):
r = np.random.uniform(-noise, noise, len(x))
return 3*np.sin(x) - 2*np.sin(3*x) + np.sin(4*x) + r
```
```python
X = np.linspace(-10, 10, 500)
Y = signal(X, noise=5)
```
```python
plt.plot(X, Y, 'r-', X, signal(X, noise=0), 'k--')
```
Define our fit function (the model) and the basis functions. We need the basis functions for setting up the problem and we will later use the fitfunction together with our parameter estimates to compare our fit to the true underlying function.
```python
def fitfunc(x, a):
return a[0]*np.cos(x) + a[1]*np.sin(x) + \
a[2]*np.cos(2*x) + a[3]*np.sin(2*x) + \
a[4]*np.cos(3*x) + a[5]*np.sin(3*x) + \
a[6]*np.cos(4*x) + a[7]*np.sin(4*x)
def basisfuncs(x):
return np.array([np.cos(x), np.sin(x),
np.cos(2*x), np.sin(2*x),
np.cos(3*x), np.sin(3*x),
np.cos(4*x), np.sin(4*x)])
```
(Note that we could have used the `basisfuncs()` in `fitfunc()` – left as an exercise for the keen reader...)
Set up the $\mathsf{\alpha}$ matrix and the $\boldsymbol{\beta}$ vector (here we assume that all observations have the same error $\sigma = 1$):
```python
M = 8
sigma = 1.
alpha = np.zeros((M, M))
beta = np.zeros(M)
for x in X:
Xk = basisfuncs(x)
for k in range(M):
for j in range(M):
alpha[k, j] += Xk[k]*Xk[j]
for x, y in zip(X, Y):
beta += y * basisfuncs(x)/sigma
```
Finally, solving the problem follows the same procedure as before:
Get the SVD:
```python
U, w, VT = np.linalg.svd(alpha)
V = VT.T
```
In this case, the singular values do not immediately show if any basis functions are superfluous (this would be the case for values close to 0).
```python
w
```
array([ 296.92809624, 282.94804954, 243.7895787 , 235.7300808 ,
235.15938555, 235.14838812, 235.14821093, 235.14821013])
... nevertheless, remember to routinely mask any singular or near-singular values:
```python
w_inv = 1/w
w_inv[np.abs(w) < 1e-12] = 0
alpha_inv = V.dot(np.diag(w_inv)).dot(U.T)
```
Solve the system of equations with the pseudo-inverse:
```python
a_values = alpha_inv.dot(beta)
print(a_values)
```
[ 0.02941343 3.15273275 0.22893881 0.14290046 0.30121258 -2.04230627
0.28692984 1.08197408]
Compare the fitted values to the original parameters $a_j = 0, +3, 0, 0, 0, -2, 0, +1$.
The original parameters show up as 3.15, -2.04 and 1.08 but the other parameters also have appreciable values. Given that the noise was sizable, this is not unreasonable.
Compare the plot of the underlying true function ("signal", dashed line) to the model ("fit", solid line):
```python
plt.plot(X, fitfunc(X, a_values), 'b-', label="fit")
plt.plot(X, signal(X, noise=0), 'k--', label="signal")
plt.legend(loc="best", fontsize="small")
```
We get some spurious oscillations but overall the result looks reasonable.
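The parameter uncertainties given by the formula $\sigma^2(a_j) = \sum_i (V_{ji}/w_i)^2$ above can be evaluated directly from the SVD factors. This is a small sketch added for illustration; it reuses `V`, `w_inv`, `a_values` and `M` from the cells above and assumes, as in the fit, that all observations have $\sigma_i = 1$:
```python
# sigma^2(a_j) = sum_i (V[j, i] / w[i])^2, with (near-)singular w_i already masked in w_inv
sigma_a = np.sqrt(((V * w_inv)**2).sum(axis=1))
for j in range(M):
    print('a_%d = %8.5f +/- %.5f' % (j, a_values[j], sigma_a[j]))
```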
```python
```
|
9bb9690d59f2f18ac0cae0920b48b87b2fc91a84
| 126,193 |
ipynb
|
Jupyter Notebook
|
13_linear_algebra/13_SVD.ipynb
|
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
|
635a6678569406e11865c8a583a56f4a3cf2bdc4
|
[
"CC-BY-4.0"
] | null | null | null |
13_linear_algebra/13_SVD.ipynb
|
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
|
635a6678569406e11865c8a583a56f4a3cf2bdc4
|
[
"CC-BY-4.0"
] | null | null | null |
13_linear_algebra/13_SVD.ipynb
|
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
|
635a6678569406e11865c8a583a56f4a3cf2bdc4
|
[
"CC-BY-4.0"
] | null | null | null | 74.494097 | 46,576 | 0.805528 | true | 6,372 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.932453 | 0.831143 | 0.775002 |
__label__eng_Latn
| 0.903956 | 0.638922 |
# Quick overview of the finite element method
<div id="ch:overview"></div>
<!-- dom:FIGURE: [fig/dolfin_mesh.png, width=500 frac=0.8] Example on a complicated domain for solving PDEs. <div id="overview:meshex"></div> -->
<!-- begin figure -->
<div id="overview:meshex"></div>
<p>Example on a complicated domain for solving PDEs.</p>
<!-- end figure -->
The finite element method is a rich and versatile approach to construct
computational schemes to solve any partial differential equation on
any domain in any dimension. The method may at first glance appear
cumbersome and even unnatural as it relies on variational formulations
and polynomial spaces.
Let us start by outlining the concepts briefly.
Consider the following PDE in 2D:
$$
-\nabla^2 u = -u_{xx} - u_{yy} = f,
$$
equipped with suitable boundary conditions.
A finite difference scheme to solve the current PDE
would in the simplest case be described by the stencil
<!-- Equation labels as ordinary links -->
<div id="overview:2d:fdm0"></div>
$$
\begin{equation}
\label{overview:2d:fdm0} \tag{1}
-\frac{u_{i-1,j} - 2 u_{i,j} + u_{i+1,j}}{h^2}
-\frac{u_{i,j-1} - 2 u_{i,j} + u_{i,j+1}}{h^2}
= f_{i}
\end{equation}
$$
or reordered to the more recognizable
<!-- Equation labels as ordinary links -->
<div id="overview:2d:fdm"></div>
$$
\begin{equation}
\label{overview:2d:fdm} \tag{2}
\frac{-u_{i-1,j} -u_{i,j-1} + 4 u_{i,j} - u_{i+1,j} -u_{i,j+1}}{h^2} = f_{i}
{\thinspace .}
\end{equation}
$$
On a structured mesh, the stencil appears natural and
is convenient to implement.
However, for a unstructured, "complicated" domain
as shown in [Figure](#overview:meshex),
we would need to be careful when placing
points and evaluating stencils and functions.
In particular,
it will be difficult to evaluate the stencil near the dolphin in
[Figure](#overview:meshex) because some points will be on the inside and some on the outside of the dolphin.
Both accuracy and efficiency
may easily be sacrificed by a reckless implementation.
In general, a domain like the one represented in [Figure](#overview:meshex) will be represented by a triangulation. The
finite element method (and the finite volume method which often is a
special case of the finite element method) is a methodology for
creating stencils in a structured manner
that adapt to the underlying triangulation.
The triangulation in [Figure](#overview:meshex) is a mesh that
consists of cells that are connected and defined in terms of
vertices. The fundamental idea of the finite element method is
to construct a procedure to compute a stencil on a general element and
then apply this procedure to each element of the mesh. Let
us therefore denote the mesh as $\Omega$ while $\Omega_e$ is the domain
of a generic element such that $\Omega=\cup_e \Omega_e$.
This is exactly the point where the challenges of the finite element
method start and where we need some new concepts. The basic question
is: How should we create a stencil for a
general element and a general PDE that has the maximal accuracy and
minimal computational complexity at the current triangulation? The
two basic building blocks of the finite element method are
1. the solution is represented in terms of a polynomial expression on the
given general element, and
2. a variational formulation of the PDE
where element-wise integration enables the PDE to be transformed to a
stencil.
Step 1 is, as will be explained later, conveniently represented
both implementation-wise and mathematically as a solution
<!-- Equation labels as ordinary links -->
<div id="overview:u:fem"></div>
$$
\begin{equation}
\label{overview:u:fem} \tag{3}
u = \sum_{i=0}^N c_i {\psi}_i(x,y),
\end{equation}
$$
where $\{c_i\}$ are the coefficients to be determined
(often called the degrees of freedom)
and ${\psi}_i(x,y)$ are prescribed polynomials.
The basis functions ${\psi}_i(x,y)$ used to express the solution
are often called the trial functions.
The next step is the variational formulation. This step
may seem like a magic trick or a cumbersome
mathematical exercise at first glance.
We take the PDE and multiply by a function $v$ (usually called
the test function)
and integrate over an element $\Omega_e$ and obtain the expression
<!-- Equation labels as ordinary links -->
<div id="overview:poisson"></div>
$$
\begin{equation}
\label{overview:poisson} \tag{4}
\int_{\Omega_e} -\nabla^2 u \, v {\, \mathrm{d}x} = \int_{\Omega_e} f \, v {\, \mathrm{d}x}
\end{equation}
$$
A perfectly natural question at this point is: Why multiply
with a test function $v$? The simple answer is that
there are $N+1$ unknowns that need to be determined in $u$
in ([3](#overview:u:fem))
and for this we need $N+1$ equations. The equations are
obtained by using $N+1$ different test functions which when used
in ([5](#overview:fem:a))
give rise to $N+1$ linearly independent equations.
While ([4](#overview:poisson)) is a variational formulation of
our PDE problem, it is not the most common form.
It is common to re-write
<!-- Equation labels as ordinary links -->
<div id="overview:fem:a"></div>
$$
\begin{equation}
\label{overview:fem:a} \tag{5}
\int_{\Omega_e} -\nabla^2 u \, v {\, \mathrm{d}x}
\end{equation}
$$
to weaken the requirement of the polynomial space used for the
trial functions (that here needs to be twice differentiable)
and write this term in its corresponding weak form.
That
is, the term is rewritten in terms of first-derivatives only (of
both the trial and the test function) with the aid of Gauss-Green's lemma:
<!-- Equation labels as ordinary links -->
<div id="overview:fem:a:weak"></div>
$$
\begin{equation}
\label{overview:fem:a:weak} \tag{6}
\int_{\Omega_e} -\nabla^2 u \, v {\, \mathrm{d}x} =
\int_{\Omega_e} \nabla u \cdot \nabla v {\, \mathrm{d}x} - \int_{\partial \Omega_e} \frac{\partial u}{\partial n} \, v \, dS
\end{equation}
$$
The reasons behind this alternative formulation are rather mathematical and will
not be a major subject of this book as they are well described elsewhere.
In fact, a precise explanation would need tools from functional analysis.
With the above rewrite and assuming now that the boundary term vanishes due to
boundary conditions (why this is possible will be dealt with in detail
later) the
stencil, corresponding to ([2](#overview:2d:fdm)), is represented by
$$
\int_{\Omega_e} \nabla u \cdot \nabla v {\, \mathrm{d}x}
$$
where $u$ is called the *trial function*, $v$ is called a *test function*,
and $\Omega$ is an element of
a triangulated mesh. The idea of software like FEniCS is that this
piece of mathematics can be directly expressed in terms of Python code as
```python
# DO NOT RUN THIS CELL, THIS IS A DEMONSTRATION
mesh = Mesh("some_file")
V = FunctionSpace(mesh, "some polynomial")
u = TrialFunction(V)
v = TestFunction(V)
a = dot(grad(u), grad(v))*dx
```
The methodology and code in this example is not tied to a particular
equation, except the formula for `a`, holding the derivatives of our
sample PDE, but any other PDE terms could be expressed via `u`, `v`,
`grad`, and other symbolic operators in this line of code. In fact,
finite element packages like FEniCS are typically structured as
general toolboxes that can be adapted to any PDE as soon as the
derivation of variational formulations is mastered. The main obstacle
here for a novice FEM user is then to understand the concept of trial
functions and test functions realized in terms of polynomial spaces.
Hence, a finite element formulation (or a weak formulation) of
the Poisson problem that works on any mesh $\Omega$ can be written
in terms of solving the problem:
$$
\int_\Omega\nabla u\cdot\nabla v {\, \mathrm{d}x} = \int_\Omega fv{\, \mathrm{d}x}{\thinspace .}
$$
By varying the trial and test spaces we obtain different stencils,
some of which will be identical to finite difference schemes on
particular meshes. We will now show a complete FEniCS program to
illustrate how a typical finite element code may be structured
```python
# DO NOT RUN THIS CELL, THIS IS A DEMONSTRATION
mesh = Mesh("some_file")
V = FunctionSpace(mesh, "some polynomial")
u = TrialFunction(V)
v = TestFunction(V)
a = dot(grad(u), grad(v))*dx
L = f*v*dx
bc = DirichletBC(V, "some_function", "some_domain")
solution = Function(V) # unknown FEM function
solve(a == L, solution, bc)
plot(solution)
```
<!-- # -->
<!-- # -->
<!-- # -->
<!-- # -->
While the finite element method is versatile and may be adapted to any
PDE on any domain in any dimension, the different methods that are
derived by using different trial and test functions may vary
significantly in terms of accuracy and efficiency. In fact, a bad choice of polynomial space may in some cases lead to a
completely wrong result. This is particularly the case for complicated
PDEs. For this reason, it is dangerous to regard the method as a black
box and not do proper verification of the method for a particular
application.
In our view, there
are three important tests that should be frequently employed
during verification:
1. reducing the model problem to 1D and carefully checking the calculations involved in the variational formulation on a small 1D mesh
2. performing the calculation involved on one general or random element
3. testing whether convergence is obtained and to what order the method converges by refining the mesh
The first two tasks here should ideally be performed by independent calculations
outside the framework used for the simulations. In our view `sympy` is a
convenient tool that can be used to assist hand calculations.
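As an illustration of the first verification step, the following sketch (an addition to the text; it assumes standard piecewise linear "hat" basis functions on a single 1D element of length $h$) uses `sympy` to compute the element matrix corresponding to $\int \psi_i' \psi_j' \,dx$:
```python
import sympy as sym

x, h = sym.symbols('x h', positive=True)
# linear (P1) basis functions on the element [0, h]
psi = [1 - x/h, x/h]
# element matrix A_ij = integral over the element of psi_i' * psi_j'
A = sym.Matrix(2, 2, lambda i, j: sym.integrate(
    sym.diff(psi[i], x)*sym.diff(psi[j], x), (x, 0, h)))
print(A)   # Matrix([[1/h, -1/h], [-1/h, 1/h]])
```
This small calculation can then be compared against what the simulation framework assembles on one element.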
So far, we have outlined how the finite element method handles derivatives
in a PDE, but we also had a right-hand side function $f$. This term is multiplied
by the test function $v$ as well, such that the entire Poisson equation
is transformed to
$$
\int_\Omega\nabla u\cdot\nabla v {\, \mathrm{d}x} = \int_\Omega fv{\, \mathrm{d}x}{\thinspace .}
$$
This statement is assumed valid for all test functions $v$ in some
function space $V$ of polynomials. The right-hand side expression is
coded in FEniCS as
```python
# DO NOT RUN THIS CELL. RUN THE FULL PROGRAM (IN THE END) INSTEAD.
L = f*v*dx
```
and the problem is then solved by the statements
```python
# DO NOT RUN THIS CELL. RUN THE FULL PROGRAM (IN THE END) INSTEAD.
u = Function(V) # unknown FEM function
solve(a == L, u, bc)
```
where `bc` holds information about boundary conditions. This information
is connected to information about the triangulation, the *mesh*.
Assuming $u=0$ on the boundary, we can in FEniCS generate a triangular
mesh over a rectangular domain $[-1,-1]\times [-1,1]$ as follows:
```python
# DO NOT RUN THIS CELL. RUN THE FULL PROGRAM (IN THE END) INSTEAD.
mesh = RectangleMesh(Point(-1, -1), Point(1, 1), 10, 10)
bc = DirichletBC(V, 0, 'on_boundary')
```
Mathematically, the finite element method transforms our PDE to
a sparse linear system. The `solve` step performs two tasks:
construction of the linear system based on the given information about
the domain and its elements, and then solution of the linear system by
either an iterative or direct method.
We are now in a position to summarize all the parts of a FEniCS program
that solves the Poisson equation by the finite element method:
```python
from fenics import *
mesh = RectangleMesh(Point(-1, -1), Point(1, 1), 10, 10)
V = FunctionSpace(mesh, 'P', 2) # quadratic polynomials
bc = DirichletBC(V, 0, 'on_boundary')
u = TrialFunction(V)
v = TestFunction(V)
a = dot(grad(u), grad(v))*dx
f = Constant(1.0)  # the right-hand side f is not specified in the text; use a constant as an example
L = f*v*dx
u = Function(V) # unknown FEM function to be computed
solve(a == L, u, bc)
vtkfile = File('poisson.pvd'); vtkfile << u # store solution
```
Solving a different PDE is a matter of changing `a` and `L`.
Although we assert here that the finite element method is a tool that
can solve any PDE problem on any domain of any complexity, the
fundamental ideas of the method are in fact even more general.
We will therefore start the book by variational
methods for approximation in general, then consider the finite
element in a wide range of applications.
|
1ef5582c6ca84b93c8062dd8ac98435e11082d55
| 17,889 |
ipynb
|
Jupyter Notebook
|
1- overview.ipynb
|
mbarzegary/finite-element-intro
|
47ef0a3592b823ae71a874ee35850114f16b6d8b
|
[
"MIT"
] | 8 |
2021-01-26T13:18:02.000Z
|
2022-02-14T15:20:11.000Z
|
1- overview.ipynb
|
mbarzegary/finite-element-intro
|
47ef0a3592b823ae71a874ee35850114f16b6d8b
|
[
"MIT"
] | null | null | null |
1- overview.ipynb
|
mbarzegary/finite-element-intro
|
47ef0a3592b823ae71a874ee35850114f16b6d8b
|
[
"MIT"
] | 2 |
2021-08-05T23:14:15.000Z
|
2021-10-05T10:22:29.000Z
| 33.75283 | 156 | 0.593773 | true | 3,206 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.874077 | 0.867036 | 0.757856 |
__label__eng_Latn
| 0.997088 | 0.599086 |
# Realization of Recursive Filters
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing.*
## Cascaded Structures
The realization of recursive filters with a high order may be subject to numerical issues. For instance, when the coefficients span a wide amplitude range, their quantization may require a small quantization step or may impose a large relative error for small coefficients. The basic concept of cascaded structures is to decompose a high order filter into a cascade of lower order filters, typically first and second order recursive filters.
### Decomposition into Second-Order Sections
The rational transfer function $H(z)$ of a linear time-invariant (LTI) recursive system can be [expressed by its zeros and poles](introduction.ipynb#Transfer-Function) as
\begin{equation}
H(z) = \frac{b_M}{a_N} \cdot \frac{\prod_{\mu=1}^{P} (z - z_{0\mu})^{m_\mu}}{\prod_{\nu=1}^{Q} (z - z_{\infty\nu})^{n_\nu}}
\end{equation}
where $z_{0\mu}$ and $z_{\infty\nu}$ denote the $\mu$-th zero and $\nu$-th pole of degree $m_\mu$ and $n_\nu$ of $H(z)$, respectively. The total number of zeros and poles is denoted by $P$ and $Q$.
The poles and zeros of a real-valued filter $h[k] \in \mathbb{R}$ are either single real valued or conjugate complex pairs. This motivates to split the transfer function into
* first order filters constructed from a single pole and zero
* second order filters constructed from a pair of conjugated complex poles and zeros
Decomposing the transfer function into these two types by grouping the poles and zeros into single poles/zeros and conjugate complex pairs of poles/zeros results in
\begin{equation}
H(z) = K \cdot \prod_{\eta=1}^{S_1} \frac{(z - z_{0\eta})}{(z - z_{\infty\eta})}
\cdot \prod_{\eta=1}^{S_2} \frac{(z - z_{0\eta}) (z - z_{0\eta}^*)} {(z - z_{\infty\eta})(z - z_{\infty\eta}^*)}
\end{equation}
where $K$ denotes a constant and $S_1 + 2 S_2 = N$ with $N$ denoting the order of the system. The cascade of two systems results in a multiplication of their transfer functions. Above decomposition represents a cascade of first- and second-order recursive systems. The former can be treated as a special case of second-order recursive systems. The decomposition is therefore known as decomposition into second-order sections (SOSs) or [biquad filters](https://en.wikipedia.org/wiki/Digital_biquad_filter). Using a cascade of SOSs the transfer function of the recursive system can be rewritten as
\begin{equation}
H(z) = \prod_{\mu=1}^{S} \frac{b_{0, \mu} + b_{1, \mu} \, z^{-1} + b_{2, \mu} \, z^{-2}}{1 + a_{1, \mu} \, z^{-1} + a_{2, \mu} \, z^{-2}}
\end{equation}
where $S = \lceil \frac{N}{2} \rceil$ denotes the total number of SOSs. These results state that any real valued system of order $N > 2$ can be decomposed into SOSs. This has a number of benefits
* quantization effects can be reduced by sensible grouping of poles/zeros, e.g. such that the spanned amplitude range of the filter coefficients is limited
* A SOS may be extended by a gain factor to further reduce quantization effects by normalization of the coefficients
* efficient and numerically stable SOSs serve as generic building blocks for higher-order recursive filters
### Example - Cascaded second-order section realization of a lowpass
The following example illustrates the decomposition of a higher-order recursive Butterworth lowpass filter into a cascade of second-order sections.
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.markers import MarkerStyle
from matplotlib.patches import Circle
import scipy.signal as sig
N = 9 # order of recursive filter
def zplane(z, p, title='Poles and Zeros'):
"Plots zero and pole locations in the complex z-plane"
ax = plt.gca()
ax.plot(np.real(z), np.imag(z), 'bo', fillstyle='none', ms = 10)
ax.plot(np.real(p), np.imag(p), 'rx', fillstyle='none', ms = 10)
unit_circle = Circle((0,0), radius=1, fill=False,
color='black', ls='solid', alpha=0.9)
ax.add_patch(unit_circle)
ax.axvline(0, color='0.7')
ax.axhline(0, color='0.7')
plt.title(title)
plt.xlabel(r'Re{$z$}')
plt.ylabel(r'Im{$z$}')
plt.axis('equal')
plt.xlim((-2, 2))
plt.ylim((-2, 2))
plt.grid()
# design filter
b, a = sig.butter(N, 0.2)
# decomposition into SOS
sos = sig.tf2sos(b, a, pairing='nearest')
# print filter coefficients
print('Coefficients of the recursive part \n')
print(['%1.2f'%ai for ai in a])
print('\n')
print('Coefficients of the recursive part of the individual SOS \n')
print('Section \t a1 \t\t a2')
for n in range(sos.shape[0]):
print('%d \t\t %1.5f \t %1.5f'%(n, sos[n, 4], sos[n, 5]))
# plot pole and zero locations
plt.figure(figsize=(5,5))
zplane(np.roots(b), np.roots(a), 'Poles and Zeros - Overall')
plt.figure(figsize=(10, 7))
for n in range(sos.shape[0]):
plt.subplot(231+n)
zplane(np.roots(sos[n, 0:3]), np.roots(sos[n, 3:6]), title='Poles and Zeros - Section %d'%n)
plt.tight_layout()
# compute and plot frequency response of sections
plt.figure(figsize=(10,5))
for n in range(sos.shape[0]):
Om, H = sig.freqz(sos[n, 0:3], sos[n, 3:6])
plt.plot(Om, 20*np.log10(np.abs(H)), label=r'Section %d'%n)
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$|H_n(e^{j \Omega})|$ in dB')
plt.legend()
plt.grid()
```
**Exercise**
* What amplitude range is spanned by the filter coefficients?
* What amplitude range is spanned by the SOS coefficients?
* Change the pole/zero grouping strategy from `pairing='nearest'` to `pairing='keep_odd'`. What changes?
* Increase the order `N` of the filter. What changes?
Solution: Inspecting both the coefficients of the recursive part of the original filter and of the individual SOS reveals that the spanned amplitude range is lower for the latter. The choice of the pole/zero grouping strategy influences the locations of the poles/zeros in the individual SOS, the spanned amplitude range of their coefficients and the transfer functions of the individual sections. The total number of SOS scales with the order of the original filter.
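The following sketch (an addition to the notebook; the test signal is arbitrary) quantifies the coefficient ranges discussed above and shows that the SOS representation can also be used directly for filtering with `scipy.signal.sosfilt`:
```python
# compare the amplitude ranges of the recursive coefficients
print('original filter: |a_i| in [%g, %g]' % (np.min(np.abs(a)), np.max(np.abs(a))))
print('SOS sections   : |a_i| in [%g, %g]' % (np.min(np.abs(sos[:, 4:])), np.max(np.abs(sos[:, 4:]))))

# apply the cascade directly to a test signal
x = np.random.randn(2**14)
y_sos = sig.sosfilt(sos, x)   # numerically robust cascade of biquads
y_tf = sig.lfilter(b, a, x)   # direct high-order realization
print('max deviation between both realizations: %g' % np.max(np.abs(y_sos - y_tf)))
```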
|
fc548a25d50192000b7e3f16400875d322ceacf3
| 113,836 |
ipynb
|
Jupyter Notebook
|
Lectures_Advanced-DSP/recursive_filters/cascaded_structures.ipynb
|
lev1khachatryan/ASDS_DSP
|
9059d737f6934b81a740c79b33756f7ec9ededb3
|
[
"MIT"
] | 1 |
2020-12-29T18:02:13.000Z
|
2020-12-29T18:02:13.000Z
|
Lectures_Advanced-DSP/recursive_filters/cascaded_structures.ipynb
|
lev1khachatryan/ASDS_DSP
|
9059d737f6934b81a740c79b33756f7ec9ededb3
|
[
"MIT"
] | null | null | null |
Lectures_Advanced-DSP/recursive_filters/cascaded_structures.ipynb
|
lev1khachatryan/ASDS_DSP
|
9059d737f6934b81a740c79b33756f7ec9ededb3
|
[
"MIT"
] | null | null | null | 476.301255 | 47,104 | 0.934274 | true | 1,713 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.853913 | 0.847968 | 0.72409 |
__label__eng_Latn
| 0.982035 | 0.520637 |
# 원형 단면의 관성 모멘트<br>Area Moment of Inertia of a Circular Section
```python
# 그래프, 수학 기능 추가
# Add graph and math features
import pylab as py
import numpy as np
import numpy.linalg as nl
# 기호 연산 기능 추가
# Add symbolic operation capability
import sympy as sy
```
참고문헌 : <br>
* Pytel 외 저, 이주성 외 역, 재료역학, 2판, 한티미디어, 2013.<br>
* 위키백과 기여자, '단면 이차 모멘트', 위키백과, , 2018년 3월 4일, 13:19 UTC, <https://ko.wikipedia.org/wiki/%EB%8B%A8%EB%A9%B4_%EC%9D%B4%EC%B0%A8_%EB%AA%A8%EB%A9%98%ED%8A%B8> [2018년 7월 31일에 접근] <br>
Ref: <br>
* Pytel, Kiusalaas, Sharma, Mechanics of Materials, 2nd Ed., Cengage Learning, 2013.<br>
* Wikipedia contributors, 'Second moment of area', Wikipedia, The Free Encyclopedia, 16 June 2018, 14:15 UTC, <https://en.wikipedia.org/w/index.php?title=Second_moment_of_area&oldid=846126944> [accessed 31 July 2018]
다음과 같은 원형 단면의 2차 모멘트를 구해 보자.<br>
Let's try to find the second moment of area of following circular section.
반지름 $r=10mm$<br>Radius of the section $r=10mm$
```python
r_mm = 10
```
## 단면 2차 모멘트의 정의<br>Definition of a Second Moment of Area
$$
I_x=\int_A y^2 dA
$$
여기서 $dA$는 다음과 같다. (원점은 원의 중심에 위치)<br>
Here, $dA$ is as follows. (The origin is at the center of the circle)
$$
dA=S_x(y)dy=2\sqrt{r^2-y^2}dy
$$
Python 언어로는 다음과 같이 구현할 수 있다.<br>We can implement in python as follows.
```python
def sx(y_mm):
if abs(y_mm) <= r_mm :
result = 2 * (r_mm * r_mm - y_mm * y_mm) ** 0.5
else:
result = 0
return result
```
이 함수의 그래프를 그려 보자<br>Let's plot this.
```python
y_mm_array = py.arange(-r_mm, r_mm+0.05, 0.1)
sx_mm_array = py.array([sx(y_mm) for y_mm in y_mm_array])
py.plot(sx_mm_array * 0.5, y_mm_array)
py.plot(sx_mm_array * (-0.5), y_mm_array)
py.axis('equal')
py.grid(True)
py.xlabel('x(mm)')
py.ylabel('y(mm)')
```
## 정적분 계산<br>Numerical Integration
0차 적분 함수를 이용해 보자<br>Let's use 0'th order numerical integration function.
```python
def get_delta_x(xi, xe, n):
return (xe - xi) / n
```
```python
def num_int_0(f, xi, xe, n, b_verbose=False):
x_array = py.linspace(xi, xe, n+1)
delta_x = x_array[1] - x_array[0]
assert 1e-3 > (abs(delta_x - get_delta_x(xi, xe, n)) / get_delta_x(xi, xe, n)), f"delta_x = {delta_x}"
integration_result = 0.0
for k in range(n):
x_k = x_array[k]
F_k = f(x_k) * delta_x
if b_verbose:
print('k = %2d, F_k = %g' % (k, F_k))
integration_result += F_k
return integration_result
```
### 단면 2차 모멘트<br>Second moment of area
```python
def y2sx(y_mm):
return y_mm * y_mm * sx(y_mm)
```
```python
I_y_mm4 = num_int_0(y2sx, -r_mm, r_mm, int(r_mm * 10))
```
```python
I_y_mm4
```
확인해 보자.<br>Let's verify.
```python
I_y_exact_mm4 = np.pi * (r_mm ** 4) * 0.25
```
```python
I_y_exact_mm4
```
```python
abs(I_y_exact_mm4 - I_y_mm4)
```
어떻게 하면 위 오차를 줄일 수 있을 것인가?<br>How can we make the error above smaller?
```python
error = (I_y_exact_mm4 - I_y_mm4)
```
```python
try :
assert (1e-6 > abs(error)), "Error too large"
except AssertionError as e:
print(e)
```
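One straightforward remedy — sketched below as an illustration, reusing `num_int_0`, `y2sx`, `I_y_exact_mm4` and `r_mm` defined above — is to increase the number of subintervals `n`; a higher-order rule (e.g. the trapezoid or Simpson's rule) would reduce the error faster for the same `n`.
```python
# Check how the error of the 0th-order rule behaves as n grows
for n in (100, 1000, 10000):
    approx_mm4 = num_int_0(y2sx, -r_mm, r_mm, n)
    print(n, abs(I_y_exact_mm4 - approx_mm4))
```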
## Final Bell<br>마지막 종
```python
# stackoverfow.com/a/24634221
import os
os.system("printf '\a'");
```
```python
```
|
b80998a7ab9136807796c7e84cbc2c969ae6db5d
| 7,559 |
ipynb
|
Jupyter Notebook
|
30_num_int/40_circular_section_MOI.ipynb
|
kangwon-naver/nmisp
|
141f8148b3ce783d3df27ee0c9986f530cada8fb
|
[
"BSD-3-Clause"
] | 7 |
2019-05-14T11:00:53.000Z
|
2020-08-27T01:04:29.000Z
|
30_num_int/40_circular_section_MOI.ipynb
|
kangwon-naver/nmisp
|
141f8148b3ce783d3df27ee0c9986f530cada8fb
|
[
"BSD-3-Clause"
] | 170 |
2018-07-12T06:06:21.000Z
|
2022-01-28T09:06:55.000Z
|
30_num_int/40_circular_section_MOI.ipynb
|
kangwon-naver/nmisp
|
141f8148b3ce783d3df27ee0c9986f530cada8fb
|
[
"BSD-3-Clause"
] | 57 |
2018-08-28T08:38:59.000Z
|
2020-09-02T03:40:47.000Z
| 20.374663 | 226 | 0.48181 | true | 1,283 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.879147 | 0.826712 | 0.726801 |
__label__kor_Hang
| 0.831403 | 0.526934 |
# SymPy
**SymPy** is a Computer Algebra System (CAS) for Python. It does symbolic computation instead of numeric computation. This means that mathematical objects are represented exactly, not approximately as in the case of numerical representation.
Take the example of $\sqrt{8}$. When calculated numerically, we get the approximate answer 2.82842712475. But in SymPy it is represented as $2 \sqrt{2}$. Further, performing operations on such representations will continue to retain accuracy. Note that $\frac{\sqrt{8}}{\sqrt{3}}$ is simplified to $\frac{2 \sqrt{6}}{3}$, retaining full accuracy. You can numerically evaluate any expression with the **`N()`** function in SymPy.
```python
from sympy import *
import math
x, y, z, t = symbols('x y z t') # Symbols representing real numbers
k, m, n = symbols('k m n', integer=True) # Symbols representing integers
f, g, h = symbols('f g h', cls=Function) # Symbols repesenting function names
init_printing()
print math.sqrt(8)
print sqrt(8)
print math.sqrt(8) / math.sqrt(3)
print sqrt(8) / sqrt(3)
print N(sqrt(8) / sqrt(3)) # Numerical evaluation
print N(sqrt(8. / 3.))
```
2.82842712475
2*sqrt(2)
1.63299316186
2*sqrt(6)/3
1.63299316185545
1.63299316185545
## Numerical Simplification
```python
print nsimplify(0.1)
print nsimplify(6.28, [pi], tolerance=0.01)
print nsimplify(pi, tolerance=0.1)
print nsimplify(pi, tolerance=0.001)
```
1/10
2*pi
22/7
355/113
## Algebra
SymPy can handle algebraic expressions, simplify them and evaluate them.
```python
eq = ((x+y)**2 * (x+1))
eq
```
```python
expand(eq)
```
You can substitute a numerical value for any of the symbols and simplify the expression. The method to do this is **`subs()`**. It takes two arguments, the symbol and the numerical value it is to assume. If an expression has more than one symbol, substitution must be done one symbol at a time.
```python
eq.subs(x, 1).subs(y,1)
```
```python
a = 1/x + (x*sin(x) - 1)/x
a
```
```python
N(a.subs(x, 1))
```
```python
```
## Integral Calculus
SymPy performs integration symbolically, like you would if you were doing so by hand rather than numerically.
```python
a = Integral(cos(x), x)
Eq(a, a.doit())
```
```python
b = Integral(sqrt(1/x), x)
Eq(b, b.doit())
```
We can also form antiderivatives and evaluate definite integrals by supplying integration limits, for example the improper integral $\int_0^{\infty} e^{-x}\, dx$ below.
```python
b = Integral(x**2+2*x+3, x)
Eq(b, b.doit())
integrate(b, (x, 0, 1))
```
```python
integrate(exp(-x), (x, 0, oo))
```
Here is the double integral $\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2 - y^2}\, dx\, dy$
```python
integrate(exp(-x**2 - y**2), (x, -oo, oo), (y, -oo, oo))
```
```python
c = Integral(exp(-x**2), (x, 0, 1))
Eq(c, c.doit())
```
```python
N(c.doit())
```
## Differential Calculus
SymPy can also perform differentiation symbolically. Derivative of $y = x^2 + 3x - \frac{1}{2}$ is $y' = 2x + 3$. Substituting $x=2$ in the derivative results in the numerical value $y'(2) = 7$.
```python
s = "x**2 + 3*x - 1/2"
c = sympify(s)
print c
d = diff(c)
print d
d.subs(x, 2)
```
It is possible to differentiate a function multiple times and obtain the second or higher derivatives.
```python
print diff(x**4)
print diff(x**4, x, x) # Differentiate w.r.t. x two times
print diff(x**4, x, 2) # Differentiate w.r.t. x two times
```
4*x**3
12*x**2
12*x**2
A function of two or more variables can be differentiated with respect to any of the variables any number of times.
```python
expr = exp(x*y*z)
deriv = diff(expr, x, y, z)
print deriv
```
(x**2*y**2*z**2 + 3*x*y*z + 1)*exp(x*y*z)
## Limits
SymPy can evaluate limits of functions.
```python
print limit(sin(x)/x, x, 0)
print limit(tan(x)/x, x, 0)
```
1
1
```python
expr = x**2 / exp(x)
print expr.subs(x, oo)
print limit(expr, x, oo)
```
nan
0
```python
expr = Limit((cos(x) -1) / x, x, 0)
expr
```
```python
expr.doit()
```
```python
c = Limit((-x + sin(x)) / (x * cos(x) - sin(x)), x, 0)
c
```
```python
r = Limit(x * (sin(x) - x * cos(x)) / (2*(1-cos(x)) - x * sin(x)), x, 0)
r
```
```python
print c.doit()
print r.doit()
```
1/2
4
## Solution of Equations
To solve the equation $x^2 = 1$, first form the equation with the **`Eq()`** SymPy function by defining the left and right hand sides. Then solve the equation by calling the SymPy function **`solve()`**.
```python
solve(Eq(x**2, 1), x)
```
The same equation could also be expressed as $x^2 - 1 = 0$ and solved as show below:
```python
solve(Eq(x**2 - 1, 0), x)
```
Since it is a common form to have zero on the right hand side, SymPy allows you to dispense with the **`Eq()`** function call to form the equation and solve the equation directly as follows:
```python
solve(x**2 - 1, x)
```
Let us now solve the polynomial equation $x^2 - x = 0$
```python
print solve(x**2 - x, x)
```
[0, 1]
For polynomial equations, **`solve`** prints repeated roots, if any, only once. The function **`roots()`** prints the roots and their frequency.
```python
print solve(x**3 - 6*x**2 + 9*x, x)
print roots(x**3 - 6*x**2 + 9*x, x)
```
[0, 3]
{0: 1, 3: 2}
Differential equations can be solved using the SymPy function **`dsolve()`**. Let us first represent the differential equation $f''(x) - 2 f'(x) + f(x) = \sin(x)$ as follows using **`Eq()`**, and then solve it using **`dsolve()`**:
```python
diffeq = Eq(f(x).diff(x, x) - 2 * f(x).diff(x) + f(x), sin(x))
diffeq
```
```python
dsolve(diffeq, f(x))
```
In the above solution $C_1$ and $C_2$ are arbitrary constants of integration which will have to be determined by applying known conditions.
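In more recent SymPy versions the constants can be pinned down directly by passing initial conditions to `dsolve`; the following cell is a sketch under that assumption (the `ics` keyword is not available in very old releases), imposing the hypothetical conditions $f(0)=0$ and $f'(0)=0$:
```python
# Assumes a SymPy version that supports the `ics` argument of dsolve
dsolve(diffeq, f(x), ics={f(0): 0, f(x).diff(x).subs(x, 0): 0})
```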
```python
```
|
96e3ec4c43393178c3355d2615af427fa6b91c1e
| 49,236 |
ipynb
|
Jupyter Notebook
|
SymPy.ipynb
|
satish-annigeri/Notebooks
|
92a7dc1d4cf4aebf73bba159d735a2e912fc88bb
|
[
"CC0-1.0"
] | null | null | null |
SymPy.ipynb
|
satish-annigeri/Notebooks
|
92a7dc1d4cf4aebf73bba159d735a2e912fc88bb
|
[
"CC0-1.0"
] | null | null | null |
SymPy.ipynb
|
satish-annigeri/Notebooks
|
92a7dc1d4cf4aebf73bba159d735a2e912fc88bb
|
[
"CC0-1.0"
] | null | null | null | 36.14978 | 441 | 0.661731 | true | 1,832 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.92523 | 0.899121 | 0.831894 |
__label__eng_Latn
| 0.97049 | 0.771102 |
# Chapter 4
# Finite-Dimensional Optimization
In this chapter we examine methods for optimizing a function with respect to a finite
number of variables. In the finite-dimensional optimization problem, one is given a
real-valued function $f$ defined on $X \subset R^n$ and asked to find an $x^* \in X$ such that
$f(x^*) \geq f(x)$ for all $x \in X$. We denote this problem
$$\max_{x \in X} f(x)$$
and call $f$ the objective function, $X$ the feasible set, and $x^*$, if it exists, a maximum.
There is a close relationship between the finite-dimensional optimization problems
discussed in this chapter and the rootfinding and complementarity problems
discussed in the previous chapter. The first-order necessary conditions of an unconstrained
problem pose a rootfinding problem; the Karush-Kuhn-Tucker first-order
necessary conditions of a constrained optimization problem pose a complementarity
problem. The rootfinding and complementarity problems associated with optimization
problems are special in that they possess a natural merit function, the objective
function itself, which may be used to determine whether iterations are converging on
a solution.
Over the years, numerical analysts have studied finite-dimensional optimization
problems extensively and have devised a variety of algorithms for solving them quickly
and accurately. We begin our discussion with derivative-free methods, which are useful
if the objective function is rough or if its derivatives are expensive to compute.
We then turn to Newton-type methods for unconstrained optimization, which employ
derivatives or derivative estimates to locate an optimum. Univariate unconstrained
optimization methods are of particular interest because many multivariate optimization
algorithms use the strategy of first determining a linear direction to move in,
and then finding the optimal point in that direction. We conclude with a discussion
of how to solve constrained optimization problems.
## 4.1 Derivative-Free Methods
As was the case with univariate rootfinding, optimization algorithms exist that will
place progressively smaller brackets around a local maximum of a univariate function.
Such methods are relatively slow, but do not require the evaluation of function
derivatives and are guaranteed to find a local optimum to a prescribed tolerance in a
known number of steps.
The most widely-used derivative-free method is the **golden search** method.
Suppose
we wish to find a local maximum of a continuous univariate function $f(x)$ on
the interval $[a; b]$.
Pick any two numbers in the interior of the interval, say $x_1$ and $x_2$
with $x_1 < x_2$.
Evaluate the function and replace the original interval with $[a; x_2]$ if
$f(x_1) > f(x_2)$ or with $[x_1; b]$ if $f(x_2) \geq f(x_1)$.
A key issue is how to pick the interior evaluation points.
Two simple criteria lead
to the most widely-used strategy.
First, the length of the new interval should be
independent of whether the upper or lower bound is replaced.
Second, on successive
iterations, one should be able to reuse an interior point from the previous iteration so
that only one new function evaluation is performed per iteration.
These conditions
are uniquely satisfied by selecting $x_i = a + \alpha_i (b - a)$, where
$$\alpha_1 = \frac{3-\sqrt 5}{2}$$
$$\alpha_2 = \frac{\sqrt 5 -1}{2}$$
The value $\alpha_2$ is known as the golden ratio, a number dear to the hearts of Greek
philosophers and Renaissance artists.
```python
import numpy as np
from numpy import append, array, diagonal, tril, triu
from numpy.linalg import inv
from scipy.linalg import lu
#from scipy.linalg import solve
from pprint import pprint
from numpy import array, zeros, diag, diagflat, dot
from sympy import *
import sympy as sym
init_printing()
```
```python
%matplotlib notebook
from matplotlib import pyplot as plt
```
```python
maxit = 1000
tol = 1/10000
x0= np.array([0,3])
f = lambda x: x * np.cos(x ** 2)
a,b = 0,3
```
```python
x = np.linspace(0,3, 100)
y = f(x)
plt.plot(x,y)
plt.scatter( np.array([0.8083,2.5234]), f(np.array([0.8083,2.5234])) , c='r' )
plt.title("Figure 4.1 Maximization of $x cos(x^2)$ via golden search")
```
<matplotlib.text.Text at 0x7f73f848bcf8>
```python
```
```python
alpha1 = (3 - np.sqrt(5)) / 2
alpha2 = (np.sqrt(5) - 1) / 2
if a > b:
a, b = b, a
x1 = a + alpha1 * (b - a)
x2 = a + alpha2 * (b - a)
f1, f2 = f(x1), f(x2)
d = (alpha1 * alpha2)*(b - a)
```
```python
while d > tol:
d = d * alpha2
if f2 < f1: # x2 is new upper bound
x2, x1 = x1, x1 - d
f2, f1 = f1, f(x1)
else: # x1 is new lower bound
x1, x2 = x2, x2 + d
f1, f2 = f2, f(x2)
```
```python
#x1 if f1 > f2 else x2
```
```python
if f1>f2:
x = x2
else:
x = x1
x
```
```python
def mygolden(f,a, b, maxit = 1000, tol = 1/10000):
alpha1 = (3 - np.sqrt(5)) / 2
alpha2 = (np.sqrt(5) - 1) / 2
if a > b:
a, b = b, a
x1 = a + alpha1 * (b - a)
x2 = a + alpha2 * (b - a)
f1, f2 = f(x1), f(x2)
d = (alpha1 * alpha2)*(b - a) # initial d
while d > tol:
d = d * alpha2 # alpha2 is the golden ratio
if f2 < f1: # x2 is new upper bound
x2, x1 = x1, x1 - d
f2, f1 = f1, f(x1)
else: # x1 is new lower bound
x1, x2 = x2, x2 + d
f1, f2 = f2, f(x2)
if f1>f2:
x = x2
else:
x = x1
return x
```
```python
mygolden(f, 0, 3)
```
Execution of this script yields the result $x = 0.8083$. As can be seen in Figure 4.1,
this point is a local maximum, but not a global maximum in $[0; 3]$. The golden search
method is guaranteed to find the global maximum when the function is concave.
However, as the present example makes clear, this need not be true when the optimand
is not concave.
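As a quick follow-up sketch (not part of the original text), restricting the search to the sub-interval $[2, 3]$ brackets the other local maximum near $x = 2.5234$, which Figure 4.1 suggests is the global maximum on $[0, 3]$:
```python
mygolden(f, 2, 3)
```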
## Nelder-Mead algorithm
Another widely-used derivative-free optimization method for multivariate functions
is the **Nelder-Mead algorithm**.
The Nelder-Mead algorithm is simple, but slow and unreliable. However, if a
problem involves only a single optimization with costly function and derivative evaluations,
the Nelder-Mead algorithm is worth trying. In many problems an optimization
problem that is embedded in a larger problem must be solved repeatedly, with the
function parameters perturbed slightly with each iteration. For such problems, which
are common in dynamic models, one generally will want to use a method that moves
more quickly and reliably to the optimum, given a good starting point.
(source: https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method)
The Nelder–Mead method or downhill simplex method or amoeba method is a commonly applied numerical method used to find the minimum or maximum of an objective function in a multidimensional space. It is applied to nonlinear optimization problems for which derivatives may not be known. However, the Nelder–Mead technique is a heuristic search method that can converge to non-stationary points[1] on problems that can be solved by alternative methods.
(source: http://www.scholarpedia.org/article/Nelder-Mead_algorithm)
The Nelder-Mead algorithm or simplex search algorithm, originally published in 1965 (Nelder and Mead, 1965), is one of the best known algorithms for multidimensional unconstrained optimization without derivatives. This method should not be confused with Dantzig's simplex method for linear programming, which is completely different, as it solves a linearly constrained linear problem.
The basic algorithm is quite simple to understand and very easy to use. For these reasons, it is very popular in many fields of science and technology, especially in chemistry and medicine.
The method does not require any derivative information, which makes it suitable for problems with non-smooth functions. It is widely used to solve parameter estimation and similar statistical problems, where the function values are uncertain or subject to noise. It can also be used for problems with discontinuous functions, which occur frequently in statistics and experimental mathematics.
```python
#https://github.com/fchollet/nelder-mead/blob/master/nelder_mead.py
'''
Pure Python/Numpy implementation of the Nelder-Mead algorithm.
Reference: https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method
'''
import copy
def nelder_mead(f, x_start,
step=0.1, no_improve_thr=10e-6,
no_improv_break=10, max_iter=0,
alpha=1., gamma=2., rho=-0.5, sigma=0.5):
'''
@param f (function): function to optimize, must return a scalar score
and operate over a numpy array of the same dimensions as x_start
@param x_start (numpy array): initial position
@param step (float): look-around radius in initial step
@no_improv_thr, no_improv_break (float, int): break after no_improv_break iterations with
an improvement lower than no_improv_thr
@max_iter (int): always break after this number of iterations.
Set it to 0 to loop indefinitely.
@alpha, gamma, rho, sigma (floats): parameters of the algorithm
(see Wikipedia page for reference)
return: tuple (best parameter array, best score)
'''
# init
dim = len(x_start)
prev_best = f(x_start)
no_improv = 0
res = [[x_start, prev_best]]
for i in range(dim):
x = copy.copy(x_start)
x[i] = x[i] + step
score = f(x)
res.append([x, score])
# simplex iter
iters = 0
while 1:
# order
res.sort(key=lambda x: x[1])
best = res[0][1]
# break after max_iter
if max_iter and iters >= max_iter:
return res[0]
iters += 1
# break after no_improv_break iterations with no improvement
print('...best so far:', best)
if best < prev_best - no_improve_thr:
no_improv = 0
prev_best = best
else:
no_improv += 1
if no_improv >= no_improv_break:
return res[0]
# centroid
x0 = [0.] * dim
for tup in res[:-1]:
for i, c in enumerate(tup[0]):
x0[i] += c / (len(res)-1)
# reflection
xr = x0 + alpha*(x0 - res[-1][0])
rscore = f(xr)
if res[0][1] <= rscore < res[-2][1]:
del res[-1]
res.append([xr, rscore])
continue
# expansion
if rscore < res[0][1]:
xe = x0 + gamma*(x0 - res[-1][0])
escore = f(xe)
if escore < rscore:
del res[-1]
res.append([xe, escore])
continue
else:
del res[-1]
res.append([xr, rscore])
continue
# contraction
xc = x0 + rho*(x0 - res[-1][0])
cscore = f(xc)
if cscore < res[-1][1]:
del res[-1]
res.append([xc, cscore])
continue
# reduction
x1 = res[0][0]
nres = []
for tup in res:
redx = x1 + sigma*(tup[0] - x1)
score = f(redx)
nres.append([redx, score])
res = nres
```
```python
import math
import numpy as np
# def f(x):
# return math.sin(x[0]) * math.cos(x[1]) * (1. / (abs(x[2]) + 1))
#f(x,y) = x^2 - 4*x + y^2 - y - x*y;
# f = lambda x: x[0]**2- 4*x[0] + x[1]**2- x[1] - x[0]*x[1]
def f(x):
return x[0]**2- 4*x[0] + x[1]**2- x[1] - x[0]*x[1]
```
```python
nelder_mead(f, np.array([0., 0.]))
```
...best so far: -0.39
...best so far: -0.7275
...best so far: -1.393125
...best so far: -2.35265625
...best so far: -3.5309765625
...best so far: -5.22336914063
...best so far: -5.22336914063
...best so far: -5.4678515625
...best so far: -6.54388916016
...best so far: -6.54388916016
...best so far: -6.79
...best so far: -6.79
...best so far: -6.82644058228
...best so far: -6.89778457642
...best so far: -6.94423038483
...best so far: -6.98128607035
...best so far: -6.98128607035
...best so far: -6.99655470744
...best so far: -6.99655470744
...best so far: -6.99655470744
...best so far: -6.99880626416
...best so far: -6.99950646219
...best so far: -6.99950646219
...best so far: -6.99972928513
...best so far: -6.99991771801
...best so far: -6.99995652906
...best so far: -6.99995652906
...best so far: -6.99998705445
...best so far: -6.99999239547
...best so far: -6.99999584097
...best so far: -6.99999751173
...best so far: -6.99999837535
...best so far: -6.99999975505
...best so far: -6.99999975505
...best so far: -6.99999975505
...best so far: -6.99999994341
...best so far: -6.99999994341
...best so far: -6.99999995028
...best so far: -6.99999997999
...best so far: -6.99999999846
...best so far: -6.99999999846
[array([ 2.99996614, 2.00000911]), -6.9999999984625303]
```python
#https://codesachin.wordpress.com/2016/01/16/nelder-mead-optimization/
from IPython.display import YouTubeVideo
# Evaluates the function:
# f(x,y) = x^2 - 4*x + y^2 - y - x*y;
YouTubeVideo("HUqLxHfxWqU")
```
##### Scipy implementation
http://www.scipy-lectures.org/advanced/mathematical_optimization/
In scipy, scipy.optimize.fmin() implements the Nelder-Mead approach:
```python
from scipy import optimize
optimize.fmin(f, [2, 2])
```
Optimization terminated successfully.
Current function value: -7.000000
Iterations: 40
Function evaluations: 73
array([ 2.99998082, 2.00001514])
https://docs.scipy.org/doc/scipy/reference/optimize.minimize-neldermead.html
```python
optimize.minimize(f, [2, 2],method='Nelder-Mead')
```
final_simplex: (array([[ 2.99998082, 2.00001514],
[ 2.9999984 , 1.99993767],
[ 2.99993967, 1.99992527]]), array([-7., -7., -7.]))
fun: -6.9999999991122497
message: 'Optimization terminated successfully.'
nfev: 73
nit: 40
status: 0
success: True
x: array([ 2.99998082, 2.00001514])
```python
```
## 4.2 Newton-Raphson Method
banana " f = ('-100*(x(2)-x(1)^2)^2-(1-x(1))^2')", so-called because its contours resemble bananas.
```python
f = lambda x,y:(-100*(y-x**2)**2-(1-x)**2)
# def f(x,y):
# # the height function
# return (1 - x / 2 + x**5 + y**3) * np.exp(-x**2 -y**2)
n = 256
x = np.linspace(-0.25, 1.25, n)
y = np.linspace(-0.25, 1.25, n)
X,Y = np.meshgrid(x, y)
plt.figure()
x0,y0 = 0,0
# use plt.contourf to filling contours
# X, Y and value for (X,Y) point
plt.contourf(X, Y, f(X, Y), 38, alpha=.75,cmap='bone')# cmap=plt.cm.hot)
# use plt.contour to add contour lines
C = plt.contour(X, Y, f(X, Y), 38, colors='black', linewidth=.5)
plt.clabel(C, inline=True, fontsize=10)
# plt.xticks(())
# plt.yticks(())
# set dot styles
plt.scatter([x0, ], [y0, ], s=50, color='b')
plt.xlim(-0.25, 1.25)
plt.ylim(-0.25, 1.25)
```
The Newton-Raphson method for maximizing an objective function uses successive
quadratic approximations to the objective in the hope that the maxima of the approximants
will converge to the maximum of the objective. The Newton-Raphson
method is intimately related to the Newton method for solving rootfinding problems.
Indeed, the Newton-Raphson method is identical to applying Newton's method to
compute the root of the gradient of the objective function.
The second-order Taylor expansion of $f$ about the point $x_0$, evaluated at $x = x_0 + \epsilon$, is
$$f(x_0 + \epsilon) \approx f(x_0)+ f'(x_0) \epsilon +\frac{1}{2} f''(x_0) \epsilon^2 .$$
In the multivariate case, expanding about the current iterate $x^{(k)}$ gives the quadratic approximation
$$f(x) \approx f(x^{(k)})+ f'(x^{(k)}) (x-x^{(k)}) + \frac{1}{2}(x-x^{(k)})^T f''(x^{(k)}) (x-x^{(k)}) .$$
Solving the first order condition
$$f'(x^{(k)})+ f''(x^{(k)}) (x-x^{(k)}) = 0$$
yields the iteration rule
$$x^{(k+1)} \leftarrow x^{(k)} - [f''(x^{(k)})]^{-1} f'(x^{(k)}) $$
In theory, the Newton-Raphson method converges if $f$ is twice continuously differentiable
and if the initial value of x supplied by the analyst is sufficiently close to a
local maximum of $f$ at which the **Hessian $f''$** is negative definite. There is, however,
no generally practical formula for determining what sufficiently close is.
The Newton-Raphson method can be robust to the starting
value if $f$ is well behaved, for example, if f is **globally concave**. The Newton-Raphson
method, however, can be very sensitive to starting value if the function is not globally
concave. Also, in practice, the **Hessian $f''$** must be well-conditioned at the optimum,
otherwise rounding errors in the vicinity of the optimum can make it difficult to
compute a precise approximate solution.
The Newton-Raphson algorithm has numerous drawbacks.
First, the algorithm
requires computation of both the first and second derivatives of the objective function.
Second, the Newton-Raphson algorithm offers no **guarantee** that the objective function
value may be increased in the direction of the Newton step. Such a guarantee is
available only if the Hessian $f''(x^{(k)})$ is **negative definite**; otherwise, one may actually
move towards a saddle point of f (if the Hessian is indefinite) or even a minimum (if
Hessian is **positive definite**).
For this reason, the Newton-Raphson method is rarely
used in practice, and then only if the objective function is **globally concave**.
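As a small illustration (not part of the original text), the update rule above can be written out directly for the banana function using its analytic gradient and Hessian; whether the iteration converges depends strongly on the starting point, since the Hessian is not negative definite everywhere:
```python
import numpy as np

def banana_grad(x):
    # gradient of f(x, y) = -100 (y - x^2)^2 - (1 - x)^2
    return np.array([400 * (x[1] - x[0]**2) * x[0] + 2 * (1 - x[0]),
                     -200 * (x[1] - x[0]**2)])

def banana_hess(x):
    # Hessian of the same function
    return np.array([[400 * (x[1] - 3 * x[0]**2) - 2, 400 * x[0]],
                     [400 * x[0], -200.0]])

x = np.array([0.8, 0.6])   # assumed starting point, reasonably close to the maximum
for _ in range(20):
    x = x - np.linalg.solve(banana_hess(x), banana_grad(x))
print(x)                   # approaches the maximum at (1, 1) from this start
```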
## 4.3 Quasi-Newton Methods
Quasi-Newton methods employ a similar strategy to the Newton-Raphson method,
but **replace the Hessian of the objective function (or its inverse) with a negative
definite approximation, guaranteeing that function value can be increased in the direction
of the Newton step**.
The most efficient quasi-Newton algorithms employ an
approximation to the inverse Hessian, rather than the Hessian itself, in order to avoid
performing a linear solve, and employ updating rules that do **not require second
derivative information** to ease the burden of implementation and the cost of computation.
In analogy with the Newton-Raphson method, quasi-Newton methods use a search
direction of the form
$$d^{(k)} = -B^{(k)} f'(x^{(k)})$$
where $B^{(k)}$ is an approximation to the **inverse Hessian** of f at the kth iterate $x^{(k)}$.
The vector $d^{(k)}$ is called the **Newton or quasi-Newton step**.
The more robust quasi-Newton methods do not necessarily take the full Newton
step, but rather shorten it or lengthen it in order to obtain improvement in the
objective function. This is accomplished by performing a line-search in which one
seeks a **step length $s > 0$** that maximizes or nearly maximizes $f (x^{(k)} + sd^{(k)})$. Given
the computed step length $s^{(k)}$, one updates the iterate as follows:
$$x^{(k+1)}= x^{(k)} + s^{(k)} d^{(k)}$$
Quasi-Newton methods differ in how the inverse Hessian approximation $B^{(k)}$ is constructed
and updated. The simplest quasi-Newton method sets
$$B^{(k)} = - I $$,
where I is the identity matrix. This leads to a Newton step that is identical to the gradient of
the objective function at the current iterate:
$$d^{(k)} = f'(x^{(k)})$$
The choice of gradient as a step direction is intuitively appealing because the gradient
always points in the direction which, to a first order, promises the greatest increase in
f. For this reason, this quasi-Newton method is called the method of *steepest ascent*.
The steepest ascent method is simple to implement, but is numerically *less efficient*
in practice than competing quasi-Newton methods that *incorporate* information regarding
the **curvature of the objective function**.
The **most widely-used** quasi-Newton methods that employ **curvature information**
produce a sequence of inverse Hessian estimates that satisfy two conditions.
**First,**
given that
$$d^{(k)} \approx \left[f''(x^{(k)})\right]^{-1}\left( f'(x^{(k)}+ d^{(k)} ) - f'(x^{(k)}) \right),$$
the updated **inverse Hessian estimate** $B^{(k+1)}$ is required to satisfy the so-called **quasi-Newton condition:**
$$d^{(k)} = B^{(k+1)}\left( f'(x^{(k)}+ d^{(k)} ) - f'(x^{(k)}) \right)$$
**Second,** the inverse Hessian estimate $B^{(k+1)}$ is required to be both **symmetric and
negative-definite**, as must be true of the inverse Hessian at a local maximum. The
negative definiteness of the Hessian estimate assures that the objective function value
can be increased in the **direction of the Newton step**.
Two methods that satisfy the quasi-Newton and negative definiteness conditions
are the **Davidson-Fletcher-Powell (DFP)** and **Broyden-Fletcher-Goldfarb-Shano (BFGS)**
updating methods. The **DFP** method uses the updating scheme
$$B \leftarrow B + \frac{d d^T}{d^T u} - \frac{B u u^T B}{u^T B u} $$
where
$$d = x^{(k+1)} - x^{(k)}$$
and
$$u = f'(x^{(k+1)}) - f'(x^{(k)})$$
The **BFGS** method uses the update scheme
$$B \leftarrow B + \frac{1}{d^T u}\left( w d^T + d w^T - \frac{w^T u}{d^T u}\, d d^T \right) $$
where
$$w = d - B u$$
The BFGS algorithm is generally considered superior to DFP, although there
are problems for which DFP outperforms BFGS. However, except for the updating
formulae, the two methods are identical, so it is easy to implement both and give
users the choice.
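The two updates can also be written compactly in code. The following sketch (an illustration, assuming `B` is the current inverse Hessian approximation, `d` the step just taken and `u` the corresponding change in the gradient, as defined above) mirrors the formulas and the implementation used later in this chapter:
```python
import numpy as np

def dfp_update(B, d, u):
    # B <- B + d d'/(d'u) - (B u)(B u)'/(u' B u)
    v = B @ u
    return B + np.outer(d, d) / np.inner(d, u) - np.outer(v, v) / np.inner(u, v)

def bfgs_update(B, d, u):
    # B <- B + [w d' + d w' - (w'u/(d'u)) d d'] / (d'u),  with  w = d - B u
    du = np.inner(d, u)
    w = d - B @ u
    wd = np.outer(w, d)
    return B + (wd + wd.T - (np.inner(w, u) / du) * np.outer(d, d)) / du
```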
```python
step_methods = ['none','bhhh','bt','golden']
search_methods = ['steepest','dfp','bfgs']
```
```python
# step_methods = {'none': _step_none,
# 'bhhh': _step_bhhh,
# 'bt': _step_bt,
# 'golden': _step_golden
# }
# search_methods = {'steepest': _search_steepest,
# 'bfgs': _search_bfgs,
# 'dfp': _search_dfp
# }
```
```python
# def _search_bfgs(f, ff=None, u=None, d=None):
# ud = np.inner(u, d)
# w = d - B.dot(u)
# wd = np.outer(w, d)
# return B+ ((wd + wd.T) - (np.inner(u, w) * np.outer(d, d)) / ud) / ud
# # self.reset = False
# def _search_dfp(self, ff=None, u=None, d=None):
# ud = np.inner(u, d)
# v = B.dot(u)
# return B+ np.outer(d, d) / ud - np.outer(v, v) / np.inner(u, v)
# #self.reset = False
# def _search_steepest(self, ff, u=None, d=None):
# return -np.identity(k) / np.maximum(abs(fx0), 1)
```
```python
```
```python
# this function optstep is not covered in textbook. Only supporing implementation of qnewton.
errcode = False
def optstep(stepmeth,func, x0, fx0, g0, d, maxstep = 1000):
# take multiple output of function
    A = func(x0)  # probe the objective at the supplied point; it may also return a gradient
_is_there_jacobian = (type(A) is tuple) and (len(A) == 2)
if _is_there_jacobian:
#print('Jacobian was provided by user!')
f = lambda z: func(z)[0]
# several step search method
def _step_none(f, x0, fx0, d,maxstep):
fx = f(x0 + d)
        if fx > fx0:  # accept the full Newton step only if it improves the objective
            s = 1
            errcode = False
            return s, fx
else:
return _step_golden(f, x0, fx0, d,maxstep)
def _step_bhhh(f, x0, fx0, g0, d,maxstep):
# Intializations
delta = 0.0001
dg = -np.inner(g0, d) # directional derivative
tol1 = dg * delta
tol0 = dg * (1 - delta)
s, ds = 1, 1
errcode = False
# Bracket the cone
for it in range(maxstep):
x = x0 + s * d
fs = f(x)
temp = (fx0 - fs) / s
if temp < tol0:
ds *= 2
s += ds
else:
break
if (tol0 <= temp) and (temp <=tol1):
return s, fs
ds /= 2
s -= ds
it0 = it + 1
# Then use bisection to get inside it
for it in range(it0, maxstep):
ds /= 2
x = x0 + s * d
fs = f(x)
temp = (fx0 - fs) / s
if temp > tol1:
s -= ds
elif temp < tol0:
s += ds
else:
return s, fs
# If it has not returned yet, call _step_golden!
return _step_golden(f, x0, fx0, d, maxstep)
def _step_bt(f, x0, fx0, g0, d, maxstep):
delta = 1e-4 # Defines cone of convergence; must be on (0,1/2)
ub = 0.5 # Upper bound on acceptable reduction in s.
lb = 0.1 # Lower bound on acceptable reduction in s.
errcode = 0
dg = -np.inner(d, g0) # directional derivative
tol1 = delta * dg
tol0 = (1 - delta) * dg
# full step
s = 1
fs = f(x0+d)
if (fx0 - fs) <= tol1:
return s, fs
# quadratic approximation
s2, fs2 = s, fs
s = -0.5 * dg / (-fs + fx0 - dg)
s = max(s, lb)
fs = f(x0 + s * d)
temp = (-fs + fx0) / s
if (tol0 <= temp) and (temp <= tol1):
return s, fs
# cubic approximation
for it in range(3, maxstep):
temp = (s - s2) * np.array([s * s, s2 * s2])
temp = np.array([- fs + fx0 - dg * s, -fs2 + fx0 - dg * s2]) / temp
a = temp[0] - temp[1]
b = s * temp[1] - s2 * temp[0]
s2 = s
fs2 = fs
if np.all(a == 0): # quadratic fits exactly
s = -0.5 * dg / b
else:
disc = b * b - 3 * a * dg
if np.all(disc < 0):
errcode = 2
return s, fs # complex root
s = (np.sqrt(disc) - b) / (3 * a)
s = np.maximum(np.minimum(s, ub * s2), lb * s2) # ensures acceptable step size; cp(f, lb, up)
fs = f(x0 + s * d)
temp = (-fs + fx0) / s
if np.all(tol0 <= temp) and np.all(temp <= tol1):
return s, fs
# If it has not returned yet, call _step_golden instead
return _step_golden(f, x0, fx0, d,maxstep)
def _step_golden(f, x0, fx0, d,maxstep):
alpha1 = (3 - np.sqrt(5)) / 2
alpha2 = (np.sqrt(5) - 1) / 2
tol = 1.e-4
tol *= alpha1*alpha2
s = 1
errcode = True
niter = 0
s0 = 0
it = 0
# Find a bracketing interval
fs = f(x0 + d)
if fx0 >= fs:
lenght = alpha1
else:
for it in range(maxstep):
s *= 2
fl = fs
fs = f(x0 + s*d)
if fs <=fl:
lenght = alpha1 * (s - s0)
break
else:
s0 /= 2
if (it + 1) >= maxstep:
s /= 2
fs = fl
return s, fs
xl = x0 + (s + lenght) * d
xs = x0 + (s - lenght) * d
s -= lenght
lenght *= alpha2 # lenght now measures relative distance between xl and xs
fs = f(xs)
fl = f(xl)
# Golden search to find minimum
while it < maxstep:
it += 1
if fs < fl:
s -= lenght
lenght *= alpha2
xs = xl
xl -= lenght * d
fs = fl
fl = f(xl)
else:
lenght *= alpha2
s += lenght
xl = xs
xs += lenght * d
fl = fs
fs = f(xs)
if lenght < tol:
errcode = False
break
if fl > fs:
fs = fl
s -= lenght
return s, fs
# return resulted s and fx
    if stepmeth is None or stepmeth == "none":
return _step_none(f, x0, fx0, d,maxstep)
elif stepmeth == "bhhh":
return _step_bhhh(f, x0, fx0, g0, d,maxstep)
elif stepmeth == "bt":
return _step_bt(f, x0, fx0, g0, d,maxstep)
elif stepmeth == "golden":
return _step_golden(f, x0, fx0, d,maxstep)
```
```python
def f(x):
y = (-100*(x[1]-x[0]**2)**2-(1-x[0])**2)
dy = np.array([2*(1-x[0])+400*(x[1]-x[0]**2)*x[0], -200*(x[1]-x[0]**2)])
return y,dy
```
```python
s, fx = optstep("golden" ,f, x, fx0, g0, d, maxstep)
```
```python
s,fx
```
The script assumes that the user
has written a Python routine f that evaluates the function at an arbitrary point and
that the user has specified a starting point x, an initial guess for the inverse Hessian
B, a convergence tolerance tol, and a limit on the number of iterations maxit. The
script uses an auxiliary algorithm optstep to determine the step length (discussed
in the next section). The algorithm also offers the user a choice on how to select the
search direction, searchmeth (1-steepest ascent, 2-DFP, 3-BFGS).
https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html
```python
```
```python
# if self.x0 is None or self.x0[0] is None:
# raise ValueError('Initial value is required to solve a OP, none provided!')
```
```python
x_list = list()  # sequence of solutions of x for plotting
x0 = np.array([1.,0.]) # initial value for x
maxit, maxstep, tol,eps0, eps1,all_x = 10000, 10000, 1/10000,1.0,1.e-12 ,False # keyword arguments
x_list = [x0] # first x
searchmeth =2 # pick a search method.
stepmeth = "bt"
```
```python
x = x0 # initialize
k = x.shape[0] # number of variables
eps = np.spacing(1) # machine epsilon
A = f(x) # tuble of multiple outputs from object function
_is_there_jacobian = (type(A) is tuple) and (len(A) == 2)
# get first fx and g. object value and gradient/hessian value.
if _is_there_jacobian:
print('Jacobian was provided by user!')
fx0,g0 = f(x)
else:
print('Jacobian was not provided by user!')
fx0 = f(x)
try:
g0 = jacobian(f,x) # customized jacobian function
except NameError:
print("jacobian function Not in scope!\n Using identity matrix as jacobian matrix")
g0 = np.identity(k)
else:
print("jacobian function In scope!")
B = None # inversed Hessian matrix
if B is None:
B = -np.identity(k) / np.maximum(abs(fx0), 1) # using identity matrix as Hessian
print("Hessian is not provide and reset as normailized identity matrix! so steepest ascent") # steepest ascent
```
Jacobian was provided by user!
Hessian is not provide and reset as normailized identity matrix! so steepest ascent
```python
import warnings
```
```python
if np.linalg.norm(g0) < eps: # similar to np.all(g0<eps)
#break #return x
print("g0 is less than eps")
if np.all(g0 < eps): # check conditions
#break #return x
print("g0 is less than eps")
print("Solving nonlinear equations by using {} search method and {} step method".format(search_methods[searchmeth-1].capitalize(), stepmeth))
print("Start iteration......")
for it in range(maxit):
d = -np.dot(B, g0) # search direction
if (np.inner(d, g0) / (np.inner(d, d))) < eps1: # must go uphill
B = -np.identity(k) / np.maximum(abs(fx0), 1) # otherwise use
d = g0 / np.maximum(np.abs(fx0), 1) # steepest ascent
s, fx = optstep("bt" ,f, x, fx0, g0, d, maxstep)
if fx <= fx0:
warnings.warn('Iterations stuck in qnewton')
# break #x # return x
# reset Hessian and d.
B = -np.identity(k) / np.maximum(abs(fx0), 1) # otherwise use
d = g0.T / np.maximum(abs(fx0), 1) # steepest ascent
s, fx = optstep("bt" ,f, x, fx0, g0, d, maxstep)
if errcode:
warnings.warn('Cannot find suitable step in qnewton')
# return x
# reset to 1 and fx0
s, fx = 1, fx0
d *= s
x = x + d
x_list.append(x.copy())
if np.any(np.isnan(x) | np.isinf(x)):
raise ValueError('NaNs or Infs encountered')
# update fx and g
if _is_there_jacobian:
#print('Jacobian was provided by user!')
fx,g = f(x)
else:
print('Jacobian was not provided by user!')
fx = f(x)
try:
g = jacobian(f,x)
except NameError:
print("jacobian function Not in scope!\n Using identity matrix as jacobian matrix")
g = np.identity(k)
else:
print("jacobian function In scope!")
# Test convergence using Marquardt's criteria and gradient test
if ((fx - fx0) / (abs(fx) + eps0) < tol and
np.all(np.abs(d) / (np.abs(x) + eps0) < tol)) or\
np.all(np.abs(g) < eps):
print("Meet the tol. x: ", x)
break
# #return x
# if np.all( np.abs(d)/(np.abs(x) + eps0)< tol) or np.all(np.abs(g) < eps):
# print("Meet the tol. x: ", x)
# break
# Update inverse Hessian
u = g - g0 # change in Jacobian
ud = np.inner(u, d)
#print("Please specify one search method: 1:steepest ascen;2: DFP;3:BFGS")
if np.all(np.abs(ud) < eps):
B = -np.identity(k) / np.maximum(abs(fx0), 1) # otherwise use
else:
if searchmeth == 1 and np.abs(ud) < eps: # steepest ascent
B = -np.identity(k) / np.maximum(abs(fx), 1)
elif searchmeth == 2: # DFP
v = B.dot(u)
B += np.outer(d, d) / ud - np.outer(v, v) / np.inner(u, v)
elif searchmeth == 3: # BFGS
w = d - B.dot(u)
wd = np.outer(w, d)
B += ((wd + wd.T) - (np.inner(u, w) * np.outer(d, d)) / ud) / ud
# else:
# print("Please specify one search method: 1:steepest ascen;2: DFP;3:BFGS")
# Update iteration
fx0 = fx
g0 = g
print("finish {}th iteration...".format(it))
#print("x list: " + for str(x) in x_list)
if it > maxit:
warnings.warn('Maximum iterations exceeded in qnewton')
```
Solving nonlinear equations by using Dfp search method and bt step method
Start iteration......
finish 0th iteration...
finish 1th iteration...
finish 2th iteration...
finish 3th iteration...
finish 4th iteration...
finish 5th iteration...
finish 6th iteration...
finish 7th iteration...
finish 8th iteration...
finish 9th iteration...
finish 10th iteration...
finish 11th iteration...
finish 12th iteration...
finish 13th iteration...
finish 14th iteration...
finish 15th iteration...
finish 16th iteration...
finish 17th iteration...
finish 18th iteration...
finish 19th iteration...
finish 20th iteration...
finish 21th iteration...
finish 22th iteration...
finish 23th iteration...
finish 24th iteration...
finish 25th iteration...
finish 26th iteration...
finish 27th iteration...
finish 28th iteration...
finish 29th iteration...
finish 30th iteration...
Meet the tol. x: [ 0.99999993 0.99999986]
```python
x_list
```
[array([ 1., 0.]),
array([ 0.41314554, 0.29342723]),
array([ 0.39580618, 0.1727269 ]),
array([ 0.39675149, 0.15838385]),
array([ 0.40058059, 0.16006693]),
array([ 0.44685739, 0.18183245]),
array([ 0.43732885, 0.17841517]),
array([ 0.43639767, 0.17942195]),
array([ 0.43473828, 0.18607608]),
array([ 0.43680863, 0.18965204]),
array([ 0.4480958 , 0.20703251]),
array([ 0.45412053, 0.21461039]),
array([ 0.47763513, 0.24324066]),
array([ 0.49988381, 0.26943615]),
array([ 0.60901205, 0.39721893]),
array([ 0.62360754, 0.38653846]),
array([ 0.65591217, 0.42617689]),
array([ 0.70613869, 0.49067417]),
array([ 0.79844428, 0.61783675]),
array([ 0.77015994, 0.58500286]),
array([ 0.78330409, 0.60878183]),
array([ 0.83733992, 0.70089268]),
array([ 0.92482331, 0.84647259]),
array([ 0.90279767, 0.81240971]),
array([ 0.91516948, 0.83493489]),
array([ 0.9689266 , 0.93422914]),
array([ 0.9676163 , 0.93401163]),
array([ 0.97753087, 0.95539111]),
array([ 0.98854753, 0.97740207]),
array([ 0.99970154, 0.99931034]),
array([ 0.9998261, 0.9996212]),
array([ 1.00000204, 1.00000394]),
array([ 0.99999993, 0.99999986])]
```python
```
```python
def myqnewton(f, x0, B, searchmeth = 3,stepmeth = "bt" ,maxit = 10000, maxstep = 10000,tol = 1/100000,\
eps = np.spacing(1),eps0 =1.0, eps1 = 1.e-12, all_x = False):
'''
maxit, maxstep, tol,eps0, eps1 = 10000, 10000, 1/10000,1.0,1.e-12
f: object function and jacobian
x0: initial value
all_x: if we collect x value for plotting
'''
x = x0
if all_x:
x_list = [x0]
A = f(x)
_is_there_jacobian = (type(A) is tuple) and (len(A) == 2)
if _is_there_jacobian:
print('Jacobian was provided by user!')
fx0,g0 = f(x)
else:
print('Jacobian was not provided by user!')
fx0 = f(x)
try:
g0 = jacobian(f,x)
except NameError:
print("jacobian function Not in scope!\n Using identity matrix as jacobian matrix")
g0 = np.identity(k)
else:
print("jacobian function In scope!")
if np.all(np.abs(g0) < eps): # similar to np.all(g0<eps)
print("abs(g0)< eps...")
return x
print("Solving nonlinear equations by using {} search method and {} step method".format(search_methods[searchmeth-1].capitalize(), stepmeth))
print("Start iteration......")
for it in range(maxit):
d = -np.dot(B, g0) # search direction, initial d
# https://github.com/randall-romero/CompEcon-python/blob/master/compecon/optimize.py
if (np.inner(d, g0) / (np.inner(d, d))) < eps1: # must go uphill
B = -np.identity(k) / np.maximum(abs(fx0), 1) # otherwise use
d = g0 / np.maximum(np.abs(fx0), 1) # steepest ascent
# optimize search step length
s, fx = optstep(stepmeth ,f, x, fx0, g0, d, maxstep)
if fx <= fx0:
warnings.warn('Iterations stuck in qnewton')
#return x
# reset Hessian and d.
B = -np.identity(k) / np.maximum(abs(fx0), 1) # otherwise use
d = g0.T / np.maximum(abs(fx0), 1) # steepest ascent
s, fx = optstep("bt" ,f, x, fx0, g0, d, maxstep)
if errcode:
warnings.warn('Cannot find suitable step in qnewton')
# return x
# reset to 1 and fx0
s, fx = 1, fx0
# update d and x
d *= s
x = x + d
# keep record of x sequence in list
if all_x:
x_list.append(x.copy())
if np.any(np.isnan(x) | np.isinf(x)):
raise ValueError('NaNs or Infs encountered')
# update fx and g again
if _is_there_jacobian:
#print('Jacobian was provided by user!')
fx,g = f(x)
else:
print('Jacobian was not provided by user!')
fx = f(x)
try:
g = jacobian(f,x)
except NameError:
print("jacobian function Not in scope!\n Using identity matrix as jacobian matrix")
g = np.identity(k)
else:
print("jacobian function In scope!")
# Test convergence using Marquardt's criteria and gradient test
if ((fx - fx0) / (abs(fx) + eps0) < tol and
np.all(np.abs(d) / (np.abs(x) + eps0) < tol)) or\
np.all(np.abs(g) < eps):
print("Meet the tol. x: ", x)
#break
if all_x:
return x, x_list
else:
return x
# Update inverse Hessian
u = g - g0 # change in Jacobian
ud = np.inner(u, d)
# pick a search method
#print("Please specify one search method: 1:steepest ascen;2: DFP;3:BFGS")
if np.all(np.abs(ud) < eps):
B = -np.identity(k) / np.maximum(abs(fx0), 1) # otherwise use
else:
if searchmeth == 1 and np.abs(ud) < eps: # steepest ascent
B = -np.identity(k) / np.maximum(abs(fx), 1)
elif searchmeth == 2: # DFP
v = B.dot(u)
B += np.outer(d, d) / ud - np.outer(v, v) / np.inner(u, v)
elif searchmeth == 3: # BFGS
w = d - B.dot(u)
wd = np.outer(w, d)
B += ((wd + wd.T) - (np.inner(u, w) * np.outer(d, d)) / ud) / ud
# Update iteration
fx0 = fx
g0 = g
print("finish {}th iteration...".format(it))
# end of iteration if exceed the maxit
if it >= maxit:
warnings.warn('Maximum iterations exceeded in qnewton')
return x
```
```python
myqnewton(f, x0, B, searchmeth = 3,stepmeth = "bt" ,maxit = 10000, maxstep = 10000,tol = 1/100000,\
eps = np.spacing(1),eps0 =1.0, eps1 = 1.e-12, all_x = False)
```
Jacobian was provided by user!
Solving nonlinear equations by using Bfgs search method and bt step method
Start iteration......
finish 0th iteration...
finish 1th iteration...
finish 2th iteration...
finish 3th iteration...
finish 4th iteration...
finish 5th iteration...
finish 6th iteration...
finish 7th iteration...
finish 8th iteration...
finish 9th iteration...
finish 10th iteration...
Meet the tol. x: [ 1.00000061 1.00000117]
array([ 1.00000061, 1.00000117])
```python
myqnewton(f, x0, B, searchmeth = 2,stepmeth = "bt" ,maxit = 10000, maxstep = 10000,tol = 1/100000,\
eps = np.spacing(1),eps0 =1.0, eps1 = 1.e-12, all_x = False)
```
Jacobian was provided by user!
Solving nonlinear equations by using Dfp search method and bt step method
Start iteration......
finish 0th iteration...
finish 1th iteration...
finish 2th iteration...
finish 3th iteration...
finish 4th iteration...
finish 5th iteration...
finish 6th iteration...
finish 7th iteration...
finish 8th iteration...
finish 9th iteration...
finish 10th iteration...
finish 11th iteration...
finish 12th iteration...
finish 13th iteration...
finish 14th iteration...
finish 15th iteration...
finish 16th iteration...
finish 17th iteration...
finish 18th iteration...
finish 19th iteration...
finish 20th iteration...
finish 21th iteration...
finish 22th iteration...
finish 23th iteration...
finish 24th iteration...
finish 25th iteration...
finish 26th iteration...
Meet the tol. x: [ 0.99999807 0.99999585]
array([ 0.99999807, 0.99999585])
```python
myqnewton(f, x0, B, searchmeth = 2,stepmeth = "bt" ,maxit = 10000, maxstep = 10000,tol = 1/100000,\
eps = np.spacing(1),eps0 =1.0, eps1 = 1.e-12, all_x = True)
```
Jacobian was provided by user!
Solving nonlinear equations by using Dfp search method and bt step method
Start iteration......
finish 0th iteration...
finish 1th iteration...
finish 2th iteration...
finish 3th iteration...
finish 4th iteration...
finish 5th iteration...
finish 6th iteration...
finish 7th iteration...
finish 8th iteration...
finish 9th iteration...
finish 10th iteration...
finish 11th iteration...
finish 12th iteration...
finish 13th iteration...
finish 14th iteration...
finish 15th iteration...
finish 16th iteration...
finish 17th iteration...
finish 18th iteration...
finish 19th iteration...
finish 20th iteration...
finish 21th iteration...
finish 22th iteration...
finish 23th iteration...
finish 24th iteration...
finish 25th iteration...
finish 26th iteration...
finish 27th iteration...
finish 28th iteration...
finish 29th iteration...
finish 30th iteration...
Meet the tol. x: [ 1. 1.]
(array([ 1., 1.]),
[array([ 1., 0.]),
array([ 0.62610675, -0.45538251]),
array([ 0.80942433, 0.73890693]),
array([ 0.80926495, 0.65836064]),
array([ 0.81029476, 0.65658064]),
array([ 0.81149645, 0.65795954]),
array([ 0.81937217, 0.66750197]),
array([ 0.82112266, 0.67005007]),
array([ 0.84119489, 0.69949451]),
array([ 0.83921082, 0.69679031]),
array([ 0.83394945, 0.68989746]),
array([ 0.83252062, 0.68833605]),
array([ 0.83030783, 0.68639339]),
array([ 0.83004821, 0.68680305]),
array([ 0.83034079, 0.6889422 ]),
array([ 0.83128484, 0.69133142]),
array([ 0.83394548, 0.69736556]),
array([ 0.83614526, 0.70186326]),
array([ 0.84123512, 0.71195744]),
array([ 0.84518495, 0.71952602]),
array([ 0.85314137, 0.73456449]),
array([ 0.86088506, 0.74901455]),
array([ 0.87586691, 0.77680972]),
array([ 0.91582748, 0.85079014]),
array([ 0.92108449, 0.84802548]),
array([ 0.95707049, 0.91414103]),
array([ 0.98409515, 0.96593609]),
array([ 0.98863641, 0.97619571]),
array([ 0.9968118, 0.993516 ]),
array([ 0.99961717, 0.99919731]),
array([ 0.99997711, 0.99995547]),
array([ 1.00000171, 1.00000325]),
array([ 1., 1.])])
```python
myqnewton(f, x0, B, searchmeth =1,stepmeth = "bt" ,maxit = 10000, maxstep = 10000,tol = 1/100000,\
eps = np.spacing(1),eps0 =1.0, eps1 = 1.e-12, all_x = False)
```
Jacobian was provided by user!
Solving nonlinear equations by using Steepest search method and bt step method
Start iteration......
finish 0th iteration...
finish 1th iteration...
finish 2th iteration...
finish 3th iteration...
finish 4th iteration...
finish 5th iteration...
finish 6th iteration...
finish 7th iteration...
finish 8th iteration...
finish 9th iteration...
finish 10th iteration...
finish 11th iteration...
finish 12th iteration...
finish 13th iteration...
finish 14th iteration...
finish 15th iteration...
finish 16th iteration...
finish 17th iteration...
finish 18th iteration...
finish 19th iteration...
finish 20th iteration...
finish 21th iteration...
finish 22th iteration...
finish 23th iteration...
finish 24th iteration...
finish 25th iteration...
finish 26th iteration...
finish 27th iteration...
finish 28th iteration...
finish 29th iteration...
finish 30th iteration...
finish 31th iteration...
finish 32th iteration...
finish 33th iteration...
finish 34th iteration...
finish 35th iteration...
finish 36th iteration...
finish 37th iteration...
finish 38th iteration...
finish 39th iteration...
finish 40th iteration...
finish 41th iteration...
finish 42th iteration...
finish 43th iteration...
finish 44th iteration...
finish 45th iteration...
finish 46th iteration...
finish 47th iteration...
finish 48th iteration...
finish 49th iteration...
finish 50th iteration...
finish 51th iteration...
finish 52th iteration...
finish 53th iteration...
finish 54th iteration...
finish 55th iteration...
finish 56th iteration...
finish 57th iteration...
finish 58th iteration...
finish 59th iteration...
finish 60th iteration...
finish 61th iteration...
finish 62th iteration...
finish 63th iteration...
finish 64th iteration...
finish 65th iteration...
finish 66th iteration...
finish 67th iteration...
finish 68th iteration...
finish 69th iteration...
finish 70th iteration...
finish 71th iteration...
finish 72th iteration...
finish 73th iteration...
finish 74th iteration...
Meet the tol. x: [ 1.00000592 1.00001275]
array([ 1.00000592, 1.00001275])
## 4.4 Line Search Methods
Just as was the case with rootfinding problems, it is not always best to take a full
Newton step. In fact, it may be better to either stop short or move past the Newton
step. If we view the Newton step as defining a *search direction*, performing a one-dimensional
search in that direction will generally produce improved results.
http://reference.wolfram.com/language/tutorial/UnconstrainedOptimizationLineSearchMethods.html
https://en.wikipedia.org/wiki/Line_search
A number of different line
search methods are used in practice, including the golden search method.
The **golden
search** algorithm is very reliable, but computationally inefficient. Two alternative
schemes are typically used in practice to perform line searches.
The first, known as
the **Armijo search**, is similar to the backstepping algorithm used in rootfinding and
complementarity problems. The idea is to find the minimum power $j$ such that the shortened step $s = \rho^j$, with $\rho \in (0,1)$, still yields a sufficient improvement in the objective, i.e. $f(x + s d) - f(x) \geq \delta\, s\, f'(x)^T d$ for a prescribed $\delta \in (0, \tfrac{1}{2})$ (the backtracking routine below implements the minimization analogue of this condition):
```python
# https://github.com/smwade/ACME-2/blob/master/line_search/solutions.py
def backtracking(f, slope, x, p, a=1, rho=.9, c=10e-4):
"""Perform a backtracking line search to satisfy the Armijo Conditions.
Parameters:
f (function): the twice-differentiable objective function.
slope (float): The value of grad(f)^T p.
x (ndarray of shape (n,)): The current iterate.
p (ndarray of shape (n,)): The current search direction.
a (float): The intial step length. (set to 1 in Newton and
quasi-Newton methods)
rho (float): A number in (0,1).
c (float): A number in (0,1).
Returns:
(float) The computed step size satisfying the Armijo condition.
"""
while f(x + a*p) > f(x) + c * a * slope:
a = float(rho * a)
return a
```
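As a usage sketch (with an assumed quadratic objective, not taken from the referenced repository), the routine can be called as follows:
```python
import numpy as np

f_quad = lambda x: np.dot(x, x)     # minimize f(x) = x'x
x_cur = np.array([2.0, 2.0])
grad = 2 * x_cur                    # gradient of x'x at x_cur
p_dir = -grad                       # steepest-descent direction
slope = np.dot(grad, p_dir)         # grad(f)^T p, negative for a descent direction
step = backtracking(f_quad, slope, x_cur, p_dir)
print(step, f_quad(x_cur + step * p_dir))
```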
Another widely-used approach, known as **Goldstein search**, is to find any value of
$s$ that satisfies $\delta \leq \dfrac{f(x + s d) - f(x)}{s\, f'(x)^T d} \leq 1 - \delta$ for a prescribed $\delta \in (0, \tfrac{1}{2})$; such steps are said to lie in a "cone" of acceptable values.
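A minimal sketch of this acceptance test (an illustration, not from the text; `f` is the objective, `g` its gradient function, `x` the current point and `d` the search direction) is:
```python
import numpy as np

def in_goldstein_cone(f, g, x, d, s, delta=1e-4):
    # True if the step length s lies in the Goldstein cone for a maximization problem
    dg = np.inner(g(x), d)                     # directional derivative f'(x)^T d
    ratio = (f(x + s * d) - f(x)) / (s * dg)   # average improvement per unit of step
    return delta <= ratio <= 1.0 - delta
```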
A simple strategy for locating an acceptable point is to first find a point in or
above the cone using step doubling (doubling the value of s at each iteration). If a
point above the cone is found first, we have a bracket within which points in the cone
must lie. We can then narrow the bracket using the golden search method. We call this the bhhhstep approach.
Another approach, stepbt, checks to see if $s = 1$ is in the cone and, if so, maximizes
a quadratic approximation to the objective function in the Newton direction
constructed from knowledge of $f(x)$, $f'(x)d$ and $f(x + d)$. If the computed step $s$ is
acceptable, it is taken. Otherwise, the algorithm iterates until an acceptable step is
found using a cubic approximation to the objective function in the Newton direction
constructed from knowledge of $f(x)$, $f'(x)d$, $f(x + s^{(j-1)}d)$ and $f(x + s^{(j)}d)$. stepbt
is fast and generally gives good results. It is recommended as the default line search
procedure for general maximization algorithms.
## 4.5 Special Cases
Two special cases arise often enough in economic practice (especially in econometrics)
to warrant additional discussion. Nonlinear least squares and the maximum likelihood
problems have objective functions with special structures that give rise to their
own special quasi-Newton methods. The special methods differ from other Newton
and quasi-Newton methods only in the choice of the matrix used to approximate the
Hessian. Because these problems generally arise in the context of statistical applications,
we alter our notation to conform with the conventions for those applications.
The optimization takes place with respect to a k-dimensional parameter vector $\theta$ and
n will refer to the number of observations.
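Although the formulas are not spelled out above, the standard choices are the Gauss–Newton approximation for nonlinear least squares and the outer product of the score contributions (the BHHH approximation) for maximum likelihood. A minimal NumPy sketch, stated here as an assumption about the matrices involved ($J$ is the $n \times k$ Jacobian of the residuals, $S$ the $n \times k$ matrix of score contributions), is:
```python
import numpy as np

def gauss_newton_matrix(J):
    # Gauss-Newton approximation to the Hessian of the sum of squared residuals
    return J.T @ J

def bhhh_matrix(S):
    # Sum of outer products of the score contributions; used (with a sign flip
    # for a maximization convention) in place of the Hessian of the log-likelihood
    return S.T @ S
```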
## Reference
- Optimization and Solving Systems of Equations in Julia
https://github.com/pkofod/JC2017
https://www.youtube.com/watch?v=E_UlaGoObTw
|
57dda1ed1c6ee73e91a6943256e19bc897ffcec4
| 194,831 |
ipynb
|
Jupyter Notebook
|
Chapter04.ipynb
|
lnsongxf/Applied_Computational_Economics_and_Finance
|
f14661bfbfa711d49539bda290d4be5a25087185
|
[
"MIT"
] | 19 |
2018-05-09T08:17:44.000Z
|
2021-12-26T07:02:17.000Z
|
Chapter04.ipynb
|
lnsongxf/Applied_Computational_Economics_and_Finance
|
f14661bfbfa711d49539bda290d4be5a25087185
|
[
"MIT"
] | null | null | null |
Chapter04.ipynb
|
lnsongxf/Applied_Computational_Economics_and_Finance
|
f14661bfbfa711d49539bda290d4be5a25087185
|
[
"MIT"
] | 11 |
2017-12-15T13:39:35.000Z
|
2021-05-15T15:06:02.000Z
| 48.023416 | 39,295 | 0.602753 | true | 15,178 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.90053 | 0.859664 | 0.774153 |
__label__eng_Latn
| 0.962261 | 0.636949 |
# Exercise 4: Neural Networks Learning
In this exercise, you will implement the backpropagation algorithm for neural networks and apply it to the task of hand-written digit recognition.
## Our dataset
We are given a data set in `ex4data1.mat` that contains 5000 training examples of handwritten digits. (This is exactly the same data set as in last week's exercise 3).
Each training example is a 20 pixel by 20 pixel grayscale image of the digit. Each pixel is represented by a floating point number indicating the grayscale intensity at that location. The 20 by 20 grid of pixels is “unrolled” into a 400-dimensional vector. Each of these training examples becomes a single row in our data matrix X. This gives us a 5000 by 400 matrix X where every row is a training example for a handwritten digit image.
The second part of the training set is a 5000-dimensional vector y that contains labels for the training set. Like in the last exercise, a “0” digit is labeled as “10”, while the digits “1” to “9” are labeled as “1” to “9” in their natural order.
```octave
% Load saved matrices for X and y from file
load('data/ex4data1.mat');
% The matrices X and y will now be in our Octave environment
```
Our neural network is shown in the picture below. It has 3 layers – an input layer, a hidden layer and an output layer. Recall that our inputs are pixel values of digit images. Since the images are of size 20 × 20, this gives us 400 input layer units (not counting the extra bias unit which always outputs +1).
We will begin by visualizing a subset of the training set. We reuse the `displayData` function from the last exercise.
```octave
m = size(X, 1);
% Randomly select 100 data points
rand_indices = randperm(m);
sel = X(rand_indices(1:100), :);
% and display them
displayData(sel);
```
In order to get started, we have been provided with a set of network parameters $ (\Theta^{(1)},\Theta^{(2)}) $ that have been previously trained. These are stored in `ex4weights.mat`. The parameters have dimensions that are sized for a neural network with 25 units in the second layer and 10 output units (corresponding to the 10 digit classes).
```octave
% Load trained parameters as matrices Theta1 and Theta2
load('data/ex4weights.mat');
% Theta1 has size 25 x 401
% Theta2 has size 10 x 26
```
```octave
%% Setup the parameters you will use for this exercise
input_layer_size = 400; % 20x20 Input Images of Digits
hidden_layer_size = 25; % 25 hidden units
num_labels = 10; % 10 labels, from 1 to 10
% (note that we have mapped "0" to label 10)
```
## Forward propagation and cost function
As a first step, we will implement the cost function and the gradient for the neural network. The cost function (without regularization) is defined as:
$$ J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K}[-y_k^{(i)} \log(h_\theta(x^{(i)})_k) - (1-y_k^{(i)}) \log(1 - h_\theta(x^{(i)})_k) ] $$
where $ K $ is the number of labels (or outputs; in our case $ K = 10 $) and $ h_\theta(x^{(i)})_k = a^{(3)}_k $ is the activation value of the $k$th unit in the output layer (compare with the neural network layout depicted above).
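To make the formula concrete, here is a small NumPy sketch (an illustration only, separate from the Octave exercise code) that evaluates this unregularized cost for a toy set of output activations and one-hot labels; the numbers are made up for the example.

```python
import numpy as np

# toy example: m = 3 examples, K = 4 classes (values chosen arbitrarily)
A3 = np.array([[0.9, 0.05, 0.03, 0.02],   # h_theta(x^(i))_k, one row per example
               [0.1, 0.70, 0.10, 0.10],
               [0.2, 0.20, 0.50, 0.10]])
Y = np.array([[1, 0, 0, 0],               # one-hot labels y^(i)
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

m = Y.shape[0]
# J = (1/m) * sum_i sum_k [ -y log(h) - (1 - y) log(1 - h) ]
J = (-Y * np.log(A3) - (1 - Y) * np.log(1 - A3)).sum() / m
print(J)
```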
As always we make use of a couple of helper functions.
```octave
function g = sigmoid(z)
%SIGMOID Compute sigmoid function
% g = SIGMOID(z) computes the sigmoid of z.
% The function should work on scalar *and* matrix values
g = zeros(size(z));
g = 1 ./ ( 1 + exp(-z));
end
```
```octave
function g = sigmoidGradient(z)
%SIGMOIDGRADIENT returns the gradient of the sigmoid function
%evaluated at z
% g = SIGMOIDGRADIENT(z) computes the gradient of the sigmoid function
%   evaluated at z.
% The function should work on scalar *and* matrix values
g = zeros(size(z));
g = sigmoid(z) .* (1-sigmoid(z));
end
```
```octave
fprintf('Sigmoid gradient evaluated at [-1 -0.5 0 0.5 1]:\n ');
g = sigmoidGradient([-1 -0.5 0 0.5 1])
```
Sigmoid gradient evaluated at [-1 -0.5 0 0.5 1]:
g =
0.19661 0.23500 0.25000 0.23500 0.19661
```octave
function result = h(theta, x)
result = (sigmoid(theta' * x'))';
end
```
### Step 1: Feedforward computation
As a first step we implement the feedforward computation that computes $ h_\theta(x^{(i)}) $ for every example $ i $ and sums the cost over all examples.
### Step 2: Backpropagation
As a second step, we will implement the backpropagation algorithm to compute the gradients
$ \frac{\partial}{\partial \theta_{ij}^{(l)}} J(\Theta) $ of our cost function.
Recall that the intuition behind the backpropagation algorithm is as follows. Given a
training example $ (x^{(t)}, y^{(t)}) $, we will first run a “forward pass” to compute
all the activations throughout the network, including the output value of the
hypothesis $ h_\theta (x^{(t)}) $.
Then, for each node j in layer l, we would like to compute
an “error term” $ \delta_j^{(l)} $ that measures how much that node was “responsible”
for any errors in our output.
* For an output node, we can directly measure the difference between the
network’s activation and the true target value, and use that to define $ \delta_j^{(3)} $
(since layer 3 is the output layer):
$$ \delta_j^{(3)} = \alpha_j^{(3)} - y_j $$
* For the hidden units, we can compute $ \delta_j^{(l)} $ based on a weighted average of the error terms
$ \delta^{(l+1)} $ of the nodes in layer $ (l + 1) $:
$$ \delta^{(l)} = {\Theta^{(l)}}^T \delta^{(l+1)} .* g'(z^{(l)}) $$
Why? Let's assume a very simple, *linear* neural network.
$ \delta^{(l)} $ can be seen as the change of the network's cost function $ J $ in relation to a change in the output $ z^{(l)} $ of our node in layer $ l $.
$$
\begin{align}
\delta^{(l)} & = \frac{\partial}{\partial z^{(l)}} J(\theta) \\
 & = \frac{\partial J(\theta)}{\partial z^{(l+1)}} \frac{\partial z^{(l+1)}}{\partial z^{(l)}}
 = \underbrace{\frac{\partial J(\theta)}{\partial z^{(l+1)}}}_{= \delta^{(l+1)}} \underbrace{\frac{\partial z^{(l+1)}}{\partial z^{(l)}}}_{=(*)} \\
\end{align}
$$
Let's calculate (*):
$$
\begin{align}
\frac{\partial z^{(l+1)}}{\partial z^{(l)}} & \overbrace{=}^{k:=l+1} \frac{\partial z^{(k)}}{\partial z^{(k-1)}}
= \frac{\partial}{\partial z^{(k-1)}}(\theta^{(k-1)} g(z^{(k-1)}))
= \theta^{(k-1)} g'(z^{(k-1)}) \\
 & \overbrace{=}^{l=k-1} \theta^{(l)} g'(z^{(l)}) \\
\end{align}
$$
Put together:
$$ \delta^{(l)} = \delta^{(l+1)} \theta^{(l)} g'(z^{(l)}) $$
Recall that
$$ g'(z^{(l)}) = g(z^{(l)})(1-g(z^{(l)})) = \alpha^{(l)}(1-\alpha^{(l)}) $$
With
$$ \Delta^{(l)} = \delta^{(l+1)} (\alpha^{(l)})^T $$
we can compute our gradients as follows:
$$ \frac{\partial}{\partial \theta_{ij}^{(l)}} J(\Theta) = \frac{1}{m} \Delta_{ij}^{(l)} $$
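As a hedged illustration (independent of the Octave implementation further below), the following NumPy sketch applies exactly these formulas for one batch in a 3-layer network; the layer sizes and variable names are assumptions made only for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
m, n_in, n_hid, n_out = 5, 3, 4, 2            # tiny illustrative sizes
Theta1 = rng.normal(size=(n_hid, n_in + 1))   # includes bias column
Theta2 = rng.normal(size=(n_out, n_hid + 1))
X = rng.normal(size=(m, n_in))
Y = np.eye(n_out)[rng.integers(0, n_out, m)]  # one-hot labels

# forward pass
A1 = np.hstack([np.ones((m, 1)), X])
Z2 = A1 @ Theta1.T
A2 = np.hstack([np.ones((m, 1)), sigmoid(Z2)])
A3 = sigmoid(A2 @ Theta2.T)

# backward pass: delta^(3) = a^(3) - y, delta^(2) = delta^(3) Theta2 .* g'(z^(2))
d3 = A3 - Y
d2 = (d3 @ Theta2)[:, 1:] * sigmoid(Z2) * (1 - sigmoid(Z2))

# gradients: (1/m) * Delta^(l)
Theta1_grad = d2.T @ A1 / m
Theta2_grad = d3.T @ A2 / m
print(Theta1_grad.shape, Theta2_grad.shape)   # (4, 4) and (2, 5)
```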
### Step 3: Regularized cost function
For neural networks *with regularization* we add a regularization term to the cost function.
$$ r = \frac{\lambda}{2m} \left( \sum_{j} \sum_{i \geq 2} \big(\Theta^{(1)}_{j, i}\big)^2 + \sum_{j} \sum_{i \geq 2} \big(\Theta^{(2)}_{j, i}\big)^2 \right) $$
Note that you should not be regularizing the terms that correspond to the bias. For the matrices `Theta1` and `Theta2`, this corresponds to the first column of each matrix.
For the gradients, this means that we have to add an additional regularization term of $ \frac{\lambda}{m} \Theta_{j,i}^{(l)} $ for $ i > 1 $, i.e. for every entry except those in the first column (the bias terms).
```octave
function [J grad] = nnCostFunction(nn_params, ...
input_layer_size, ...
hidden_layer_size, ...
num_labels, ...
X, y, lambda)
%NNCOSTFUNCTION Implements the neural network cost function for a two layer
%neural network which performs classification
% [J grad] = NNCOSTFUNCTON(nn_params, hidden_layer_size, num_labels, ...
% X, y, lambda) computes the cost and gradient of the neural network. The
% parameters for the neural network are "unrolled" into the vector
% nn_params and need to be converted back into the weight matrices.
%
% The returned parameter grad should be a "unrolled" vector of the
% partial derivatives of the neural network.
%
% Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
% for our 2 layer neural network
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
num_labels, (hidden_layer_size + 1));
% dim_Theta1 = size(Theta1)
% dim_Theta2 = size(Theta2)
% Setup some useful variables
m = size(X, 1)
% We need to return the following variables correctly
J = 0;
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));
% Part 1: Feedforward the neural network and return the cost in the
% variable J.
% Add additional bias node to X
X = [ones(m, 1) X];
% Note, that whereas the original labels (in the variable y) were 1, 2, ..., 10,
% for the purpose of training a neural network,
% we need to recode the labels as vectors containing only values 0 or 1.
y_Vec = zeros(m, num_labels);
for i =1:m
y_Vec(i, y(i)) = 1;
end
% Theta1, Theta2 need to be transposed, since h(theta, X) expects theta to be a column vector
% In Theta1, Theta2 however, the parameters for each node are represented as a row
% Hidden layer
alpha2 = h(Theta1', X);
% Add additional bias node alpha2(0)
alpha2 = [ones(m, 1) alpha2];
% Output layer
alpha3 = h(Theta2', alpha2);
% Cost function without regularization term
% We can use matrix multiplication to compute our inner sum
inner = -log(alpha3)*y_Vec' - log(1-alpha3)*(1-y_Vec') ;
sum1 = sum(diag(inner));
J = 1/m * sum1;
% Part 2: Implement the backpropagation algorithm to compute the gradients
% Theta1_grad and Theta2_grad. You should return the partial derivatives of
% the cost function with respect to Theta1 and Theta2 in Theta1_grad and
% Theta2_grad, respectively.
% Output layer
delta3 = alpha3 - y_Vec;
% Hidden layer
delta2 = delta3 * Theta2 .* alpha2 .* (1-alpha2);
delta2 = delta2(:, 2:end);
Delta1 = delta2' * X;
Delta2 = delta3' * alpha2;
Theta1_grad = 1/m * Delta1;
Theta2_grad = 1/m * Delta2;
%
% Part 3: Implement regularization with the cost function and gradients.
%
% Regularization term for the cost function
Theta1_squared = Theta1 .^ 2;
Theta2_squared = Theta2 .^ 2;
Theta1_squared = Theta1_squared(:, 2:end);
Theta2_squared = Theta2_squared(:, 2:end);
reg_sum = sum(Theta1_squared(:)) + sum(Theta2_squared(:));
J = J + lambda /(2*m) * reg_sum;
% Regularization terms for the gradient matrices
R1 = Theta1;
R1(:, 1) = 0;
R2 = Theta2;
R2(:, 1) = 0;
Theta1_grad = Theta1_grad + lambda/m * R1;
Theta2_grad = Theta2_grad + lambda/m * R2;
% Unroll gradients
grad = [Theta1_grad(:) ; Theta2_grad(:)];
end
```
```octave
% Additional test case
il = 2; % input layer
hl = 2; % hidden layer
nl = 4; % number of labels
nn = [ 1:18 ] / 10; % nn_params
X_test = cos([1 2 ; 3 4 ; 5 6]);
y_test = [4; 2; 3];
[J grad] = nnCostFunction(nn, il, hl, nl, X_test, y_test, 0)
[J grad] = nnCostFunction(nn, il, hl, nl, X_test, y_test, 4)
```
m = 3
J = 7.4070
grad =
0.766138
0.979897
-0.027540
-0.035844
-0.024929
-0.053862
0.883417
0.568762
0.584668
0.598139
0.459314
0.344618
0.256313
0.311885
0.478337
0.368920
0.259771
0.322331
m = 3
J = 19.474
grad =
0.76614
0.97990
0.37246
0.49749
0.64174
0.74614
0.88342
0.56876
0.58467
0.59814
1.92598
1.94462
1.98965
2.17855
2.47834
2.50225
2.52644
2.72233
```octave
% Unroll parameters
nn_params = [Theta1(:) ; Theta2(:)];
% Call cost function
% without regularization:
lambda = 0;
J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
num_labels, X, y, lambda);
fprintf(['Cost at parameters (loaded from ex4weights): %f '...
'\n(this value should be about 0.287629)\n'], J);
% with regularization (lambda = 1):
lambda = 1;
J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
num_labels, X, y, lambda);
fprintf(['Cost at parameters (loaded from ex4weights) with regularization (lambda=1): %f '...
'\n(this value should be about 0.383770)\n'], J);
```
m = 5000
Cost at parameters (loaded from ex4weights): 0.287629
(this value should be about 0.287629)
m = 5000
Cost at parameters (loaded from ex4weights) with regularization (lambda=1): 0.383770
(this value should be about 0.383770)
## Gradient Checking
We can apply a method called *gradient checking* to verify numerically, if the computation of the gradients through backpropagation was correct.
The idea is to "unroll" $ \Theta^{(1)}, \Theta^{(2)} $ into a long vector $ \theta $ and work with a function $ J(\theta) $.
We can then verify, for each $ i $, that the gradient computed by backpropagation (also "unrolled" into a vector) matches the numerical approximation $ f_i(\theta) $:
$$ f_i(\theta) = \frac {J(\theta^{(i+)}) - J(\theta^{(i-)}) } {2 \epsilon} $$
$ \theta^{(i+)} $ is the same as $ \theta $, except its $i$th element has been incremented by $ \epsilon $. Similarly, $ \theta^{(i-)} $ is the corresponding vector with the $i$th element decreased by $ \epsilon $.
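In NumPy, the same central-difference check could be sketched as follows (the quadratic test function is only an illustration; the exercise itself uses the Octave implementation below):

```python
import numpy as np

def numerical_gradient(J, theta, eps=1e-4):
    # central finite differences: (J(theta + e_i*eps) - J(theta - e_i*eps)) / (2*eps)
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = eps
        grad[i] = (J(theta + step) - J(theta - step)) / (2 * eps)
    return grad

# illustrative test: J(theta) = sum(theta^2) has gradient 2*theta
theta = np.array([1.0, -2.0, 0.5])
print(numerical_gradient(lambda t: np.sum(t ** 2), theta))  # ~ [ 2. -4.  1.]
```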
```octave
function W = debugInitializeWeights(fan_out, fan_in)
%DEBUGINITIALIZEWEIGHTS Initialize the weights of a layer with fan_in
%incoming connections and fan_out outgoing connections using a fixed
%strategy, this will help you later in debugging
% W = DEBUGINITIALIZEWEIGHTS(fan_in, fan_out) initializes the weights
% of a layer with fan_in incoming connections and fan_out outgoing
% connections using a fix set of values
%
% Note that W should be set to a matrix of size(1 + fan_in, fan_out) as
% the first row of W handles the "bias" terms
%
% Set W to zeros
W = zeros(fan_out, 1 + fan_in);
% Initialize W using "sin", this ensures that W is always of the same
% values and will be useful for debugging
W = reshape(sin(1:numel(W)), size(W)) / 10;
end
```
```octave
function numgrad = computeNumericalGradient(J, theta)
%COMPUTENUMERICALGRADIENT Computes the gradient using "finite differences"
%and gives us a numerical estimate of the gradient.
% numgrad = COMPUTENUMERICALGRADIENT(J, theta) computes the numerical
% gradient of the function J around theta. Calling y = J(theta) should
% return the function value at theta.
% Notes: The following code implements numerical gradient checking, and
% returns the numerical gradient.It sets numgrad(i) to (a numerical
% approximation of) the partial derivative of J with respect to the
% i-th input argument, evaluated at theta. (i.e., numgrad(i) should
% be the (approximately) the partial derivative of J with respect
% to theta(i).)
%
numgrad = zeros(size(theta));
perturb = zeros(size(theta));
e = 1e-4;
for p = 1:numel(theta)
% Set perturbation vector
perturb(p) = e;
loss1 = J(theta - perturb);
loss2 = J(theta + perturb);
% Compute Numerical Gradient
numgrad(p) = (loss2 - loss1) / (2*e);
perturb(p) = 0;
end
end
```
```octave
function checkNNGradients(lambda)
%CHECKNNGRADIENTS Creates a small neural network to check the
%backpropagation gradients
% CHECKNNGRADIENTS(lambda) Creates a small neural network to check the
% backpropagation gradients, it will output the analytical gradients
% produced by your backprop code and the numerical gradients (computed
% using computeNumericalGradient). These two gradient computations should
% result in very similar values.
%
if ~exist('lambda', 'var') || isempty(lambda)
lambda = 0;
end
input_layer_size = 3;
hidden_layer_size = 5;
num_labels = 3;
m = 5;
% We generate some 'random' test data
Theta1 = debugInitializeWeights(hidden_layer_size, input_layer_size);
Theta2 = debugInitializeWeights(num_labels, hidden_layer_size);
% Reusing debugInitializeWeights to generate X
X = debugInitializeWeights(m, input_layer_size - 1);
y = 1 + mod(1:m, num_labels)';
% Unroll parameters
nn_params = [Theta1(:) ; Theta2(:)];
% Short hand for cost function
costFunc = @(p) nnCostFunction(p, input_layer_size, hidden_layer_size, ...
num_labels, X, y, lambda);
[cost, grad] = costFunc(nn_params);
numgrad = computeNumericalGradient(costFunc, nn_params);
% Visually examine the two gradient computations. The two columns
% you get should be very similar.
disp([numgrad grad]);
fprintf(['The above two columns you get should be very similar.\n' ...
'(Left-Your Numerical Gradient, Right-Analytical Gradient)\n\n']);
% Evaluate the norm of the difference between two solutions.
% If you have a correct implementation, and assuming you used EPSILON = 0.0001
% in computeNumericalGradient.m, then diff below should be less than 1e-9
diff = norm(numgrad-grad)/norm(numgrad+grad);
fprintf(['If your backpropagation implementation is correct, then \n' ...
'the relative difference will be small (less than 1e-9). \n' ...
'\nRelative Difference: %g\n'], diff);
end
```
```octave
checkNNGradients(0);
```
m = 5
("m = 5" is printed by nnCostFunction for every cost-function evaluation performed during the gradient check; the remaining duplicate output lines are omitted)
-0.0092782523 -0.0092782524
0.0088991196 0.0088991196
-0.0083601076 -0.0083601076
0.0076281355 0.0076281355
-0.0067479837 -0.0067479837
-0.0000030498 -0.0000030498
0.0000142869 0.0000142869
-0.0000259383 -0.0000259383
0.0000369883 0.0000369883
-0.0000468760 -0.0000468760
-0.0001750601 -0.0001750601
0.0002331464 0.0002331464
-0.0002874687 -0.0002874687
0.0003353203 0.0003353203
-0.0003762156 -0.0003762156
-0.0000962661 -0.0000962661
0.0001179827 0.0001179827
-0.0001371497 -0.0001371497
0.0001532471 0.0001532471
-0.0001665603 -0.0001665603
0.3145449700 0.3145449701
0.1110565882 0.1110565882
0.0974006970 0.0974006970
0.1640908188 0.1640908188
0.0575736494 0.0575736493
0.0504575855 0.0504575855
0.1645679323 0.1645679323
0.0577867379 0.0577867378
0.0507530173 0.0507530173
0.1583393339 0.1583393339
0.0559235296 0.0559235296
0.0491620841 0.0491620841
0.1511275275 0.1511275275
0.0536967009 0.0536967009
0.0471456249 0.0471456249
0.1495683347 0.1495683347
0.0531542052 0.0531542052
0.0465597186 0.0465597186
The above two columns you get should be very similar.
(Left-Your Numerical Gradient, Right-Analytical Gradient)
If your backpropagation implementation is correct, then
the relative difference will be small (less than 1e-9).
Relative Difference: 2.25001e-11
## Training
When training neural networks, it is important to randomly initialize the parameters for symmetry breaking. One effective strategy for random initialization is to randomly select values for $ \Theta^{(l)} $ uniformly in the range $ [-\epsilon, +\epsilon] $.
The training is again done using the well-known optimization function `fmincg`:
```octave
function W = randInitializeWeights(L_in, L_out)
%RANDINITIALIZEWEIGHTS Randomly initialize the weights of a layer with L_in
%incoming connections and L_out outgoing connections
% W = RANDINITIALIZEWEIGHTS(L_in, L_out) randomly initializes the weights
% of a layer with L_in incoming connections and L_out outgoing
% connections.
%
% Note that W should be set to a matrix of size(L_out, 1 + L_in) as
% the first column of W handles the "bias" terms
%
W = zeros(L_out, 1 + L_in);
% Randomly initialize the weights to small values
eps = 0.12;
W = rand(L_out, 1 + L_in) * 2 * eps - eps;
end
```
```octave
% After you have completed the assignment, change the MaxIter to a larger
% value to see how more training helps.
options = optimset('MaxIter', 50);
% You should also try different values of lambda
lambda = 1;
% Initial parameters
initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);
% Unroll parameters
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];
% Create "short hand" for the cost function to be minimized
costFunction = @(p) nnCostFunction(p, ...
input_layer_size, ...
hidden_layer_size, ...
num_labels, X, y, lambda);
% Now, costFunction is a function that takes in only one argument (the
% neural network parameters)
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);
% Obtain Theta1 and Theta2 back from nn_params
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
num_labels, (hidden_layer_size + 1));
```
m = 5000
m = 5000
m = 5000
m = 5000
m = 5000 1 | Cost: 3.297939e+00
m = 5000
m = 5000
m = 5000 2 | Cost: 3.254918e+00
m = 5000
m = 5000 3 | Cost: 3.221777e+00
m = 5000
m = 5000 4 | Cost: 3.118353e+00
m = 5000
m = 5000 5 | Cost: 2.871056e+00
m = 5000
m = 5000 6 | Cost: 2.384246e+00
m = 5000 7 | Cost: 2.039487e+00
m = 5000
m = 5000 8 | Cost: 1.945045e+00
m = 5000 9 | Cost: 1.826975e+00
m = 5000 10 | Cost: 1.768237e+00
m = 5000 11 | Cost: 1.650729e+00
m = 5000 12 | Cost: 1.546114e+00
m = 5000
m = 5000 13 | Cost: 1.514566e+00
m = 5000
m = 5000 14 | Cost: 1.394788e+00
m = 5000
m = 5000 15 | Cost: 1.163227e+00
m = 5000 16 | Cost: 1.087384e+00
m = 5000
m = 5000 17 | Cost: 1.050586e+00
m = 5000 18 | Cost: 9.678755e-01
m = 5000 19 | Cost: 8.928639e-01
m = 5000 20 | Cost: 8.257586e-01
m = 5000 21 | Cost: 7.821587e-01
m = 5000
m = 5000 22 | Cost: 7.643840e-01
m = 5000 23 | Cost: 7.426642e-01
m = 5000 24 | Cost: 7.263109e-01
m = 5000 25 | Cost: 7.109944e-01
m = 5000 26 | Cost: 7.014560e-01
m = 5000 27 | Cost: 6.904134e-01
m = 5000 28 | Cost: 6.646547e-01
m = 5000 29 | Cost: 6.455411e-01
m = 5000 30 | Cost: 6.310349e-01
m = 5000
m = 5000 31 | Cost: 6.221766e-01
m = 5000 32 | Cost: 6.145356e-01
m = 5000
m = 5000 33 | Cost: 6.092355e-01
m = 5000 34 | Cost: 6.040427e-01
m = 5000 35 | Cost: 5.998618e-01
m = 5000
m = 5000 36 | Cost: 5.878160e-01
m = 5000 37 | Cost: 5.715380e-01
m = 5000 38 | Cost: 5.613109e-01
m = 5000
m = 5000 39 | Cost: 5.536140e-01
m = 5000 40 | Cost: 5.442903e-01
m = 5000
m = 5000 41 | Cost: 5.411204e-01
m = 5000
m = 5000 42 | Cost: 5.393187e-01
m = 5000
m = 5000 43 | Cost: 5.334700e-01
m = 5000
m = 5000 44 | Cost: 5.181383e-01
m = 5000 45 | Cost: 4.979996e-01
m = 5000 46 | Cost: 4.874777e-01
m = 5000
m = 5000 47 | Cost: 4.842996e-01
m = 5000
m = 5000 48 | Cost: 4.734155e-01
m = 5000
m = 5000 49 | Cost: 4.694667e-01
Iteration 50 | Cost: 4.659794e-01
## Results
Let's implement a function that uses the neural network defined by the parameters $ (\Theta^{(1)},\Theta^{(2)}) $ to predict the digits for a given set of examples, e.g. our training data:
```octave
function p = predict(Theta1, Theta2, X)
%PREDICT Predict the label of an input given a trained neural network
% p = PREDICT(Theta1, Theta2, X) outputs the predicted label of X given the
% trained weights of a neural network (Theta1, Theta2)
% Useful values
m = size(X, 1);
num_labels = size(Theta2, 1);
p = zeros(size(X, 1), 1);
% Add ones to the X data matrix
X = [ones(m, 1) X];
% Theta1, Theta2 need to be transposed, since h(theta, X) expects theta to be a column vector
% In Theta1, Theta2 however, the parameters for each node are represented as a row
% Hidden layer
alpha2 = h(Theta1', X);
% Add additional bias node alpha2(0)
alpha2 = [ones(m, 1) alpha2];
% Output layer
alpha3 = h(Theta2', alpha2);
% Pick the best output and use it as the label
[M, p] = max(alpha3, [], 2);
end
```
```octave
pred = predict(Theta1, Theta2, X);
fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);
```
Training Set Accuracy: 95.620000
Let's see what our trained neural network is doing for the randomly selected examples from the beginning of this exercise:
```octave
displayData(sel);
```
```octave
% Predict the labels for the selection and convert label 10 to 0
guessed_numbers = mod(predict(Theta1, Theta2, sel), 10);
```
```octave
% ... and print them as a matrix
reshape(guessed_numbers, [10,10])'
```
ans =
0 3 0 8 7 3 6 8 7 2
2 4 2 9 7 8 6 8 4 2
0 5 0 4 9 8 9 9 7 0
2 6 2 5 0 8 7 1 7 3
9 0 3 4 7 4 8 5 5 4
9 3 3 1 7 5 7 2 0 7
0 1 2 4 8 7 4 1 1 4
1 7 6 8 5 0 6 5 0 9
0 9 7 6 7 1 7 2 9 3
0 6 7 2 1 1 1 3 8 2
We can now "visualize" what the neural network is learning by
displaying the hidden units to see what features they are capturing in
the data.
```octave
displayData(Theta1(:, 2:end));
```
```octave
```
|
b0acb875389bd591ecf01c5d0151ef82728056f3
| 83,875 |
ipynb
|
Jupyter Notebook
|
exercise4/exercise4-octave.ipynb
|
mabauer/coursera-machine-learning
|
828225c9426d96f0bf0e11caf391461523a16e4d
|
[
"MIT"
] | null | null | null |
exercise4/exercise4-octave.ipynb
|
mabauer/coursera-machine-learning
|
828225c9426d96f0bf0e11caf391461523a16e4d
|
[
"MIT"
] | null | null | null |
exercise4/exercise4-octave.ipynb
|
mabauer/coursera-machine-learning
|
828225c9426d96f0bf0e11caf391461523a16e4d
|
[
"MIT"
] | null | null | null | 70.305951 | 17,236 | 0.749389 | true | 9,174 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.901921 | 0.853913 | 0.770162 |
__label__eng_Latn
| 0.925432 | 0.627676 |
### Electromechanical differential equations
\begin{eqnarray}
f_1 &=& \dot \delta = \Omega_b \left( \omega - \omega_s \right) \\
f_2 &=& \dot \omega = \frac{1}{2H} \left( p_m - p_e - D \left( \omega - \omega_s \right) \right)
\end{eqnarray}
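To make these two equations concrete before building the full symbolic model below, here is a minimal forward-Euler sketch in plain Python; the constant electrical power, the damping value and the step size are assumptions chosen only for illustration, independent of the pydae workflow used later.

```python
import numpy as np

# illustrative parameter values (assumed for this sketch only)
Omega_b, omega_s = 2 * np.pi * 50, 1.0
H, D = 3.5, 1.0
p_m, p_e = 0.8, 0.7      # assume a fixed mechanical/electrical power imbalance

dt, n_steps = 1e-3, 5000
delta, omega = 0.0, 1.0
for _ in range(n_steps):
    f_1 = Omega_b * (omega - omega_s)                      # d(delta)/dt
    f_2 = (p_m - p_e - D * (omega - omega_s)) / (2 * H)    # d(omega)/dt
    delta, omega = delta + dt * f_1, omega + dt * f_2
print(delta, omega)
```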
### Electric rotor differential equations
\begin{eqnarray}
f_3 &=& \dot e_q' = \frac{1}{T'_{d0}} \left( -e'_q - \left(X_d - X'_d \right) i_d + v_f^\star \right) \\
f_4 &=& \dot e'_d = \frac{1}{T'_{q0}} \left( -e'_d + \left(X_q - X'_q \right) i_q \right)
\end{eqnarray}
### AVR/Exitation dynamic
\begin{eqnarray}
f_5 &=& \dot v_c = (v_t - v_c)/T_e
\end{eqnarray}
### Park transform
\begin{eqnarray}
g_1 &=&-v_d + v_t \sin\left(\delta - \theta_t\right) \\
g_2 &=&-v_q + v_t \cos\left(\delta - \theta_t\right)
\end{eqnarray}
### Stator equations
\begin{eqnarray}
g_3 &=& v_q + R_a i_q + X'_d i_d - e'_q\\
g_4 &=& v_d + R_a i_d - X'_q i_q - e'_d\\
\end{eqnarray}
### Powers
\begin{eqnarray}
g_5 &=& -p_e + \left( v_q + R_a i_q \right) i_q + \left( v_d + R_a i_d \right) i_d \\
g_6 &=& i_d v_d + i_q v_q - p_t \\
g_7 &=& i_d v_q - i_q v_d - q_t
\end{eqnarray}
### Network equations
\begin{eqnarray}
g_8 &=& p_t - \left(v_t V_0 \sin\left(\theta_t - \theta_0\right)\right)/X_l\\
g_9 &=& q_t + \left(v_t V_0 \cos\left(\theta_t - \theta_0\right)\right)/X_l - v_t^2/X_l
\end{eqnarray}
### AVR algebraic equations
\begin{eqnarray}
g_{10} &=& K_a (v^\star - v_c + v_s) - v_f
\end{eqnarray}
```python
import numpy as np
import sympy as sym
import numba
import pydae.build as db
```
## System definition
```python
params_dict = {'X_d':1.81,'X1d':0.3, 'T1d0':8.0,    # synchronous machine d-axis parameters
               'X_q':1.76,'X1q':0.65,'T1q0':1.0,    # synchronous machine q-axis parameters
'R_a':0.003,'X_l': 0.1,
'H':3.5,'D':0.0,
'Omega_b':2*np.pi*50,'omega_s':1.0,
'v_0':1.0,'theta_0':0.0,
'K_a':100, 'T_e':0.1, 'v_pss':0.0}
u_ini_dict = {'p_t':0.8,'v_t':1.0} # for the initialization problem
u_run_dict = {'p_m':0.8,'v_ref':1.0} # for the running problem (here initialization and running problem are the same)
x_list = ['delta','omega','e1q','e1d','v_c'] # dynamic states
y_ini_list = ['v_d','v_q','i_d','i_q','p_e','p_m','q_t','v_ref','theta_t','v_f']
y_run_list = ['v_d','v_q','i_d','i_q','p_e','p_t','q_t','v_t','theta_t','v_f']
sys_vars = {'params':params_dict,
'u_list':u_run_dict,
'x_list':x_list,
'y_list':y_run_list}
exec(db.sym_gen_str())  # exec to generate the required symbolic variables and constants
```
```python
ddelta = Omega_b*(omega - omega_s)
domega = 1/(2*H)*(p_m - p_e - D*(omega - omega_s))
de1q = 1/T1d0*(-e1q - (X_d - X1d)*i_d + v_f)
de1d = 1/T1q0*(-e1d + (X_q - X1q)*i_q)
dv_c = (v_t - v_c)/T_e
g_1 = -v_d + v_t*sin(delta - theta_t)
g_2 = -v_q + v_t*cos(delta - theta_t)
g_3 = v_q + R_a*i_q + X1d*i_d - e1q
g_4 = v_d + R_a*i_d - X1q*i_q - e1d
g_5 = -p_e + i_d*(v_d + R_a*i_d) + i_q*(v_q + R_a*i_q)
g_6 = i_d*v_d + i_q*v_q - p_t
g_7 = i_d*v_q - i_q*v_d - q_t
g_8 = p_t - (v_t*v_0*sin(theta_t - theta_0))/X_l
g_9 = q_t + (v_t*v_0*cos(theta_t - theta_0))/X_l - v_t**2/X_l
g_10 = K_a*(v_ref - v_c + v_pss) - v_f
h_1 = p_m
sys = {'name':'smib_milano_ex8p1_4ord_avr',
'params_dict':params_dict,
'f_list':[ddelta,domega,de1q,de1d,dv_c],
'g_list':[g_1,g_2,g_3,g_4,g_5,g_6,g_7,g_8,g_9,g_10],
'x_list':x_list,
'y_ini_list':y_ini_list,
'y_run_list':y_run_list,
'u_run_dict':u_run_dict,
'u_ini_dict':u_ini_dict,
'h_dict':{'p_m':p_m}}
sys = db.system(sys)
db.sys2num(sys)
```
jacobians respect u = 0
```python
```
```python
```
|
919d621652369e23dfcc96d7670041ed0e78f865
| 6,001 |
ipynb
|
Jupyter Notebook
|
examples/grids/smib_milano_ex8p1/smib_milano_ex8p1_4ord_avr/smib_milano_ex8p1_4ord_avr_builder.ipynb
|
pydae/pydae
|
8076bcfeb2cdc865a5fc58561ff8d246d0ed7d9d
|
[
"MIT"
] | 1 |
2020-12-20T03:45:26.000Z
|
2020-12-20T03:45:26.000Z
|
examples/grids/smib_milano_ex8p1/smib_milano_ex8p1_4ord_avr/smib_milano_ex8p1_4ord_avr_builder.ipynb
|
pydae/pydae
|
8076bcfeb2cdc865a5fc58561ff8d246d0ed7d9d
|
[
"MIT"
] | null | null | null |
examples/grids/smib_milano_ex8p1/smib_milano_ex8p1_4ord_avr/smib_milano_ex8p1_4ord_avr_builder.ipynb
|
pydae/pydae
|
8076bcfeb2cdc865a5fc58561ff8d246d0ed7d9d
|
[
"MIT"
] | null | null | null | 30.93299 | 127 | 0.48092 | true | 1,560 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.7773 | 0.749087 | 0.582265 |
__label__kor_Hang
| 0.177658 | 0.191128 |
### CS 109A/STAT 121A/AC 209A/CSCI E-109A
# Lab 5: Regularization
**Harvard University**<br>
**Fall 2017**<br>
**Instructors: Pavlos Protopapas, Kevin Rader, Rahul Dave, Margo Levine**
---
# Table of Contents
<ol start="0">
<li> Learning Goals </li>
<li> Introduction to regularized regression </li>
<li> Ridge regression with one predictor on a grid</li>
<li> Ridge regression with polynomial features on a grid</li>
<li> Cross-validation </li>
<li> Refitting on full training set </li>
</ol>
**END OF LAB**
<ol start="6">
<li> Feature selection with LASSO regression - good for homework 4!</li>
</ol>
## Part 0: Learning Goals
In this lab we continue where we left off in Lab 4, with regularized regression. By the end of this lab, you will be able to:
- Implement ridge and LASSO regression using `sklearn`.
- Interpret the results of ridge and LASSO regression, and compare to the results from simple and multiple linear regression.
*This lab maps on to lectures 7 and 8 and homework 4.*
```python
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn.apionly as sns
sns.set_style("whitegrid")
sns.set_context("poster")
```
## Part 1: Introduction to regularized regression
Recall from lecture the main idea of regularization. In the ordinary least squares problem we minimize the loss function
\begin{equation}
L(\mathbf{\beta}) = \frac{1}{n} \sum_{i = 1}^n |y_i - \mathbf{\beta}^T \mathbf{x}_i|^2,
\end{equation}
to determine regression coefficients $\mathbf{\beta}$. Here $y_i$ is the response variable for observation $i$, and $\mathbf{x}_i$ is a vector from the predictor matrix corresponding to observation $i$.
The general idea behind regularization is to penalize the loss function to account for possibly very large values of the coefficients $\mathbf \beta$. The aforementioned optimization problem is then adjusted accordingly. Instead of minimizing $L(\mathbf{\beta})$, we minimize the regularized loss function
\begin{equation}
L_{\mathrm{reg}}(\mathbf{\beta}) = L(\mathbf{\beta}) + \lambda R(\mathbf{\beta}),
\end{equation}
where $R(\mathbf{\beta})$ is a penalty function and $\lambda$ is a scalar that weighs the relative importance of this penalty. In this lab we will explore two regularized regression models, ridge and LASSO. In ridge regression, the penalty function is the sum of the squares of the parameters, giving the regularized loss function
\begin{equation}
L_{\mathrm{Ridge}}(\mathbf{\beta}) = \frac{1}{n} \sum_{i = 1}^n |y_i - \mathbf{\beta}^T \mathbf{x}_i|^2 + \lambda \sum_{j = 1}^d \beta_j^2.
\end{equation}
In LASSO regression the penalty function is the sum of the magnitudes of the parameters, leading to
\begin{equation}
L_{\mathrm{LASSO}}(\mathbf{\beta}) = \frac{1}{n} \sum_{i = 1}^n |y_i - \mathbf{\beta}^T \mathbf{x}_i|^2 + \lambda \sum_{j = 1}^d |\beta_j|.
\end{equation}
We will show how these optimization problems can be solved with `sklearn` to determine the model parameters $\mathbf \beta$. We will also show how to choose $\lambda$ appropriately via cross-validation.
Let's continue working with our data from last time. We load and split the data as in Lab 4.
```python
df=pd.read_csv("data/noisypopulation.csv")
df.head()
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>f</th>
<th>x</th>
<th>y</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.047790</td>
<td>0.00</td>
<td>0.011307</td>
</tr>
<tr>
<th>1</th>
<td>0.051199</td>
<td>0.01</td>
<td>0.010000</td>
</tr>
<tr>
<th>2</th>
<td>0.054799</td>
<td>0.02</td>
<td>0.007237</td>
</tr>
<tr>
<th>3</th>
<td>0.058596</td>
<td>0.03</td>
<td>0.000056</td>
</tr>
<tr>
<th>4</th>
<td>0.062597</td>
<td>0.04</td>
<td>0.010000</td>
</tr>
</tbody>
</table>
</div>
Here `x` and `y` are the predictor and measured response variables, and `f` is the true response.
```python
f = df.f.values
x = df.x.values
y = df.y.values
df.shape
```
(200, 3)
```python
indexes=np.sort(np.random.choice(x.shape[0], size=60, replace=False))
samplex = x[indexes]
samplef = f[indexes]
sampley = y[indexes]
sample_df=pd.DataFrame(dict(x=x[indexes],f=f[indexes],y=y[indexes]))
```
We split the sample data into training and testing sets.
```python
from sklearn.model_selection import train_test_split
datasize=sample_df.shape[0]
#split dataset using the index, as we have x,f, and y that we want to split.
itrain, itest = train_test_split(np.arange(60),train_size=0.8)
xtrain= sample_df.x[itrain].values
ftrain = sample_df.f[itrain].values
ytrain = sample_df.y[itrain].values
xtest= sample_df.x[itest].values
ftest = sample_df.f[itest].values
ytest = sample_df.y[itest].values
```
## Part 2: Ridge regression with one predictor on a grid
To begin, we'll use `sklearn` to do simple linear regression on the sampled training data. We'll then do ridge regression with the same data, setting the penalty parameter $\lambda$ to zero. Setting $\lambda = 0$ reduces the ridge problem to the simple ordinary least squares problem, so we expect the results of these models to be identical.
```python
from sklearn.linear_model import LinearRegression
```
```python
#build the the ordinary least squares model
simp_reg = LinearRegression()
#fit the model to training data
simp_reg.fit(xtrain.reshape(len(xtrain),1), ytrain)
#save the beta coefficients
beta0_sreg = simp_reg.intercept_
beta1_sreg = simp_reg.coef_[0]
#make predictions everywhere
ypredict = lambda x : beta0_sreg + beta1_sreg*x
print("(beta0, beta1) = (%f, %f)" %(beta0_sreg, beta1_sreg))
```
(beta0, beta1) = (-0.039332, 1.111996)
We will use the above $\beta$ coefficients as a benchmark for comparision to ridge and LASSO methods. Let's see that we get the same coefficients with ridge regression.
```python
from sklearn.linear_model import Ridge
```
For reference, [here](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) is the ridge regression documentation. Notice that the weight $\lambda$ is referred to as `alpha` in the documentation.
The snippet of code below implements the ridge regression with $\lambda = 0$.
```python
#build the ridge regression model with specified lambda, ie, alpha
ridge_reg = Ridge(alpha = 0)
#fit the model to training data
ridge_reg.fit(xtrain.reshape(-1,1), ytrain) #xtrain.reshape(-1,1) and xtrain.reshape(len(xtrain),1) are equivalent
#save the beta coefficients
beta0_ridge = ridge_reg.intercept_
beta1_ridge = ridge_reg.coef_[0]
#make predictions everywhere
ypredict_ridge = ridge_reg.predict(x.reshape(-1,1))
print("(beta0, beta1) = (%f, %f)" %(beta0_ridge, beta1_ridge))
```
(beta0, beta1) = (-0.039332, 1.111996)
The beta coefficients for linear and ridge regressions coincide for $\lambda = 0$, as expected. We plot the data and fits.
```python
colors = sns.color_palette()
```
```python
#plot in-sample training data
plt.plot(xtrain, ytrain, 's', alpha=0.3, ms=10, label="in-sample y (observed)")
#plot population data
plt.plot(x, y, '.', alpha=0.8, label="population y");
plt.plot(x, f, color = colors[1], label="God function")
#plot simple linear regression fit
plt.plot(x, ypredict(x), alpha=0.5, label="OLS")
#plot ridge regression fit
plt.plot(x, ypredict_ridge, 'k.', lw = 1, alpha=0.3, label="ridge")
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc=4);
```
> **EXERCISE:** Play around with the values of $\lambda$ in the ridge regression code. Increase $\lambda$ from 0 to .01, from 0.01 to 1, from 1 to 5. What do you observe? What happens as $\lambda$ goes to $\infty$?
> **YOUR DISCUSSION HERE:**
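For a concrete starting point (a sketch reusing `xtrain`, `ytrain` and the `Ridge` class from above), one can simply loop over a few values of $\lambda$ and watch how the fitted coefficients shrink toward zero as the penalty grows:

```python
# sketch: refit the one-predictor ridge model for several lambda values
for lam in [0, 0.01, 1, 5, 100]:
    rr = Ridge(alpha=lam).fit(xtrain.reshape(-1, 1), ytrain)
    print("lambda = %6g -> (beta0, beta1) = (%.4f, %.4f)" % (lam, rr.intercept_, rr.coef_[0]))
```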
## Part 3: Ridge regression with polynomial features on a grid
Now we'll make a more complex model by adding polynomial features. Instead of building the linear model $y = \beta_0 + \beta_1 x$, we build a polynomial model $y = \beta_0 + \beta_1 x + \beta_2 x^2 + \ldots \beta_d x^d$ for some $d$ to be determined (see Lab 4 for details on choosing hyper-parameter $d$). This regression will be linear though, since we'll be treating $x^2, \ldots, x^d$ themselves as predictors in the linear model. We did this in Lab 4 but it's worth a review.
We map $x$ to $1, x, x^2, \ldots, x^d$, and then build a linear regression model on this linear function of polynomial features. To do this, we use `sklearn` to build what is known as the *Vandermonde* matrix, the generalization of the predictor matrix $X$ discussed in Lab 3. For example, if we have three observations
\begin{equation*}\begin{pmatrix}
x_1 \\
x_2 \\
x_3\\
\end{pmatrix}, \end{equation*}
and we want polynomial features up to and including degree 4, we build the predictor matrix
\begin{equation*}\begin{pmatrix}
x_1 \\
x_2 \\
x_3 \\
\end{pmatrix} \rightarrow X = \begin{bmatrix}
x_1^0 & x_1^1 & x_1^2 & x_1^3 & x_1^4\\
x_2^0 & x_2^1 & x_2^2 & x_2^3 & x_2^4\\
x_3^0 & x_3^1 & x_3^2 & x_3^3 & x_3^4\\
\end{bmatrix} =
\begin{bmatrix}
1& x_1^1 & x_1^2 & x_1^3 & x_1^4\\
1 & x_2^1 & x_2^2 & x_2^3 & x_2^4\\
1 & x_3^1 & x_3^2 & x_3^3 & x_3^4\\
\end{bmatrix}.
\end{equation*}
```python
from sklearn.preprocessing import PolynomialFeatures
```
> **EXERCISE: ** Before we continue working with the data, make a toy vector called `toy`, where
>\begin{equation}
\mathrm{toy} = \begin{pmatrix}
0 \\
2 \\
5 \\
\end{pmatrix}
.
\end{equation}
> Build the feature matrix up to (and including) degree 4. Confirm that the entries in the matrix are what you'd expect based on the above discussion.
```python
# your code here
PolynomialFeatures(4).fit_transform(np.array([0, 2, 5]).reshape(-1,1))
```
array([[ 1., 0., 0., 0., 0.],
[ 1., 2., 4., 8., 16.],
[ 1., 5., 25., 125., 625.]])
We now continue working with our data. We write a function to make polynomial features of given degrees as we did in Lab 4, and we store the features in a dictionary.
```python
def make_features(train_set, test_set, degrees):
train_dict = {}
test_dict = {}
for d in degrees:
traintestdict={}
train_dict[d] = PolynomialFeatures(d).fit_transform(train_set.reshape(-1,1))
test_dict[d] = PolynomialFeatures(d).fit_transform(test_set.reshape(-1,1))
return train_dict, test_dict
```
> **EXERCISE: ** Fill in the code below to perform ridge regression on the training data for the given set of $\lambda$. Then predict on the grid and store the results in `ypredict_ridge`.
```python
d = 20
rows = 7
cols = 2
lambdas = [0., 1e-6, 1e-3, 1e-2, 1e-1, 1, 10]
grid_to_predict = np.arange(0, 1, .01)
train_dict, grid_dict = make_features(xtrain, grid_to_predict, range(0,d + 1))
fig, axs = plt.subplots(rows, cols, figsize=(12, 24))
axs = axs.ravel()
Xtrain = train_dict[d]
for i, lam in enumerate(lambdas):
#your code here
ridge_reg = Ridge(alpha = lam)
ridge_reg.fit(Xtrain, ytrain)
ypredict_ridge = ridge_reg.predict(grid_dict[d])
#code provided from here on
left = 2*i
right = 2*i + 1
axs[left].plot(xtrain, ytrain, 's', alpha=0.3, ms=10, label="in-sample y (observed)")
axs[left].plot(x, y, '.', alpha=0.8, label="population y")
axs[left].plot(grid_to_predict, ypredict_ridge, 'k-', label="lambda = %s" % str(lam))
axs[left].set_ylabel('$y$')
axs[left].set_ylim((0, 1))
axs[left].set_xlim((0, 1))
axs[left].legend(loc=2)
coef = ridge_reg.coef_.ravel()
axs[right].semilogy(np.abs(coef), marker='o', label="lambda = %s" % str(lam))
axs[right].set_ylim((1e-1, 1e15))
axs[right].yaxis.set_label_position("right")
axs[right].set_ylabel('abs(coefficient)')
axs[right].legend(loc='upper left')
axs[2*(rows-1)].set_xlabel("x")
axs[2*(rows-1) + 1].set_xlabel("coefficients");
```
As you can see, as we increase $\lambda$ from 0 to 1, we start out overfitting, then doing well, and then our fits develop a mind of their own irrespective of data, as the penalty term dominates.
> **EXERCISE:** What would you expect if you compared a performance metric between these models on a grid?
> **YOUR DISCUSSION HERE:**
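One way to make this comparison concrete (a sketch; the exact numbers depend on the random train/test split) is to compute the test-set mean squared error of the degree-20 ridge fit for each $\lambda$:

```python
from sklearn.metrics import mean_squared_error

# build degree-d polynomial features for the held-out test points
Xtest_d = PolynomialFeatures(d).fit_transform(xtest.reshape(-1, 1))
for lam in lambdas:
    rr = Ridge(alpha=lam).fit(Xtrain, ytrain)
    print("lambda = %6g -> test MSE = %.5f" % (lam, mean_squared_error(ytest, rr.predict(Xtest_d))))
```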
## Part 4: Cross-validation
Let's use cross-validation to determine the critical value of $\lambda$, which we'll refer to as $\lambda^*$. To do this we use the concept of a *meta-estimator* from scikit-learn. As the API paper from Lab 4 explains:
>In scikit-learn, model selection is supported in two distinct meta-estimators, GridSearchCV and RandomizedSearchCV. They take as input an estimator (basic or composite), whose hyper-parameters must be optimized, and a set of hyperparameter settings to search through.
The concept of a meta-estimator allows us to wrap, for example, cross-validation, or methods that build and combine simpler models or schemes. For example:
est = Ridge()
parameters = {"alpha": [1e-8, 1e-6, 1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 1e-2, 1e-1, 1.0]}
gridclassifier=GridSearchCV(est, param_grid=parameters, cv=4, scoring="neg_mean_squared_error")
The `GridSearchCV` replaces the manual iteration over the folds using `KFolds` and the averaging we did in Lab 4, doing it all for us. It takes a hyper-parameter grid in the shape of a dictionary as input, and sets $\lambda$ to the values you want to try, one by one. It then trains the model using cross-validation, and gets the error for each value of the hyper-parameter $\lambda$. Finally it compares the errors for the different $\lambda$'s, and picks the best choice model.
```python
from sklearn.model_selection import GridSearchCV
def cv_optimize_ridge(x, y, list_of_lambdas, n_folds=4):
est = Ridge()
parameters = {'alpha': list_of_lambdas}
#the scoring parameter below is the default one in ridge, but you can use a different one
#in the cross-validation phase if you want.
gs = GridSearchCV(est, param_grid=parameters, cv=n_folds, scoring="neg_mean_squared_error")
gs.fit(x, y)
return gs
```
> **EXERCISE:** Use the function above to fit the model on the training set with 4-fold cross validation. Save the fit as the variable `fitmodel`.
```python
# your code here
lol = [1e-8, 1e-6, 1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0]
fitmodel = cv_optimize_ridge(Xtrain, ytrain, lol, n_folds=4)
```
```python
fitmodel.best_estimator_, fitmodel.best_params_, fitmodel.best_score_
```
(Ridge(alpha=1e-05, copy_X=True, fit_intercept=True, max_iter=None,
normalize=False, random_state=None, solver='auto', tol=0.001),
{'alpha': 1e-05},
-0.0061059779088181)
We also output the mean cross-validation error at different $\lambda$ (with a negative sign, as scikit-learn likes to maximize negative error which is equivalent to minimizing error).
```python
fitmodel.cv_results_
```
{'mean_fit_time': array([ 0.00261408, 0.00083631, 0.00075918, 0.00068599, 0.00097597,
0.00114334, 0.00085258, 0.00105309, 0.00091922, 0.00076503,
0.00059855]),
'mean_score_time': array([ 0.00039995, 0.00027311, 0.00023305, 0.00025278, 0.00033861,
0.00037467, 0.00029874, 0.00029731, 0.00029558, 0.00023782,
0.00019926]),
'mean_test_score': array([-0.01804484, -0.0062449 , -0.00610598, -0.00640911, -0.00657694,
-0.0066853 , -0.00658704, -0.00618014, -0.0066234 , -0.01141202,
-0.03100209]),
'mean_train_score': array([-0.00491064, -0.00501986, -0.00505567, -0.00509398, -0.00511917,
-0.00517101, -0.00518639, -0.00524733, -0.0056585 , -0.00957192,
-0.02702877]),
'param_alpha': masked_array(data = [1e-08 1e-06 1e-05 5e-05 0.0001 0.0005 0.001 0.01 0.1 1.0 10.0],
mask = [False False False False False False False False False False False],
fill_value = ?),
'params': ({'alpha': 1e-08},
{'alpha': 1e-06},
{'alpha': 1e-05},
{'alpha': 5e-05},
{'alpha': 0.0001},
{'alpha': 0.0005},
{'alpha': 0.001},
{'alpha': 0.01},
{'alpha': 0.1},
{'alpha': 1.0},
{'alpha': 10.0}),
'rank_test_score': array([10, 3, 1, 4, 5, 8, 6, 2, 7, 9, 11], dtype=int32),
'split0_test_score': array([-0.01211287, -0.01034037, -0.01026381, -0.01040945, -0.01048741,
-0.01058692, -0.01058834, -0.01052545, -0.01103576, -0.01920886,
-0.04510693]),
'split0_train_score': array([-0.00359924, -0.00370356, -0.00373518, -0.00374805, -0.00375346,
-0.00376358, -0.00376714, -0.00380304, -0.0042778 , -0.00816658,
-0.02351017]),
'split1_test_score': array([-0.04681443, -0.00271475, -0.00161641, -0.00145975, -0.00151806,
-0.00165726, -0.00167334, -0.00174026, -0.00284047, -0.01187218,
-0.04342125]),
'split1_train_score': array([-0.00614228, -0.00639602, -0.00642792, -0.00646188, -0.00648416,
-0.00652139, -0.0065299 , -0.00656877, -0.00697327, -0.01070494,
-0.02533763]),
'split2_test_score': array([-0.00374652, -0.00324341, -0.00308246, -0.00307 , -0.00310868,
-0.00323026, -0.0032688 , -0.00341147, -0.00377373, -0.00462288,
-0.01218781]),
'split2_train_score': array([-0.00581964, -0.0058765 , -0.00592501, -0.00599307, -0.00604307,
-0.0061551 , -0.00618171, -0.00621768, -0.00652974, -0.01052546,
-0.03095784]),
'split3_test_score': array([-0.00950555, -0.00868108, -0.00946123, -0.01069722, -0.01119359,
-0.01126674, -0.01081768, -0.00904338, -0.00884364, -0.00994419,
-0.02329239]),
'split3_train_score': array([-0.00408141, -0.00410338, -0.00413457, -0.00417293, -0.00419598,
-0.00424399, -0.00426682, -0.00439983, -0.00485318, -0.00889069,
-0.02830944]),
'std_fit_time': array([ 2.36724489e-03, 7.80028522e-05, 1.15312468e-04,
1.08386404e-04, 5.03251020e-05, 2.13435270e-04,
1.05852613e-04, 3.48777576e-04, 1.09243656e-04,
4.45295367e-05, 3.32097911e-05]),
'std_score_time': array([ 5.39159513e-05, 1.86626321e-05, 1.92555293e-05,
6.87095241e-05, 2.24388333e-05, 1.11428552e-04,
5.24141434e-05, 5.97872322e-05, 3.41835106e-05,
2.05762969e-05, 3.40686301e-06]),
'std_test_score': array([ 0.01688371, 0.00332336, 0.00380274, 0.00418439, 0.00430773,
0.00428459, 0.00415523, 0.00368978, 0.0034216 , 0.00522612,
0.01384376]),
'std_train_score': array([ 0.0010898 , 0.0011402 , 0.00114356, 0.00115535, 0.00116557,
0.00118661, 0.00118907, 0.00117175, 0.00112279, 0.0010761 ,
0.00284245])}
```python
fit_lambdas = [d['alpha'] for d in fitmodel.cv_results_['params']]
fit_scores = fitmodel.cv_results_['mean_test_score']
```
> **EXERCISE:** Plot log10-log10 plot of `-fit_scores` versus `fit_lambdas`.
```python
#your code here
plt.scatter(np.log10(fit_lambdas), np.log10(-fit_scores))
plt.xlabel('$\log_{10}(\lambda)$')
plt.ylabel('$-\log_{10}(\mathrm{scores})$')
```
## Part 5: Refitting on full training set
We now refit the estimator on the training set, and calculate and plot the test set error and the polynomial coefficients. Notice how many of these coefficients have been pushed to lower values or 0.
> **EXERCISE:** Assign to variable `est` the classifier obtained by fitting the entire training set using the best $\lambda$ found above. Assign the predictions to the variable `ypredict_ridge_best`.
```python
# your code here
lambdawechoose = fitmodel.best_params_['alpha']
est = Ridge(alpha=lambdawechoose).fit(Xtrain,ytrain)
ypredict_ridge_best = est.predict(grid_dict[d])
```
```python
#code provided from here on
fig, axs = plt.subplots(1, 2, figsize=(12, 4))
left = 0
right = 1
axs[left].plot(xtrain, ytrain, 's', alpha=0.3, ms=10, label="in-sample y (observed)")
axs[left].plot(x, y, '.', alpha=0.8, label="population y")
axs[left].plot(grid_to_predict, ypredict_ridge_best, 'k-', label="lambda = %s" % str(lambdawechoose))
axs[left].set_ylabel('$y$')
axs[left].set_ylim((0, 1))
axs[left].set_xlim((0, 1))
axs[left].legend(loc=2)
coef = est.coef_.ravel()
axs[right].semilogy(np.abs(coef), marker='o', label="lambda = %s" % str(lambdawechoose))
axs[right].set_ylim((1e-1, 1e15))
axs[right].yaxis.set_label_position("right")
axs[right].set_ylabel('abs(coefficient)')
axs[right].legend(loc='upper left')
axs[left].set_xlabel("x")
axs[right].set_xlabel("coefficients");
```
# END OF LAB
## Part 6: Feature selection with LASSO regression
Below is a completely worked example of feature selection with LASSO, which will be helpful for homework 4. For reference [here](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html) is the documentation for LASSO.
```python
from sklearn.linear_model import Lasso
#function to do lasso with cross validation
def cv_optimize_lasso(X, y, list_of_lambdas, n_folds=4):
#build the lasso model
clf = Lasso()
parameters = {"alpha": list_of_lambdas}
#the scoring parameter below is the default one in ridge, but you can use a
#different one in the cross-validation phase if desired.
gs = GridSearchCV(clf, param_grid=parameters, cv=n_folds, scoring="neg_mean_squared_error")
gs.fit(X, y)
return gs
```
```python
#List of Lambda (lol!) values
lol = [1e-8,1e-6, 1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0]
#fit lasso model to training data with cross-validation
fitmodel_lasso = cv_optimize_lasso(Xtrain, ytrain, lol, n_folds=4)
```
/Users/margolevine/anaconda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/margolevine/anaconda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/margolevine/anaconda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/margolevine/anaconda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/margolevine/anaconda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/margolevine/anaconda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/margolevine/anaconda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/margolevine/anaconda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/margolevine/anaconda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/margolevine/anaconda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/margolevine/anaconda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/margolevine/anaconda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
```python
#choose the optimal lambda
lambdawechoose_lasso = fitmodel_lasso.best_params_['alpha']
#estimate with this optimal lambda
est_lasso = Lasso(alpha=lambdawechoose_lasso).fit(Xtrain,ytrain)
est_lasso
```
Lasso(alpha=0.0005, copy_X=True, fit_intercept=True, max_iter=1000,
normalize=False, positive=False, precompute=False, random_state=None,
selection='cyclic', tol=0.0001, warm_start=False)
```python
#function that pulls out the important features
def nonzero_lasso(est, lcols):
featuremask=(est.coef_ !=0.0)
return pd.DataFrame(dict(feature=lcols, coef=est.coef_,
abscoef=np.abs(est.coef_)))[featuremask].sort_values('abscoef',
ascending=False)
```
```python
#x^1, x^2, x^6, x^20 are the important features
lasso_importances=nonzero_lasso(est_lasso, list(range(d+1)))
lasso_importances.set_index("feature", inplace=True)
lasso_importances
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>abscoef</th>
<th>coef</th>
</tr>
<tr>
<th>feature</th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>1.192801</td>
<td>1.192801</td>
</tr>
<tr>
<th>13</th>
<td>0.188825</td>
<td>-0.188825</td>
</tr>
<tr>
<th>12</th>
<td>0.040679</td>
<td>-0.040679</td>
</tr>
<tr>
<th>14</th>
<td>0.000113</td>
<td>-0.000113</td>
</tr>
</tbody>
</table>
</div>
```python
#function that pulls out the trivial features
def zero_lasso(est, lcols):
featuremask=(est.coef_ ==0.0)
return pd.DataFrame(dict(feature=lcols, coef=est.coef_,
abscoef=np.abs(est.coef_)))[featuremask].sort_values('abscoef',
ascending=False)
```
```python
#calculate and print the trivial features.
lasso_zeros=zero_lasso(est_lasso, list(range(d+1)))
lasso_zeros.set_index("feature", inplace=True)
lasso_zeros
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>abscoef</th>
<th>coef</th>
</tr>
<tr>
<th>feature</th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>10</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
<tr>
<th>19</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
<tr>
<th>18</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
<tr>
<th>17</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
<tr>
<th>16</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
<tr>
<th>15</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
<tr>
<th>11</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
<tr>
<th>9</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
<tr>
<th>2</th>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>8</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
<tr>
<th>7</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
<tr>
<th>6</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
<tr>
<th>5</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
<tr>
<th>4</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
<tr>
<th>3</th>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>20</th>
<td>0.0</td>
<td>-0.0</td>
</tr>
</tbody>
</table>
</div>
```python
#17 of the 21 features are trivial, 4 are important
len(lasso_zeros), len(lasso_importances)
```
(17, 4)
```python
```
|
841fd6e0957296f16d822ab4127b40202e30e5d9
| 592,171 |
ipynb
|
Jupyter Notebook
|
labs/Lab5_Regularization/Lab5_Regularization_Final_partialsolns.ipynb
|
larsonma/DataScienceIntro
|
7aa776cbadffb38e678e5da195a4208c22b86488
|
[
"MIT"
] | 6 |
2018-09-04T13:07:38.000Z
|
2019-09-20T18:39:15.000Z
|
labs/Lab5_Regularization/Lab5_Regularization_Final_partialsolns.ipynb
|
larsonma/DataScienceIntro
|
7aa776cbadffb38e678e5da195a4208c22b86488
|
[
"MIT"
] | 7 |
2018-09-06T14:48:10.000Z
|
2018-11-06T19:39:21.000Z
|
labs/Lab5_Regularization/Lab5_Regularization_Final_partialsolns.ipynb
|
larsonma/DataScienceIntro
|
7aa776cbadffb38e678e5da195a4208c22b86488
|
[
"MIT"
] | 10 |
2018-08-21T16:19:11.000Z
|
2021-01-20T20:44:48.000Z
| 442.248693 | 385,412 | 0.919886 | true | 9,563 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.637031 | 0.785309 | 0.500266 |
__label__eng_Latn
| 0.81433 | 0.000614 |
# Nonlinear Equations
We want to find a root of the nonlinear function $f$ using different methods.
1. Bisection method
2. Newton method
3. Chord method
4. Secant method
5. Fixed point iterations
```python
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
import sympy as sym
```
```python
t = sym.symbols('t')
f_sym = t/8. * (63.*t**4 - 70.*t**2. +15.) # Legendre polynomial of order 5
f_prime_sym = sym.diff(f_sym,t)
f = sym.lambdify(t, f_sym, 'numpy')
f_prime = sym.lambdify(t,f_prime_sym, 'numpy')
phi = lambda x : 63./70.*x**3 + 15./(70.*x)
#phi = lambda x : 70.0/15.0*x**3 - 63.0/15.0*x**5
#phi = lambda x : sqrt((63.*x**4 + 15.0)/70.)
# Let's plot
n = 1025
x = linspace(-1,1,n)
c = zeros_like(x)
_ = plot(x,f(x))
_ = plot(x,c)
_ = grid()
```
```python
# Initial data for the variuos algorithms
# interval in which we seek the solution
a = 0.7
b = 1.
# initial points
x0 = (a+b)/2.0
x00 = b
```
```python
# stopping criteria
eps = 1e-10
n_max = 1000
```
## Bisection method
$$
x^k = \frac{a^k+b^k}{2}
$$
```
if (f(a_k) * f(x_k)) < 0:
b_k1 = x_k
a_k1 = a_k
else:
a_k1 = x_k
b_k1 = b_k
```
```python
def bisect(f,a,b,eps,n_max):
assert f(a)*f(b)<0
a_new = a
b_new = b
x = mean([a,b])
err = eps + 1.
errors = [err]
it = 0
while (err > eps and it < n_max):
if ( f(a_new) * f(x) < 0 ):
# root in (a_new,x)
b_new = x
else:
# root in (x,b_new)
a_new = x
x_new = mean([a_new,b_new])
#err = 0.5 *(b_new -a_new)
err = abs(f(x_new))
#err = abs(x-x_new)
errors.append(err)
x = x_new
it += 1
semilogy(errors)
print(it)
print(x)
print(err)
return errors
errors_bisect = bisect(f,a,b,eps,n_max)
```
```python
# is the number of iterations coherent with the theoretical estimation?
```
In order to derive other methods for solving non-linear equations, let's expand $f$ in a Taylor series around $x^k$, truncated at first order,
$$
f(x) \simeq f(x^k) + (x-x^k)f^{\prime}(x^k)
$$
which suggests the following iterative scheme
$$
x^{k+1} = x^k - \frac{f(x^k)}{f^{\prime}(x^k)}
$$
The following methods are obtained by applying the above scheme with the approximation
$$
f^{\prime}(x^k) \approx q^k
$$
## Newton's method
$$
q^k = f^{\prime}(x^k)
$$
$$
x^{k+1} = x^k - \frac{f(x^k)}{q^k}
$$
```python
def newton(f,f_prime,x0,eps,n_max):
pass # TODO
%time errors_newton = newton(f,f_prime,1.0,eps,n_max)
```
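One possible way to fill in the TODO above (a sketch, not the only valid solution): it mirrors the structure of `bisect`, reusing $|f(x^k)|$ as the stopping criterion and collecting the errors for the convergence plot.

```python
def newton(f, f_prime, x0, eps, n_max):
    """Newton's method: q^k = f'(x^k)."""
    x = x0
    err = eps + 1.0
    errors = [err]
    it = 0
    while err > eps and it < n_max:
        x = x - f(x) / f_prime(x)   # Newton update
        err = abs(f(x))             # same stopping criterion used in bisect
        errors.append(err)
        it += 1
    semilogy(errors)
    print(it, x, err)
    return errors

%time errors_newton = newton(f, f_prime, 1.0, eps, n_max)
```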
## Chord method
$$
q^k \equiv q = \frac{f(b)-f(a)}{b-a}
$$
$$
x^{k+1} = x^k - \frac{f(x^k)}{q}
$$
```python
def chord(f,a,b,x0,eps,n_max):
pass # TODO
errors_chord = chord (f,a,b,x0,eps,n_max)
```
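A possible completion of the TODO above: the slope $q$ is computed once from the endpoints of the interval and then kept fixed for all iterations.

```python
def chord(f, a, b, x0, eps, n_max):
    """Chord method: fixed slope q = (f(b) - f(a)) / (b - a)."""
    q = (f(b) - f(a)) / (b - a)   # slope computed once and never updated
    x = x0
    err = eps + 1.0
    errors = [err]
    it = 0
    while err > eps and it < n_max:
        x = x - f(x) / q
        err = abs(f(x))
        errors.append(err)
        it += 1
    semilogy(errors)
    print(it, x, err)
    return errors

errors_chord = chord(f, a, b, x0, eps, n_max)
```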
## Secant method
$$
q^k = \frac{f(x^k)-f(x^{k-1})}{x^k - x^{k-1}}
$$
$$
x^{k+1} = x^k - \frac{f(x^k)}{q^k}
$$
Note that this algorithm requires **two** initial points
```python
def secant(f,x0,x00,eps,n_max):
pass # TODO
errors_secant = secant(f,x0,x00,eps,n_max)
```
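A possible completion of the TODO above, keeping track of the two most recent iterates to build the secant slope at every step.

```python
def secant(f, x0, x00, eps, n_max):
    """Secant method: q^k = (f(x^k) - f(x^{k-1})) / (x^k - x^{k-1})."""
    x_old, x = x00, x0            # the two required initial points
    err = eps + 1.0
    errors = [err]
    it = 0
    while err > eps and it < n_max:
        q = (f(x) - f(x_old)) / (x - x_old)
        x_old, x = x, x - f(x) / q
        err = abs(f(x))
        errors.append(err)
        it += 1
    semilogy(errors)
    print(it, x, err)
    return errors

errors_secant = secant(f, x0, x00, eps, n_max)
```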
## Fixed point iterations
$$
f(x)=0 \to x-\phi(x)=0
$$
$$
x^{k+1} = \phi(x^k)
$$
```python
def fixed_point(phi,x0,eps,n_max):
pass # TODO
errors_fixed = fixed_point(phi,0.3,eps,n_max)
```
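A possible completion of the TODO above; here the stopping criterion is the size of the increment $|x^{k+1}-x^k|$, which is an assumption (any of the criteria used earlier would also work).

```python
def fixed_point(phi, x0, eps, n_max):
    """Fixed point iterations: x^{k+1} = phi(x^k)."""
    x = x0
    err = eps + 1.0
    errors = [err]
    it = 0
    while err > eps and it < n_max:
        x_new = phi(x)
        err = abs(x_new - x)      # increment-based stopping criterion
        errors.append(err)
        x = x_new
        it += 1
    semilogy(errors)
    print(it, x, err)
    return errors

errors_fixed = fixed_point(phi, 0.3, eps, n_max)
```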
## Comparison
```python
# plot the error convergence for the methods
loglog(errors_bisect, label='bisect')
loglog(errors_chord, label='chord')
loglog(errors_secant, label='secant')
loglog(errors_newton, label ='newton')
loglog(errors_fixed, label ='fixed')
_ = legend()
```
```python
# Let's compare scipy's implementation of Newton's method with ours
```
```python
import scipy.optimize as opt
%time opt.newton(f, 1.0, f_prime, tol = eps)
```
|
729dcf9afae02c68f7c154ee109936d1746b9254
| 8,242 |
ipynb
|
Jupyter Notebook
|
notebooks/03-nonlinear-equations.ipynb
|
debarshibanerjee/numerical-analysis-2021-2022
|
e23043a56a66ff119301088c2a85ba9ca1ba37ce
|
[
"CC-BY-4.0"
] | 1 |
2022-01-12T23:19:50.000Z
|
2022-01-12T23:19:50.000Z
|
notebooks/03-nonlinear-equations.ipynb
|
giovastabile/numerical-analysis-2021-2022
|
15b4557cc06eb089077931e08367845a7c10935c
|
[
"CC-BY-4.0"
] | null | null | null |
notebooks/03-nonlinear-equations.ipynb
|
giovastabile/numerical-analysis-2021-2022
|
15b4557cc06eb089077931e08367845a7c10935c
|
[
"CC-BY-4.0"
] | null | null | null | 21.463542 | 146 | 0.440063 | true | 1,325 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.896251 | 0.931463 | 0.834825 |
__label__eng_Latn
| 0.675125 | 0.77791 |
# Gym Intro + Monte Carlo Learning
Originally from https://skettee.github.io/post/monte_carlo_learning/ (in Korean)
## Load Libraries and Extensions
```python
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```
```python
from IPython.display import display, clear_output, Pretty
import numpy as np
from time import sleep
from tqdm import tqdm_notebook as tqdm
import gym
```
## Frozen Lake Environment
```python
ENV_NAME = 'FrozenLake-v0'
N_STEP = 100
```
```python
env = gym.make(ENV_NAME, is_slippery=False)
state = env.reset()
```
```python
world = env.render(mode='ansi')
Pretty(world)
```
[41mS[0mFFF
FHFH
FFFH
HFFG
```python
for step in range(N_STEP):
action = env.action_space.sample()
next_state, reward, done, info = env.step(action)
state = next_state
# updated world display
world = env.render(mode='ansi')
clear_output(wait=True)
display(Pretty(world))
sleep(0.5)
if done: # an episode finished
print("Episode finished after {} timesteps".format(step + 1))
break
```
(Right)
SFFF
F[41mH[0mFH
FFFH
HFFG
Episode finished after 2 timesteps
```python
env.action_space
```
Discrete(4)
There are 4 actions in Frozen Lake.
$A = \{0, 1, 2, 3\}$
Num | Action
----|----
0 | Move Left
1 | Move Down
2 | Move Right
3 | Move Up
```python
env.observation_space
```
Discrete(16)
There are 16 states as follows:
$S = \{0, 1, \cdots , 15\}$
$\begin{vmatrix}
0 & 1 & 2 & 3 \\
4 & 5 & 6 & 7 \\
8 & 9 & 10 & 11 \\
12 & 13 & 14 & 15
\end{vmatrix}$
For each state, each possible action is associated with its transition information (probability, next state, reward, done) as follows:
`{action: [(probability, nextstate, reward, done)]}`
For example, let's look at State 6 and 14:
```python
env.P[6]
```
{0: [(1.0, 5, 0.0, True)],
1: [(1.0, 10, 0.0, False)],
2: [(1.0, 7, 0.0, True)],
3: [(1.0, 2, 0.0, False)]}
```python
env.P[14]
```
{0: [(1.0, 13, 0.0, False)],
1: [(1.0, 14, 0.0, False)],
2: [(1.0, 15, 1.0, True)],
3: [(1.0, 10, 0.0, False)]}
## Monte Carlo Learning
$G_t = R_{t+1} + \gamma R_{t+2} + \cdots + = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}$
$q_{\pi}(s,a) = \mathbb E_{\pi} [G_t | S_t = s, A_t = a]$
$q_*(s,a) = \max_{\pi} q_{\pi}(s,a)$
$ \pi_*(s, a) = \begin{cases}
1 & \text{if } a= \text{argmax}_{a \in A} q_\star(s,a) \\
0 & \text{otherwise}
\end{cases} $
$\begin{align}
N(S_t, A_t) & \leftarrow N(S_t, A_t) + 1 \\
Q(S_t, A_t) & \leftarrow Q(S_t, A_t) + \dfrac{1}{N(S_t, A_t)} \left( G_t - Q(S_t, A_t) \right)
\end{align}$
```python
n_state = env.observation_space.n
n_action = env.action_space.n
n_episode = 5000
GAMMA = .6
```
```python
Q_table = np.zeros((n_state, n_action))
N_table = np.zeros((n_state, n_action))
R_table = np.zeros((n_state, n_action))
for episode in tqdm(range(n_episode)):
memory = []
state = env.reset()
for step in range(N_STEP):
action = env.action_space.sample()
memory.append((state, action))
next_state, reward, done, info = env.step(action)
R_table[state][action] = reward
state = next_state
if done:
for i in range(len(memory)):
G_t = 0
gamma = GAMMA
for j in range(i, len(memory)):
S_t, A_t = memory[j]
if i == j:
G_t += R_table[S_t][A_t]
else:
G_t += gamma * R_table[S_t][A_t]
gamma = gamma * GAMMA  # advance the discount by one power of GAMMA per step, so the weight is gamma^k
S_t, A_t = memory[i]
N_table[S_t][A_t] += 1
Q_table[S_t][A_t] += (G_t - Q_table[S_t][A_t]) / N_table[S_t][A_t]
break
```
HBox(children=(IntProgress(value=0, max=5000), HTML(value='')))
## Solution
```python
state = env.reset()
done = False
world = env.render(mode='ansi')
display(Pretty(world))
sleep(.5)
```
[41mS[0mFFF
FHFH
FFFH
HFFG
```python
while not done:
action = np.argmax(Q_table[state])
state, reward, done, info = env.step(action)
world = env.render(mode='ansi')
clear_output(wait=True)
display(Pretty(world))
sleep(.5)
if done and state == 15:
print('\nSuccess!')
```
(Right)
SFFF
FHFH
FFFH
HFF[41mG[0m
Success!
```python
```
|
bedccf866ab8d2fc134bc3492e535fb27df3789d
| 12,437 |
ipynb
|
Jupyter Notebook
|
notebooks/frozen_lake_4x4_monte_carlo.ipynb
|
jeongyoonlee/gym_example
|
bf8c71459854401a2eaf7e04af0986e7bbe36084
|
[
"Apache-2.0"
] | null | null | null |
notebooks/frozen_lake_4x4_monte_carlo.ipynb
|
jeongyoonlee/gym_example
|
bf8c71459854401a2eaf7e04af0986e7bbe36084
|
[
"Apache-2.0"
] | null | null | null |
notebooks/frozen_lake_4x4_monte_carlo.ipynb
|
jeongyoonlee/gym_example
|
bf8c71459854401a2eaf7e04af0986e7bbe36084
|
[
"Apache-2.0"
] | null | null | null | 22.090586 | 109 | 0.467235 | true | 1,467 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.808067 | 0.689306 | 0.557005 |
__label__eng_Latn
| 0.348922 | 0.13244 |
# Episodic Mountain Car with function approximation and control
This Notebook is intended to solve the Episodic Mountain car problem using Semi-gradient sarsa and Tile Coding.
The description of the problem is given below:
"A car is on a one-dimensional track, positioned between two "mountains". The goal is to drive up the mountain on the right; however, the car's engine is not strong enough to scale the mountain in a single pass. Therefore, the only way to succeed is to drive back and forth to build up momentum."
An extensive description and solution of the problem can be found in [Section 10.1 of Reinforcement Learning: An Introduction](http://www.incompleteideas.net/book/RLbook2018.pdf#page=267)
Image and text taken from the [official documentation of Mountain Car](https://gym.openai.com/envs/MountainCar-v0/).
```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import tiles3 as tc
from tqdm import tqdm
import gym
from gym.wrappers import Monitor
from utils import *
%matplotlib inline
```
## Understanding the Workflow of OpenAI Gym
The following variables are used at each timestep and they are returned by the Mountain Car environment.
- **observation** (object): an environment-specific object representing your observation of the environment. For example, pixel data from a camera, joint angles and joint velocities of a robot, or the board state in a board game.
- **reward** (float): amount of reward achieved by the previous action. The scale varies between environments, but the goal is always to increase your total reward.
- **done** (boolean): whether it’s time to reset the environment again. Most (but not all) tasks are divided up into well-defined episodes, and done being True indicates the episode has terminated. (For example, perhaps the pole tipped too far, or you lost your last life.)
- **info** (dict): diagnostic information useful for debugging. It can sometimes be useful for learning (for example, it might contain the raw probabilities behind the environment’s last state change). However, official evaluations of your agent are not allowed to use this for learning.
As a quick recap, the diagram below explains the workflow of a Markov Decision Process (MDP)
Image taken from [Section 3.1 of Reinforcement Learning: An Introduction](http://www.incompleteideas.net/book/RLbook2018.pdf#page=70)
## Environment and Agent specifications
Below are presented the main features of the environment and agent. Overall, the action space of the problem is discrete, with three possible actions. The observation (state) space is continuous, so a function-approximation technique is needed to solve this challenge. The agent receives a reward of -1 at each timestep unless it reaches the goal. The episode ends when the agent reaches the goal or after a maximum number of steps. Additionally, the agent always starts at a random position between $-0.6$ and $-0.4$ with zero velocity.
**Observation**:
Type: Box(2)
Num Observation Min Max
0 Car Position -1.2 0.6
1 Car Velocity -0.07 0.07
**Actions**:
Type: Discrete(3)
Num Action
0 Accelerate to the Left
1 Don't accelerate
2 Accelerate to the Right
Note: This does not affect the amount of velocity affected by the gravitational pull acting on the car
**Reward**:
Reward of 0 is awarded if the agent reached the flag(position = 0.5) on top of the mountain
Reward of -1 is awarded if the position of the agent is less than 0.5
**Starting State**:
The position of the car is assigned a uniform random value in [-0.6 , -0.4]
The velocity of the car is always assigned to 0
**Episode Termination**:
The car position is more than 0.5
Episode length is greater than 200
For further information see [Github source code](https://github.com/openai/gym/blob/master/gym/envs/classic_control/mountain_car.py)
The next cell aims to show how to iterate with the action and observation space of the agent and extract relevant information from it
```python
env = gym.make("MountainCar-v0")
observation = env.reset()
# Object's type in the action Space
print("The Action Space is an object of type: {0}\n".format(env.action_space))
# Shape of the action Space
print("The shape of the action space is: {0}\n".format(env.action_space.n))
# Object's type in the Observation Space
print("The Environment Space is an object of type: {0}\n".format(env.observation_space))
# Shape of the observation space
print("The Shape of the dimension Space are: {0}\n".format(env.observation_space.shape))
# The high and low values in the observation space
print("The High values in the observation space are {0}, the low values are {1}\n".format(
env.observation_space.high, env.observation_space.low))
# Minimum and Maximum car position
print("The minimum and maximum car's position are: {0}, {1}\n".format(
env.observation_space.low[0], env.observation_space.high[0]))
# Minimum and Maximum car velocity
print("The minimum and maximum car's velocity are: {0}, {1}\n".format(
env.observation_space.low[1], env.observation_space.high[1]))
# Example of observation
print("The Observations at a given timestep are {0}\n".format(env.observation_space.sample()))
```
The Action Space is an object of type: Discrete(3)
The shape of the action space is: 3
The Environment Space is an object of type: Box(2,)
The Shape of the dimension Space are: (2,)
The High values in the observation space are [0.6 0.07], the low values are [-1.2 -0.07]
The minimum and maximum car's position are: -1.2000000476837158, 0.6000000238418579
The minimum and maximum car's velocity are: -0.07000000029802322, 0.07000000029802322
The Observations at a given timestep are [-1.0707569 0.0590123]
# Tile Coding Class
For a complete explanation of what tile coding is and how it works, see [Section 9.5.4 of Reinforcement Learning: An Introduction](http://www.incompleteideas.net/book/RLbook2018.pdf#page=239). Overall, this is a way to create features that provide both good generalization and good discrimination for value function approximation. Tile coding consists of multiple overlapping tilings, where each tiling is a partitioning of the space into tiles.
**Note**: the tile coder implemented below assumes a 2-D observation space (position and velocity).
This technique is implemented using Tiles3, which is a python library written by Richard S. Sutton. For the full documentation see [Tiles3 documentation](http://incompleteideas.net/tiles/tiles3.html)
Image taken from [Section 9.5.4 of Reinforcement Learning: An Introduction](http://www.incompleteideas.net/book/RLbook2018.pdf#page=239)
```python
# Tile Coding Class
class MountainCarTileCoder:
def __init__(self, iht_size=4096, num_tilings=8, num_tiles=8):
"""
Initializes the MountainCar Tile Coder
Initializers:
iht_size -- int, the size of the index hash table, typically a power of 2
num_tilings -- int, the number of tilings
num_tiles -- int, the number of tiles. Here both the width and height of the
tile coder are the same
Class Variables:
self.iht -- tc.IHT, the index hash table that the tile coder will use
self.num_tilings -- int, the number of tilings the tile coder will use
self.num_tiles -- int, the number of tiles the tile coder will use
"""
self.iht = tc.IHT(iht_size)
self.num_tilings = num_tilings
self.num_tiles = num_tiles
def get_tiles(self, position, velocity):
"""
Takes in a position and velocity from the mountaincar environment
and returns a numpy array of active tiles.
Arguments:
position -- float, the position of the agent between -1.2 and 0.5
velocity -- float, the velocity of the agent between -0.07 and 0.07
returns:
tiles - np.array, active tiles
"""
# Set the max and min of position and velocity to scale the input
# The max position is set to 0.5 as this is the position to end the experiment
POSITION_MIN = -1.2
POSITION_MAX = 0.5
VELOCITY_MIN = -0.07
VELOCITY_MAX = 0.07
# Scale position and velocity by multiplying the inputs of each by their scale
position_scale = self.num_tiles / (POSITION_MAX - POSITION_MIN)
velocity_scale = self.num_tiles / (VELOCITY_MAX - VELOCITY_MIN)
# Obtain active tiles for current position and velocity
tiles = tc.tiles(self.iht, self.num_tilings, [position * position_scale,
velocity * velocity_scale])
return np.array(tiles)
```
```python
# Test the TileCoder class
mctc = MountainCarTileCoder(iht_size = 1024, num_tilings = 8, num_tiles = 8)
tiles = mctc.get_tiles(position = -1.0, velocity = 0.01)
# Tiles obtained at a random pos and vel
print("The Tiles obtained are: {0}\n".format(tiles))
```
The Tiles obtained are: [0 1 2 3 4 5 6 7]
# Implementing Sarsa Agent
To solve the Mountain Car problem, Value Function approximation and control will be used (Owing to the continuous state space). As a quick recap, Action-values can be computed using value function approximation giving the following equation.
\begin{equation}
q_\pi(s,a) \approx \hat{q}(s, a, w) \doteq w^T x(s,a)
\end{equation}
Where $w$ is a set of weights and $x(s,a)$ is the feature vector, which is computed using tile coding.
Using the tile coder implemented above it is possible to compute the action-values $\hat{q}(s, a, w)$ and solve this RL task.
The equation to update the weights using the Sarsa algorithm is given below. Here, $\nabla \hat{q}(S_t, A_t, w)$ is the gradient of the action-value approximation; since $\hat{q}$ is linear in the weights, this gradient is simply $x(s,a)$, i.e. one for the active features and zero elsewhere.
\begin{equation}
w \leftarrow w + \alpha[R_{t+1} + \gamma \hat{q}(S_{t+1}, A_{t+1}, w)- \hat{q}(S_t, A_t, w)]\nabla \hat{q}(S_t, A_t, w)
\end{equation}
Additionally, the update "Target" is composed of the following terms:
\begin{equation}
\delta \leftarrow R_{t+1} + \gamma \hat{q}(S_{t+1}, A_{t+1}, w)
\end{equation}
\begin{equation}
w \leftarrow w + \alpha[\delta - \hat{q}(S_t, A_t, w)]\nabla \hat{q}(S_t, A_t, w)
\end{equation}
The Pseudo-code implementation of this algorithm is given below.
For further details, see [Section 9.5.4 of Reinforcement Learning: An Introduction](http://www.incompleteideas.net/book/RLbook2018.pdf#page=266). Image taken from the last reference.
```python
# SARSA
class SarsaAgent():
"""
Initialization of Sarsa Agent. All values are set to None so they can
be initialized in the agent_init method.
"""
def __init__(self, agent_info={}):
"""Setup for the agent called when the experiment first starts."""
self.last_action = None
self.last_state = None
self.epsilon = None
self.gamma = None
self.iht_size = None
self.w = None
self.alpha = None
self.num_tilings = None
self.num_tiles = None
self.mctc = None
self.initial_weights = None
self.num_actions = None
self.previous_tiles = None
def agent_init(self, agent_info={}):
"""Setup for the agent called when the experiment first starts."""
self.num_tilings = agent_info.get("num_tilings", 8)
self.num_tiles = agent_info.get("num_tiles", 8)
self.iht_size = agent_info.get("iht_size", 4096)
self.epsilon = agent_info.get("epsilon", 0.0)
self.gamma = agent_info.get("gamma", 1.0)
self.alpha = agent_info.get("alpha", 0.5) / self.num_tilings
self.initial_weights = agent_info.get("initial_weights", 0.0)
self.num_actions = agent_info.get("num_actions", 3)
# Initialize self.w to three times the iht_size. Recall this is because
# we need to have one set of weights for each action (Stacked values).
self.w = np.ones((self.num_actions, self.iht_size)) * self.initial_weights
# Initialize self.mctc to the mountain car version of the tile coder created above
self.mctc = MountainCarTileCoder(iht_size = self.iht_size,
num_tilings = self.num_tilings,
num_tiles = self.num_tiles)
def select_action(self, tiles):
"""
Selects an action using epsilon greedy
Args:
tiles - np.array, an array of active tiles
Returns:
(chosen_action, action_value) - (int, float), tuple of the chosen action
and it's value
"""
action_values = []
chosen_action = None
# Obtain action values for all actions (sum through rows)
action_values = np.sum(self.w[:, tiles], axis = 1)
# Epsilon-greedy action selection
if np.random.random() < self.epsilon:
# Select random action among the three posible actions
chosen_action = np.random.randint(self.num_actions)
else:
# Select the greedy action
chosen_action = argmax(action_values)
return chosen_action, action_values[chosen_action]
def agent_start(self, state):
"""The first method called when the experiment starts, called after
the environment starts.
Args:
state (Numpy array): the state observation from the
environment's env.reset() function.
Returns:
The first action the agent takes.
"""
# Current state
position, velocity = state
# Obtain the tiles active in the initial state
active_tiles = self.mctc.get_tiles(position = position, velocity = velocity)
# Select an action and obtain action values of the state
current_action, action_value = self.select_action(active_tiles)
# Save action as last action
self.last_action = current_action
# Save tiles as previous tiles
self.previous_tiles = np.copy(active_tiles)
return self.last_action
def agent_step(self, reward, state):
"""A step taken by the agent.
Args:
reward (float): the reward received for taking the last action taken
state (Numpy array): the state observation from the
environment's step based, where the agent ended up after the
last step
Returns:
The action the agent is taking.
"""
# Current state
position, velocity = state
# Compute current tiles
active_tiles = self.mctc.get_tiles(position = position, velocity = velocity)
# Obtain the new action and its action value before updating the weights
current_action, action_value = self.select_action(active_tiles)
# Update the Sarsa Target (delta)
target = reward + (self.gamma * action_value)
# Compute last action values to update weights
last_action_val = np.sum(self.w[self.last_action][self.previous_tiles])
# As we are using tile coding, a variant of linear function approximation,
# the gradient is one for the active tiles and zero otherwise.
grad = 1
self.w[self.last_action][self.previous_tiles] = self.w[self.last_action][self.previous_tiles] + \
self.alpha * (target - last_action_val) * grad
self.last_action = current_action
self.previous_tiles = np.copy(active_tiles)
return self.last_action
def agent_end(self, reward):
"""Run when the agent terminates.
Args:
reward (float): the reward the agent received for entering the
terminal state.
"""
# There is no action_value used here because this is the end
# of the episode.
# Compute delta
target = reward
# Compute last action value
last_action_val = np.sum(self.w[self.last_action][self.previous_tiles])
grad = 1
# Update weights
self.w[self.last_action][self.previous_tiles] = self.w[self.last_action][self.previous_tiles] + \
self.alpha * (target - last_action_val) * grad
def return_action_value(self, state):
"""Run to obtain action-values for a given state.
Args:
state (Numpy array): the state observation
Returns:
The max action-value
"""
# Current state
position, velocity = state
# Obtain the tiles active in the given state
active_tiles = self.mctc.get_tiles(position = position, velocity = velocity)
# Obtain action values for all actions (sum through rows)
action_values = np.sum(self.w[:, active_tiles], axis = 1)
# Obtain max action value
max_action_value = np.max(action_values)
return max_action_value
```
## Running the experiment
The following lines solve the Mountain Car problem and plot the average reward obtained per episode and the number of steps taken to finish each episode.
```python
# Test Sarsa Agent
num_runs = 10
num_episodes = 100
agent_info_options = {"num_tilings": 8, "num_tiles": 8, "iht_size": 4096,
"epsilon": 0.0, "gamma": 1.0, "alpha": 0.5,
"initial_weights": 0.0, "num_actions": 3}
# Variable to store the number of steps taken to solve the challenge
all_steps = []
# Variable to save the rewards in an episode
all_rewards = []
# Agent
agent = SarsaAgent(agent_info_options)
# Environment
env = gym.make('MountainCar-v0')
env.reset()
# Maximum number of possible iterations (default was 200)
env._max_episode_steps = 10000
# Each run resets the agent and then trains it for num_episodes episodes
for n_runs in tqdm(range(num_runs)):
# Resets environment
observation = env.reset()
# Reset agent
agent.agent_init(agent_info_options)
# Generate last state and action in the agent
last_action = agent.agent_start(observation)
# Steps taken at each episode to solve the challenge
steps_per_episode = []
rewards_per_episode = []
# Times the environment will start again without resetting the agent
for t in range(num_episodes):
# Store number of steps taken to solve experiment
n_steps = 0
rewards = 0
# Reset done flag
done = False
# Reset environment
observation = env.reset()
# Run until the experiment is over
while not done:
# Take a step with the environment
observation, reward, done, info = env.step(last_action)
# Number of steps the agent take to solve the challenge
n_steps += 1
# Accumulate reward
rewards += reward
# If the goal has been reached stop
if done:
# Last step with the agent
agent.agent_end(reward)
else:
# Take a step with the agent
last_action = agent.agent_step(reward, observation)
# Save the amount of steps needed to complete the experiment
# Without rebooting the agent
steps_per_episode.append(n_steps)
# Save the amount of award obtained at each episode
rewards_per_episode.append(rewards)
# Save the list of steps needed to finish the experiment
# in all the episodes
all_steps.append(np.array(steps_per_episode))
# Rewards obtained in every episode
all_rewards.append(np.array(rewards_per_episode))
env.close()
```
100%|██████████| 10/10 [00:47<00:00, 4.74s/it]
```python
steps_average = np.mean(np.array(all_steps), axis=0)
plt.plot(steps_average, label = 'Steps')
plt.xlabel("Episodes")
plt.ylabel("Iterations",rotation=0, labelpad=40)
plt.xlim(-0.2, num_episodes)
plt.ylim(steps_average.min(), steps_average.max())
plt.title("Average iterations to solve the experiment over runs")
plt.legend()
plt.show()
print("The Minimum number of iterations used to solve the experiment were: {0}\n".format(np.array(all_steps).max()))
print("The Maximum number of iterations used to solve the experiment were: {0}\n".format(np.array(all_steps).min()))
```
```python
rewards_average = np.mean(all_rewards, axis=0)
plt.plot(rewards_average, label = 'Average Reward')
plt.xlabel("Episodes")
plt.ylabel("Sum of\n rewards\n during\n episode" ,rotation=0, labelpad=40)
plt.xlim(-0.2, num_episodes)
plt.ylim(rewards_average.min(), rewards_average.max())
plt.title("Average iterations to solve the experiment over runs")
plt.legend()
plt.show()
print("The best reward obtained solving the experiment was: {0}\n".format(np.array(all_rewards).max()))
print("The Wordt reward obtained solving the experiment was: {0}\n".format(np.array(all_rewards).min()))
```
## Using the last trained Agent
These lines show the performance of the last trained agent and save a video of the results.
```python
# Test Sarsa Agent
num_runs = 1
num_episodes = 1000
# Environment
env_to_wrap = gym.make('MountainCar-v0')
# Maximum number of possible iterations (default was 200)
env_to_wrap._max_episode_steps = 1500
env = Monitor(env_to_wrap, "./videos/mountainCar", video_callable=lambda episode_id: True, force=True)
# Number of runs are the times the experiment will start again (a.k.a episode)
for n_runs in tqdm(range(num_runs)):
# Resets environment
observation = env.reset()
# Generate last state and action in the agent
last_action = agent.agent_start(observation)
# Times the environment will start again without resetting the agent
for t in tqdm(range(num_episodes)):
# View environment
env.render()
# Take a step with the environment
observation, reward, done, info = env.step(last_action)
# If the goal has been reached stop
if done:
# Last step with the agent
agent.agent_end(reward)
break
else:
# Take a step with the agent
last_action = agent.agent_step(reward, observation)
env.close()
env_to_wrap.close()
print("Episode finished after {} timesteps".format(t+1))
```
100%|██████████| 1/1 [00:02<00:00, 2.15s/it]
Episode finished after 222 timesteps
## Plotting the Action-Values of the agent
This final plot aims to show the action-values learned by the agent with Sarsa. The action value for a given state was calculated using $-\max_a \hat{q}(s, a, w)$.
```python
# Resolution
values = 500
# Vector of positions
pos_vals = np.linspace(-1.2, 0.5, num = values)
# Vector of velocities
vel_vals = np.linspace(-0.07, 0.07, num = values)
# Z grid values
av_grid = np.zeros((values, values))
# Compute Action-values for each pos - vel pair
for ix in range(len(pos_vals)):
for iy in range(len(vel_vals)):
av_grid[ix][iy] = -1 * agent.return_action_value([pos_vals[ix], vel_vals[iy]])
```
```python
# Plot the 3D surface
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
Px, Vy = np.meshgrid(pos_vals, vel_vals)
ax.plot_surface(Vy, Px, av_grid, color = 'gray')
ax.set_title("Cost-to-go function learned", y = 1.1)
ax.set_xlabel('Velocity')
ax.set_ylabel('Position')
ax.set_zlabel('Iterations')
ax.view_init(45, azim=30)
plt.tight_layout()
plt.show()
```
```python
```
|
6b556213b136942c36cfaa13995a842211b14054
| 194,800 |
ipynb
|
Jupyter Notebook
|
Mountain Car.ipynb
|
MikeS96/rl_openai
|
072d640d4a96914e18b563100482a535c65e9738
|
[
"MIT"
] | 3 |
2020-06-29T14:55:45.000Z
|
2022-03-17T02:53:26.000Z
|
Mountain Car.ipynb
|
MikeS96/rl_openai
|
072d640d4a96914e18b563100482a535c65e9738
|
[
"MIT"
] | null | null | null |
Mountain Car.ipynb
|
MikeS96/rl_openai
|
072d640d4a96914e18b563100482a535c65e9738
|
[
"MIT"
] | 2 |
2021-12-08T15:38:24.000Z
|
2022-03-06T08:18:56.000Z
| 221.112372 | 98,132 | 0.894209 | true | 6,210 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.715424 | 0.746139 | 0.533806 |
__label__eng_Latn
| 0.966912 | 0.078539 |
# Phase spaces of the equation with diffusion
```python
from sympy.solvers import solve
from sympy import Symbol
import matplotlib.pyplot as plt
import numpy as np
```
```python
u1 = Symbol('u1')
u2 = Symbol('u2')
```
```python
# TABLE II paper diffusion
r1 = 0.0001 # in this case, both populations have a positive interaction
r2 = 0.6 # with the environment, although the predators are almost autonomous
b11 = 0.0019 # the prey are cooperative with each other
b12 = -0.00075 # population 1 corresponds to the prey
b21 = 0.00091 # population 2 to the predators
b22 = -0.0019 # the predators compete with each other
a1 = 0.0005
a2 = 0.000625
c1 = 0.001251
c2 = 0.001
s = r2/r1 # these are the parameters of the nondimensionalized system, s is the ratio of the r's
q1 = a1/(c1*r1) # the q's are the ratios between the a's and the c's
q2 = a2/(c2*r1) # both have r1 in the denominator, from the factorization made for s
p11 = b11/(c1*r1) # the p's correspond to the b's
p12 = b12/(c2*r1) # so they incorporate both the interaction
p21 = b21/(c1*r1) # and the limit given by the c's
p22 = b22/(c2*r1) # and carry the factor r1 in the denominator, from the factorization of s
g = 1 # this is the gamma proposed by Murray
d = 19 # I am choosing a diffusion value larger than the critical d, which is about 18
```
```python
# Functionals. In the first step, we define the functionals.
f1 = g*u1*(1-q1*u1+(p11*u1+p12*u2)*(1-u1))
f2 = g*u2*(s-q2*u2+(p21*u1+p22*u2)*(1-u2))
```
```python
# Calculate the solutions and print them
solucion = solve([f1,f2],[u1,u2])
print(solucion)
```
[(-8.93463091074237e-5, 0.0), (0.0, 0.0), (0.0, 0.309881237415477), (0.0, 1.01906613100558), (0.424693552787801, 0.466861893539141), (0.441900679728548, 0.473154874821710), (0.736931451572265, 0.0), (0.619932130000084 - 0.348403740313958*I, 1.02658982019912 - 0.00728446411237864*I), (0.619932130000084 + 0.348403740313958*I, 1.02658982019912 + 0.00728446411237864*I)]
```python
xlist = [i[0] for i in solucion if not (np.iscomplex(complex(i[0])))]
ylist = [i[1] for i in solucion if not(np.iscomplex(complex(i[1]))) ]
plt.axvline(0, c='grey')
plt.axhline(0, c='grey')
plt.plot(xlist,ylist,'ko')
y, x = np.mgrid[min(ylist)-1:max(ylist)+1:150j, min(xlist)-1:max(xlist)+1:150j]
eq1 = g*x*(1-q1*x+(p11*x+p12*y)*(1-x))
eq2 = g*y*(s-q2*y+(p21*x+p22*y)*(1-y))
plt.streamplot(x, y, eq1, eq2, color=eq1, cmap='Greys',linewidth=1.5,density = 4.0)
plt.xlim(-0.1, 1.0)
plt.ylim(-0.1, 1.0)
plt.xlabel(r'$U_{1}$',size = 14)
plt.ylabel(r'$U_{2}$',size= 14)
#plt.tight_layout()
#plt.savefig('figure5_1.eps',dpi= 1500)
#plt.savefig('figure5_1.pdf',dpi= 1500)
plt.savefig('figure5_1.jpg',dpi= 1500)
plt.show()
```
```python
```
|
c651cfba85073eb1de1f460a71d8126798cfe9a9
| 113,960 |
ipynb
|
Jupyter Notebook
|
Chapter2_difussion/Phase_Spaces_diffusion .ipynb
|
JJ-Lab/Lucianos_Thesis
|
7ed50b0d5d12903066f8caec7df8cdf38490a060
|
[
"MIT"
] | null | null | null |
Chapter2_difussion/Phase_Spaces_diffusion .ipynb
|
JJ-Lab/Lucianos_Thesis
|
7ed50b0d5d12903066f8caec7df8cdf38490a060
|
[
"MIT"
] | null | null | null |
Chapter2_difussion/Phase_Spaces_diffusion .ipynb
|
JJ-Lab/Lucianos_Thesis
|
7ed50b0d5d12903066f8caec7df8cdf38490a060
|
[
"MIT"
] | null | null | null | 690.666667 | 109,272 | 0.951035 | true | 1,057 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.921922 | 0.841826 | 0.776097 |
__label__spa_Latn
| 0.549356 | 0.641467 |
# ENGR 1330 Exam 1 Sec 003/004 Fall 2020
Take Home Portion of Exam 1
<hr>
## Full name
## R#:
## HEX:
## ENGR 1330 Exam 1 Sec 003/004
## Date:
<hr>
## Question 1 (1 pts):
Run the cell below, and leave the results in your notebook (Windows users may get an error, leave the error in place)
```python
#### RUN! the Cell ####
import sys
! hostname
! whoami
print(sys.executable) # OK if generates an exception message on Windows machines
```
atomickitty
sensei
/opt/jupyterhub/bin/python3
<hr>
## Question 2 (9 pts):
- __When it is 8:00 in Lubbock,__
- __It is 9:00 in New York__
- __It is 14:00 in London__
- __It is 15:00 in Cairo__
- __It is 16:00 in Istanbul__
- __It is 19:00 in Hyderabad__
- __It is 22:00 in Tokyo__ <br>
__Write a function that reports the time in New York, London, Cairo, Istanbul, Hyderabad, and Tokyo based on the time in Lubbock. Use a 24-hour time format. Include error trapping that:__<br>
1- Issues a message like "Please Enter A Number from 00 to 23" if the first input is numeric but outside the range of [0,23].<br>
2- Takes any numeric input for "Lubbock time" selection , and forces it into an integer.<br>
3- Issues an appropriate message if the user's selection is non-numeric.<br>
__Check your function for these times:__
- 8:00
- 17:00
- 0:00
```python
# Code and run your solution here:
```
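One possible approach (a sketch, not the official solution; the city offsets are read from the table in the prompt and the function name `world_clock` is just an illustrative choice):

```python
def world_clock(lubbock_time):
    """Report the time in several cities given the hour in Lubbock (24-hour format)."""
    offsets = {'New York': 1, 'London': 6, 'Cairo': 7, 'Istanbul': 8, 'Hyderabad': 11, 'Tokyo': 14}
    try:
        hour = int(float(lubbock_time))   # force any numeric input into an integer
    except (TypeError, ValueError):
        print("Please enter a numeric value for the Lubbock time")
        return
    if hour < 0 or hour > 23:
        print("Please Enter A Number from 00 to 23")
        return
    print("Lubbock time: {:02d}:00".format(hour))
    for city, offset in offsets.items():
        print("{}: {:02d}:00".format(city, (hour + offset) % 24))

# check the function for the requested times
for hr in [8, 17, 0]:
    world_clock(hr)
```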
<hr>
## Question 3 (28 pts):
Follow the steps below. Add comments to your script and signify when each step and each task is done. *hint: For this problem you will need the numpy and pandas libraries.
- __STEP1: There are 8 digits in your R#. Define a 2x4 array with these 8 digits, name it "Rarray", and print it__
- __STEP2: Find the maximum value of the "Rarray" and its position__
- __STEP3: Sort the "Rarray" along the rows, store it in a new array named "Rarraysort", and print the new array out__
- __STEP4: Define and print a 4x4 array that has the "Rarray" as its two first rows, and "Rarraysort" as its next rows. Name this new array "DoubleRarray"__
- __STEP5: Slice and print a 4x3 array from the "DoubleRarray" that contains the last three columns of it. Name this new array "SliceRarray".__
- __STEP6: Define the "SliceRarray" as a panda dataframe:__
- name it "Rdataframe",
- name the rows as "Row A","Row B","Row C", and "Row D"
- name the columns as "Column 1", "Column 2", and "Column 3"
- __STEP7: Print the first few rows of the "Rdataframe".__
- __STEP8: Create a new dataframe object ("R2dataframe") by adding a column to the "Rdataframe", name it "Column X" and fill it with "None" values. Then, use the appropriate descriptor function and print the data model (data column count, names, data types) of the "R2dataframe"__
- __STEP9: Replace the **'None'** in the "R2dataframe" with 0. Then, print the summary statistics of each numeric column in the data frame.__
- __STEP10: Define a function based on the equation below, apply on the entire "R2dataframe", store the results in a new dataframe ("R3dataframe"), and print the results and the summary statistics again.__
$$ y = x^2 - 5x +7 $$
- __STEP11: Print the number of occurrences of each unique value in "Column 3"__
- __STEP12: Sort the data frame with respect to "Column 1" with a descending order and print it__
- __STEP13: Write the final format of the "R3dataframe" on a CSV file, named "Rfile.csv"__
- __STEP14: Read the "Rfile.csv" and print its content.__<br>
** __Make sure to attach the "Rfile.csv" file to your midterm exam submission.__
```python
# Code and Run your solution here:
```
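A possible outline of the steps (a sketch only: the digits below are placeholders, since the actual R# is not given; replace them with your own):

```python
import numpy as np
import pandas as pd

# STEP1: 2x4 array with the 8 digits of the R# (placeholder digits used here)
Rarray = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
print(Rarray)
# STEP2: maximum value and its (row, column) position
print(Rarray.max(), np.unravel_index(Rarray.argmax(), Rarray.shape))
# STEP3: sort along the rows
Rarraysort = np.sort(Rarray, axis=1)
print(Rarraysort)
# STEP4: stack Rarray on top of Rarraysort
DoubleRarray = np.vstack((Rarray, Rarraysort))
print(DoubleRarray)
# STEP5: last three columns
SliceRarray = DoubleRarray[:, 1:]
print(SliceRarray)
# STEP6: build the dataframe with named rows and columns
Rdataframe = pd.DataFrame(SliceRarray,
                          index=['Row A', 'Row B', 'Row C', 'Row D'],
                          columns=['Column 1', 'Column 2', 'Column 3'])
# STEP7: first few rows
print(Rdataframe.head())
# STEP8: add "Column X" filled with None and inspect the data model
R2dataframe = Rdataframe.copy()
R2dataframe['Column X'] = None
R2dataframe.info()
# STEP9: replace None with 0 and summarize
R2dataframe = R2dataframe.fillna(0)
print(R2dataframe.describe())
# STEP10: apply y = x^2 - 5x + 7 to the entire frame
R3dataframe = R2dataframe.apply(lambda x: x**2 - 5*x + 7)
print(R3dataframe)
print(R3dataframe.describe())
# STEP11: occurrences of each unique value in "Column 3"
print(R3dataframe['Column 3'].value_counts())
# STEP12: sort by "Column 1" in descending order
print(R3dataframe.sort_values('Column 1', ascending=False))
# STEP13: write to CSV
R3dataframe.to_csv('Rfile.csv')
# STEP14: read it back and print
print(pd.read_csv('Rfile.csv'))
```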
<hr>
## Problem 4 (32 pts)
Graphing Functions Special Functions
Consider the two functions listed below:
\begin{equation}
f(x) = e^{-\alpha x}
\label{eqn:fofx}
\end{equation}
\begin{equation}
g(x) = \gamma sin(\beta x)
\label{eqn:gofx}
\end{equation}
Prepare a plot of the two functions on the same graph.
Use the values in Table below for $\alpha$, $\beta$, and $\gamma$.
|Parameter|Value|
|:---|---:|
|$\alpha$|0.50|
|$\beta$|3.00|
|$\gamma$|$\frac{\pi}{2}$|
The plot should have $x$ values ranging from $0$ to $10$ (inclusive) in sufficiently small increments to see curvature in the two functions as well as to identify the number and approximate locations of intersections. In this problem, intersections are locations in the $x-y$ plane where the two curves cross one another.
#### By-hand evaluate f(x) for x=1, alpha = 1/2 (Simply enter your answer from a calculator)
f(x) = 0.606
#### By-hand evaluate g(x) for x=3.14, beta = 1/2, gamma = 2 (Simply enter your answer from a calculator)
g(x) = 2.0
```python
# Define the first function f(x,alpha), test the function using your by hand answer
def f(x,alpha):
import math
output = math.exp(-1.*alpha*x)
return(output)
f(1.0,0.5)
```
0.6065306597126334
```python
# Define the second function g(x,beta,gamma), test the function using your by hand answer
def g(x,beta,gamma):
import math
output = gamma*math.sin(beta*x)
return(output)
g(3.14,0.5,2.0)
```
1.9999993658636692
```python
# Built a list for x that ranges from 0 to 10, inclusive, with adjustable step sizes for plotting later on
xlist = []
flist = []
glist = []
alpha = 1/2
beta = 1/2
gamma = 2.
for item in range(0,10,1):
flist.append(f(item,alpha))
glist.append(g(item,beta,gamma))
xlist.append(item)
for item in range(0,10,1):
print(item,flist[item],glist[item])
```
0 1.0 0.0
1 0.6065306597126334 0.958851077208406
2 0.36787944117144233 1.682941969615793
3 0.22313016014842982 1.994989973208109
4 0.1353352832366127 1.8185948536513634
5 0.0820849986238988 1.196944288207913
6 0.049787068367863944 0.2822400161197344
7 0.0301973834223185 -0.7015664553792397
8 0.01831563888873418 -1.5136049906158564
9 0.011108996538242306 -1.955060235330194
```python
# Build a plotting function that plots both functions on the same chart
```
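One possible plotting function (a sketch): it uses the parameter values from the table and a finer $x$ grid; the number of points is an assumption, since the prompt only asks for "sufficiently small increments".

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_two_functions(alpha=0.5, beta=3.0, gamma=np.pi/2, npoints=1000):
    """Plot f(x) = exp(-alpha x) and g(x) = gamma sin(beta x) on the same axes."""
    xs = np.linspace(0, 10, npoints)          # fine grid so curvature is visible
    fs = np.exp(-alpha * xs)
    gs = gamma * np.sin(beta * xs)
    plt.plot(xs, fs, label='f(x) = exp(-alpha x)')
    plt.plot(xs, gs, label='g(x) = gamma sin(beta x)')
    plt.xlabel('x')
    plt.ylabel('value')
    plt.grid()
    plt.legend()
    plt.show()
    return xs, fs, gs

xs, fs, gs = plot_two_functions()
```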
```python
# Using the plot as a guide, find the approximate values of x where the two curves intercept (i.e. f(x) = g(x))
# You can either use interactive input, or direct specify x values, but need to show results
```
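One way to estimate the intersections without interactive input (a sketch): look for sign changes of the difference $f(x)-g(x)$ on a fine grid, using the table's parameter values.

```python
import numpy as np

alpha, beta, gamma = 0.5, 3.0, np.pi / 2
xs = np.linspace(0, 10, 100000)                         # fine grid for good resolution
diff = np.exp(-alpha * xs) - gamma * np.sin(beta * xs)  # f(x) - g(x)
# indices where the difference changes sign, i.e. the curves cross between xs[i] and xs[i+1]
crossings = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
print("approximate intersection locations:")
print(np.round(xs[crossings], 3))
```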
<hr>
## Bonus Problem 1. Extra Credit (You must complete the regular problems)!
__create a class to compute the average grade (out of 10) of the students based on their grades in Quiz1, Quiz2, the Mid-term, Quiz3, and the Final exam.__
| Student Name | Quiz 1 | Quiz 2 | Mid-term | Quiz 3 | Final Exam |
| ------------- | -----------| -----------| -------------| -----------| -------------|
| Harry | 8 | 9 | 8 | 10 | 9 |
| Ron | 7 | 8 | 8 | 7 | 9 |
| Hermione | 10 | 10 | 9 | 10 | 10 |
| Draco | 8 | 7 | 9 | 8 | 9 |
| Luna | 9 | 8 | 7 | 6 | 5 |
1. __Use docstrings to describe the purpose of the class.__
2. __Create an object for each student and display the output as shown below.__
"Student Name": **Average Grade**
3. __Create and print out a dictionary with the student names as keys and their average grades as data.__
```python
#Code and run your solution here:
```
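A possible sketch for the grade-averaging class (the plain average of the five scores is an assumption, since the prompt does not specify any weighting; the class name `Student` is an illustrative choice):

```python
class Student:
    """Compute the average grade (out of 10) from Quiz1, Quiz2, Mid-term, Quiz3 and the Final exam."""
    def __init__(self, name, quiz1, quiz2, midterm, quiz3, final):
        self.name = name
        self.grades = [quiz1, quiz2, midterm, quiz3, final]
    def average(self):
        return sum(self.grades) / len(self.grades)

roster = [Student('Harry', 8, 9, 8, 10, 9),
          Student('Ron', 7, 8, 8, 7, 9),
          Student('Hermione', 10, 10, 9, 10, 10),
          Student('Draco', 8, 7, 9, 8, 9),
          Student('Luna', 9, 8, 7, 6, 5)]

# "Student Name": Average Grade
for s in roster:
    print('"{}": {}'.format(s.name, s.average()))

# dictionary with student names as keys and average grades as data
averages = {s.name: s.average() for s in roster}
print(averages)
```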
<hr>
## Bonus 2 Extra credit (You must complete the regular problems)!
#### Write the VOLUME Function to compute the volume of Cylinders, Spheres, Cones, and Rectangular Boxes. This function should:
- First, ask the user about __the shape of the object__ of interest using this statement:<br>
*"Please choose the shape of the object. Enter 1 for "Cylinder", 2 for "Sphere", 3 for "Cone", or 4 for "Rectangular Box""*<br>
- Second, based on user's choice in the previous step, __ask for the right inputs__.
- Third, print out a statement with __the input values and the calculated volumes__.
#### Include error trapping that:
1. Issues a message that **"The object should be either a Cylinder, a Sphere, a Cone, or a Rectangular Box. Please Enter A Number from 1,2,3, and 4!"** if the first input is non-numeric.
2. Takes any numeric input for the initial selection , and force it into an integer.
4. Issues an appropriate message if the user's selection is numeric but outside the range of [1,4]
3. Takes any numeric input for the shape characteristics , and force it into a float.
4. Issues an appropriate message if the object characteristics are as non-numerics.
#### Test the script for:
1. __Sphere, r=10__
2. __r=10 , Sphere__
3. __Rectangular Box, w=5, h=10, l=0.5__
- <font color=orange>__Volume of a Cylinder = πr²h__</font>
- <font color=orange>__Volume of a Sphere = 4(πr³)/3__</font>
- <font color=orange>__Volume of a Cone = (πr²h)/3__</font>
- <font color=orange>__Volume of a Rectangular Box = whl__</font>
```python
#Code and Run your solution here
#First
shape = input("Please choose the shape of the object. Enter 1 for Cylinder, 2 for Sphere, 3 for Cone, or 4 for Rectangular Box")
#Second, based on user's choice in the previous step, ask for the right inputs.
if shape == "1":
print('its a cylinder')
radius=float(input("Enter the radius of the cylinder"))
height=float(input("Enter the height of the cylinder"))
volumeCyl = 3.141597254*radius*radius*height
print("Volume of Cylinder is : ",round(volumeCyl,3))
elif shape == "2":
print('its a sphere')
radius=float(input("Enter the radius of the sphere"))
volumeSphere = 4*(3.141597254*pow(radius,3))/3.0
print("Volume of Sphere is : ",round(volumeSphere,3))
elif shape == "3":
print('its a cone')
radius=float(input("Enter the radius of the cone"))
height=float(input("Enter the height of the cone"))
volumeCone = (3.141597254*(radius**2)*height)/3.0
print("Volume of cone is : ",round(volumeCone,3))
elif shape == "4":
print('its a box')
width=float(input("Enter the width of the box"))
height=float(input("Enter the height of the box"))
length=float(input("Enter the length of the box"))
volumeBox = width*height*length
print("Volume of box is : ",round(volumeBox,3))
else:
print('doh')
#Third, print out a statement with the input values and the calculated volumes.
```
Please choose the shape of the object. Enter 1 for Cylinder, 2 for Sphere, 3 for Cone, or 4 for Rectangular Box 3
its a cone
Enter the radius of the cone 1
Enter the height of the cone 1
Volume of cone is : 1.047
```python
```
|
469ea54577d36241d30d8827975ced2e7481ca26
| 15,910 |
ipynb
|
Jupyter Notebook
|
5-ExamProblems/Exam1/Exam1/fall2020/Exam1-Deploy-Student-Version-Copy1.ipynb
|
dustykat/engr-1330-psuedo-course
|
3e7e31a32a1896fcb1fd82b573daa5248e465a36
|
[
"CC0-1.0"
] | null | null | null |
5-ExamProblems/Exam1/Exam1/fall2020/Exam1-Deploy-Student-Version-Copy1.ipynb
|
dustykat/engr-1330-psuedo-course
|
3e7e31a32a1896fcb1fd82b573daa5248e465a36
|
[
"CC0-1.0"
] | null | null | null |
5-ExamProblems/Exam1/Exam1/fall2020/Exam1-Deploy-Student-Version-Copy1.ipynb
|
dustykat/engr-1330-psuedo-course
|
3e7e31a32a1896fcb1fd82b573daa5248e465a36
|
[
"CC0-1.0"
] | null | null | null | 33.923241 | 348 | 0.554494 | true | 3,078 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.70253 | 0.855851 | 0.601261 |
__label__eng_Latn
| 0.984765 | 0.235261 |
# Investigating the Maxwell-Boltzmann distribution for different gases and temperatures
```python
import numpy as np
from scipy.constants import pi
import scipy.constants  # used below for scipy.constants.Avogadro and scipy.constants.k
import scipy.integrate as integrate
import matplotlib.pyplot as plt
%matplotlib inline
```
## Definitions
### Definition of functions
In the following, the Maxwell-Boltzmann speed distribution and the function describing the probability that a gas particle moves at a certain velocity $v_i$ along one direction are defined. The functions are as follows:
\begin{equation}
f(v) = 4 \pi \left(\frac{m}{2 \pi k T}\right)^{3/2} v^2 e^{-m v^2 / 2 k T} \\
f(v_{i}) = \sqrt{\frac{m}{2 \pi k T}} \cdot e^{-m v_{i}^2 / 2 k T}
\end{equation}
These expressions can be derived from the kinetic gas model under the assumption that the fraction of molecules with velocity $v$ follows $f(v) = K e^{- E_{Kin} / k T}$. Since $E_{Kin} = \frac{1}{2} m (v_{x}^2 + v_{y}^2 + v_{z}^2)$ one can find out the constant factor $K$ through integration over whole space.
```python
def maxwell_boltzmann_distribution(x, m, k, T):
return 4 * np.pi * (m / (2 * np.pi * k * T))**1.5 * x**2 * np.exp(- (m * x**2) / (2 * k * T))
```
```python
def velocity_distribution_direction(x, m, k, T):
return (m / (2 * np.pi * k * T))**0.5 * np.exp(-m * x**2 / (2 * k * T))
```
### Definition of constants
```python
M_carbon_dioxide = 44 # molar mass of carbon dioxide in g/mol
m_carbon_dioxide = M_carbon_dioxide / scipy.constants.Avogadro # mass of one molecule (M / N_A)
M_hydrogen = 2 # molar mass of hydrogen in g/mol
m_hydrogen = M_hydrogen / scipy.constants.Avogadro # mass of one molecule (M / N_A)
M_boran = 14 # molar mass of BH3 in g/mol
m_boran = M_boran / scipy.constants.Avogadro # mass of one molecule (M / N_A)
T = 298 # Temperature in K
k = scipy.constants.k # Boltzmann constant
```
## The velocity distribution in one direction
```python
integral_of_velocity_distribution_carbon_dioxide = integrate.quad(velocity_distribution_direction, -np.inf, np.inf, args=(m_carbon_dioxide, k, T))[0]
integral_of_velocity_distribution_hydrogen = integrate.quad(velocity_distribution_direction, -np.inf, np.inf, args=(m_hydrogen, k, T))[0]
integral_of_velocity_distribution_boran = integrate.quad(velocity_distribution_direction, -np.inf, np.inf, args=(m_boran, k, T))[0]
print("Integral for CO2 = ", round(integral_of_velocity_distribution_carbon_dioxide, 1))
print("Integral for H2 = ", round(integral_of_velocity_distribution_hydrogen, 1))
print("Integral for BH3 = ", round(integral_of_velocity_distribution_boran, 1))
```
Integral for CO2 = 1.0
Integral for H2 = 1.0
Integral for BH3 = 1.0
```python
x_min = -100
x_max = 100
y_max = 1.2 * (m_carbon_dioxide / (2 * np.pi * k * T))**0.5
x_data = np.linspace(x_min, x_max, 1000)
y_data_carbon_dioxide = velocity_distribution_direction(x_data, m_carbon_dioxide, k, T)
y_data_hydrogen = velocity_distribution_direction(x_data, m_hydrogen, k, T)
y_data_boran = velocity_distribution_direction(x_data, m_boran, k, T)
plt.figure(figsize=(15, 8))
plt.plot(x_data, y_data_carbon_dioxide, label="CO$_{2}$")
plt.plot(x_data, y_data_hydrogen, label="H$_{2}$")
plt.plot(x_data, y_data_boran, label="BH$_{3}$")
plt.legend(loc='best', prop={'size': 15})
plt.xlim(xmin = x_min, xmax = x_max)
plt.ylim(ymin = 0, ymax = y_max)
plt.xlabel('$v_{i}$ in ms$^{-1}$', fontsize=20)
plt.ylabel('$f(v_{i})$', fontsize=20)
plt.show()
```
## Maxwell-Boltzmann distribution
```python
x_min = 0
x_max = 120
x_peak_hydrogen = ((2 * k * T) / m_hydrogen)**0.5
x_peak_carbon_dioxide = ((2 * k * T) / m_carbon_dioxide)**0.5  # most probable speed of the heaviest gas (tallest peak)
y_max = 1.2 * maxwell_boltzmann_distribution(x_peak_carbon_dioxide, m_carbon_dioxide, k, T)
x_data = np.linspace(x_min, x_max, 1000)
y_data_carbon_dioxide = maxwell_boltzmann_distribution(x_data, m_carbon_dioxide, k, T)
y_data_hydrogen = maxwell_boltzmann_distribution(x_data, m_hydrogen, k, T)
y_data_boran = maxwell_boltzmann_distribution(x_data, m_boran, k, T)
plt.figure(figsize=(15, 8))
plt.plot(x_data, y_data_carbon_dioxide, label="CO$_{2}$")
plt.plot(x_data, y_data_hydrogen, label="H$_{2}$")
plt.plot(x_data, y_data_boran, label="BH$_{3}$")
# plt.axvline(x = x_peak_carbon_dioxide, linestyle="--")
plt.legend(loc='best', prop={'size': 15})
plt.xlim(xmin = x_min, xmax = x_max)
plt.ylim(ymin = 0, ymax = y_max)
plt.xlabel('$v$ in ms$^{-1}$', fontsize=20)
plt.ylabel('$f(v)$', fontsize=20)
plt.show()
```
```python
T_1 = 100
T_2 = 298
T_3 = 600
```
```python
x_min = 0
x_max = 60
x_peak_carbon_dioxide = ((2 * k * T_1) / m_carbon_dioxide)**0.5  # most probable speed at the lowest temperature (tallest peak)
y_max = 1.2 * maxwell_boltzmann_distribution(x_peak_carbon_dioxide, m_carbon_dioxide, k, T_1)
x_data = np.linspace(x_min, x_max, 1000)
y_data_T1 = maxwell_boltzmann_distribution(x_data, m_carbon_dioxide, k, T_1)
y_data_T2 = maxwell_boltzmann_distribution(x_data, m_carbon_dioxide, k, T_2)
y_data_T3 = maxwell_boltzmann_distribution(x_data, m_carbon_dioxide, k, T_3)
plt.figure(figsize=(15, 8))
plt.plot(x_data, y_data_T1, label="100 K")
plt.plot(x_data, y_data_T2, label="298 K ")
plt.plot(x_data, y_data_T3, label="600 K")
# plt.axvline(x = x_peak_carbon_dioxide, linestyle="--")
plt.legend(loc='best', prop={'size': 15})
plt.xlim(xmin = x_min, xmax = x_max)
plt.ylim(ymin = 0, ymax = y_max)
plt.xlabel('$v$ in ms$^{-1}$', fontsize=20)
plt.ylabel('$f(v)$', fontsize=20)
plt.show()
```
```python
```
|
29034e2175eb63293208a016914f95452a06837e
| 158,828 |
ipynb
|
Jupyter Notebook
|
Semester3/Thermodynamics/PropertiesOfGases/KineticModel/TheMaxwellBoltzmannDistributionOfSpeeds.ipynb
|
Progklui/studyChemistryFloKlui
|
7b08dcf93cd888d3a93eda5b1835814b37245aa5
|
[
"MIT"
] | null | null | null |
Semester3/Thermodynamics/PropertiesOfGases/KineticModel/TheMaxwellBoltzmannDistributionOfSpeeds.ipynb
|
Progklui/studyChemistryFloKlui
|
7b08dcf93cd888d3a93eda5b1835814b37245aa5
|
[
"MIT"
] | null | null | null |
Semester3/Thermodynamics/PropertiesOfGases/KineticModel/TheMaxwellBoltzmannDistributionOfSpeeds.ipynb
|
Progklui/studyChemistryFloKlui
|
7b08dcf93cd888d3a93eda5b1835814b37245aa5
|
[
"MIT"
] | null | null | null | 502.620253 | 56,274 | 0.93323 | true | 1,761 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.868827 | 0.831143 | 0.722119 |
__label__eng_Latn
| 0.445768 | 0.516057 |
<h1><center>Basic guide for experimental physics in Python</center></h1>
# 1. Libraries
First, it is important to check which libraries to import; in general we import:
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.optimize as optimization
import scipy.odr.odrpack as odrpack
```
### Numpy
$*$ Numpy is a library used to manage arrays, which are "vectors" that can be interpreted as coordinates in $\mathbb{R}^{n}$. So you can put your data into arrays to do mathematical operations with them; for example, we can create an array with the uncertainties of the data and operate on it.
$*$ We can also use mathematical tools with high precision, for example Cos(x), Sin(x), Sinh(x) and MANY others.
$*$ Numerical computing (integration, solution of linear and non-linear systems, solution of ODEs, etc.)
$*$ All the information can be found in the library reference: https://numpy.org/doc/stable/reference/ ,
or also on StackOverflow
```python
incertezas = np.array([0.5,0.6,0.4,0.3,0.6])
incertezas = incertezas*2
incertezas
```
We can change just one component of the array, or we can change whole subsets within the array.
(Note that the values in the cell below have changed because of the operations above.)
(Note also that Python does not include the third coordinate of the vector in the slice, i.e. the interval is open at the end.)
```python
incertezas[1:3] = incertezas[1:3]*np.cos(2)
incertezas
```
We can split the array into the components that interest us. This is called "slicing".
###### When you declare a variable as a slice of an array you get a view (not an independent copy), that is, in our case, changing any element of "pedaco" will change the same element in "incertezas". To avoid this, just make a copy with np.copy(array)
```python
pedaco = incertezas[:4]
pedaco
```
I will use numpy to create vectors with the values that go into the fit and the plot, so I will define the values of x, y and sigma_y.
```python
x = np.linspace(0,100,10)
```
### Model
```python
def f(x, b, a):
return a*x + b
# Creating the y values
y = f(x,0.5,1)
# creating random values in the range [2, 5]
sigma_y = np.random.uniform(low=2, high=5, size=(10,))
```
## Scipy
$*$ This library is a collection of several libraries; it has almost all the numerical tools for integration, linear algebra, Fourier analysis, statistics, etc.
$*$ The reference: https://docs.scipy.org/doc/scipy/reference/index.html
### Scipy.optimization
$*$ Scipy.optimization is a sub-library of scipy for parameter optimization; here we will use curve_fit, which fits linear and non-linear functions by least squares, in the same way as webROOT (for uncertainties in y only), and also returns the covariance matrix.
###### $*$ The curve_fit method only takes $\sigma_y$ into account, since it uses ordinary least squares. For uncertainties in both variables we have total least squares, which is done with Scipy's ODR method. This is covered in the last part of the notebook (the practical part)!
$*$ The reference: https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html
```python
# xo is the initial guess for the parameters
xo = [0,0]
fit = optimization.curve_fit(f, x, y, p0=xo, sigma=sigma_y,absolute_sigma=True)
fit
```
Now notice that "fit" returned two arrays, as if it were a list containing two arrays. The first array holds the fitted parameters and the second array is the covariance matrix (note that this array has 2 rows and 2 columns). We can present all of this in a nicer way, as will be done in the next section with the Pandas library.
(You can compute the $\chi^2$ without any library, since it is quite simple.) The equation below is just to show how to use LaTeX here.
$$
\begin{align}
{\chi}^2=\sum_{i=1}^{n} \frac{(y_i - f(x_i))^2}{\sigma_{y_i}^2}
\end{align}
$$
```python
a = fit[0][1]
b = fit[0][0]
# np.diag extracts the diagonal of the covariance matrix, another numpy tool
stdevs = np.sqrt(np.diag(fit[1]))
# stdevs is an array with two components: stdevs[0] is the uncertainty of parameter (b) and stdevs[1] of parameter (a)
Chi2 = sum((y - f(x, b, a))**2/(sigma_y**2))
Chi2
```
## Pandas
$*$ This library is about handling data in a structured way, for example creating and operating on "tables" that we call DATA FRAMES. But it is much more powerful than that, with many applications in machine learning (numpy too), so I will only cover it superficially, but I encourage you to explore it and learn how to work with DATA FRAMES.
$*$ I will use it to build a table of the coefficients, a rather crude application for the library. In general I assemble the data in a Google Sheets spreadsheet, download it as csv and bring it into Python as a Data Frame; this makes it very easy to work with. Pandas makes the columns of Data Frames behave like arrays, so I can do mathematical operations with columns and rows.
$*$ The guide/reference is: https://pandas.pydata.org/docs/user_guide/index.html
```python
df = pd.DataFrame({'Parameter (a)' : [a,stdevs[1]], 'Parameter (b)' : [b,stdevs[0]]}, index=['Value', 'Uncertainty'])
df
```
### Matplotlib
$*$ The name of the library speaks for itself. It is for plotting all kinds of things and lets you configure the plots in great detail.
$*$ There is a lot of material on the internet and its reference is nice and full of examples https://matplotlib.org/tutorials/index.html#introductory; again, StackOverflow is a great place to search.
```python
h = plt.figure(1)
h = plt.plot(x,f(x,b,a), label="model", color='black')
h = plt.errorbar(x,y,sigma_y,fmt='.',color='red')
h = plt.grid()
h = plt.ylabel(r'$f(x)$')
h = plt.xlabel(r'$x$')
# to save the figure in the same directory
plt.savefig('teste_1.png', format='png', dpi=150,bbox_inches = "tight")
plt.show(h)
```
Now the residual plot. (Remember, these are the simplest default plots possible; if you want something fancier just search, there is plenty of material!)
```python
g = plt.figure(2)
g.set_size_inches(6, 1)
g =plt.errorbar(x,y-f(x,b,a),sigma_y,fmt='.',color='red')
g = plt.hlines(0,x.min(),x.max(),color='black')
g = plt.grid()
g = plt.ylabel("f (x)")
g = plt.xlabel('x')
g = plt.savefig('teste_1_resid.png', format='png', dpi=150,bbox_inches = "tight")
g = plt.show(g)
```
## Additional tips
I want to remind you that all these libraries are vast and the material about them is endless, so whenever you have a question you will probably find the answer easily online, especially in the references and on StackOverflow.
$(*)$ Great video on the basics of curve fitting in Python:
https://www.youtube.com/watch?v=Jl-Ye38qkRc&t=810s
$(*)$ Page with the code and basic notions of curve fitting in Python:
https://towardsdatascience.com/basic-curve-fitting-of-scientific-data-with-python-9592244a2509
$(*)$ I usually build tables in Google Sheets and import them here with pandas. It is not hard to do or to work with, but it takes a little getting used to.
# 2. Putting it into practice
#### We will use data with uncertainties in both x and y (as usually happens in the experimental physics course)
$*$ It is important to remember that the curve_fit function seen above is only for the case with uncertainty in y
$*$ To simulate an activity, data were generated in two formats, csv and txt.
#### Importing a txt file with Pandas
```python
data = pd.read_csv('exemplo.txt', sep="\t")
# If the decimal separator is a comma, use
# data_1 = pd.read_csv('exemplo.txt', sep="\t", decimal=",")
data.head()
```
#### Importing a csv file with Pandas
```python
data = pd.read_csv('exemplo.csv')
data.head()
```
### Data analysis
Let's start with the model (the generated data are meant to be fitted by a parabola)
#### Model
```python
# Note that our parameters are now represented by an array
# Also note the argument order of f, i.e. f(p, x): "p" comes first
def f(p,x):
return p[0]*x**2 + p[1]*x + p[2]
```
##### Using the values of $x$, $y$, $\sigma_x$ and $\sigma_y$
```python
x = data['x (cm)']
y = data['y (cm)']
sigma_x = data['sigma_x (cm)']
sigma_y = data['sigma_y (cm)']
# x is a "Series", something like an array with an index, but it behaves in a
# very similar way
x.head()
```
### The fit
Since the uncertainty is now in both variables, we need to use another method from the same library (Scipy). The method is ODR (orthogonal distance regression).
```python
# model object (a technical term meaning the model is stored in an "object")
model_object = odrpack.Model(f)
# data object (a technical term meaning the data are stored in an "object")
data_object = odrpack.RealData(x, y, sx=sigma_x, sy=sigma_y)
# Establishes the relation between data and model; in general it needs initial conditions
# Our model has 3 parameters, so beta0 = a list of 3 elements
odr = odrpack.ODR(data_object, model_object, beta0=[1,1,1])
# Run the regression
odr.set_job(fit_type=0) # fit_type = 0 runs the regression with ODR and fit_type = 2 with least squares
# The fit_type topic is a complicated one and goes beyond the purpose of this introduction
out = odr.run()
out.pprint()
```
Now all that's left is to implement $\chi^2$
$$
\begin{align}
{\chi}^2=\sum_{i=1}^{n} \frac{(y_i - f(x_i))^2}{\sigma_{y_i}^2 + \big(\frac{\partial f}{\partial x}\big)^2 \sigma_{x_i}^2 }
\end{align}
$$
I will not implement it in the analysis below, but you can easily write this line of code once you know the analytic derivative; a minimal sketch follows.
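A sketch under the assumption that NumPy is available as `np` (the quadratic model `f(p, x)`, the data columns and the ODR output `out` are all defined above; the derivative $\partial f/\partial x = 2p_0 x + p_1$ is written out by hand):
```python
# Sketch of the chi-squared for the ODR fit (assumes numpy imported as np)
p_fit = out.beta                        # best-fit parameters from odr.run()
dfdx = 2*p_fit[0]*x + p_fit[1]          # analytic derivative df/dx of the model
chi2 = np.sum((y - f(p_fit, x))**2 / (sigma_y**2 + dfdx**2 * sigma_x**2))
chi2
```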
### Plots
Note that the plt.savefig() method will save the figures in the same directory as the .ipynb or .py file.
```python
ax2 = plt.figure(1)
ax2 = plt.errorbar(x, y, xerr=sigma_x, yerr=sigma_y, linestyle='None', marker='+', color='black')
#ax2 = plt.ylim(-1,2)
#ax2 = plt.xlim(0, 20000)
# model parameters hard-coded from the ODR fit above
ax2 = plt.plot(x, f([2.01584576e-01, -3.87102908e+01, 1.85196706e+03], x), label='model', color='red')
# labeling the axes
ax2 = plt.xlabel(r'$x (cm)$')
ax2 = plt.ylabel(r'$y (cm)$')
ax2 = plt.grid()
# saving the figure
ax2 = plt.savefig('exemplo.png', dpi=150, bbox_inches="tight")
ax3 = plt.figure(2)
ax3 = plt.errorbar(x, abs(y - f([2.01584576e-01, -3.87102908e+01, 1.85196706e+03], x)),
                   xerr=sigma_x, yerr=sigma_y, linestyle='None', marker='+', color='black')
ax3 = plt.hlines(0, x.min(), x.max(), color='red')
ax3 = plt.grid()
# labeling the axes
ax3 = plt.xlabel(r'$x (cm)$')
ax3 = plt.ylabel(r'$y(cm) - model$')
ax3 = plt.gcf()
ax3.set_size_inches(6, 1)
# saving the figure
ax3 = plt.savefig('exemplo_resid.png', dpi=150, bbox_inches="tight")
```
|
4e447a2b449a034f9ccfdeac4e831c901e5b7f69
| 17,085 |
ipynb
|
Jupyter Notebook
|
Notebook-pt.ipynb
|
Rodrigo-Motta/Intro-Data-Analysis-Exp-Physis-pt-en
|
d97b50a9739ca3825ff149d607e64a7342a511da
|
[
"MIT"
] | null | null | null |
Notebook-pt.ipynb
|
Rodrigo-Motta/Intro-Data-Analysis-Exp-Physis-pt-en
|
d97b50a9739ca3825ff149d607e64a7342a511da
|
[
"MIT"
] | null | null | null |
Notebook-pt.ipynb
|
Rodrigo-Motta/Intro-Data-Analysis-Exp-Physis-pt-en
|
d97b50a9739ca3825ff149d607e64a7342a511da
|
[
"MIT"
] | null | null | null | 30.346359 | 404 | 0.581972 | true | 3,293 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.857768 | 0.853913 | 0.732459 |
__label__por_Latn
| 0.997456 | 0.54008 |
These are the packages we need:
```python
import sympy as sp
import numpy as np
from itertools import combinations_with_replacement as itTuples
import os.path
from multiprocessing import Pool
```
# Below you will find *all* functions defined in the module.
Generates all possible combinations --of length k-- from a list of elements
```python
def Tuples(List,k):
return list(itTuples(List,k))
```
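A quick usage sketch (toy labels, not the actual fields of the model):
```python
# combinations with replacement of length 2
Tuples(['h', 'rho'], 2)   # [('h', 'h'), ('h', 'rho'), ('rho', 'rho')]
```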
MatrixProd([list1,list2,list3,...]) returns np.dot(list1, np.dot(list2, np.dot(list3, ...))).
Notice that it works recursively.
```python
def MatrixProd(a):
n=len(a)-1
if n!=0:
return np.dot(MatrixProd(a[:n]),a[n])
else:
return a[0]
```
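A small usage sketch with toy numpy arrays (illustration only):
```python
a1 = np.eye(2)
a2 = np.array([[0., 1.], [1., 0.]])
a3 = np.array([1., 0.])
MatrixProd([a1, a2, a3])   # same as np.dot(a1, np.dot(a2, a3)) -> array([0., 1.])
```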
Calculates the derivative of $\mathcal{L}$ with respect to $\phi$: $\frac{d\mathcal{L}}{d\phi}$ (with all fields sent to 0).
If $\phi$ is a list, it calculates the derivative $\frac{d^{n}\mathcal{L}}{d\phi_{1}d\phi_{2}...d\phi_{n}}$.
This is used to get the Feynman rules.
```python
def Deriv(L,a):
try:
n=len(a)-1
if n>=0:
return sp.diff(Deriv(L,a[:n]),a[n])
else:
return L
except:
return sp.diff(L,a)
```
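A usage sketch with toy symbols (just to illustrate the recursion):
```python
xs, ys = sp.symbols('xs ys')
L_toy = xs**2*ys + 3*xs
Deriv(L_toy, [xs, ys])   # d^2(L_toy)/(dxs dys) = 2*xs
```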
Gets specific assumptions --given in the list assL-- for a Symbol Sym.
It is used for ParameterSymbols, which is a list of all parameters and their assumptions.
```python
def GetAssumptions(Sym,assL):
tmpA=[]
for i in assL:
try:
tmpA.append(Sym.assumptions0[i] )
except:
tmpA.append(None )
return tmpA
```
Defines the particles and parameters of the model for SU(DimN).
If Gauge='un', the G^{0}, G^{+} and G^{-} are not defined.
Also, this function defines the various substitution rules needed, such as subs0, which sets all fields to 0 (needed for identifying the vertices and minimizing the potential).
```python
def Definitions(DimN, Gauge):
global gauge, dimN
global dimRange, indexRange, mPhi2, mPhip2, v, vPhi, muH, lamH, lamHPhi, lamPhi
global Gp, H0, Gm, H0t, h, G0, H, Ht, Phi, Phit, chi, rho, phi, s
global sqrt2, subsvev, subsexpand
'''gauge, dimN, dimRange, indexRange, mPhi2, mPhip2, v, vPhi, muH, lamH, lamHPhi, lamPhi,\
Gp, H0, Gm, H0t, h, G0, H, Ht, Phi, Phit, chi, rho, phi, s,\
sqrt2, subsvev, subsexpand=0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\
0, 0, 0'''
dimN=DimN
gauge=Gauge
dimRange=np.arange(1,dimN+1);
dimRangeM=range(dimN-1)
indexRange=range(0,dimN);
sqrt2=sp.sqrt(2);
mPhi2=np.array( sp.symbols('mPhi2(1:{})(1:{})'.format(str(dimN+1),str(dimN+1)),complex=True,real=False ) ).reshape(dimN,dimN)
mPhi2[dimN-1][dimN-1]=sp.Symbol('mPhi2{}{}'.format(dimN,dimN),real=True )#this is real, due to the minimization conditions
mPhip2=np.array( sp.symbols('mPhip2(1:{})(1:{})'.format(str(dimN+1),str(dimN+1)),complex=True,real=False ) ).reshape(dimN,dimN)
#make mPhi symmetric (faster than np.triu(mPhi,+1).T+np.triu(mPhi))
for i in range(dimN):
for j in range(i+1,dimN):
mPhi2[j][i]=mPhi2[i][j]
#make mPhip hermitian (faster than np.conjugate(np.triu(mPhi,+1).T)+np.triu(mPhi))
for i in range(dimN):
for j in range(i+1,dimN):
mPhip2[j][i]=sp.conjugate(mPhip2[i][j])
#make the diagonal real. keep in mind that the squared elements of the diagonal are real.
#So the elements can be either real or imaginary
for i in range(dimN):
exec( 'mPhip2[{}][{}]=sp.Symbol( \'mPhip2{}{}\' ,real=True)'.format(str(i),str(i),str(i+1),str(i+1)) )
tmpMPHI=(np.triu(mPhi2)).reshape(dimN**2)
ParameterSymbols= np.array( [ (tmpMPHI[i], GetAssumptions(tmpMPHI[i],['complex','real','positive'] ) ) \
for i in np.nonzero(tmpMPHI)[0]] )
tmpMPHI=(np.triu(mPhip2)).reshape(dimN**2)
ParameterSymbols=np.append(ParameterSymbols, np.array( [ (tmpMPHI[i], GetAssumptions(tmpMPHI[i],['complex','real','positive'] ) )\
for i in np.nonzero(tmpMPHI)[0]] ) )
del tmpMPHI
#print EverySymbol
Phi = sp.symbols('Phi1:{}'.format(str(dimN+1)))
Phit = sp.symbols('Phi1:{}t'.format(str(dimN+1)))
if gauge=='un':
H0, H0t=sp.symbols('H0, H0t')
H = [0,H0];
Ht = [0, H0t];
else:
H0,H0t,Gp,Gm,G0=sp.symbols('H0,H0t,Gp,Gm,G0')
H = [Gp,H0];
Ht = [Gm, H0t];
##################--Declare symbols for expaned scalars
phi = list(sp.symbols('phi1:{}'.format(str(dimN))))
s = list(sp.symbols('s1:{}'.format(str(dimN))))
h , chi, rho=sp.symbols('h chi rho')
v=sp.Symbol('v',positive=True);
vPhi=sp.Symbol('vPhi',positive=True);
muH=sp.Symbol('muH');
lamH=sp.Symbol('lamH',real=True,positive=True);
lamHPhi=sp.Symbol('lamHPhi',real=True,positive=None);
lamPhi=sp.Symbol('lamPhi',real=True,positive=True);
ParameterSymbols=np.append(ParameterSymbols, np.array( [\
(v,GetAssumptions(v,['complex','real','positive'] )),\
(vPhi,GetAssumptions(vPhi,['complex','real','positive'] )),\
(lamH,GetAssumptions(lamH,['complex','real','positive'] )),\
(lamHPhi,GetAssumptions(lamHPhi,['complex','real','positive'] )),\
(lamPhi,GetAssumptions(lamPhi,['complex','real','positive'] ))]))
#Expand the fields at their vevs
if gauge=='un':
subsexpand =np.array(\
[(H0,(h+v)/sqrt2 ),(H0t,(h+v)/sqrt2 ),\
(Phi[dimN-1],(rho+ sp.I*chi+vPhi)/sqrt2 ),\
(Phit[dimN-1],(rho-sp.I*chi+vPhi)/sqrt2 )]+ \
[(Phi[i], (phi[i]+sp.I*s[i])/sqrt2 ) for i in dimRangeM]+\
[(Phit[i],(phi[i]-sp.I*s[i])/sqrt2) for i in dimRangeM])
Fields=np.array(sp.flatten([h,rho,s,chi,phi]))
subsvev = np.array(\
[(H0,v/sqrt2 ),(H0t,v/sqrt2 ),\
(Phi[dimN-1], vPhi/sqrt2 ),\
(Phit[dimN-1],vPhi/sqrt2 )]+ \
[(Phi[i], 0) for i in dimRangeM]+\
[(Phit[i],0) for i in dimRangeM])
else:
subsexpand = np.array(\
[(H0,(h+sp.I*G0+v)/sqrt2 ),(H0t,(h-sp.I*G0+v)/sqrt2 ),\
(Phi[dimN-1], (rho+sp.I*chi+vPhi)/sqrt2 ),\
(Phit[dimN-1],(rho-sp.I*chi+vPhi)/sqrt2 )]+ \
[(Phi[i], (phi[i]+sp.I*s[i])/sqrt2) for i in dimRangeM]+\
[(Phit[i],(phi[i]-sp.I*s[i])/sqrt2) for i in dimRangeM])
Fields=np.array(sp.flatten([h,rho,s,chi,phi,G0,Gp,Gm]))
subsvev = np.array(\
[(H0,v/sqrt2 ),(H0t,v/sqrt2 ),\
(G0,0),(Gm,0),(Gp,0),\
(Phi[dimN-1], vPhi/sqrt2 ),\
(Phit[dimN-1],vPhi/sqrt2 )]+ \
[(Phi[i], 0) for i in dimRangeM]+\
[(Phit[i],0) for i in dimRangeM])
return list(Fields),ParameterSymbols
```
Should be run after Definitions(DimN, Gauge)! Since all parameters, fields and rules are global variables set in Definitions, GetLagrangian(AllFields) takes them, calculates the potential and returns the Lagrangian. Here we also define the substitution rules for the minimization of the potential.
AllFields is needed in order to run CheckMinimizations, which checks the vanishing of the first derivatives of the potential.
```python
def GetLagrangian(AllFields=False):
#global V, constV, subsmin#these are for internal checks. Not really useful
mPhi2C=[[sp.conjugate(i) for i in x] for x in mPhi2]
V0=-muH**2/2*MatrixProd([H,Ht])+lamH/2*MatrixProd([H,Ht])**2+lamPhi/2*MatrixProd([Phi,Phit])**2\
+lamHPhi*MatrixProd([H,Ht])*MatrixProd([Phi,Phit] );
Vsoft=MatrixProd([Phi,mPhi2,Phi])+MatrixProd([Phit,mPhi2C,Phit])+MatrixProd([Phit,mPhip2,Phi])
V=(V0+Vsoft)#.subs(subsexpand)
subsmin= [ (mPhi2[i][dimN-1], -mPhip2[dimN-1][i]/2 ) for i in range(0,dimN-1)]+ \
[(muH, sp.sqrt(v**2*lamH + vPhi**2*lamHPhi)),\
(lamPhi,-(lamHPhi*v**2 + 2*mPhi2[dimN-1][dimN-1] + 2*mPhip2[dimN-1][dimN-1] + 2*sp.conjugate(mPhi2[dimN-1][dimN-1]))/vPhi**2),\
(sp.conjugate(mPhi2[dimN-1][dimN-1]),mPhi2[dimN-1][dimN-1] )]
constV=sp.simplify((V.subs(subsmin).subs(subsvev)) )
if AllFields!=False:
try:
CheckMinimizations(AllFields,V, constV, subsmin)
except:
print 'Something went wrong while checking the minimization. \nHave you passed the fields correctly? '
LMassInt = -( (V.subs(subsmin)).subs(subsexpand) -constV );
return LMassInt
def CheckMinimizations(AllFields,V, constV, subsmin):#uses only global
subs0=[ (i,0) for i in AllFields]
print 'Checking vanishing of the first derivatives of the potential...'
minV=np.unique(map(lambda i: \
sp.simplify(Deriv(V.subs(subsexpand),i ).subs(subs0).subs(subsmin) ),AllFields))
if (minV==0).all():
print 'The conditions are correct!'
else:
print 'The potential is not minimized correctlly...'
```
IdentifyInteractions(Langrangian, All_Fields, Parallel=True/False) identifies the 2-, 3- and 4-point interactions
of the Fields given a Lagrangian. It returns a dictionary of the form:
$$\rm{ \{2:[2-point interactions], 3:[3-point interactions],4:[4-point interactions]\} }$$
DEF_TMP is needed so that Pool does not complain:
Pool needs functions defined at the top level, so we need a function that defines TMP_int (called in
IdentifyInteractions).
TMP_int calculates the derivative of the Lagrangian with respect to a list of particles and returns the particles, tmpval = the interaction term in the Lagrangian, and SymF = the symmetry factor (the product of factorials of the multiplicities of identical particles).
```python
def DEF_TMP(Langrangian,Fields):
set_fields_to_0=[(i,0) for i in Fields ]
global TMP_int
def TMP_int(particles):
SymF=np.product([ sp.factorial(particles.count(j)) for j in set(particles)])
tmpval=1/SymF*sp.simplify(Deriv(Langrangian,particles).subs(set_fields_to_0))
if tmpval!=0:
return [particles, tmpval,SymF]
else:
return 0
OPTIONS_Int=['Parallel']
DEF_OPT_Int={'Parallel':True}
def IdentifyInteractions(Langrangian,All_Fields,**opts):
#----------------Begin check opts
if len(opts) == 0:
print 'Using default options...'
opts=DEF_OPT_Int
for i in opts:
if not (i in OPTIONS_Int):
print 'invalid option '+i
print 'availabe options: '
print OPTIONS_Int
return 'ERR:: invalid option. Abort!'
xtmp=opts.copy()
for i in OPTIONS_Int:
if not (i in opts):
xtmp.update({i:DEF_OPT_Int[i]})
Parallel=xtmp['Parallel']
if Parallel!=True:
Parallel=False
#----------------End check opts
#extract all interactions involving from Min_in to Max_int particles
Min_int=2
Max_int=4
Point_N={}
DEF_TMP(Langrangian,All_Fields)
###########################################################
for i in range(Min_int,Max_int+1):
tmpTuples=Tuples(All_Fields,i)
print 'calculating {}-point interactions'.format(i)
if Parallel:
p=Pool()
FR=np.array(p.map(TMP_int,tmpTuples))
Point_N[i]= [FR[TMPI] for TMPI in np.nonzero(FR)[0] ]
p.close()
del p,FR
else:
FR=np.array(map(TMP_int,tmpTuples))
Point_N[i]= [FR[TMPI] for TMPI in np.nonzero(FR)[0] ]
del FR
return Point_N
```
FRules takes a list with the n-point interactions:
$$\text{ [(particles_1, interaction_term_1, symmetry_factor_1 ), (particles_2, interaction_term_2, symmetry_factor_2 ),... ] }$$
and returns a dictionary with the Feynman rules and mass matrix entries.
It multiplies each 2-point interaction with -1*symmetry_factor (mass matrix entries).
It multiplies each n-point (n>2) interaction with -I*symmetry_factor (Feynman rules).
Make_Feynman_Rules calls FRules for a dictionary of the form:
$$\text{ \{2:[2-point interactions], 3:[3-point interactions],4:[4-point interactions]\} }$$
and calculates a dictionary --which is globally available-- of the form:
$$\text{ \{2:[mass matrix entries], 3:[3-point Feynman rules],4:[4-point Feynman rules]\} }$$
```python
#The function prtcls, gets a list of particles (prts) and returns a sorted list of them
#(needed for use in FRules and VertexValue)
#Example:
#prtcls(['z','b'])---->('b', 'z')
def prtcls(prts):
return tuple(sorted( prts ) )
#---------------------------------------------------------------------
def FRules(N_Point):
N=len(N_Point[0][0])
NPoint_dict={}
if N==2:
for i in N_Point:
NPoint_dict[prtcls( map( str, i[0] ) ) ]=i[1]*(-i[2])
else:
for i in N_Point:
NPoint_dict[prtcls( map( str, i[0] ) ) ]=i[1]*(-sp.I*i[2])
return NPoint_dict
def Make_Feynman_Rules(NPoint_dict):
global DictP
DictP={}
for k in NPoint_dict.keys():
DictP[k] = FRules(NPoint_dict[k])
```
VertexValue(particle_1, particle_2, ...) gets a number of particles, and returns the corresponding Feynman rule (or
mass matrix entry, if the input consists of two particles).
```python
def VertexValue(*particles):
lp=len(particles)
try:
return DictP[lp][ prtcls( map(str, particles) ) ]
#return eval('DictP'+str(lp)+'[ prtcls( map(str, particles) ) ]' )
except:
return 0
```
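For instance, once the full pipeline at the end of the notebook has been run (so that the global dictionary `DictP` is populated), one can query vertices like this (a usage sketch; `h` and `rho` are the expanded scalar fields defined in `Definitions`, and 0 is returned for absent vertices):
```python
VertexValue(h, h)            # mass-matrix entry for the h-h term
VertexValue(h, h, rho, rho)  # 4-point Feynman rule; 0 if the vertex is absent
```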
CheckInteractions takes the output of IdentifyInteractions, the Initial_Lagrangian (with its constant term subtracted) and the Fields, and compares them.
```python
def CheckInteractions(N_Point_dict, Initial_Lagrangian,AllFields):
if N_Point_dict!=False and Initial_Lagrangian!=False and AllFields!=False:
testL=True
else:
testL=False
if testL:
global LMassIntfinal, L_in
print 'Checking Vertices...'
LMassIntfinal=0
SUBS0=[ (i,0) for i in AllFields]
for TypeOfVert in N_Point_dict.keys():
TypeV=N_Point_dict[TypeOfVert]
LMassIntfinal+=np.sum([ np.product(tmpi[0])*tmpi[1] for tmpi in TypeV])
L_in=Initial_Lagrangian-sp.simplify(Initial_Lagrangian.subs(SUBS0))
if (sp.simplify(LMassIntfinal-L_in))==0:
print 'The interactions have been identified correctly!!'
else:
print 'The final Lagrangian is not the same as the initial one... (check it!)'
```
StoreVert takes a dictionary of the form --the output of IdentifyInteractions--:
$\rm{ \{2:[2-point interactions], 3:[3-point interactions],4:[4-point interactions]\} },$
all Fields, all Parameter Symbols, and writes files with the Feynman rules, mass matrix entries, fields and parameters.
Change 'Directory' to store them in another directory.
```python
def StoreVert(N_Points,AllFields,AllParameterSymbols,Directory='Frules'):
print 'Writing Vertices (Feynman Rules and mass matrix entries)...'
dirV=Directory
if not os.path.exists(dirV):
os.makedirs(dirV)
if not os.path.exists(dirV+"/SU" + str(dimN)):
os.makedirs(dirV+"/SU" + str(dimN))
files=N_Points.keys()
tmp =open(dirV+"/SU" + str(dimN)+ "/SU" + str(dimN) +'_'+gauge+ ".fields","w")
[tmp.write(str(ff)+'\n') for ff in AllFields]
tmp =open(dirV+"/SU" + str(dimN)+ "/SU" + str(dimN)+'_'+gauge+".parameters","w")
[tmp.write(str(ff)+'\n') for ff in AllParameterSymbols]
for file in files:
tmp = open(dirV+"/SU" + str(dimN)+ "/SU" + str(dimN)+"_" +str(file)+"-point_"+gauge + ".vrt","w")
if file==2:
factorI=-1
else:
factorI=-sp.I
for i in N_Points[file]:
particles=str(i[0])
vertex=str(factorI*i[1]*i[2])
line='{:<40} {:<40} {:<0}'.format(particles, '|' , vertex)
#tmp.write( particles +"|\t|"+ vertex + "\n" )
tmp.write( line +'\n')
tmp.close()
print 'All Done!'
```
# Putting everything together:
```python
#Run first this. This example defines SU(2) in the Feynman gauge.
Fields ,ParameterSymbol =Definitions(2,'feyn')
#The definitions can be used to construct the Lagrangian.
LMassInt=GetLagrangian(Fields)
#The Lagrangian can be used to find all interaction terms.
Point_N=IdentifyInteractions(LMassInt,Fields ,Parallel=True)
#Once the interactions are known, Make_Feynman_Rules makes the Feynman rules,
#and defines a global dictionary DictP used in VertexValue.
Make_Feynman_Rules(Point_N)
#This checks that the interactions have been identified correctly.
CheckInteractions(Point_N,LMassInt,Fields )
#Saves the Feynman rules and parameters in a directory (./test in this case)
StoreVert(Point_N,Fields ,ParameterSymbol,'test' )
```
Checking vanishing of the first derivatives of the potential...
The conditions are correct!
calculating 2-point interactions
calculating 3-point interactions
calculating 4-point interactions
Checking Vertices...
The interactions have been identified correctly!!
Writing Vertices (Feynman Rules and mass matrix entries)...
All Done!
```python
```
|
7eef3688594adfee58798fac671bcf8fd88cdfda
| 25,316 |
ipynb
|
Jupyter Notebook
|
PseudoGoldstone-Explain.ipynb
|
dkaramit/pseudo-Goldstone_DM
|
70fbb4ad4be190226d230a20dfb19b804e15aae6
|
[
"MIT"
] | null | null | null |
PseudoGoldstone-Explain.ipynb
|
dkaramit/pseudo-Goldstone_DM
|
70fbb4ad4be190226d230a20dfb19b804e15aae6
|
[
"MIT"
] | null | null | null |
PseudoGoldstone-Explain.ipynb
|
dkaramit/pseudo-Goldstone_DM
|
70fbb4ad4be190226d230a20dfb19b804e15aae6
|
[
"MIT"
] | null | null | null | 36.01138 | 298 | 0.49131 | true | 5,193 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.851953 | 0.740174 | 0.630594 |
__label__eng_Latn
| 0.579329 | 0.303411 |
# t-SNE (t-distributed stochastic neighbor embedding)
## Notation
|Symbol|Meaning|
|:--:|:--:|
|$\pmb{x}$|data point|
|X|set of data points|
|N|total number of data points|
|$\pmb{y}$|data point after dimensionality reduction|
|$p_{ij}$|joint probability of data points $\pmb{x_j}$ and $\pmb{x_i}$ in the original space|
|$q_{ij}$|joint probability of data points $\pmb{y_j}$ and $\pmb{y_i}$ in the low-dimensional space|
|$\mathcal{L}$|loss function|
|$d$|dimensionality of the original space|
|$m$|dimensionality after reduction|
|$W$|weight matrix|
|$D$|degree matrix|
## Concept
For visualization, SNE has a serious shortcoming (the crowding problem): data from different classes clump together and the boundaries are unclear, so without manually labeling the classes it is hard to tell them apart. For dimensionality reduction this problem is always present: compared with a low-dimensional space, a high-dimensional space has far more room for points at a given distance to spread out. After reduction there is not enough room to accommodate these points, and the room shrinks by a different amount for different distances, which ultimately causes the crowding problem. The room available for moderately distant points shrinks more than that for nearby points. In terms of the embedding: very close points cluster together, which is fine, but points that are somewhat farther apart tend to drift away from each other. During SNE's optimization an "attractive force" is applied to pull these drifting points back together, so in the end the boundaries between different classes become blurred.
One way to mitigate this is to add an artificial "repulsive force", which is exactly what UNI-SNE does, but it does not fundamentally solve the crowding problem.
t-SNE improves on SNE in two ways:
1. It replaces the original SNE with symmetric SNE
2. It uses a t-distribution instead of a Gaussian to compute probabilities in the low-dimensional space
The first change does not solve the crowding problem; it mainly replaces the conditional probabilities $p_{j|i}$ and $q_{j|i}$ with the joint probabilities $p_{ij}$ and $q_{ij}$. This makes the optimization simpler and also gives a modest improvement over SNE.
The second change is the main contribution of t-SNE. The t-distribution is "flatter" than the Gaussian, i.e. points are more likely to fall far from the mean.
For the SNE family, computing joint probabilities in the high-dimensional space essentially converts distances into probabilities, while the low-dimensional side converts probabilities back into distances. If the same kernel is used in both spaces, this amounts to distance preservation: the distribution of pairwise distances in the embedding is kept consistent with that in the original space. t-SNE replaces the Gaussian with a t-distribution in the low-dimensional space. For two points that are far apart in the high-dimensional space, the Gaussian joint probability is small; under the t-distribution the same probability corresponds to an even larger distance, so points that are far apart in the original space do not end up too close together after the reduction.
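A tiny numerical illustration of this last point (a toy comparison, not taken from the paper): for the same unnormalized similarity value, the Student-t kernel $(1+d)^{-1}$ corresponds to a larger squared distance than the Gaussian kernel $e^{-d}$.
```python
# Toy comparison of the two kernels (illustration only)
import numpy as np
d_high = 2.0                  # squared distance in the original space
sim = np.exp(-d_high)         # unnormalized Gaussian similarity, ~0.135
d_low = 1.0 / sim - 1.0       # squared distance with the same similarity under the t kernel
print(sim, d_low)             # d_low ~ 6.39 > 2.0: the pair is placed farther apart
```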
## Derivation
* **Symmetric SNE**
SNE uses conditional probabilities, for which $p_{i|j}$ and $p_{j|i}$ are not equal; t-SNE uses joint probabilities, computed as follows.
For the high-dimensional space,
$$
\begin{equation}
p_{ij} = \frac{\exp(-||\pmb{x_i}-\pmb{x_j}||^2/2\sigma^2)}{\sum_{k\neq l}\exp(-||\pmb{x_k}-\pmb{x_l}||^2/2\sigma^2)}
\end{equation}
$$
For the low-dimensional space,
$$
\begin{equation}
q_{ij} = \frac{\exp(-||\pmb{y_i}-\pmb{y_j}||^2)}{\sum_{k\neq l}\exp(-||\pmb{y_k}-\pmb{y_l}||^2)}
\end{equation}
$$
For points in the high-dimensional space, Eq. (1) is not an ideal choice. The original data can be quite spread out; for an outlier, the distances to all other points are large, so all probabilities involving that point become very small, which makes it hard to supervise that point's embedding. In t-SNE, the probabilities for the high-dimensional space are instead computed as
$$
\begin{equation}
p_{ij} = \frac{p_{i|j} + p_{j|i}}{2N}
\end{equation}
$$
Replacing the probability computations with Eqs. (2) and (3) yields a much simpler gradient expression.
$$
\begin{equation}
\frac{\partial{\mathcal{L}}}{\partial{\pmb{y_i}}} = 4\sum_{j=1}^N(p_{ij}-q_{ij})(\pmb{y_i}-\pmb{y_j})
\end{equation}
$$
* **Joint probabilities from the t-distribution**
Eq. (2) still uses a Gaussian to compute the joint probabilities in the low-dimensional space; t-SNE changes this to
$$
\begin{equation}
q_{ij} = \frac{(1+||\pmb{y_i}-\pmb{y_j}||^2)^{-1}}{\sum_{k\neq l}(1+||\pmb{y_k}-\pmb{y_l}||^2)^{-1}}
\end{equation}
$$
* **Loss function and optimization**
Based on Eqs. (3) and (5), the corresponding gradient is
$$
\begin{equation}
\frac{\partial{\mathcal{L}}}{\partial{\pmb{y_i}}} = 4\sum_{j=1}^N(p_{ij}-q_{ij})(\pmb{y_i}-\pmb{y_j})(1+||\pmb{y_i}-\pmb{y_j}||^2)^{-1}
\end{equation}
$$
The comparison in the original t-SNE paper of the SNE, UNI-SNE and t-SNE gradients at different distances illustrates the differences between the three algorithms well.
In the figure, positive values mean the two points attract each other (their distance tends to shrink after the reduction), and negative values mean they repel each other (their distance tends to grow).
The analysis focuses on two extreme cases:
1. The two points are far apart in the original space but, before optimization, close together in the embedding
2. The two points are close in the original space but, before optimization, far apart in the embedding
First, consider SNE.
From the left region of panel (a), SNE handles the second case well: when it occurs, SNE quickly shrinks the distance between the two points to match the original-space distance. But SNE cannot handle the first case: when two points are far apart in the original space yet close in the embedding, SNE does not have enough "corrective power" (the gradient is too small) to fix the error.
Next, consider UNI-SNE.
Unlike SNE, UNI-SNE adds a baseline "repulsive force" over the whole range. Again, the left region of panel (b) shows it handles the second case well, but it still fails to solve the first case. Note also that in the upper-right region of panel (b), where both the original and embedded distances are large, the gradient is negative, which keeps pushing $q_{ij}$ to be larger than $p_{ij}$ in that region.
Finally, consider t-SNE.
Panel (c) shows that t-SNE handles both problems better. For two points that are far apart in the original space but close in the embedding, t-SNE pushes them apart (the region of panel (c) near the horizontal axis); for two points that are close in the original space but far apart in the embedding, t-SNE pulls them together (the region near the vertical axis); and in regions where the original and embedded distances roughly agree (the lower-left and upper-right corners of panel (c)), t-SNE leaves them essentially unchanged (zero gradient).
* **Training tricks**
The original t-SNE paper gives some training tricks, which can be summarized as follows.
1. Gradient descent with momentum
$$
\begin{equation}
\pmb{y_i}(t+1) = \pmb{y_i}(t) - \eta\frac{\partial{\mathcal{L}}}{\partial{\pmb{y_i}}} + \alpha(t)(\pmb{y_i}(t) - \pmb{y_i}(t-1))
\end{equation}
$$
2. Learning-rate decay. t-SNE borrows the scheme from "Increased rates of convergence through learning rate adaptation".
3. Early exaggeration. Early in training, all $p_{ij}$ are multiplied by a fixed factor, which pushes the $q_{ij}$ to be as large as possible: samples of the same class cluster tightly early on while different classes move apart, so the points update as clusters, which helps form a good global structure.
The t-SNE paper also gives an example training schedule:
1. Total number of iterations: 1000
2. Early exaggeration: multiply by a factor of 4 for the first 50 iterations
3. Momentum: 0.5 for the first 250 iterations, 0.8 afterwards
4. Learning rate: start at 100 and then decay according to the learning-rate adaptation scheme above.
## Algorithm steps
1. Define the dataset $X$ and the target dimensionality m
2. Determine the parameter $\sigma$
3. Randomly initialize the embedding in the low-dimensional space
4. Compute the joint probabilities in the original space using
$$
p_{j|i} = \frac{\exp(-||\pmb{x_i}-\pmb{x_j}||^2/2\sigma^2)}{\sum_{k\neq i}\exp(-||\pmb{x_i}-\pmb{x_k}||^2/2\sigma^2)}
$$
$$
p_{ij} = \frac{p_{j|i} + p_{i|j}}{2N}
$$
5. Compute the joint probabilities in the low-dimensional space using
$$
q_{ij} = \frac{(1+||\pmb{y_i}-\pmb{y_j}||^2)^{-1}}{\sum_{k\neq l}(1+||\pmb{y_k}-\pmb{y_l}||^2)^{-1}}
$$
6. Compute the gradient using
$$
\frac{\partial{\mathcal{L}}}{\partial{\pmb{y_i}}} = 4\sum_{j=1}^N(p_{ij}-q_{ij})(\pmb{y_i}-\pmb{y_j})(1+||\pmb{y_i}-\pmb{y_j}||^2)^{-1}
$$
7. Update the embedding with momentum gradient descent
$$
\pmb{y_i}(t+1) = \pmb{y_i}(t) - \eta\frac{\partial{\mathcal{L}}}{\partial{\pmb{y_i}}} + \alpha(t)(\pmb{y_i}(t) - \pmb{y_i}(t-1))
$$
8. Repeat steps 6 and 7 until the stopping criterion is met
## References
https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html
L. Van der Maaten, G. Hinton. Visualizing data using t-SNE[J]. Journal of machine learning research, 2008, 9(11).
```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import torch
from torchvision import transforms, datasets
from sklearn.manifold import _utils, TSNE
from scipy.stats import entropy
from sklearn.metrics.pairwise import pairwise_distances
```
```python
class MytSNE(object):
def __init__(self, n_components, perplexity, random_state, learning_rate, n_iter):
self.n_components = n_components
self.perplexity = perplexity
self.random_state = random_state
self.learning_rate = learning_rate
self.n_iter = n_iter
self.condition_p = None
self.condition_q = None
def fit_transform(self, input_data, reduction_mat_init=None):
self.input_data = np.array(input_data)
n_samples, sample_dims = self.input_data.shape
# compute condition p
self._compute_condition_p(self.input_data)
# create reduction result
if reduction_mat_init is not None:
reduction_mat = reduction_mat_init.copy()
else:
np.random.seed(self.random_state)
reduction_mat = 1e-4 * np.random.randn(n_samples, self.n_components).astype(np.float32)
# part 1
# momentum: 0.5
# early exaggeration:4
# iter:250
print("learning schedule part 1 begin...")
self.condition_p *= 12.
reduction_mat = self._optimize(reduction_mat, 0.5, 250, self.learning_rate, n_samples=n_samples)
print("learning schedule part 1 done...")
# part 2
# momentum: 0.8
# early exaggeration:1
# iter:max_iter - 250
print("learning schedule part 2 begin...")
self.condition_p /= 12.
reduction_mat = self._optimize(reduction_mat, 0.8, self.n_iter, self.learning_rate, n_samples=n_samples)
print("learning schedule part 2 done...")
return reduction_mat
def _compute_condition_p(self, input_data):
distance_vector = pairwise_distances(input_data, squared=True).astype(np.float32, copy=False)
self.condition_p = _utils._binary_search_perplexity(distance_vector, self.perplexity, False)
self.condition_p = (self.condition_p + self.condition_p.T)/(2 * np.sum(self.condition_p))
def _optimize(self, params, momentum, max_iter, learning_rate, n_samples):
temp_params = params.copy()
temp_update_mat = np.zeros_like(params)
gains = np.ones_like(params)
for i in range(max_iter):
train_loss, grad_mat = self.kl_loss(temp_params, n_samples)
inc = temp_update_mat * grad_mat
gains[np.argwhere(inc < 0)] += 0.2
gains[np.argwhere(inc >= 0)] *= 0.8
np.clip(gains, 0.01, np.inf, out=gains)
grad_mat *= gains
temp_update_mat = - learning_rate * grad_mat + momentum * temp_update_mat
temp_params += temp_update_mat
return temp_params
def kl_loss(self, input_data, n_samples):
distance_mat = pairwise_distances(input_data, squared=True).astype(np.float32, copy=False)
distance_mat += 1.
distance_mat = np.power(distance_mat, -1)
self.condition_q = distance_mat / (np.sum(distance_mat) - np.sum(np.diag(distance_mat)))
_loss = np.sum(entropy(self.condition_p, self.condition_q))
grad_mat = np.zeros((n_samples, self.n_components), dtype=input_data.dtype)
PQd = (self.condition_p - self.condition_q) * distance_mat
for i in range(n_samples):
grad_mat[i] = np.matmul(PQd[i].reshape(1, -1), input_data[i] - input_data).reshape(-1)
grad_mat *= 4.
return _loss, grad_mat
```
```python
# ------------------------------- data -------------------------------------------
transform_ = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5), (0.5))])
data_total = datasets.MNIST('../data/mnist', train=True, transform=transform_, download=True)
# using 0~4
data_index = torch.where(data_total.targets < 5)
data_total.targets = data_total.targets[data_index][:1000]
data_total.data = data_total.data[data_index][:1000]
# init
np.random.seed(0)
reduction_init = 1e-4 * np.random.randn(data_total.data.numpy().shape[0], 2)
# ---------------------------- sklearn TSNE ---------------------------
sklearn_tsne = TSNE(n_components=2, random_state=0, perplexity=50, learning_rate=100.0, n_iter=1000, method="exact", init=reduction_init)
sklearn_tsne_result = sklearn_tsne.fit_transform(data_total.data.numpy().reshape(-1, 28*28))
# ---------------------------- My TSNE ---------------------------
my_tsne = MytSNE(n_components=2, random_state=0, perplexity=50, learning_rate=100.0, n_iter=1000)
my_tsne_result = my_tsne.fit_transform(data_total.data.numpy().reshape(-1, 28*28), reduction_mat_init=reduction_init)
# ---------------------- draw --------------------------
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(121)
plt.title("Projection of MNIST using Sklearn t-SNE", fontsize=15)
for i in np.unique(data_total.targets.numpy()):
point_index_list = np.argwhere(data_total.targets == i)
ax.scatter(sklearn_tsne_result[point_index_list, 0], sklearn_tsne_result[point_index_list, 1], cmap=plt.cm.Spectral, label=i)
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
ax.axis("tight")
plt.legend()
ax = fig.add_subplot(122)
plt.title("Projection of MNIST using My t-SNE", fontsize=15)
for i in np.unique(data_total.targets.numpy()):
point_index_list = np.argwhere(data_total.targets == i)
ax.scatter(my_tsne_result[point_index_list, 0], my_tsne_result[point_index_list, 1], cmap=plt.cm.Spectral, label=i)
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
ax.axis("tight")
plt.legend()
plt.show()
```
|
51c5bc11ae013efb45da50c82cb1d899cac54e64
| 126,217 |
ipynb
|
Jupyter Notebook
|
13_tSNE/tSNE.ipynb
|
koolo233/dimensionality_reduction_python
|
452a927772c546f68d6a63e96cdb017b23e4077c
|
[
"MIT"
] | null | null | null |
13_tSNE/tSNE.ipynb
|
koolo233/dimensionality_reduction_python
|
452a927772c546f68d6a63e96cdb017b23e4077c
|
[
"MIT"
] | null | null | null |
13_tSNE/tSNE.ipynb
|
koolo233/dimensionality_reduction_python
|
452a927772c546f68d6a63e96cdb017b23e4077c
|
[
"MIT"
] | null | null | null | 326.142119 | 111,362 | 0.919179 | true | 5,042 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.865224 | 0.817574 | 0.707385 |
__label__yue_Hant
| 0.184191 | 0.481824 |
# Qiskit Aer: Pulse simulation of two qubits using a Duffing oscillator model
This notebook shows how to use the Qiskit Aer pulse simulator, which simulates experiments specified as pulse `Schedule` objects at the Hamiltonian level. The simulator solves the Schrodinger equation for a specified Hamiltonian model and pulse `Schedule` in the frame of the drift Hamiltonian.
In particular, in this tutorial we will:
- Construct a model of a two qubit superconducting system.
- Calibrate $\pi$ pulses on each qubit in the simulated system.
- Observe cross-resonance oscillations when driving qubit 1 with target qubit 0.
The Introduction outlines the concepts and flow of this notebook.
## 1. Introduction <a name='introduction'></a>
The main sections proceed as follows.
### Section 3: Duffing oscillator model
To simulate a physical system, it is necessary to specify a model. In this notebook, we will model superconducting qubits as a collection of *Duffing oscillators*. The model is specified in terms of the following parameters:
- Each Duffing oscillator is specified by a frequency $\nu$, anharmonicity $\alpha$, and drive strength $r$, which result in the Hamiltonian terms:
$$\begin{equation}
2\pi\nu a^\dagger a + \pi \alpha a^\dagger a(a^\dagger a - 1) + 2 \pi r (a + a^\dagger) \times D(t),
\end{equation}$$
where $D(t)$ is the signal on the drive channel for the qubit, and $a^\dagger$ and $a$ are, respectively, the creation and annihilation operators for the qubit. Note that the drive strength $r$ sets the scaling of the control term, with $D(t)$ assumed to be a complex and unitless number satisfying $|D(t)| \leq 1$.
- A coupling between a pair of oscillators $(l,k)$ is specified by the coupling strength $J$, resulting in an exchange coupling term:
$$\begin{equation}
2 \pi J (a_l a_k^\dagger + a_l^\dagger a_k),
\end{equation}$$
where the subscript denotes which qubit the operators act on.
- Additionally, for numerical simulation, it is necessary to specify a cutoff dimension; the Duffing oscillator model is *infinite dimensional*, and computer simulation requires restriction of the operators to a finite dimensional subspace.
**In the code:** We will define a model of the above form for two coupled qubits using the helper function `duffing_system_model`.
### Section 4: $\pi$-pulse calibration using Ignis
Once the model is defined, we will calibrate $\pi$-pulses on each qubit. A $\pi$-pulse is defined as a pulse on the drive channel of a qubit that "flips" the qubit; i.e. that takes the ground state to the first excited state, and the first excited state to the ground state.
We will experimentally find a $\pi$-pulse for each qubit using the following procedure:
- A fixed pulse shape is set - in this case it will be a Gaussian pulse.
- A sequence of experiments is run, each consisting of a Gaussian pulse on the qubit, followed by a measurement, with each experiment in the sequence having a subsequently larger amplitude for the Gaussian pulse.
- The measurement data is fit, and the pulse amplitude that completely flips the qubit is found (i.e. the $\pi$-pulse amplitude).
**In the code:** Using Ignis we will construct `Schedule` objects for the above experiments, then fit the data to find the $\pi$-pulse amplitudes.
### Section 5: Cross-resonance oscillations
Once the $\pi$-pulses are calibrated, we will simulate the effects of cross-resonance driving on qubit $1$ with target qubit $0$. This means that we will drive qubit $1$ at the frequency of qubit $0$, with the goal of observing that the trajectory and oscillations of qubit $0$ *depends* on the state of qubit $1$. This phenomenon provides a basis for creating two-qubit *controlled* gates. Note: This section requires the calibration of the $\pi$-pulse in Section 4.
To observe cross-resonance driving, we will use experiments very similar to the $\pi$-pulse calibration case:
- Initially, qubit $1$ is either left in the ground state, or is driven to its first excited state using the $\pi$-pulse found in Section 4.
- A sequence of experiments is run, each consisting of a Gaussian pulse on qubit $1$ driven at the frequency of qubit $0$, followed by a measurement of both qubits, with each experiment of the sequence having a subsequently larger amplitude for the Gaussian pulse.
**In the code:** Functions for defining the experiments and visualizing the data are constructed, including a visualization of the trajectory of the target qubit on the Bloch sphere.
## 2. Imports <a name='imports'></a>
This notebook makes use of the following imports.
```python
import numpy as np
from scipy.optimize import curve_fit, root
# visualization tools
import matplotlib.pyplot as plt
from qiskit.visualization.bloch import Bloch
```
Import qiskit libraries for working with `pulse` and calibration:
```python
import qiskit.pulse as pulse
from qiskit.pulse.library import Gaussian, GaussianSquare
from qiskit.compiler import assemble
import qiskit.ignis
from qiskit.ignis.characterization.calibrations import rabi_schedules, RabiFitter
```
Imports for qiskit pulse simulator:
```python
# The pulse simulator
from qiskit.providers.aer import PulseSimulator
# function for constructing duffing models
from qiskit.providers.aer.pulse import duffing_system_model
```
## 3. Duffing oscillator system model <a name='duffing'></a>
An object representing a model for a collection of Duffing oscillators can be constructed using the `duffing_system_model` function. Here we construct a $2$ Duffing oscillator model with cutoff dimension $3$.
```python
# cutoff dimension
dim_oscillators = 3
# frequencies for transmon drift terms, harmonic term and anharmonic term
# Number of oscillators in the model is determined from len(oscillator_freqs)
oscillator_freqs = [5.0e9, 5.2e9]
anharm_freqs = [-0.33e9, -0.33e9]
# drive strengths
drive_strengths = [0.02e9, 0.02e9]
# specify coupling as a dictionary (qubits 0 and 1 are coupled with a coefficient 0.002e9)
coupling_dict = {(0,1): 0.002e9}
# sample duration for pulse instructions
dt = 1e-9
# create the model
two_qubit_model = duffing_system_model(dim_oscillators=dim_oscillators,
oscillator_freqs=oscillator_freqs,
anharm_freqs=anharm_freqs,
drive_strengths=drive_strengths,
coupling_dict=coupling_dict,
dt=dt)
```
/tmp/qiskit_release/lib/python3.9/site-packages/qiskit/providers/aer/pulse/system_models/string_model_parser/string_model_parser.py:280: DeprecationWarning: Using the `__mul__` operator `A * B` as shorthand for `A.dot(B)` is deprecated as of version 0.17.0 and will be removed no earlier than 3 months after the release date. As an alternative, use the compose operator `B & A` in place of `A * B` as a replacement.
stack.append(op1 * op2)
The function `duffing_system_model` returns a `PulseSystemModel` object, which is a general object for storing model information required for simulation with the `PulseSimulator`.
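For intuition, here is a rough standalone NumPy sketch of the single-oscillator drift terms from the Introduction, using the cutoff dimension and the first oscillator's parameters above (illustration only; it is not part of the Qiskit API, and the simulator builds the full two-qubit Hamiltonian internally).
```python
# Sketch of the drift Hamiltonian 2*pi*nu*a^dag a + pi*alpha*a^dag a(a^dag a - 1)
import numpy as np
dim = 3                                          # cutoff dimension
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)     # annihilation operator
n = a.conj().T @ a                               # number operator a^dag a
nu, alpha = 5.0e9, -0.33e9                       # frequency and anharmonicity of qubit 0
H_drift = 2*np.pi*nu*n + np.pi*alpha*(n @ (n - np.eye(dim)))
```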
## 4 Calibrating $\pi$ pulses on each qubit using Ignis <a name='rabi'></a>
As described in the introduction, we now calibrate $\pi$ pulses on each qubit in `two_qubit_model`. The experiments in this calibration procedure are known as *Rabi experiments*, and the data we will observe are known as *Rabi oscillations*.
### 4.1 Constructing the schedules
We construct the schedules using the `rabi_schedules` function in Ignis. To do this, we need to supply an `InstructionScheduleMap` containing a measurement schedule.
```python
# list of qubits to be used throughout the notebook
qubits = [0, 1]
# Construct a measurement schedule and add it to an InstructionScheduleMap
meas_amp = 0.025
meas_samples = 1200
meas_sigma = 4
meas_width = 1150
meas_pulse = GaussianSquare(duration=meas_samples, amp=meas_amp,
sigma=meas_sigma, width=meas_width)
acq_sched = pulse.Acquire(meas_samples, pulse.AcquireChannel(0), pulse.MemorySlot(0))
acq_sched += pulse.Acquire(meas_samples, pulse.AcquireChannel(1), pulse.MemorySlot(1))
measure_sched = pulse.Play(meas_pulse, pulse.MeasureChannel(0)) | pulse.Play(meas_pulse, pulse.MeasureChannel(1)) | acq_sched
inst_map = pulse.InstructionScheduleMap()
inst_map.add('measure', qubits, measure_sched)
```
Next, construct the Rabi schedules.
```python
# construct Rabi experiments
drive_amps = np.linspace(0, 0.9, 48)
drive_sigma = 16
drive_duration = 128
drive_channels = [pulse.DriveChannel(0), pulse.DriveChannel(1)]
rabi_experiments, rabi_amps = rabi_schedules(amp_list=drive_amps,
qubits=qubits,
pulse_width=drive_duration,
pulse_sigma=drive_sigma,
drives=drive_channels,
inst_map=inst_map,
meas_map=[[0, 1]])
```
The `Schedule`s in `rabi_schedules` correspond to experiments to generate Rabi oscillations on both qubits in parallel. Each experiment consists of a Gaussian pulse on the qubits of a given magnitude, followed by measurement.
For example:
```python
rabi_experiments[10].draw()
```
### 4.2 Simulate the Rabi experiments
To simulate the Rabi experiments, assemble the `Schedule` list into a qobj. When assembling, pass the `PulseSimulator` as the backend.
Here, we want to use local oscillators with frequencies automatically computed from Duffing model Hamiltonian.
```python
# instantiate the pulse simulator
backend_sim = PulseSimulator(system_model=two_qubit_model)
# compute frequencies from the Hamiltonian
qubit_lo_freq = two_qubit_model.hamiltonian.get_qubit_lo_from_drift()
rabi_qobj = assemble(rabi_experiments,
backend=backend_sim,
qubit_lo_freq=qubit_lo_freq,
meas_level=1,
meas_return='avg',
shots=512)
```
Run the simulation using the simulator backend.
```python
# run the simulation
rabi_result = backend_sim.run(rabi_qobj).result()
```
/tmp/qiskit_release/lib/python3.9/site-packages/qiskit/providers/aer/pulse/system_models/string_model_parser/operator_generators.py:154: DeprecationWarning: Using the `__matmul__` operator `A @ B` as shorthand for `A.compose(B)` is deprecated as of version 0.17.0 and will be removed no earlier than 3 months after the release date. Use the `A & B` instead.
proj_op += estate @ estate.adjoint()
### 4.3 Fit and plot the data
Next, we use `RabiFitter` in Ignis to fit the data, extract the $\pi$-pulse amplitude, and then plot the data.
```python
rabifit = RabiFitter(rabi_result, rabi_amps, qubits, fit_p0 = [0.5,0.5,0.6,1.5])
plt.figure(figsize=(15, 10))
q_offset = 0
for qubit in qubits:
ax = plt.subplot(2, 2, qubit + 1)
rabifit.plot(qubit, ax=ax)
print('Pi Amp: %f'%rabifit.pi_amplitude(qubit))
plt.show()
```
Plotted is the averaged IQ data for observing each qubit. Observe that here, each qubit oscillates between the 0 and 1 state. The amplitude at which a given qubit reaches the peak of the oscillation is the desired $\pi$-pulse amplitude.
## 5. Oscillations from cross-resonance drive <a name='cr'></a>
Next, we simulate the effects of a cross-resonance drive on qubit $1$ with target qubit $0$, observing that the trajectory and oscillations of qubit $0$ *depends* on the state of qubit $1$.
**Note:** This section depends on the $\pi$-pulse calibrations of Section 2.
### 5.1 Cross-resonance `ControlChannel` indices
Driving qubit $1$ at the frequency of qubit $0$ requires use of a pulse `ControlChannel`. The model generating function `duffing_system_model`, automatically sets up `ControlChannels` for performing cross-resonance drives between pairs of coupled qubits. The index of the `ControlChannel` for performing a particular cross-resonance drive is retrievable using the class method `control_channel_index` on the returned `PulseSystemModel`. For example, to get the `ControlChannel` index corresponding to a CR drive on qubit 1 with target 0, call the function `control_channel_index` with the tuple `(1,0)`:
```python
two_qubit_model.control_channel_index((1,0))
```
1
Hence, to perform a cross-resonance drive on qubit $1$ with target qubit $0$, use `ControlChannel(1)`. This will be made use of when constructing `Schedule` objects in this section.
### 5.2 Functions to generate the experiment list, and analyze the output
First, we define a function `cr_drive_experiments`, which, given the drive and target indices, and the option to either start with the drive qubit in the ground or excited state, returns a list of experiments for observing the oscillations.
```python
# store the pi amplitudes from Section 2 in a list
pi_amps = [rabifit.pi_amplitude(0), rabifit.pi_amplitude(1)]
def cr_drive_experiments(drive_idx,
target_idx,
flip_drive_qubit = False,
cr_drive_amps=np.linspace(0, 0.9, 16),
cr_drive_samples=800,
cr_drive_sigma=4,
pi_drive_samples=128,
pi_drive_sigma=16):
"""Generate schedules corresponding to CR drive experiments.
Args:
drive_idx (int): label of driven qubit
target_idx (int): label of target qubit
flip_drive_qubit (bool): whether or not to start the driven qubit in the ground or excited state
cr_drive_amps (array): list of drive amplitudes to use
cr_drive_samples (int): number samples for each CR drive signal
cr_drive_sigma (float): standard deviation of CR Gaussian pulse
pi_drive_samples (int): number samples for pi pulse on drive
pi_drive_sigma (float): standard deviation of Gaussian pi pulse on drive
Returns:
list[Schedule]: A list of Schedule objects for each experiment
"""
# Construct measurement commands to be used for all schedules
meas_amp = 0.025
meas_samples = 1200
meas_sigma = 4
meas_width = 1150
meas_pulse = GaussianSquare(duration=meas_samples, amp=meas_amp,
sigma=meas_sigma, width=meas_width)
acq_sched = pulse.Acquire(meas_samples, pulse.AcquireChannel(0), pulse.MemorySlot(0))
acq_sched += pulse.Acquire(meas_samples, pulse.AcquireChannel(1), pulse.MemorySlot(1))
# create measurement schedule
measure_sched = (pulse.Play(meas_pulse, pulse.MeasureChannel(0)) |
pulse.Play(meas_pulse, pulse.MeasureChannel(1))|
acq_sched)
# Create schedule
schedules = []
for ii, cr_drive_amp in enumerate(cr_drive_amps):
# pulse for flipping drive qubit if desired
pi_pulse = Gaussian(duration=pi_drive_samples, amp=pi_amps[drive_idx], sigma=pi_drive_sigma)
# cr drive pulse
cr_width = cr_drive_samples - 2*cr_drive_sigma*4
cr_rabi_pulse = GaussianSquare(duration=cr_drive_samples,
amp=cr_drive_amp,
sigma=cr_drive_sigma,
width=cr_width)
# add commands to schedule
schedule = pulse.Schedule(name='cr_rabi_exp_amp_%s' % cr_drive_amp)
# flip drive qubit if desired
if flip_drive_qubit:
schedule += pulse.Play(pi_pulse, pulse.DriveChannel(drive_idx))
# do cr drive
# First, get the ControlChannel index for CR drive from drive to target
cr_idx = two_qubit_model.control_channel_index((drive_idx, target_idx))
schedule += pulse.Play(cr_rabi_pulse, pulse.ControlChannel(cr_idx)) << schedule.duration
schedule += measure_sched << schedule.duration
schedules.append(schedule)
return schedules
```
Next we create two functions for observing the data:
- `plot_cr_pop_data` - for plotting the oscillations between the ground state and the first excited state
- `plot_bloch_sphere` - for viewing the trajectory of the target qubit on the Bloch sphere
```python
def plot_cr_pop_data(drive_idx,
target_idx,
sim_result,
cr_drive_amps=np.linspace(0, 0.9, 16)):
"""Plot the population of each qubit.
Args:
drive_idx (int): label of driven qubit
target_idx (int): label of target qubit
sim_result (Result): results of simulation
cr_drive_amps (array): list of drive amplitudes to use for axis labels
"""
amp_data_Q0 = []
amp_data_Q1 = []
for exp_idx in range(len(cr_drive_amps)):
exp_mem = sim_result.get_memory(exp_idx)
amp_data_Q0.append(np.abs(exp_mem[0]))
amp_data_Q1.append(np.abs(exp_mem[1]))
plt.plot(cr_drive_amps, amp_data_Q0, label='Q0')
plt.plot(cr_drive_amps, amp_data_Q1, label='Q1')
plt.legend()
plt.xlabel('Pulse amplitude, a.u.', fontsize=20)
plt.ylabel('Signal, a.u.', fontsize=20)
plt.title('CR (Target Q{0}, driving on Q{1})'.format(target_idx, drive_idx), fontsize=20)
plt.grid(True)
def bloch_vectors(drive_idx, drive_energy_level, sim_result):
"""Plot the population of each qubit.
Args:
drive_idx (int): label of driven qubit
drive_energy_level (int): energy level of drive qubit at start of CR drive
sim_result (Result): results of simulation
Returns:
list: list of Bloch vectors corresponding to the final state of the target qubit
for each experiment
"""
# get the dimension used for simulation
dim = int(np.sqrt(len(sim_result.get_statevector(0))))
# get the relevant dressed state indices
idx0 = 0
idx1 = 0
if drive_idx == 0:
if drive_energy_level == 0:
idx0, idx1 = 0, dim
elif drive_energy_level == 1:
idx0, idx1 = 1, dim + 1
if drive_idx == 1:
if drive_energy_level == 0:
idx0, idx1 = 0, 1
elif drive_energy_level == 1:
idx0, idx1 = dim, dim + 1
# construct Pauli operators for correct dressed manifold
state0 = np.array([two_qubit_model.hamiltonian._estates[idx0]])
state1 = np.array([two_qubit_model.hamiltonian._estates[idx1]])
outer01 = np.transpose(state0)@state1
outer10 = np.transpose(state1)@state0
outer00 = np.transpose(state0)@state0
outer11 = np.transpose(state1)@state1
X = outer01 + outer10
Y = -1j*outer01 + 1j*outer10
Z = outer00 - outer11
# function for computing a single bloch vector
bloch_vec = lambda vec: np.real(np.array([np.conj(vec)@X@vec, np.conj(vec)@Y@vec, np.conj(vec)@Z@vec]))
return [bloch_vec(sim_result.get_statevector(idx)) for idx in range(len(sim_result.results))]
def plot_bloch_sphere(bloch_vectors):
"""Given a list of Bloch vectors, plot them on the Bloch sphere
Args:
bloch_vectors (list): list of bloch vectors
"""
sphere = Bloch()
sphere.add_points(np.transpose(bloch_vectors))
sphere.show()
```
### 5.3 Drive qubit 1 to observe CR oscillations on qubit 0
#### Qubit 1 in the ground state
First, we drive with both qubit 0 and qubit 1 in the ground state.
```python
# construct experiments
drive_idx = 1
target_idx = 0
flip_drive = False
experiments = cr_drive_experiments(drive_idx, target_idx, flip_drive)
# compute frequencies from the Hamiltonian
qubit_lo_freq = two_qubit_model.hamiltonian.get_qubit_lo_from_drift()
# assemble the qobj
cr_rabi_qobj = assemble(experiments,
backend=backend_sim,
qubit_lo_freq=qubit_lo_freq,
meas_level=1,
meas_return='avg',
shots=512)
```
Run the simulation:
```python
sim_result = backend_sim.run(cr_rabi_qobj).result()
plot_cr_pop_data(drive_idx, target_idx, sim_result)
```
Observe that qubit 1 remains in the ground state, while excitations are driven in qubit 0.
We may also observe the trajectory of qubit 0 on the Bloch sphere:
```python
bloch_vecs = bloch_vectors(drive_idx, int(flip_drive), sim_result)
plot_bloch_sphere(bloch_vecs)
```
#### Qubit 1 in the first excited state
Next, we again perform a CR drive on qubit 1 with qubit 0 as the target, but now we start each experiment by flipping qubit 1 into the first excited state.
```python
# construct experiments, now with flip_drive == True
drive_idx = 1
target_idx = 0
flip_drive = True
experiments = cr_drive_experiments(drive_idx, target_idx, flip_drive)
# compute frequencies from the Hamiltonian
qubit_lo_freq = two_qubit_model.hamiltonian.get_qubit_lo_from_drift()
# assemble the qobj
cr_rabi_qobj = assemble(experiments,
backend=backend_sim,
qubit_lo_freq=qubit_lo_freq,
meas_level=1,
meas_return='avg',
shots=512)
```
```python
sim_result = backend_sim.run(cr_rabi_qobj).result()
plot_cr_pop_data(drive_idx, target_idx, sim_result)
```
Observe that now qubit 1 is in the excited state, while oscillations are again being driven on qubit 0, now at a different rate than before.
Again, observe the trajectory of qubit 0 on the Bloch sphere:
```python
bloch_vecs = bloch_vectors(drive_idx, int(flip_drive), sim_result)
plot_bloch_sphere(bloch_vecs)
```
Here we see that qubit 0 takes a *different* trajectory on the Bloch sphere when qubit 1 is in the excited state. This is what enables controlled operations between two qubits.
```python
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
/tmp/qiskit_release/lib/python3.9/site-packages/qiskit/aqua/__init__.py:86: DeprecationWarning: The package qiskit.aqua is deprecated. It was moved/refactored to qiskit-terra For more information see <https://github.com/Qiskit/qiskit-aqua/blob/main/README.md#migration-guide>
warn_package('aqua', 'qiskit-terra')
<h3>Version Information</h3><table><tr><th>Qiskit Software</th><th>Version</th></tr><tr><td><code>qiskit-terra</code></td><td>0.18.2</td></tr><tr><td><code>qiskit-aer</code></td><td>0.8.2</td></tr><tr><td><code>qiskit-ignis</code></td><td>0.6.0</td></tr><tr><td><code>qiskit-ibmq-provider</code></td><td>0.16.0</td></tr><tr><td><code>qiskit-aqua</code></td><td>0.9.5</td></tr><tr><td><code>qiskit</code></td><td>0.29.1</td></tr><tr><td><code>qiskit-nature</code></td><td>0.2.1</td></tr><tr><td><code>qiskit-finance</code></td><td>0.2.1</td></tr><tr><td><code>qiskit-optimization</code></td><td>0.2.2</td></tr><tr><td><code>qiskit-machine-learning</code></td><td>0.2.1</td></tr><tr><th>System information</th></tr><tr><td>Python</td><td>3.9.6 (default, Jun 30 2021, 10:22:16)
[GCC 11.1.0]</td></tr><tr><td>OS</td><td>Linux</td></tr><tr><td>CPUs</td><td>32</td></tr><tr><td>Memory (Gb)</td><td>125.65557479858398</td></tr><tr><td colspan='2'>Thu Sep 16 07:39:30 2021 EDT</td></tr></table>
<div style='width: 100%; background-color:#d5d9e0;padding-left: 10px; padding-bottom: 10px; padding-right: 10px; padding-top: 5px'><h3>This code is a part of Qiskit</h3><p>© Copyright IBM 2017, 2021.</p><p>This code is licensed under the Apache License, Version 2.0. You may<br>obtain a copy of this license in the LICENSE.txt file in the root directory<br> of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.<p>Any modifications or derivative works of this code must retain this<br>copyright notice, and modified files need to carry a notice indicating<br>that they have been altered from the originals.</p></div>
|
5611a7666f8239e47b5cc0081ec79e3db524cd3a
| 341,321 |
ipynb
|
Jupyter Notebook
|
tutorials/circuits_advanced/09_pulse_simulator_duffing_model.ipynb
|
jwoehr/qiskit-tutorials
|
0c67cbbd40cfe7efa83ee38867caccf48aea8765
|
[
"Apache-2.0"
] | 1,186 |
2018-12-16T02:57:50.000Z
|
2022-03-31T02:03:58.000Z
|
tutorials/circuits_advanced/09_pulse_simulator_duffing_model.ipynb
|
jwoehr/qiskit-tutorials
|
0c67cbbd40cfe7efa83ee38867caccf48aea8765
|
[
"Apache-2.0"
] | 540 |
2018-12-15T19:14:41.000Z
|
2022-03-31T13:15:36.000Z
|
tutorials/circuits_advanced/09_pulse_simulator_duffing_model.ipynb
|
jwoehr/qiskit-tutorials
|
0c67cbbd40cfe7efa83ee38867caccf48aea8765
|
[
"Apache-2.0"
] | 838 |
2018-12-15T22:51:08.000Z
|
2022-03-31T06:51:57.000Z
| 365.049198 | 88,544 | 0.928229 | true | 6,137 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.805632 | 0.651355 | 0.524752 |
__label__eng_Latn
| 0.96107 | 0.057505 |
# Autoregressive (AR) HMM Demo
[](https://colab.research.google.com/github/lindermanlab/ssm-jax-refactor/blob/main/notebooks/arhmm-example.ipynb)
This notebook illustrates the use of the _auto_regression_ observation model.
Let $x_t$ denote the observation at time $t$. Let $z_t$ denote the corresponding discrete latent state.
The autoregressive hidden Markov model has the following likelihood,
$$
\begin{align}
x_t \mid x_{t-1}, z_t &\sim
\mathcal{N}\left(A_{z_t} x_{t-1} + b_{z_t}, Q_{z_t} \right).
\end{align}
$$
(Technically, higher-order autoregressive processes with extra linear terms from inputs are also implemented.)
```python
try:
import ssm
except:
!pip install git+https://github.com/lindermanlab/ssm-jax-refactor.git -qqq
import ssm
```
```python
import jax.numpy as np
import jax.random as jr
from tensorflow_probability.substrates import jax as tfp
from ssm.distributions.linreg import GaussianLinearRegression
from ssm.arhmm import GaussianARHMM
from ssm.utils import random_rotation
from ssm.plots import gradient_cmap #, white_to_color_cmap
import matplotlib.pyplot as plt
import seaborn as sns
```
/Users/collinschlager/miniforge3/envs/ssmjax/lib/python3.9/site-packages/jax/_src/lib/__init__.py:33: UserWarning: JAX on Mac ARM machines is experimental and minimally tested. Please see https://github.com/google/jax/issues/5501 in the event of problems.
warnings.warn("JAX on Mac ARM machines is experimental and minimally tested. "
```python
sns.set_style("white")
sns.set_context("talk")
color_names = [
"windows blue",
"red",
"amber",
"faded green",
"dusty purple",
"orange",
"brown",
"pink"
]
colors = sns.xkcd_palette(color_names)
cmap = gradient_cmap(colors)
```
```python
# Make a transition matrix
num_states = 5
transition_probs = (np.arange(num_states)**10).astype(float)
transition_probs /= transition_probs.sum()
transition_matrix = np.zeros((num_states, num_states))
for k, p in enumerate(transition_probs[::-1]):
transition_matrix += np.roll(p * np.eye(num_states), k, axis=1)
plt.imshow(transition_matrix, vmin=0, vmax=1, cmap="Greys")
plt.xlabel("next state")
plt.ylabel("current state")
plt.title("transition matrix")
plt.colorbar()
```
```python
# Make observation distributions
data_dim = 2
num_lags = 1
keys = jr.split(jr.PRNGKey(0), num_states)
angles = np.linspace(0, 2 * np.pi, num_states, endpoint=False)
theta = np.pi / 25 # rotational frequency
weights = np.array([0.8 * random_rotation(key, data_dim, theta=theta) for key in keys])
biases = np.column_stack([np.cos(angles), np.sin(angles), np.zeros((num_states, data_dim - 2))])
covariances = np.tile(0.001 * np.eye(data_dim), (num_states, 1, 1))
# Compute the stationary points
stationary_points = np.linalg.solve(np.eye(data_dim) - weights, biases)
```
# Plot dynamics functions
```python
if data_dim == 2:
lim = 5
x = np.linspace(-lim, lim, 10)
y = np.linspace(-lim, lim, 10)
X, Y = np.meshgrid(x, y)
xy = np.column_stack((X.ravel(), Y.ravel()))
fig, axs = plt.subplots(1, num_states, figsize=(3 * num_states, 6))
for k in range(num_states):
A, b = weights[k], biases[k]
dxydt_m = xy.dot(A.T) + b - xy
axs[k].quiver(xy[:, 0], xy[:, 1],
dxydt_m[:, 0], dxydt_m[:, 1],
color=colors[k % len(colors)])
axs[k].set_xlabel('$x_1$')
axs[k].set_xticks([])
if k == 0:
axs[k].set_ylabel("$x_2$")
axs[k].set_yticks([])
axs[k].set_aspect("equal")
plt.tight_layout()
```
# Sample data from the ARHMM
```python
import warnings
warnings.filterwarnings("error")
```
```python
# Make an Autoregressive (AR) HMM
true_initial_distribution = tfp.distributions.Categorical(logits=np.zeros(num_states))
true_transition_distribution = tfp.distributions.Categorical(probs=transition_matrix)
import logging
logging.captureWarnings(True)
true_arhmm = GaussianARHMM(num_states,
transition_matrix=transition_matrix,
emission_weights=weights,
emission_biases=biases,
emission_covariances=covariances)
time_bins = 10000
true_states, data = true_arhmm.sample(jr.PRNGKey(0), time_bins)
```
WARNING:root:The use of `check_types` is deprecated and does not have any effect.
```python
fig = plt.figure(figsize=(8, 8))
for k in range(num_states):
plt.plot(*data[true_states==k].T, 'o', color=colors[k],
alpha=0.75, markersize=3)
plt.plot(*data[:1000].T, '-k', lw=0.5, alpha=0.2)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
# plt.gca().set_aspect("equal")
```
Below, we visualize each component of the observation variable as a time series. The colors correspond to the latent state. The dotted lines represent the stationary point of the corresponding AR state, while the solid lines are the actual observations sampled from the HMM.
```python
# Plot the data and the smoothed data
plot_slice = (0, 200)
lim = 1.05 * abs(data).max()
plt.figure(figsize=(8, 6))
plt.imshow(true_states[None, :],
aspect="auto",
cmap=cmap,
vmin=0,
vmax=len(colors)-1,
extent=(0, time_bins, -lim, (data_dim)*lim))
Ey = np.array(stationary_points)[true_states]
for d in range(data_dim):
plt.plot(data[:,d] + lim * d, '-k')
plt.plot(Ey[:,d] + lim * d, ':k')
plt.xlim(plot_slice)
plt.xlabel("time")
plt.yticks(lim * np.arange(data_dim), ["$x_{{{}}}$".format(d+1) for d in range(data_dim)])
plt.tight_layout()
```
# Fit an ARHMM
```python
# Now fit an HMM to the data
key1, key2 = jr.split(jr.PRNGKey(0), 2)
test_num_states = num_states
initial_distribution = tfp.distributions.Categorical(logits=np.zeros(test_num_states))
transition_distribution = tfp.distributions.Categorical(logits=np.zeros((test_num_states, test_num_states)))
emission_distribution = GaussianLinearRegression(
weights=np.tile(0.99 * np.eye(data_dim), (test_num_states, 1, 1)),
bias=0.01 * jr.normal(key2, (test_num_states, data_dim)),
scale_tril=np.tile(np.eye(data_dim), (test_num_states, 1, 1))
)
arhmm = GaussianARHMM(test_num_states,
data_dim,
num_lags,
seed=jr.PRNGKey(0))
lps, arhmm, posterior = arhmm.fit(data, method="em")
```
WARNING:root:The use of `check_types` is deprecated and does not have any effect.
Initializing...
Done.
[jit compiling...]:   0%|          | 0/100 [00:00<?, ?it/s]WARNING:root:The use of `check_types` is deprecated and does not have any effect.
[converged] LP: 38590.152: 7%|▋ | 7/100 [00:00<00:11, 8.22it/s]
```python
# Plot the log likelihoods against the true likelihood, for comparison
true_lp = true_arhmm.marginal_likelihood(data)
plt.plot(lps, label="EM")
plt.plot(true_lp * np.ones(len(lps)), ':k', label="True")
plt.xlabel("EM Iteration")
plt.ylabel("Log Probability")
plt.legend(loc="lower right")
plt.show()
```
```python
# # Find a permutation of the states that best matches the true and inferred states
# most_likely_states = posterior.most_likely_states()
# arhmm.permute(find_permutation(true_states[num_lags:], most_likely_states))
# posterior.update()
# most_likely_states = posterior.most_likely_states()
```
```python
if data_dim == 2:
lim = abs(data).max()
x = np.linspace(-lim, lim, 10)
y = np.linspace(-lim, lim, 10)
X, Y = np.meshgrid(x, y)
xy = np.column_stack((X.ravel(), Y.ravel()))
fig, axs = plt.subplots(2, max(num_states, test_num_states), figsize=(3 * num_states, 6))
for i, model in enumerate([true_arhmm, arhmm]):
for j in range(model.num_states):
dist = model._emissions._distribution[j]
A, b = dist.weights, dist.bias
dxydt_m = xy.dot(A.T) + b - xy
axs[i,j].quiver(xy[:, 0], xy[:, 1],
dxydt_m[:, 0], dxydt_m[:, 1],
color=colors[j % len(colors)])
axs[i,j].set_xlabel('$x_1$')
axs[i,j].set_xticks([])
if j == 0:
axs[i,j].set_ylabel("$x_2$")
axs[i,j].set_yticks([])
axs[i,j].set_aspect("equal")
plt.tight_layout()
```
```python
# Plot the true and inferred discrete states
plot_slice = (0, 1000)
plt.figure(figsize=(8, 4))
plt.subplot(211)
plt.imshow(true_states[None,num_lags:], aspect="auto", interpolation="none", cmap=cmap, vmin=0, vmax=len(colors)-1)
plt.xlim(plot_slice)
plt.ylabel("$z_{\\mathrm{true}}$")
plt.yticks([])
plt.subplot(212)
# plt.imshow(most_likely_states[None,: :], aspect="auto", cmap=cmap, vmin=0, vmax=len(colors)-1)
plt.imshow(posterior.expected_states[0].T, aspect="auto", interpolation="none", cmap="Greys", vmin=0, vmax=1)
plt.xlim(plot_slice)
plt.ylabel("$z_{\\mathrm{inferred}}$")
plt.yticks([])
plt.xlabel("time")
plt.tight_layout()
```
```python
# Sample the fitted model
sampled_states, sampled_data = arhmm.sample(jr.PRNGKey(0), time_bins)
```
WARNING:root:The use of `check_types` is deprecated and does not have any effect.
```python
fig = plt.figure(figsize=(8, 8))
for k in range(num_states):
plt.plot(*sampled_data[sampled_states==k].T, 'o', color=colors[k],
alpha=0.75, markersize=3)
plt.plot(*sampled_data.T, '-k', lw=0.5, alpha=0.2)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
# plt.gca().set_aspect("equal")
```
|
4f942ffd8e6ed9425e756aa5573f8e1de0450624
| 794,553 |
ipynb
|
Jupyter Notebook
|
notebooks/arhmm-example.ipynb
|
lindermanlab/ssm-jax-refactor
|
879243a5b649daeacea3467fab09b5a405cb4ff9
|
[
"MIT"
] | 19 |
2021-12-02T08:40:57.000Z
|
2022-03-08T15:23:37.000Z
|
notebooks/arhmm-example.ipynb
|
lindermanlab/ssm-jax-refactor
|
879243a5b649daeacea3467fab09b5a405cb4ff9
|
[
"MIT"
] | 17 |
2021-11-20T00:21:58.000Z
|
2022-02-25T11:05:53.000Z
|
notebooks/arhmm-example.ipynb
|
lindermanlab/ssm-jax-refactor
|
879243a5b649daeacea3467fab09b5a405cb4ff9
|
[
"MIT"
] | 1 |
2022-03-02T05:55:16.000Z
|
2022-03-02T05:55:16.000Z
| 1,012.169427 | 236,230 | 0.950231 | true | 4,250 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.896251 | 0.785309 | 0.703834 |
__label__eng_Latn
| 0.980929 | 0.473573 |
```python
from sympy import *
```
```python
init_printing()
```
```python
from sympy.abc import a, b, c, d, f, g, h, x, y, z
```
```python
expr = a * x**2 + b * x + c
```
```python
expr
```
```python
solve(expr, x)
```
```python
m = Matrix([[1, 2], [3, 4]])
```
```python
m
```
```python
m.inv()
```
```python
m.det()
```
```python
%matplotlib inline
```
```python
plot(x**2)
```
|
0b23be83b61dc4aa07e55caac6a09befd026615f
| 24,492 |
ipynb
|
Jupyter Notebook
|
meeting-materials/2016-01-14/sympy_demo.ipynb
|
moorepants/thehackerwithin-davis
|
49c2ae031a4624cc4e106e634cc66fa6b5341c04
|
[
"BSD-3-Clause"
] | null | null | null |
meeting-materials/2016-01-14/sympy_demo.ipynb
|
moorepants/thehackerwithin-davis
|
49c2ae031a4624cc4e106e634cc66fa6b5341c04
|
[
"BSD-3-Clause"
] | null | null | null |
meeting-materials/2016-01-14/sympy_demo.ipynb
|
moorepants/thehackerwithin-davis
|
49c2ae031a4624cc4e106e634cc66fa6b5341c04
|
[
"BSD-3-Clause"
] | null | null | null | 93.125475 | 14,136 | 0.849624 | true | 148 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.894789 | 0.766294 | 0.685671 |
__label__eng_Latn
| 0.249991 | 0.431376 |
# The Setting: Hamiltonian Dynamics on Phase Space
(sec:Ham1)=
## Configuration Space and Phase space
<!-- \label{sec:Ham1} -->
> The concepts in this section are adapted from the books {cite}`lifshitz1978,wiggins2003applied`.
* __Generalized Coordinates:__ The location of the particles comprising a system are described by a set of coordinates, known in Mechanics as generalized coordinates. The space, i.e. all possible values, described by these coordinates is referred to as *configuration space*.
* __Degrees-of-freedom (DoF):__ The number of DoF is the number of independent generalized coordinates required to describe the configuration of the system, i.e. it is the dimension of the configuration space.
* __Momenta:__ In the canonical Hamiltonian framework, each configuration space variable has an associated canonically conjugate variable which are referred to as momentum variables.
* __Phase space:__ The collection of all the configuration and momentum variables is referred to as the phase space of the Hamiltonian dynamical system.
## Hamilton's Equations
Consider an $n$ DoF system described by the scalar function $H(\mathbf{q},\mathbf{p},t)$, known as the Hamiltonian of the system, which smoothly depends on the configuration space coordinates $\mathbf{q} = (q_1,\ldots,q_n) \in \mathbb{R}^{n}$, their canonically conjugate momenta $\mathbf{p} = (p_1,\ldots,p_n) \in \mathbb{R}^{n}$ and time. Hamilton's equations are a set of $2n$ first-order differential equations:
```{math}
---
label: eq:hamiltoneq
---
\begin{cases}
\dot{q}_i = \dfrac{\partial H}{\partial p_i} \\[.4cm]
\dot{p}_i = -\dfrac{\partial H}{\partial q_i}
\end{cases}
\; , \qquad \; i = 1, 2, \ldots, n,
```
that describe the dynamics of the system, where the dot symbol over a variable denotes the total time derivative, that is $\cdot \equiv d/dt$. When the Hamiltonian function does not depend on time explicitly, the dynamical system it generates by means of Eq. {eq}`eq:hamiltoneq` is said to be __autonomous__.
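As an added numerical illustration (not part of the original chapter), the sketch below integrates Hamilton's equations for a one degree-of-freedom harmonic oscillator, $H(q,p) = \tfrac{1}{2}\left(p^2 + q^2\right)$; the specific Hamiltonian, initial condition and tolerances are illustrative choices.
```python
import numpy as np
from scipy.integrate import solve_ivp

def hamilton_rhs(t, state):
    q, p = state
    return [p, -q]            # dq/dt = dH/dp,  dp/dt = -dH/dq

sol = solve_ivp(hamilton_rhs, (0.0, 10.0), [1.0, 0.0], rtol=1e-9, atol=1e-9)
q, p = sol.y
energy = 0.5 * (p**2 + q**2)
print(np.allclose(energy, energy[0]))   # H is conserved along the trajectory
```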
## Conserved quantities in phase space
> The reader can find more details on Poisson brackets and their properties in many books and lecture notes, see e.g. {cite}`lifshitz1978,arnold1978,giorgilli2002`
Given a Hamiltonian system, a function $A(\mathbf{q}, \mathbf{p},t)$ is called an **integral of motion**, __constant of motion__ or **first integral** if it remains constant along a trajectory. Then, its total time derivative is zero along solutions $(\mathbf{q}(t),\mathbf{p}(t))$ of Hamilton's equations, that is
```{math}
:label: eq:integral
\dfrac{dA}{dt} = \dfrac{\partial A}{\partial t} + \sum\limits_{i = 1}^n \left( \dfrac{\partial A}{\partial q_i} \dfrac{d q_i}{dt} + \dfrac{\partial A}{\partial p_i} \dfrac{d p_i}{dt} \right) = \dfrac{\partial A}{\partial t} +
\sum\limits_{i = 1}^n \left( \dfrac{\partial A}{\partial q_i}\dfrac{\partial H}{\partial p_i}- \dfrac{\partial A}{\partial p_i} \dfrac{\partial H}{\partial q_i} \right) = 0.
```
The quantity
```{math}
---
label: eq:poisson
---
\{A,H\} = \sum\limits_{i = 1}^n \left( \dfrac{\partial A}{\partial q_i}\dfrac{\partial H}{\partial p_i}- \dfrac{\partial A}{\partial p_i} \dfrac{\partial H}{\partial q_i} \right),
```
is called the *Poisson bracket* of the functions $A$ and $H$. Therefore, Eq. {eq}`eq:integral` is equivalent to:
```{math}
---
label: eq:integral2
---
\dfrac{dA}{dt} = \frac{\partial A}{\partial t} + \{A,H\} = 0
```
Notice that if the function $A$ does not explicitly depend on time, that is $\partial A / \partial t = 0$, then $A$ is an integral of motion if and only if $\{A, H\} = 0$. In other words, the Poisson bracket provides us with a useful test to see if a function of the phase space variables and time is conserved or not in a Hamiltonian system. In particular, if the Hamiltonian of the system is time-independent, that is $H = H(\mathbf{q},\mathbf{p})$, then we can deduce that:
```{math}
\frac{dH}{dt} = \dfrac{\partial H}{\partial t} + \{ H, H \} = 0 + \sum\limits_{i = 1}^n \left( \dfrac{\partial H}{\partial q_i}\dfrac{\partial H}{\partial p_i}- \dfrac{\partial H}{\partial p_i} \dfrac{\partial H}{\partial q_i} \right) = 0 \;,
```
which implies that the Hamiltonian itself is a constant of the motion. If the Hamiltonian represents the total energy of the physical system, this property is just the law of conservation of total energy along trajectories of an autonomous Hamiltonian system. The implicit time dependence of the position and momentum coordinates may increase/decrease the kinetic energy at the expense/gain of the potential energy, but the sum of kinetic and potential energy remains constant.
Another example of a quantity that is a constant of the motion is provided by ignorable or cyclic coordinates. A generalized coordinate $q_i$ is said to be *ignorable* or *cyclic* if it does not appear in the expression of the Hamiltonian function. By Hamilton's equations in Eq. {eq}`eq:hamiltoneq`, this implies that
\begin{equation}
\dot{p}_i = - \dfrac{\partial H}{\partial q_i} = 0
\end{equation}
and therefore the momentum $p_i$ is constant along trajectories, that is, $p_i(t) = p_i^0$.
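A small added symbolic check of the last two statements (the two degree-of-freedom Hamiltonian below is an illustrative choice in which $q_2$ is cyclic):
```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')

def poisson_bracket(A, B, coords, momenta):
    return sum(sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)
               for q, p in zip(coords, momenta))

H = (p1**2 + p2**2) / 2 + sp.cos(q1)     # q2 does not appear, so it is cyclic

print(sp.simplify(poisson_bracket(p2, H, [q1, q2], [p1, p2])))   # 0: p2 is conserved
print(sp.simplify(poisson_bracket(H, H, [q1, q2], [p1, p2])))    # 0: H is conserved
```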
We introduce next a concept which is important for the study of integrable Hamiltonian systems. Integrable Hamiltonian systems are those that can be solved by quadratures and are characterized by the Liouville-Arnold theorem, see {cite}`arnold1978` and the contents in Section {ref}`2 <sec:Ham2>`. Two constants of the motion $A$ and $B$ are said to be in *involution* if they satisfy
```{math}
---
label: eq:involution
---
\{A,B\} = 0 \;.
```
## Invariant Sets
> The concepts here are adapted from {cite}`wiggins2003applied`.
Invariant sets play a fundamental role in how we understand the nature of phase space dynamics. We will give here a definition for this concept for an autonomous dynamical system in the continuous time setting:
```{math}
---
label: eq:cont_ds
---
\dot{x} = f(x) \;, \quad x \in \mathbb{R}^{n}
```
and also for a map (discrete time dynamics):
```{math}
---
label: eq:disc_ds
---
x \mapsto g(x) \;, \quad x\in \mathbb{R}^{n}
```
__Definition__
Let $S \subset \mathbb{R}^{n}$ be a set of the phase space of the dynamical system, then
* __Continuous time:__ $S$ is invariant under the flow generated by Eq. {eq}`eq:cont_ds` if for any point $x_{0} \in S$ we have that $x(t;x_{0}) \in S$ for all $t \in I$, where $x(t;x_{0})$ denotes the solution of Eq. {eq}`eq:cont_ds` with initial condition $x(0) = x_{0}$, and $I$ is the time interval of existence of the solution.
* __Discrete time:__ $S$ is said to be invariant under the map in Eq. {eq}`eq:disc_ds` if for any $x_{0}\in S$, the orbit (trajectory) associated to that initial condition remains inside the set for all iterates of the map, that is $g^{n}(x_{0})\in S$, for all $n$.
Invariant sets play an important role for the analysis of dynamical systems, as they allow us to break up the dynamics into smaller parts. For example the dynamics in invariant sets can be investigated separately from the rest of the system. As we will show in the following sections, some invariant sets, such as invariant manifolds, naturally divide the phase space of the system into regions of qualitatively distinct dynamical behavior that can be studied independently.
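As an added numerical illustration (with an arbitrarily chosen rotation angle), the unit circle is an invariant set of the linear map $x \mapsto Rx$ for a rotation matrix $R$:
```python
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.column_stack([np.cos(angles), np.sin(angles)])    # points with |x| = 1

iterated = circle @ R.T                                       # one application of the map
print(np.allclose(np.linalg.norm(iterated, axis=1), 1.0))     # still on the circle
```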
# References
```{bibliography} bibliography/chapter1.bib
```
|
d4c0b27fad313313213ae13cdcfb9a0970659901
| 9,549 |
ipynb
|
Jupyter Notebook
|
book/content/.ipynb_checkpoints/chapter1_1-checkpoint.ipynb
|
champsproject/lagrangian_descriptors
|
b3a88a2243bd5b0dce7cc945f9504bfadc9a4b19
|
[
"CC-BY-4.0"
] | 12 |
2020-07-24T17:35:42.000Z
|
2021-08-12T17:31:53.000Z
|
book/_build/html/_sources/content/chapter1_1.ipynb
|
champsproject/lagrangian_descriptors
|
b3a88a2243bd5b0dce7cc945f9504bfadc9a4b19
|
[
"CC-BY-4.0"
] | 12 |
2020-05-26T17:28:38.000Z
|
2020-07-27T10:40:54.000Z
|
book/content/chapter1_1.ipynb
|
champsproject/lagrangian_descriptors
|
b3a88a2243bd5b0dce7cc945f9504bfadc9a4b19
|
[
"CC-BY-4.0"
] | null | null | null | 54.565714 | 492 | 0.624777 | true | 2,110 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.887205 | 0.863392 | 0.766005 |
__label__eng_Latn
| 0.997044 | 0.618018 |
This notebook is part of https://github.com/AudioSceneDescriptionFormat/splines, see also http://splines.readthedocs.io/.
# Bézier Splines
See also https://pomax.github.io/bezierinfo/.
There are several ways to get to Bézier curves, one was already shown in
[the notebook about Hermite curves](hermite-uniform.ipynb#Relation-to-Bézier-Splines)
(but only for cubic curves).
TODO: first explain control polylines and then link to Hermite splines?
Another one is the so-called De Casteljau's algorithm. (TODO: link to De Casteljau)
One nice aspect of this is that the algorithm can be used for arbitrary polynomial degrees.
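Since the sections below unroll this algorithm degree by degree, here is a compact general-purpose sketch of De Casteljau evaluation for reference. It is an added illustration, not the implementation in the local [casteljau.py](casteljau.py) used later for the animations.
```python
import numpy as np

def de_casteljau(control_points, u):
    """Evaluate a Bézier curve of arbitrary degree at parameter u (0 <= u <= 1)
    by repeatedly forming affine combinations of neighboring control points."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - u) * pts[:-1] + u * pts[1:]
    return pts[0]

# e.g. the point at u = 0.5 on a quadratic Bézier curve:
de_casteljau([(0, 0), (0.2, 0.5), (1, -0.3)], 0.5)
```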
A Bézier spline is defined by a so-called *control polyline* (or *control polygon*), which comprises a sequence of *control points*.
Some of those control points are part of the final spline curve, others lie outside of it.
The degree of a spline segment determines how many "off-curve" control points are between two "on-curve" control points.
For example, in a cubic (degree = 3) Bézier spline there are two (= degree - 1) "off-curve" control points.
Two equally valid viewpoints for what a Bézier spline is:
* A sequence of curve segments, each defined by degree + 1 control points.
The first control point of a segment is the same as the last control point of the previous one.
* A sequence of control points that can be used to shape the resulting curve.
Every degree'th control point lies on the curve and the others define the shape of the curve segments.
TODO: most well-known: cubic Bézier splines (show screenshot from drawing program, e.g. Inkscape).
The two "off-curve" control points are shown as "handles".
TODO: typical set of constraints on continuity in drawing programs: C0, C1, G1
### Preparations
Before we continue, here are are few preparations for the following calculations:
```python
%matplotlib inline
%config InlineBackend.print_figure_kwargs = {'bbox_inches': None}
import matplotlib.pyplot as plt
import numpy as np
import sympy as sp
sp.init_printing()
```
We import stuff from the file [utility.py](utility.py):
```python
from utility import NamedExpression, NamedMatrix
```
Let's prepare a few symbols for later use:
```python
t, x0, x1, x2, x3, x4 = sp.symbols('t, xbm:5')
```
... and a helper function for plotting:
```python
def plot_curve(func, points, dots=30, ax=None):
if ax is None:
ax = plt.gca()
times = np.linspace(0, 1, dots)
ax.plot(*func(points, times).T, '.')
ax.scatter(*np.asarray(points).T, marker='x', c='black')
ax.set_title(func.__name__ + ' Bézier curve')
ax.axis('equal')
```
We also need to prepare for the animations we will see below.
This is using code from the file [casteljau.py](casteljau.py):
```python
from casteljau import create_animation
from IPython.display import display, HTML
def show_casteljau_animation(points, frames=30, interval=200):
ani = create_animation(points, frames=frames)
display(HTML(ani.to_jshtml(default_mode='reflect')))
plt.close() # avoid spurious figure display
```
### Degree 1, a.k.a. linear
But let's start with the trivial case:
A Bézier spline of degree 1 is just a piecewise linear curve connecting all the control points.
There are no "off-curve" control points that could bend the curve segments.
Assume that we have two control points, $\boldsymbol{x}_0$ and $\boldsymbol{x}_1$ ...
... linear equation ...:
\begin{equation}
\boldsymbol{p}_{0,1}(t) = \boldsymbol{x}_0 + t (\boldsymbol{x}_1 - \boldsymbol{x}_0)
\end{equation}
... in other words ... this is called *affine combination*, but we don't really have to worry about it ...
\begin{equation}
\boldsymbol{p}_{0,1}(t) = (1 - t) \boldsymbol{x}_0 + t \boldsymbol{x}_1
\end{equation}
... with $t \in [0, 1]$ (which is called *uniform*)
TODO: show change of variables for *non-uniform* curve?
Since we will be needing quite a bunch of those affine combinations, let's create a helper function:
```python
def affine_combination(one, two):
return (1 - t) * one + t * two
```
Now we can define the equation in SymPy:
```python
p01 = NamedExpression('pbm_0,1', affine_combination(x0, x1))
p01
```
```python
b1 = [p01.expr.expand().coeff(x.name).factor() for x in (x0, x1)]
b1
```
Doesn't look like much, but those are the Bernstein bases for degree 1 (<https://en.wikipedia.org/wiki/Bernstein_polynomial>).
It doesn't get much more interesting if we plot them:
```python
sp.plot(*b1, (t, 0, 1));
```
If you want to convert this to coefficients for the monomial basis $[t, 1]$ instead of the Bernstein basis functions, you can use this matrix:
```python
M_B1 = NamedMatrix(
r'{M_\text{B}^{(1)}}',
sp.Matrix([[c.coeff(x) for x in (x0, x1)]
for c in p01.expr.as_poly(t).all_coeffs()]))
M_B1
```
Applying this matrix leads to the coefficients of the linear equation mentioned in the beginning of this section
($\boldsymbol{p}_{0,1}(t) = t (\boldsymbol{x}_1 - \boldsymbol{x}_0) + \boldsymbol{x}_0$):
```python
sp.MatMul(M_B1.expr, sp.Matrix([x0, x1]))
```
```python
_.doit()
```
If you ever need that, here's the inverse:
```python
M_B1.I
```
Anywho, let's calculate points on the curve by using the Bernstein basis functions:
```python
def linear(points, times):
"""Evaluate linear Bézier curve (given by two points) at given times."""
return np.column_stack(sp.lambdify(t, b1)(times)) @ points
```
```python
points = [
(0, 0),
(1, 0.5),
]
```
```python
plot_curve(linear, points)
```
```python
show_casteljau_animation(points)
```
I know, not very exciting. But it gets better!
### Degree 2, a.k.a. quadratic
Consider three control points, $\boldsymbol{x}_0$, $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ ...
We use the affine combinations of the first two points from above ...
```python
p01
```
... and we do the same thing for the second and third point:
```python
p12 = NamedExpression('pbm_1,2', affine_combination(x1, x2))
p12
```
Finally, we make another affine combination of those two results:
```python
p02 = NamedExpression('pbm_0,2', affine_combination(p01.expr, p12.expr))
p02
```
Bernstein basis functions:
```python
b2 = [p02.expr.expand().coeff(x.name).factor() for x in (x0, x1, x2)]
b2
```
```python
sp.plot(*b2, (t, 0, 1));
```
```python
M_B2 = NamedMatrix(
r'{M_\text{B}^{(2)}}',
sp.Matrix([[c.coeff(x) for x in (x0, x1, x2)]
for c in p02.expr.as_poly(t).all_coeffs()]))
M_B2
```
```python
M_B2.I
```
```python
def quadratic(points, times):
"""Evaluate quadratic Bézier curve (given by three points) at given times."""
return np.column_stack(sp.lambdify(t, b2)(times)) @ points
```
```python
points = [
(0, 0),
(0.2, 0.5),
(1, -0.3),
]
```
```python
plot_curve(quadratic, points)
```
```python
show_casteljau_animation(points)
```
For some more insight, let's look at the first derivative of the curve (i.e. the tangent vector):
```python
v02 = p02.expr.diff(t)
```
... at the beginning and the end of the curve:
```python
v02.subs(t, 0)
```
```python
v02.subs(t, 1)
```
This shows that the tangent vector at the beginning and end of the curve is parallel to the line
from $\boldsymbol{x}_0$ to $\boldsymbol{x}_1$ and
from $\boldsymbol{x}_1$ to $\boldsymbol{x}_2$, respectively.
The length of the tangent vectors is twice the length of those lines.
You might have already seen that coming, but it turns out that the last line in de Casteljau's algorithm ($\boldsymbol{p}_{1,2}(t) - \boldsymbol{p}_{0,1}(t)$ in our case) is exactly half of the tangent vector (at any given $t \in [0, 1]$).
```python
(v02 - 2 * (p12.expr - p01.expr)).simplify()
```
In case you are wondering, the factor 2 comes from the degree 2 of our quadratic curve.
### Degree 3, a.k.a. cubic
Consider four control points, $\boldsymbol{x}_0$, $\boldsymbol{x}_1$, $\boldsymbol{x}_2$ and $\boldsymbol{x}_3$ ...
By now, the pattern should be clear: We take the result from the first three points from above and affine-combine it with the result for the three points $\boldsymbol{x}_1$, $\boldsymbol{x}_2$ and $\boldsymbol{x}_3$.
Combination of $\boldsymbol{x}_2$ and $\boldsymbol{x}_3$:
```python
p23 = NamedExpression('pbm_2,3', affine_combination(x2, x3))
p23
```
Combination of $\boldsymbol{x}_1$, $\boldsymbol{x}_2$ and $\boldsymbol{x}_3$:
```python
p13 = NamedExpression('pbm_1,3', affine_combination(p12.expr, p23.expr))
p13
```
Combination of $\boldsymbol{x}_0$, $\boldsymbol{x}_1$, $\boldsymbol{x}_2$ and $\boldsymbol{x}_3$:
```python
p03 = NamedExpression('pbm_0,3', affine_combination(p02.expr, p13.expr))
p03
```
Bernstein bases:
```python
b3 = [p03.expr.expand().coeff(x.name).factor() for x in (x0, x1, x2, x3)]
b3
```
TODO: show that those are the same Bernstein bases as in the notebook about Hermite splines
```python
sp.plot(*b3, (t, 0, 1));
```
```python
M_B3 = NamedMatrix(
r'{M_\text{B}^{(3)}}',
sp.Matrix([[c.coeff(x) for x in (x0, x1, x2, x3)]
for c in p03.expr.as_poly(t).all_coeffs()]))
M_B3
```
```python
M_B3.I
```
```python
def cubic(points, times):
"""Evaluate cubic Bézier curve (given by four points) at given times."""
return np.column_stack(sp.lambdify(t, b3)(times)) @ points
```
```python
points = [
(0, 0.3),
(0.2, 0.5),
(0.1, 0),
(1, 0.2),
]
```
```python
plot_curve(cubic, points)
```
```python
show_casteljau_animation(points)
```
As before, let's look at the derivative (i.e. the tangent vector) of the curve:
```python
v03 = p03.expr.diff(t)
```
... at the beginning and the end of the curve:
```python
v03.subs(t, 0)
```
```python
v03.subs(t, 1)
```
This shows that the tangent vector at the beginning and end of the curve is parallel to the line
from $\boldsymbol{x}_0$ to $\boldsymbol{x}_1$ and
from $\boldsymbol{x}_2$ to $\boldsymbol{x}_3$, respectively.
The length of the tangent vectors is three times the length of those lines.
We can now see that the last line in de Casteljau's algorithm ($\boldsymbol{p}_{1,3}(t) - \boldsymbol{p}_{0,2}(t)$ in this case) is exactly a third of the tangent vector (at any given $t \in [0, 1]$):
```python
(v03 - 3 * (p13.expr - p02.expr)).simplify()
```
Again, the factor 3 comes from the degree 3 of our curve.
We now know the tangent vectors at the beginning and the end of the curve, and obviously we know the values of the curve at the beginning and the end:
```python
p03.expr.subs(t, 0), p03.expr.subs(t, 1)
```
With these four pieces of information, we can find a transformation from the four Bézier control points to the two control points and two tangent vectors of Hermite splines:
```python
M_BtoH = NamedMatrix(
r'{M_\text{B$\to$H}}',
sp.Matrix([[expr.coeff(cv) for cv in [x0, x1, x2, x3]]
for expr in [x0, x3, v03.subs(t, 0), v03.subs(t, 1)]]))
M_BtoH
```
And we can simply invert this if we want to go in the other direction, from Hermite to Bézier:
```python
M_BtoH.I.pull_out(sp.S.One / 3)
```
Of course, those are the same matrices as shown in the [notebook about uniform cubic Hermite splines](hermite-uniform.ipynb).
TODO: show tangent vectors for non-uniform case
### Degree 4, a.k.a. quartic
Consider five control points, $\boldsymbol{x}_0$, $\boldsymbol{x}_1$, $\boldsymbol{x}_2$, $\boldsymbol{x}_3$ and $\boldsymbol{x}_4$ ...
More combinations!
```python
p34 = NamedExpression('pbm_3,4', affine_combination(x3, x4))
p24 = NamedExpression('pbm_2,4', affine_combination(p23.expr, p34.expr))
p14 = NamedExpression('pbm_1,4', affine_combination(p13.expr, p24.expr))
p04 = NamedExpression('pbm_0,4', affine_combination(p03.expr, p14.expr))
p04
```
Kinda long, but anyway, let's try to extract the Bernstein bases:
```python
b4 = [p04.expr.expand().coeff(x.name).factor() for x in (x0, x1, x2, x3, x4)]
b4
```
```python
sp.plot(*b4, (t, 0, 1));
```
```python
M_B4 = NamedMatrix(
'{M_B^{(4)}}',
sp.Matrix([[c.coeff(x) for x in (x0, x1, x2, x3, x4)]
for c in p04.expr.as_poly(t).all_coeffs()]))
M_B4
```
```python
M_B4.I
```
```python
def quartic(points, times):
"""Evaluate quartic Bézier curve (given by five points) at given times."""
return np.column_stack(sp.lambdify(t, b4)(times)) @ points
```
```python
points = [
(0, 0),
(0.5, 0),
(0.7, 1),
(1, 1.5),
(-1, 1),
]
```
```python
plot_curve(quartic, points)
```
```python
show_casteljau_animation(points)
```
For completeness' sake, let's look at the derivative (i.e. the tangent vector) of the curve:
```python
v04 = p04.expr.diff(t)
```
... at the beginning and the end of the curve:
```python
v04.subs(t, 0)
```
```python
v04.subs(t, 1)
```
By now it shouldn't be surprising that the tangent vector at the beginning and end of the curve is parallel to the line
from $\boldsymbol{x}_0$ to $\boldsymbol{x}_1$ and
from $\boldsymbol{x}_3$ to $\boldsymbol{x}_4$, respectively.
The length of the tangent vectors is four times the length of those lines.
The last line in de Casteljau's algorithm ($\boldsymbol{p}_{1,4}(t) - \boldsymbol{p}_{0,3}(t)$ in this case) is exactly a fourth of the tangent vector (at any given $t \in [0, 1]$):
```python
(v04 - 4 * (p14.expr - p03.expr)).simplify()
```
Again, the factor 4 comes from the degree 4 of our curve.
### Arbitrary Degree
We could go on doing this for higher and higher degrees, but this would get more and more annoying.
Luckily, there is a closed formula available to calculate Bernstein polynomials for an arbitrary degree $n$!
\begin{equation}
b_{i,n}(x) = {n \choose i} x^i \left( 1 - x \right)^{n - i}, \quad i = 0, \ldots, n.
\end{equation}
with the *binomial coefficient* ${n \choose i} = \frac{n!}{i!(n - i)!}$.
TODO: link to proof?
TODO: show Bernstein polynomials for "quintic" etc.?
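As a quick added check (reusing the SymPy symbol `t` and the cubic bases `b3` from the cells above), the closed formula generates the Bernstein basis of any degree directly:
```python
def bernstein_bases(n):
    """Bernstein basis polynomials of degree n, from the closed formula above."""
    return [sp.binomial(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)]

# sanity check against the cubic bases derived via repeated affine combinations
assert all(sp.simplify(b - c) == 0 for b, c in zip(bernstein_bases(3), b3))

# e.g. plot the quintic (degree 5) basis functions
sp.plot(*bernstein_bases(5), (t, 0, 1));
```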
```python
show_casteljau_animation([
(0, 0),
(-1, 1),
(-0.5, 2),
(1, 2.5),
(2, 2),
(2, 1.5),
(0.5, 0.5),
(1, -0.5),
])
```
```python
```
|
07371fcc0aa24a27c6803015fe33aa353fa85f37
| 27,399 |
ipynb
|
Jupyter Notebook
|
doc/bezier.ipynb
|
mgeier/splines
|
f54b09479d98bf13f00a183fd9d664b5783e3864
|
[
"MIT"
] | null | null | null |
doc/bezier.ipynb
|
mgeier/splines
|
f54b09479d98bf13f00a183fd9d664b5783e3864
|
[
"MIT"
] | null | null | null |
doc/bezier.ipynb
|
mgeier/splines
|
f54b09479d98bf13f00a183fd9d664b5783e3864
|
[
"MIT"
] | null | null | null | 23.763226 | 248 | 0.533414 | true | 4,197 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.859664 | 0.798187 | 0.686172 |
__label__eng_Latn
| 0.970564 | 0.432539 |
```python
%matplotlib inline
import seaborn as sns
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from scipy import stats
```
# Regression Diagnostics
Whenever you undertake regression analysis of any kind you should run diagnostic tests that check the shape of your data and the fit of your model to that data.
## Not common in predictive modeling
You won't see many Kaggle competitions running these tests because they aren't as important for predictive modeling. They are less important (and sometimes completely ignored) in predictive modeling because the end-all be-all of predictive modeling is how accurate your model's predictions are on an "out of sample" dataset. This is why we split our dataset into two random halves and then fit our model parameters using one half, and test the accuracy of our model's predictions using the other half. (It doesn't have to be 50-50 necessarily, but just an example.)
## Necessary for inferential regression modeling
However, if you ever need to run regression analysis for the purposes of inferential modeling (because you intend to interpret and be informed by variable coefficients), these tests are of utmost importance. Each of these tests exists to check a certain assumption that we're making about the shape of our data or our model's fit to it. If one or multiple of these assumptions are violated, then doubt is cast on the reliability of our variable coefficients.
# Estimating Parameters
You'll remember that OLS and Gradient-Descent based methods of linear regression modeling both seek to **estimate** parameters that "minimize the sum of the squared error." Because we have been more focused on predictive modeling we haven't talked as much about what it means for a parameter to be an "estimate."
An estimated regression coefficient represents the **mean** change in our response variable (y) given a one unit change in the predictor. But because it is an estimate, there is a certain confidence interval around our estimate of the coefficient. The confidence interval is vital to our interpretation of regression coefficients.
## A Parameter Estimation Example
Suppose I was fitting a regression model and calculated its coefficients and substituted them into the equation:
\begin{align}
\hat{y} = .42+ 2.05x
\end{align}
We've well established in past lectures that $\hat{\beta}_1$ represents the slope of our regression line, but we haven't talked about how this is just an **estimate** for the slope of our regression line, and as an estimate has an associated confidence interval.
Let's say that we calculated the 95% confidence interval for $\hat{\beta}_1$ and it came out to be $(1.9, 2.2)$. This means that we can only be 95% confident that the average effect of x on y is within this range. Up to this point we have just taken the reported coefficient as gospel, but a lot of conditions need to be satisfied in order for us to trust regression coefficients. We'll talk about a few of them today.
```python
# We can create scatterplots that show the confidence interval!
heights = np.array([50,52,53,54,58,60,62,64,66,67, 68,70,72,74,76,55,50,45,65])
weights = np.array([25,50,55,75,80,85,50,65,85,55,45,45,50,75,95,65,50,40,45])
sns.regplot(heights, weights, color='blue').set_title('Height by Weight');
```
## Standard Error of a Coefficient
While we can calculate a 95% confidence interval for any estimated parameter, we usually won't refer to the potential spread of parameter estimates by its confidence interval. We'll usually refer to how wide or how narrow the spread is by referring to what's called the "Standard Error."
The Standard Error (SE) of a coefficient estimate is the estimated standard deviation of the error in measuring it. So the coefficient itself is the **estimated mean effect** of x on y, and the Standard Error is the **estimated standard deviation** of our coefficient. We use standard errors to calculate the confidence interval.
## Standard Error of the Regression
The standard error of a coefficient is different from the standard error of the regression. The standard error of the regression as a whole is the average distance that points fall from the regression line.
\begin{align}
SE_{est} = \sqrt{\frac{\sum(y_i-\hat{y})^2}{N}}
\end{align}
Does the numerator of that equation look familiar to you? I hope it does by now.
Standard Error of the regression as a whole is the average distance that datapoints fall from the regression line.
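As a small added sketch (with placeholder arrays rather than our housing data), the formula above translates directly into code:
```python
def standard_error_of_regression(y_actual, y_predicted):
    """Square root of the mean squared residual, per the formula above."""
    residuals = np.asarray(y_actual) - np.asarray(y_predicted)
    return np.sqrt(np.mean(residuals**2))

standard_error_of_regression([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])
```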
## Precision vs Accuracy
### Accuracy
A regression coefficient that is "Accurate" is centered around its "true" value. The problem here is that we don't know what the true value actually is, so when we say that a coefficient is more accurate we mean that we suspect that it better represents ground truth.
The more observations we have, the more precise our estimates will be.
### Precision
A regression coefficient that is "Precise" has a small standard error. It has a tighter confidence interval as well.
# Gauss Markov Assumptions
There are five Gauss Markov assumptions (also called conditions) that are required for OLS to be BLUE (the "Best Linear Unbiased Estimator").
**0) Well Defined:** $X^{T}X$ is invertible (no perfect multicollinearity), i.e. $|X^{T}X| \neq 0$ (see the small sketch below)
**1) Linearity:** the parameters we are estimating using the OLS method must be themselves linear.
**2) Random:** our data must have been randomly sampled from the population.
**3) Non-Collinearity:** the regressors (x vars) being calculated aren’t perfectly (or highly) correlated with each other.
**4) Exogeneity:** the regressors (x vars) aren’t correlated with the error term.
- Omitted Variables Bias (Ice Cream Sales and Burglaries)
    - Instrumental Variables: A regression of education on earnings would be biased because both education and earnings are influenced by natural ability. We use an additional "Instrumental Variable" that is correlated with years of schooling but isn't correlated with ability in order to estimate the effect of years of schooling on earnings. (Quarter of birth - Angrist and Krueger)
**5) Homoskedasticity:** no matter what the values of our regressors might be, the variance of the error is constant.
[Statistics How To - Gauss Markov Assumptions](https://www.statisticshowto.datasciencecentral.com/gauss-markov-theorem-assumptions/)
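To make assumption 0 concrete, here is a small added sketch (with a made-up design matrix) showing that a perfectly collinear column makes $X^{T}X$ singular, so OLS has no unique solution:
```python
X_demo = np.array([[1., 2., 4.],
                   [1., 3., 6.],
                   [1., 5., 10.],
                   [1., 7., 14.]])            # third column is exactly 2x the second
print(np.linalg.matrix_rank(X_demo))          # 2, not 3
print(np.linalg.det(X_demo.T @ X_demo))       # ~0, so (X'X)^-1 does not exist
```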
# Enough Terminology Zoo, Lets Do Stuff!
# Finding Standard Errors of Coefficients
Scikit-Learn is built to be a machine learning library, and machine learning typically prioritizes making accurate predictions over interpreting model parameters. Due to this, there aren't any easy ways to calculate standard errors of our coefficients using Sklearn. We'll need to use a different library called **statsmodels**.
### Preliminary steps
```python
# Read in dataset
df = pd.read_csv("https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv")
df.columns.tolist()
```
```python
# Most homes weren't renovated
df['yr_renovated'].value_counts().head()
```
```python
# Drop columns that I don't care about
df = df.drop(columns=['id','date','zipcode','lat','long','yr_renovated'])
```
```python
# Plot scatterplots
target = 'price'
features = df.columns.drop(target)
for feature in features:
sns.scatterplot(x=feature, y=target, data=df, alpha=0.1)
plt.show()
```
```python
# Prepare X and y
target = 'price'
features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot',
'floors', 'waterfront', 'view', 'condition', 'grade',
'sqft_above', 'sqft_basement', 'yr_built',
'sqft_living15', 'sqft_lot15']
X = df[features]
y = df[target]
```
```python
# Use Statsmodels to run a regression
model = sm.OLS(y, sm.add_constant(X))
results = model.fit()
print(results.summary())
```
### Interpretation of P-Value
"The p-value for each term tests the null hypothesis that the coefficient is equal to zero (no effect). A low p-value (< 0.05) indicates that you can reject the null hypothesis. In other words, a predictor that has a low p-value is likely to be a meaningful addition to your model because changes in the predictor's value are related to changes in the response variable." [Minitab Blog](http://blog.minitab.com/blog/adventures-in-statistics-2/how-to-interpret-regression-analysis-results-p-values-and-coefficients)
## Remove Outliers
```python
print(df.shape)
df = df[(np.abs(stats.zscore(df)) < 3).all(axis=1)]
print(df.shape)
```
```python
# Re-run regression without outliers.
X = df[features]
y = df[target]
model = sm.OLS(y, sm.add_constant(X))
results = model.fit()
print(results.summary())
```
## Log-Linear Regression
```python
df['ln_price'] = np.log(df['price'])
df = df.drop(columns='price')
target = 'ln_price'
features = df.columns.drop(target)
for feature in features:
sns.scatterplot(x=feature, y=target, data=df, alpha=0.1)
plt.show()
```
```python
# Log-Linear Regression
X = df[features]
y = df[target]
model = sm.OLS(y, sm.add_constant(X))
results = model.fit()
print(results.summary())
```
[King County](https://www.google.com/maps/place/King+County,+WA/@47.4269284,-122.9244266,8z/data=!3m1!4b1!4m5!3m4!1s0x54905c8c832d7837:0xe280ab6b8b64e03e!8m2!3d47.5480339!4d-121.9836029)
# Collinearity/Multicollinearity
When two variables are close to being a linear combination of each other we call this **collinearity** or having high levels of collinearity. If there are three or more variables all with significant levels of collinearity we call this "multicollinearity" but people basically use the two terms interchangeably.
## Perfect Multicollinearity
Variables are **perfectly** collinear when the vectors that represent them are linearly dependent. This means that if plotted against each other in a scatter plot, all of the points would fall on the same line. We mentioned briefly that perfect multicollinearity breaks OLS because it makes it so that the X matrix is not invertible.
Perfect multicollinearity is usually caused by careless feature engineering, typically through transforming the units of a variable and then keeping both variables in the regression. It can also be created through the one-hot-encoding of binary categorical variables.
## Why is Collinearity Bad?
High levels of collinearity in a dataset are bad because they increase standard errors and therefore make estimates of our coefficients less precise. Very high levels of collinearity (nearing perfect multicollinearity) can cause standard errors to grow drastically.
### Example of two collinear features:
```python
sns.scatterplot(x='sqft_basement', y='sqft_living', data=df, alpha=0.1);
```
## Testing for high levels of collinearity
We test for high levels of collinearity by calculating the dataset's **Variance Inflation Factor** or VIF. From Wikipedia:
> "In statistics, the variance inflation factor (VIF) is the ratio of variance in a model with multiple terms, divided by the variance of a model with one term alone. It quantifies the severity of multicollinearity in an ordinary least squares regression analysis. It provides an index that measures how much the variance (the square of the estimate's standard deviation) of an estimated regression coefficient is increased because of collinearity." [VIF Wikipedia](https://en.wikipedia.org/wiki/Variance_inflation_factor)
As a rule of thumb any variable that has a VIF > 10 needs to be dealt with (probably dropped from your model). If you see a VIF greater than 10 it is likely that two x variables are highly correlated. Remember that we can use the correlation matrix to check levels of correlation between our independent variables.
(Ignore the variance inflation factor for the constant. It should be high, even infinite.)
https://www.statsmodels.org/stable/generated/statsmodels.stats.outliers_influence.variance_inflation_factor.html
```python
from statsmodels.stats.outliers_influence import variance_inflation_factor
X = sm.add_constant(X)
vif = [variance_inflation_factor(X.values, i) for i in range(len(X.columns))]
pd.Series(vif, X.columns)
```
### Exclude collinear features and refit model
```python
target = 'ln_price'
features = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'yr_built',
'sqft_living15',
'sqft_lot15']
y = df[target]
X = df[features]
X = sm.add_constant(X)
vif = [variance_inflation_factor(X.values, i) for i in range(len(X.columns))]
pd.Series(vif, X.columns)
```
```python
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
```
# Homoskedasticity and Heteroskedasticity
What big, complicated words. Also, some people spell them "homoscedasticity" and "heteroscedasticity" but that just feels wrong to me somehow.
## Homoskedasticity
Homoskedasticity means that along our entire domain (x axis) the residuals are about the same distance from our regression line (on average).
## Heteroskedasticity.
Our data points exhibit heteroskedasticity when they don't exhibit homoskedasticity. This is much easier to explain by just showing a picture.
Looking at scatterplots of our data are there any places where we might be worried about heteroskedasticity?
```python
target = 'ln_price'
features = df.columns.drop(target)
for feature in features:
sns.lmplot(x=feature, y=target, data=df, scatter_kws=dict(alpha=0.1))
plt.show()
```
## Which variables might potentially be offenders?
## Addressing Heteroskedasticity
If heteroskedasticity exists in our dataset it will damage our standard errors and make our estimates less precise. You have to remember that anything that damages the reliability of standard errors also damages the reliability of confidence intervals and hypothesis tests. Therefore, these challenges that damage standard errors also damage a whole host of statistical tools that we would normally like to rely on.
Dealing with heteroskedasticity is pretty straightforward: we simply employ what are called "robust standard errors." I won't go into depth on how this works here, but robust standard errors essentially correct for heteroskedasticity in our data while the side effects are minimal. Due to this, if you are suspicious of heteroskedasticity in your dataset and you intend to interpret the coefficients of your model, you should run the regression using robust standard errors the majority of the time. Let's see how much our regression output changes when we use robust standard errors.
```python
# Let's run our regression again using Robust Standard Errors
# cov_type='HC3' parameter to .fit() function
# Log-Linear Regression
X = df[['bedrooms', 'bathrooms', 'sqft_living',
'sqft_lot', 'floors', 'waterfront', 'view', 'condition', 'grade',
'sqft_above', 'sqft_basement', 'yr_built',
'sqft_living15', 'sqft_lot15']]
y = df['ln_price']
model = sm.OLS(y, X)
results = model.fit(cov_type='HC3')
print(results.summary())
```
# Function Form Misspecification
Say we wanted to fit a polynomial log-linear model to this data. How might we identify (besides visually) potential candidates for polynomial functions? First off, what does the eyeball test point out might be potential candidates for polynomial forms? Here come scatter plots again.
```python
target = 'ln_price'
features = df.columns.drop(target)
for feature in features:
sns.lmplot(x=feature, y=target, data=df, scatter_kws=dict(alpha=0.1))
plt.show()
```
I think sqft_living and sqft_above at a minimum are potential candidates for polynomial terms. I want to remind you what an underfit linear regression looks like:
This shows that the residuals of an underfit curved functional form will oscillate from negative residuals, to positive, and then back to negative.
We might expect the residual plot to look something like this:
Truly, any bowing in our residuals is cause for concern. Let's plot the actual distribution of the residual graphs and see if our residuals match our eyeball test.
# Residual Plots
Plotting our residuals to see their distribution is an extremely useful model diagnostic technique. Lets get familiar with it.
The Seaborn library coming through like a champ, yet again.
```python
for feature in features:
sns.residplot(X[feature], y, lowess=True, line_kws=dict(color='r'))
plt.show()
```
From our residual plots, I think we can suspect that sqft_lot, sqft_lot15, and yr_built all might be candidates for polynomial forms. Let's generate some squared terms and then re-plot the residual graphs and see if we get any improvement.
```python
df['sqft_lot_squared'] = df['sqft_lot']**2
df['sqft_lot15_squared'] = df['sqft_lot15']**2
```
Let's also create a few features from our eyeball test and we'll see which ones seem to be more statistically significant.
```python
df['sqft_living_squared'] = df['sqft_living']**2
```
Let's add these to our regression and run it again to see if it has any considerable impact on coefficients.
```python
# log-polynomial? linear regression model with robust standard errors
# to use Robust Standard Errors pass:
# cov_type='HC3' parameter to .fit() function
X = df[['bedrooms', 'bathrooms', 'sqft_living', 'sqft_living_squared',
'sqft_lot', 'sqft_lot_squared', 'floors', 'waterfront', 'view', 'condition', 'grade',
'sqft_basement', 'sqft_living15',
'sqft_lot15', 'sqft_lot15_squared']]
y = df['ln_price']
model = sm.OLS(y, X)
results = model.fit(cov_type='HC3')
print(results.summary())
```
```python
X = df[['bedrooms', 'bathrooms', 'sqft_living', 'sqft_living_squared',
'sqft_lot', 'sqft_lot_squared', 'floors', 'waterfront', 'view', 'condition', 'grade',
'sqft_basement', 'sqft_living15',
'sqft_lot15']]
y = df['ln_price']
model = sm.OLS(y, X)
results = model.fit(cov_type='HC3')
print(results.summary())
```
|
6daf5eba73563f15ee534121f3268f1f34ba7018
| 31,376 |
ipynb
|
Jupyter Notebook
|
module3-regression-diagnostics/regression-diagnostics.ipynb
|
tortas/DS-Unit-2-Sprint-2-Regression
|
a83d06816a658ec07f2fdfbec797870c443a2798
|
[
"MIT"
] | null | null | null |
module3-regression-diagnostics/regression-diagnostics.ipynb
|
tortas/DS-Unit-2-Sprint-2-Regression
|
a83d06816a658ec07f2fdfbec797870c443a2798
|
[
"MIT"
] | null | null | null |
module3-regression-diagnostics/regression-diagnostics.ipynb
|
tortas/DS-Unit-2-Sprint-2-Regression
|
a83d06816a658ec07f2fdfbec797870c443a2798
|
[
"MIT"
] | null | null | null | 34.141458 | 592 | 0.618594 | true | 4,287 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.66888 | 0.822189 | 0.549946 |
__label__eng_Latn
| 0.994955 | 0.116039 |
# Problem Statement:
\begin{equation} H_{0} : p_{gate30} - p_{gate40} >= 0 \end{equation}
\begin{equation} H_{1} : p_{gate30} - p_{gate40} < 0 \end{equation}
```python
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
%matplotlib inline
```
## Read & Understand the data
```python
### Your Code Here ###
df = pd.read_csv('cookie_cats.csv')
df.head()
```
   userid  version  sum_gamerounds  retention_1  retention_7
0     116  gate_30               3        False        False
1     337  gate_30              38         True        False
2     377  gate_40             165         True        False
3     483  gate_40               1        False        False
4     488  gate_40             179         True         True
```python
df.shape
```
(90189, 5)
```python
df.isnull().any().sum()
```
0
```python
df.userid.duplicated().sum()
```
0
### How many player in each group?
##### Hint: Use groupby with count
```python
### Your Code Here ###
df.groupby('version').count()
```
         userid  sum_gamerounds  retention_1  retention_7
version
gate_30   44700           44700        44700        44700
gate_40   45489           45489        45489        45489
### What is the percentage of users that came back the day after they installed?
```python
### Your Code Here ###
df.retention_1.mean()
```
0.4452095044850259
### What is the percentage of users of each group [gate_30, gate_40] that came back the day after they installed?
```python
### Your Code Here ###
gate_30 = df.query("version == 'gate_30'").retention_1.mean()
gate_40 = df.query("version == 'gate_40'").retention_1.mean()
gate_30, gate_40
obs_sample = gate_30 - gate_40
obs_sample
```
0.005905169787341458
### Bootstrap the data by resampling the dataset with replacement for retention_1
##### Hint: use .sample method with frac = 1 and replace = True
##### Hint: groupby the result of sampling by version column then select retention_1 column and apply mean as an agg function
##### Hint: take difference in mean between the 2 groups in each iteration and append it to a list
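For reference, a sketch of the row-resampling bootstrap described by the hints above (an added illustration; the cells below take a slightly different simulation-based route):
```python
# resample the rows of df with replacement and recompute the difference in retention_1 means
boot_diffs = []
for _ in range(1000):   # fewer iterations just to keep this illustration fast
    boot = df.sample(frac=1, replace=True)
    means = boot.groupby('version')['retention_1'].mean()
    boot_diffs.append(means['gate_30'] - means['gate_40'])
```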
```python
df_30 = df.query("version == 'gate_30'")
df_40 = df.query("version == 'gate_40'")
```
```python
### Your Code Here ###
diffs = []
for _ in range(10000):
sample_30 = np.random.choice([0,1],df_30.shape[0], p = [1-gate_30, gate_30])
sample_40 = np.random.choice([0,1],df_40.shape[0], p = [1-gate_40, gate_40])
diff = sample_30.mean() - sample_40.mean()
diffs.append(diff)
```
### Plot the difference distribution
```python
### Your Code Here ###
plt.hist(diffs);
```
### At alpha level 0.05, should we reject the null ?
##### Hint: Calculate the STDerr, Simulate under the null, Calculate the p-value
```python
diffs = np.array(diffs)
```
```python
### Your Code Here ###
under_null = np.random.normal(0, diffs.std(), 10000)
```
```python
### Your Code Here ###
plt.hist(under_null)
plt.axvline(x= obs_sample, color = 'r');
```
```python
P_value = (under_null < obs_sample).mean()
P_value
```
0.9653
**Since the p-value (0.9653) is greater than alpha (0.05), we fail to reject the null hypothesis.**
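As a cross-check, the same one-sided comparison can be run as a two-proportion z-test; this sketch assumes `statsmodels` is installed, which is not otherwise used in this notebook:
```python
from statsmodels.stats.proportion import proportions_ztest

successes = df.groupby('version').retention_1.sum().values   # retained players per version (gate_30 first)
trials = df.groupby('version').retention_1.count().values    # players per version
z_stat, p_val = proportions_ztest(successes, trials, alternative='smaller')  # H1: p_gate30 - p_gate40 < 0
print(z_stat, p_val)
```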
```python
```
|
c49877941fe9dd8eab3fb6dd69571e1695920bf4
| 23,066 |
ipynb
|
Jupyter Notebook
|
Cookie Cats AB testing.ipynb
|
asmaamahrous91/AB-Test-for-Cookie-Cats
|
a6ba41806cc2b509f8e2f38dcc05c49b27e404a4
|
[
"Unlicense"
] | null | null | null |
Cookie Cats AB testing.ipynb
|
asmaamahrous91/AB-Test-for-Cookie-Cats
|
a6ba41806cc2b509f8e2f38dcc05c49b27e404a4
|
[
"Unlicense"
] | null | null | null |
Cookie Cats AB testing.ipynb
|
asmaamahrous91/AB-Test-for-Cookie-Cats
|
a6ba41806cc2b509f8e2f38dcc05c49b27e404a4
|
[
"Unlicense"
] | null | null | null | 44.78835 | 5,720 | 0.687375 | true | 1,527 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.899121 | 0.808067 | 0.726551 |
__label__eng_Latn
| 0.65367 | 0.526352 |
```python
import numpy as np, cmath,scipy as sp
import scipy.io
from matplotlib import pyplot as plt
from numpy import pi, sin, cos, exp, sqrt, log, random #import basic functions from numpy that we'll need
%matplotlib inline
```
```python
import seaborn as sns
sns.set_palette('muted')
sns.set_style('darkgrid')
```
```python
data=scipy.io.loadmat('sampleEEGdata')
```
```python
EEGdata=data["EEG"][0,0]["data"]
srate = float(data["EEG"][0,0]["srate"][0,0])
```
###Figure 12.1
The real component of a Morlet wavelet is a cosine multiplied by a Gaussian window:
\begin{equation}
Re(\text{Morlet wavelet}) = \cos(2\pi f t) e^{-t^2/(2s^2)}
\end{equation}
```python
time = np.arange(-1,1+1/srate,1/srate)
f=4 #sinewave frequency, Hz
#create a sinewave (cosine wave)
sine_wave = cos(2*pi*f*time)
#create a Gaussian
s=4/(2*pi*f) #standard deviation
gaussian_win = exp(-time**2/(2*s**2))
#plot our first wavelet!
_=plt.plot(time,sine_wave*gaussian_win)
```
###Figure 12.2
```python
fig=plt.figure()
plt.subplot(511)
plt.plot(np.squeeze(EEGdata[46,:,0]))
plt.subplot(512)
sine_wave = cos(2*pi*12*time) # 12Hz cosine wave
plt.plot(time,sine_wave)
plt.subplot(513)
#boxcar envelope
boxcar = np.zeros(len(sine_wave))
midpoint = (len(time)-1)//2
boxcar[int(midpoint-np.round(srate/12./5.)):int(midpoint+np.round(srate/12./1.25))] = 1
plt.plot(time,sine_wave*boxcar)
plt.subplot(514)
#boxcar of different length
boxcar = np.zeros(len(sine_wave))
midpoint = (len(time)-1)//2
boxcar[midpoint-50:midpoint+50] = 1
plt.plot(time,sine_wave*boxcar)
plt.subplot(515)
s = 1.5/(2*pi*f)
gaussian_win = exp(-time**2/(2*s**2))
plt.plot(time,sine_wave*gaussian_win)
plt.tight_layout()
```
###Figure 12.3
```python
srate = 500. #sample rate in Hz
f = 10
time = np.arange(-1,1,1/srate)
#complex sinusoid
sine_wave = exp(2*pi*1j*f*time)
#Gaussian window
s = 6/(2*pi*f)
gaussian_win = exp(-time**2/(2*s**2))
#together they make a complex morlet wavelet!
wavelet = sine_wave*gaussian_win
#create plots for each component
fig = plt.figure()
plt.subplot(311)
plt.plot(time,np.real(sine_wave))
plt.title("sine wave")
plt.subplot(312)
plt.plot(time,gaussian_win)
plt.title("gaussian window")
plt.subplot(313)
plt.plot(time,np.real(wavelet))
plt.title("my first wavelet")
_=plt.xlabel("time (ms)")
plt.tight_layout()
```
###Figure 12.4
```python
num_wavelets = 80 # number of frequency bands
lowest_frequency = 2 #in Hz
highest_frequency = 100 # in Hz
#(linear) equally spaced frequencies for our wavelet family
frequencies = np.linspace(lowest_frequency,highest_frequency,num_wavelets)
plt.figure()
plt.plot(frequencies)
plt.xlabel("Frequency order")
_=plt.ylabel("Frequency in Hz")
```
```python
#initialize our wavelet family
wavelet_family = np.zeros([num_wavelets,len(time)])*1j #1j is to create a complex array of zeros
#iterate through freqs and make a wavelet family
for fi in range(num_wavelets):
#create a sine wave
sinewave = exp(2*1j*pi*frequencies[fi]*time)
#create gaussian window
gaus_win = exp(-time**2/(2*(6/(2*pi*frequencies[fi]))**2))
#create wavelet by multiplying our sine wave by the gaussian window
wavelet_family[fi,:] = sinewave*gaus_win
#this could be accomplished on one line
# wavelet_family[fi,:] = exp(2*1j*pi*frequencies[fi]*time) * exp(-time**2/(2*(6/(2*pi*frequencies[fi]))**2))
#plot some of our wavelet family
fig=plt.figure()
plt.subplot(211)
_=plt.plot(time,np.real(wavelet_family[::max(1, int(np.round(random.rand()*30))),:].T))  # plot a random subset of the wavelets
plt.subplot(212)
plt.plot(time,np.real(wavelet_family[30,:]))
plt.plot(time,np.imag(wavelet_family[30,:]),'r:')
plt.title("real and imaginary parts of one wavelet")
plt.legend(["real","imaginary"])
plt.tight_layout()
```
```python
fig=plt.figure(figsize=(6,6))
plt.imshow(np.real(wavelet_family),
extent=[time[0], time[-1], frequencies[0], frequencies[-1]],
aspect="auto",
cmap=plt.get_cmap("hot"),
origin="lower")
plt.xlabel("time (s)")
_=plt.ylabel("frequency (Hz)")
```
###Figure 12.5
```python
from numpy.fft import fft, ifft #import fft functions for ease of use
from scipy import signal as sig
#EEG data from one trial (electrode FCz)
eegdata = np.squeeze(EEGdata[46,:,9])
EEGpnts = data["EEG"][0,0]["pnts"][0,0] #number of points in EEG data
EEGtimes = data["EEG"][0,0]["times"][0]
EEGsrate = float(data["EEG"][0,0]["srate"][0])
#create wavelet
time = np.arange(-1,1 + 1/EEGsrate,1/EEGsrate)
f = 6 #frequency in Hz
sine_wave = exp(2*1j*pi*f*time)
#compute gaussian
s=4.5/(2*pi*f)
gaussian_win = exp(-time**2/(2*s**2))
#window the sinewave by a gaussian to create complex morlet wavelet
wavelet = sine_wave * gaussian_win
#half of wavelet size, useful for chopping off edges after convolution
halfwaveletsize = int(np.ceil(len(wavelet)/2))
#convolve with data
n_conv = len(wavelet) + EEGpnts - 1 #number of points in our convolution
fft_w = fft(wavelet,n_conv)
fft_e = fft(eegdata,n_conv)
#convolution theorem -- convolution = pointwise multiplication in frequency-space
ift = ifft(fft_e*fft_w,n_conv)*sqrt(s)/10 #sqrt(s)/20 is empirical scaling factor (sqrt(s)/10 in the book)
wavelet_conv_data = np.real(ift[halfwaveletsize:-halfwaveletsize]) #take middle portion of convolution
#create a filter to apply to data
nyquist = EEGsrate/2
transition_width = 0.2 #percent
filter_low = 4 #Hz
filter_high = 8 #Hz
ffrequencies = np.array([0 ,filter_low*(1-transition_width),
filter_low, filter_high, filter_high*(1+transition_width), nyquist])/nyquist
ideal_response = np.array([0, 0, 1, 1, 0, 0])
#there doesn't seem to be a python equivalent to MATLAB's firls function,
#so I am going to use butterworth filter as a close approximation.
b, a = sig.butter(5, np.array([filter_low*(1-transition_width),filter_high*(1+transition_width)])/nyquist,btype="bandpass")
eeg_4to8 = sig.filtfilt(b, a, eegdata, padlen=150)
plt.plot(EEGtimes,eegdata)
plt.plot(EEGtimes,wavelet_conv_data,'r')
plt.plot(EEGtimes,eeg_4to8,'g')
plt.axis([-200,1200,-40,40])
plt.xlabel("time (ms)")
plt.ylabel("voltage (mV)")
_=plt.legend(["raw","wavelet conv","band-passed"])
```
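Since this notebook was written, SciPy has added `scipy.signal.firls`, so a least-squares FIR design closer to the MATLAB call is now available; a sketch (the filter order of 101 is an arbitrary choice):
```python
b_firls = sig.firls(101, ffrequencies, ideal_response)           # least-squares FIR design; numtaps must be odd
eeg_4to8_firls = sig.filtfilt(b_firls, 1.0, eegdata, padlen=150) # zero-phase filtering; denominator is 1 for an FIR filter
```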
###Figure 12.6
```python
time = np.arange(-1,1+1/EEGsrate,1/EEGsrate)
n_conv = EEGpnts + len(time) -1
n2p1 = int(np.floor(n_conv/2))+1
f = 6 #hz
s = 6/(2*pi*f)
wavelet = exp(2*pi*1j*f*time) * exp(-time**2/(2*s**2))
halfwaveletsize = int(np.ceil(len(wavelet)/2))
eegdata = np.squeeze(EEGdata[46,:,9])
plt.figure()
plt.subplot(311)
plt.plot(EEGtimes,eegdata)
plt.xlim([-500,1200])
plt.title("raw")
plt.subplot(323)
fft_w = fft(wavelet,n_conv)
hz = np.linspace(0,EEGsrate/2.,n2p1)
plt.plot(hz,np.absolute(fft_w[:n2p1])/np.max(np.absolute(fft_w[:n2p1])),'b')
fft_e = fft(eegdata,n_conv)
plt.plot(hz,np.absolute(fft_e[:n2p1])/np.max(np.absolute(fft_e[:n2p1])),'g')
plt.axis([0,40,0,1.05])
plt.title("individual power spectra")
plt.subplot(324)
plt.plot(hz,np.absolute(fft_e[:n2p1]*np.absolute(fft_w[:n2p1])))
plt.xlim([0, 40])
plt.title("convolved power spectrum")
plt.subplot(313)
plt.plot(EEGtimes,eegdata)
ift = ifft(fft_e*fft_w,n_conv)*sqrt(s)/10 #sqrt(s)/20 is empirical scaling factor (sqrt(s)/10 in the book)
plt.plot(EEGtimes,np.real(ift[halfwaveletsize:-halfwaveletsize]),'r')
plt.title("wavelet filtered")
plt.tight_layout()
```
###Figure 12.7
```python
#create 10Hz wavelet kernel
time = np.arange(-(EEGpnts/EEGsrate/2),EEGpnts/EEGsrate/2 + 1/EEGsrate,1/EEGsrate)
f = 10. #hz
s = 4/(2*pi*f) #sd of gaussian
wavelet = cos(2*pi*f*time) * exp(-time**2/(2*s**2))
#signal is one sine cycle
timeS = np.arange(0,1/f + 1/EEGsrate,1/EEGsrate)
signal = sin(2*pi*f*timeS)
#zeropad the signal
zz = np.zeros(int(EEGpnts/2 - len(timeS)/2))
signal = np.concatenate([zz,signal,zz])
plt.figure(figsize=(6,6))
#plot waves
plt.subplot(321)
plt.plot(wavelet,'r')
plt.xlim(200, len(time) - 200)
plt.title("wavelet")
plt.subplot(323)
plt.plot(signal)
plt.xlim([200, len(time)-200])
plt.title("1 cycle of signal")
plt.subplot(325)
plt.plot(np.convolve(wavelet,signal,mode="same"),'purple')
plt.axis([200,len(time)-200,-12,12])
plt.title("convolved wavelet and signal")
#plot the dot products at selected phase lags
plt.subplot(322)
plt.plot(wavelet[int(np.round(100/f))-2-1:],'r')
plt.plot(signal)
plt.xlim([200,len(time)-200])
plt.title("dot product: " + str( np.fix(np.sum(wavelet[np.round(100/f)-2-1:]*signal[:-np.round(100/f)+3]))))
plt.legend(["wavelet","signal"])
plt.subplot(324)
plt.plot(wavelet[int(np.round(2.3*100/f))-2-1:],'r')
plt.plot(signal)
plt.xlim([200,len(time)-200])
plt.title("dot product: " + str( np.fix(np.sum(
wavelet[np.round(2.3*100/f)-2-1:]*signal[:-np.round(2.3*100/f)+3]))))
plt.subplot(326)
plt.plot(wavelet,'r')
plt.plot(signal)
plt.xlim([200,len(time)-200])
plt.title("dot product: " + str( np.fix(np.sum(
wavelet*signal))))
plt.tight_layout()
```
|
90a943aa94aa7f40c7585419db436f9523d42235
| 395,741 |
ipynb
|
Jupyter Notebook
|
chapter12.ipynb
|
stfnrpplngr/Analyzing_Neural_Time_Series
|
f849534584ec8756c912ce8d621e2549d5a1b832
|
[
"MIT"
] | 1 |
2019-02-28T18:48:13.000Z
|
2019-02-28T18:48:13.000Z
|
chapter12.ipynb
|
ElJAZRY/Analyzing_Neural_Time_Series
|
f849534584ec8756c912ce8d621e2549d5a1b832
|
[
"MIT"
] | null | null | null |
chapter12.ipynb
|
ElJAZRY/Analyzing_Neural_Time_Series
|
f849534584ec8756c912ce8d621e2549d5a1b832
|
[
"MIT"
] | 1 |
2020-07-10T00:59:25.000Z
|
2020-07-10T00:59:25.000Z
| 655.200331 | 66,540 | 0.938596 | true | 2,893 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.927363 | 0.828939 | 0.768727 |
__label__eng_Latn
| 0.586518 | 0.624344 |
# Heat transfer during dike cooling
In this notebook, we are interested in modeling heat transfer during the emplacement and subsequent cooling of dikes. In doing so, we are particularly interested in understanding the timescale for cooling within the dike itself and the magnitude of heating of the host rock into which a dike intruded.
This problem was dealt with nicely in an article published by Paul T. Delaney of the US Geological Survey:
Delaney, P.T. 1987. Heat transfer during emplacement and cooling of mafic dykes In Mafic dyke swarms. Edited by H.C. Halls and W.F. Fahrig. Geological Association of Canada, Special Paper 34, pp. 31-46.
## An analytical solution to transient heat conduction
Delaney (1987) formulates the problem by idealizing a dike as a tabular channel of infinite extent. Coordinates are based on the position of the dike wall with the $X$-direction being the direction orthogonal to the wall such that negative $X$ values are within the dike and positive $X$ values are in the host rock. The dike has a thickness $T$ and an initial temperature $\Theta_{mi}$ (subscript stands for magma initial). The host rock has an initial temperature $\Theta_{hi}$ and a thermal diffusivity $\kappa_h$.
> Conservation of energy for a motionless material undergoing one-dimensional heat transfer with no chemical reactions is (Carslaw and Jaeger, 1959, Ch. 1; Bird et al., 1960, Ch.10):
\begin{equation}
\rho C\frac{\partial\Theta}{\partial t} = \frac{\partial}{\partial X}k\frac{\partial\Theta}{\partial X}
\end{equation}
This equation states that the heat conducted into a unit volume minus the heat conducted out is equal to the accumulation of heat within the volume. The right-hand side of equation 1 is the gradient in heat flux, which is given by Fourier's Law, $Q = - k\partial\Theta/\partial X$ where $k$ is thermal conductivity; the left-hand side is the rate of accumulation of heat, where pC is heat capacity per unit volume. If k is constant, then:
\begin{equation}
\frac{\partial\Theta}{\partial t} = \kappa\frac{\partial^2\Theta}{\partial X^2}
\end{equation}
Thermal diffusivity, $\kappa = k/(pC)$, measures the ability of a material to conduct heat relative to its ability to accumulate heat.
> Generality and simplicity are gained by introducing non-dimensional temperature $\theta$, distance $x$, and time $\tau$:
> \begin{equation}
\theta = (\Theta-\Theta_{hi})/(\Theta_{mi}-\Theta_{hi})
\end{equation}
> \begin{equation}
x = X/(T/2)
\end{equation}
> \begin{equation}
\tau = t*\kappa_h/(T/2)^2
\end{equation}
Following this introduction, Delaney builds up to presenting the first and simplest whole-time solution. This solution neglects themal property constrasts between the host rock and dike (i.e. $\kappa_m/\kappa_h=1$). These thermal property contrasts can affect the maximum temperatures reached in the host rock and early cooling rates, but the influence is rather small. This whole-time solution is:
> \begin{equation}
\theta = \frac{1}{2}[erf\big(\frac{2+x}{\sqrt{4\tau}}\big)-erf\big(\frac{x}{\sqrt{4\tau}}\big)]
\end{equation}
Delaney also presents numerical solutions that incorporate the effects of the heat of crystallization, magma flow and the temperature dependance of thermal conductivity and diffusivity. In the application that we are exploring here, the cooling of a breccia dike emplaced within an impact crater, neither the heat of crystallization nor magma flow apply and therefore the analytical solution using transient heat conduction theory will work well for our analysis.
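A quick check of the whole-time solution: at the dike wall ($x = 0$) the second error function vanishes, so
\begin{equation}
\theta(0,\tau) = \frac{1}{2}erf\big(\frac{2}{\sqrt{4\tau}}\big) \rightarrow \frac{1}{2} \quad \text{as } \tau \rightarrow 0
\end{equation}
which recovers the classic result that, immediately after emplacement, the contact is heated to the average of the initial magma and host-rock temperatures, $(\Theta_{mi}+\Theta_{hi})/2$.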
## Implementing the whole-time solution
### Import some scientific Python libraries
```python
from scipy import special
import numpy as np
import matplotlib.pyplot as plt
#import seaborn as sns
%matplotlib inline
```
### Define the function dike_cooling()
A function can be defined that returns the temperature at a given time and distance from the contact (within or outside of the dike) for a given initial dike temperature, initial host rock temperature, dike width and thermal diffusivity. This function calculates non-dimensional distance and time and then solves for non-dimensional temperature using the whole-time solution detailed above. The temperature of interest can then be extracted from the non-dimensional temperature using the specified intial temperatures.
```python
def dike_cooling(t,distance_from_contact,temp_dike,temp_host,dike_width,kn):
x_nd = distance_from_contact/(dike_width/2)
tau_nd = t * kn/((dike_width/2.0)**2)
temp_nd = 0.5 * (special.erf((2+x_nd)/np.sqrt(4*tau_nd)) - special.erf(x_nd/np.sqrt(4*tau_nd)))
temp = temp_nd*(temp_dike-temp_host) + temp_host
return temp
```
### Input parameters
```python
dike1_temp = 800.0 #in Celcius
dike1_host_temp = 250.0 #in Celcius
dike1_width = 0.5 #in meters
dike1_kn = 7e-7 #thermal diffusivity (m^2/s)
```
### Plot temperature vs distance at a number of times
```python
plt.figure(figsize=(8,6))
for time in [0.0,60*60,6*60*60,12*60*60,24*60*60,100*24*60*60]:
temp = []
distance = []
for distance_from_contact in np.arange(-dike1_width/2,dike1_width*2,0.00001):
temp_at_distance = dike_cooling(time,distance_from_contact,dike1_temp,dike1_host_temp,dike1_width,dike1_kn)
temp.append(temp_at_distance)
distance.append(distance_from_contact)
plt.plot(distance,temp,c=np.random.rand(3,),label=str(time/60/60)+' hours')
plt.xlabel('distance from dike wall (m)')
plt.ylabel('temperature ($^\circ$C)')
plt.ylim((0,dike1_temp+100))
plt.xlim((-dike1_width/2,dike1_width*2))
plt.legend()
plt.title('cooling of a dike emplaced at high temperature')
plt.show()
```
```python
x = 'yo'
for letter in x:
print(letter)
```
y
o
### Plot temperature vs time at the center of the dike
```python
distance_from_contact = -dike1_width/2.0 #center of dike in meters
time = []
time_days = []
time_hours = []
temp = []
for t in range(0,500000,100):
temp_at_t = dike_cooling(t,distance_from_contact,dike1_temp,dike1_host_temp,dike1_width,dike1_kn)
temp.append(temp_at_t)
time.append(t)
time_hours.append(t/60.0/60.0)
time_days.append(t/60.0/60.0/24.0)
plt.plot(time_hours,temp)
plt.xlabel('hours since dike emplacement')
plt.ylabel('temperature ($^\circ$C)')
plt.title('temperature at center of dike')
plt.show()
```
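The same function can also be used to estimate how long the center of the dike stays above a chosen temperature; the 580$^\circ$C threshold below is just an illustrative value (see the discussion of magnetite blocking temperatures in the next section).
```python
threshold = 580.0                            # degrees C, an illustrative threshold
distance_from_contact = -dike1_width/2.0     # center of the dike
t = 0.0
while dike_cooling(t + 3600.0, distance_from_contact, dike1_temp, dike1_host_temp, dike1_width, dike1_kn) > threshold:
    t += 3600.0                              # march forward one hour at a time
print('center of the dike drops below', threshold, 'C after roughly', (t + 3600.0)/3600.0, 'hours')
```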
## Temperature dependence of thermal diffusivity
The above analysis does not incorporate the temperature-dependence of thermal diffusivity which is something that Delaney (1987) explores in some detail. Laser flash-analysis has enabled advances in measurements of thermal conductivity at elevated temperature since the work of Delaney (1987). Such from schist, granite and rhyolite were published by:
Whittington A. G., Hofmeister A. M., Nabelek P. I. (2009) Temperature-dependent thermal diffusivity of the Earth's crust and implications for magmatism. Nature 458:319–321
These data were similar between the three rock types and the following empirical fits were proposed by Whittington et al. (2009) for the temperature dependence of thermal diffusivity (in square millimetres per second) in the continental crust.
\begin{equation}
\kappa_{crust}(T<846K)=567.3/T-0.062
\end{equation}
\begin{equation}
\kappa_{crust}(T>846K)=0.732-0.000135T
\end{equation}
A next step for this analysis would be to incorporate this temperature dependence of thermal diffusivity into the model. Taking all of the data from Whittington et al. (2009) between 350 and 580ºC (chosen as the interval of interest due to the blocking temperature of magnetite) gives an average value of 7.1E-7 m$^2$/s (1$\sigma$ of .12), which is what is used in the analysis above.
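As a starting point, here is a minimal sketch of an explicit finite-difference solution of the heat equation with the Whittington et al. (2009) fits; the domain size, node spacing and time step are illustrative assumptions, not tuned values.
```python
import numpy as np

def kappa_whittington(T_K):
    """Thermal diffusivity (m^2/s) from the Whittington et al. (2009) fits; input in Kelvin."""
    return np.where(T_K < 846, 567.3/T_K - 0.062, 0.732 - 0.000135*T_K) * 1e-6  # mm^2/s -> m^2/s

dx = 0.01                                                   # node spacing in meters (assumed)
X = np.arange(-3*dike1_width, 3*dike1_width, dx)            # profile across the dike; wall at X = 0
T_K = np.where((X > -dike1_width) & (X < 0), dike1_temp, dike1_host_temp) + 273.0  # initial condition in K
dt = 0.2*dx**2/kappa_whittington(T_K).max()                 # explicit stability limit with a safety factor
for step in range(int(24*3600/dt)):                         # march forward one day
    k_face = 0.5*(kappa_whittington(T_K[1:]) + kappa_whittington(T_K[:-1]))  # diffusivity at cell faces
    flux = k_face*np.diff(T_K)/dx                           # kappa * dT/dX at the faces
    T_K[1:-1] += dt/dx*np.diff(flux)                        # conservative explicit update of interior nodes
```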
### Plot the Whittington et al. (2009) temperature dependent empirical fit
```python
T = []
kappa = []
for temp in range(290,845,20):
k = 567.3/temp - 0.062
kappa.append(k*10**-6) # the fit gives mm^2/s; convert to m^2/s
T.append(temp-273)
for temp in range(848,1200,20):
k = 0.732 - 0.000135*temp
kappa.append(k*10**-6) # the fit gives mm^2/s; convert to m^2/s
T.append(temp-273)
plt.plot(T,kappa,marker='o')
plt.ylabel('thermal diffusivity (m$^2$/s)')
plt.xlabel('temperature ($^\circ$C)')
plt.title('thermal diffusivity vs. temperature')
plt.show()
```
```python
T_for_avg = []
kappa_for_avg = []
for temp in range(400+273,800+273):
if temp > 846:
k = 0.732 - 0.000135*temp
kappa_for_avg.append(k*10**-6) # convert mm^2/s to m^2/s
T_for_avg.append(temp-273)
if temp < 846:
k = 567.3/temp - 0.062
kappa_for_avg.append(k*10**-6) # convert mm^2/s to m^2/s
T_for_avg.append(temp-273)
average_kappa = np.average(kappa_for_avg)
print(average_kappa)
plt.plot(T,kappa,marker='o')
plt.hlines(average_kappa,400,800,color='r')
plt.ylabel('thermal diffusivity (m$^2$/s)')
plt.xlabel('temperature ($^\circ$C)')
plt.title('thermal diffusivity vs. temperature')
plt.show()
```
```python
```
```python
```
|
a64a42a19642f31d51c1483b3d9abd9b5e0c46e7
| 111,000 |
ipynb
|
Jupyter Notebook
|
cooling_of_a_dike/cooling_of_a_dike.ipynb
|
Swanson-Hysell/Earth_science_notebooks
|
589825ecfdb881c8eff82fa4f8ddd877538f4c7a
|
[
"CC-BY-3.0"
] | 3 |
2017-03-21T05:37:01.000Z
|
2021-12-03T19:29:17.000Z
|
cooling_of_a_dike/cooling_of_a_dike.ipynb
|
Swanson-Hysell/Earth_science_notebooks
|
589825ecfdb881c8eff82fa4f8ddd877538f4c7a
|
[
"CC-BY-3.0"
] | null | null | null |
cooling_of_a_dike/cooling_of_a_dike.ipynb
|
Swanson-Hysell/Earth_science_notebooks
|
589825ecfdb881c8eff82fa4f8ddd877538f4c7a
|
[
"CC-BY-3.0"
] | 6 |
2015-09-25T19:15:21.000Z
|
2021-12-17T06:55:07.000Z
| 274.752475 | 44,878 | 0.909153 | true | 2,398 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.935347 | 0.859664 | 0.804083 |
__label__eng_Latn
| 0.967444 | 0.706488 |
<p align="center">
</p>
## Subsurface Data Analytics
### Naive Bayes Classification for Subsurface Data Analytics in Python
#### Michael Pyrcz, Associate Professor, University of Texas at Austin
##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
### PGE 383 Exercise: Naive Bayes Classification for Subsurface Data Analytics in Python
Here's a simple workflow, demonstration of naive Bayes classification for subsurface modeling workflows. This should help you get started with building subsurface models that with predictions based on multiple sources of information.
This method is great as it builds directly on our knowledge Bayesian statistics to provide a simple, but flexible classification method.
#### Bayesian Updating
The naive Bayes classifier is based on the conditional probability of a category, $k$, given $n$ features, $x_1, \dots , x_n$.
\begin{equation}
p(C_k | x_1, \dots , x_n)
\end{equation}
we can solve this with Bayesian updating:
\begin{equation}
p(C_k | x_1, \dots , x_n) = \frac{p(x_1, \dots , x_n | C_k) p(C_k)}{p(x_1, \dots , x_n)}
\end{equation}
let's combine the likelihood and prior for the momment:
\begin{equation}
p(x_1, \dots , x_n | C_k) p(C_k) = p(x_1, \dots , x_n, C_k)
\end{equation}
we can exand the full joint distribution recursively as follows:
\begin{equation}
p(x_1, \dots , x_n, C_k)
\end{equation}
expansion of the joint with the conditional and prior
\begin{equation}
p(x_1 | x_2, \dots , x_n, C_k) p(x_2, \dots , x_n, C_k)
\end{equation}
continue recursively expanding
\begin{equation}
p(x_1 | x_2, \dots , x_n, C_k) p(x_2 | x_3, \dots , x_n, C_k) p(x_3, \dots , x_n, C_k)
\end{equation}
we can generalize as
\begin{equation}
p(x_1 | x_2, \dots , x_n, C_k) p(x_2 | x_3, \dots , x_n, C_k) p(x_3 | x_4, \dots , x_n, C_k) \ldots p(x_{n-1} | x_n, C_k) p(x_{n} | C_k) p(C_k)
\end{equation}
#### Naive Bayes Approach
The likelihood, conditional probability with the joint conditional is difficult to calculate. It requires information about the joint relationship between $x_1, \dots , x_n$ features. As $n$ increases this requires a lot of data to inform the joint distribution.
With the naive bayes approach we make the 'naive' assumption that the features are all **conditionally independent**. This entails:
\begin{equation}
p(x_i | x_{i+1}, \ldots , x_n, C_k) = p(x_i | C_k)
\end{equation}
for all $i = 1, \ldots, n$ features.
We can now solve for the needed conditional probability as:
\begin{equation}
p(C_k | x_1, \dots , x_n) = \frac{p(C_k) \prod_{i=1}^{n} p(x_i | C_k)}{p(x_1, \dots , x_n)}
\end{equation}
We only need the prior, $p(C_k)$, and a set of conditionals, $p(x_i | C_k)$, for all predictor features, $i = 1,\ldots,n$ and all categories, $k = 1,\ldots,K$.
The evidence term, $p(x_1, \dots , x_n)$, is only based on the features $x_1, \dots , x_n$; therefore, it is constant over the categories $k = 1,\ldots,K$.
* it ensures closure - probabilities over all categories sum to one
* we simply standardize the numerators to sum to one over the categories.
The naive Bayes approach is:
* simple to understand, builds on fundamental Bayesian statistics
* practical even with small datasets, since with conditional independence we only need to estimate simple one-dimensional conditional distributions
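To make the computation concrete, here is a minimal sketch of the posterior calculation with Gaussian conditionals; the prior uses the training proportions quoted later in this workflow, but the conditional means and standard deviations are made-up numbers purely for illustration.
```python
import numpy as np
from scipy.stats import norm

prior = {'low': 0.57, 'high': 0.43}                             # p(C_k), the training proportions used below
cond = {'low':  {'Por': (13.0, 2.5), 'Brittle': (50.0, 14.0)},  # (mean, std) per feature -- made-up values
        'high': {'Por': (17.5, 2.5), 'Brittle': (46.0, 14.0)}}
x_new = {'Por': 16.0, 'Brittle': 40.0}                          # a hypothetical new well

numer = {k: prior[k]*np.prod([norm.pdf(x_new[f], *cond[k][f]) for f in x_new]) for k in prior}
evidence = sum(numer.values())                                  # p(x_1, ..., x_n), constant over the categories
posterior = {k: v/evidence for k, v in numer.items()}           # standardize so the probabilities sum to one
print(posterior)
```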
#### Objective
In the PGE 383: Stochastic Subsurface Modeling class I want to provide hands-on experience with building subsurface modeling workflows. Python provides an excellent vehicle to accomplish this. I have coded a package called GeostatsPy with GSLIB: Geostatistical Library (Deutsch and Journel, 1998) functionality that provides basic building blocks for building subsurface modeling workflows.
The objective is to remove the hurdles of subsurface modeling workflow construction by providing building blocks and sufficient examples. This is not a coding class per se, but we need the ability to 'script' workflows working with numerical methods.
#### Getting Started
Here's the steps to get setup in Python with the GeostatsPy package:
1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/).
2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal.
3. In the terminal type: pip install geostatspy.
4. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality.
You will need to copy the data file to your working directory. They are available here:
* Tabular data - [unconv_MV_v4.csv](https://git.io/fhHLT).
There are examples below using these functions. You can go here to see a list of the available functions, https://git.io/fh4eX, other example workflows and source code.
#### Import Required Packages
Let's import the GeostatsPy package. I actually don't use it in this workflow, but just incase.
```python
import geostatspy.GSLIB as GSLIB # GSLIB utilies, visualization and wrapper
import geostatspy.geostats as geostats # GSLIB methods convert to Python
```
We will also need some standard packages. These should have been installed with Anaconda 3.
```python
import numpy as np # ndarrys for gridded data
import pandas as pd # DataFrames for tabular data
import os # set working directory, run executables
import matplotlib.pyplot as plt # for plotting
from scipy import stats # summary statistics
import math # trig etc.
from sklearn.model_selection import train_test_split # train and test split
from sklearn.naive_bayes import GaussianNB # naive Bayes model and prediction
from sklearn import metrics # measures to check our models
```
If you get a package import error, you may have to first install some of these packages. This can usually be accomplished by opening up a command window on Windows and then typing 'python -m pip install [package-name]'. More assistance is available with the respective package docs.
#### Declare functions
Let's define a couple of functions to streamline plotting correlation matrices and visualizing the classification model over the predictor feature space.
```python
def plot_corr(dataframe,size=10): # plots a graphical correlation matrix
corr = dataframe.corr()
fig, ax = plt.subplots(figsize=(size, size))
im = ax.matshow(corr,vmin = -1.0, vmax = 1.0)
plt.xticks(range(len(corr.columns)), corr.columns);
plt.yticks(range(len(corr.columns)), corr.columns);
plt.colorbar(im, orientation = 'vertical')
plt.title('Correlation Matrix')
def visualize_model(model,xfeature,x_min,x_max,yfeature,y_min,y_max,response,z_min,z_max,title,):# plots the data points and the model's classification over the feature space
cmap = plt.cm.plasma
xplot_step = (x_max - x_min)/300.0; yplot_step = (y_max - y_min)/300.0 # resolution of the model visualization
xx, yy = np.meshgrid(np.arange(x_min, x_max, xplot_step), # set up the mesh
np.arange(y_min, y_max, yplot_step))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()]) # predict with our trained model over the mesh
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=cmap,vmin=z_min, vmax=z_max, levels = 50) # plot the predictions
# add the data values as a colored by response feature scatter plot
im = plt.scatter(xfeature,yfeature,s=None, c=response, marker=None, cmap=cmap, norm=None, vmin=z_min, vmax=z_max, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title(title) # add the labels
plt.xlabel(xfeature.name); plt.ylabel(yfeature.name)
plt.xlim([x_min,x_max]); plt.ylim([y_min,y_max])
cbar = plt.colorbar(im, orientation = 'vertical') # add the color bar
cbar.set_label(response.name, rotation=270, labelpad=20)
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.0, top=1.2, wspace=0.2, hspace=0.2)
return(plt)
def visualize_model_prob(model,xfeature,x_min,x_max,yfeature,y_min,y_max,response,title,):# plots the data points and the prediction probabilities
n_classes = 10
cmap = plt.cm.plasma
xplot_step = (x_max - x_min)/300.0; yplot_step = (y_max - y_min)/300.0 # resolution of the model visualization
xx, yy = np.meshgrid(np.arange(x_min, x_max, xplot_step), # set up the mesh
np.arange(y_min, y_max, yplot_step))
z_min = 0.0; z_max = 1.0
Z = model.predict_proba(np.c_[xx.ravel(), yy.ravel()])
Z1 = Z[:,0].reshape(xx.shape); Z2 = Z[:,1].reshape(xx.shape)
plt.subplot(121)
cs1 = plt.contourf(xx, yy, Z1, cmap=cmap,vmin=z_min, vmax=z_max, levels=np.linspace(z_min, z_max, 100))
im = plt.scatter(xfeature,yfeature,s=None, c=response, marker=None, cmap=plt.cm.Greys, norm=None, vmin=z_min, vmax=z_max, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title(title + ' Probability of Low Production')
plt.xlabel(xfeature.name)
plt.ylabel(yfeature.name)
cbar = plt.colorbar(cs1, orientation = 'vertical')
cbar.set_label('Probability', rotation=270, labelpad=20)
plt.subplot(122)
cs2 = plt.contourf(xx, yy, Z2, cmap=cmap,vmin=z_min, vmax=z_max, levels=np.linspace(z_min, z_max, 100))
im = plt.scatter(xfeature,yfeature,s=None, c=response, marker=None, cmap=plt.cm.Greys, norm=None, vmin=z_min, vmax=z_max, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title(title + ' Probability of High Production')
plt.xlabel(xfeature.name)
plt.ylabel(yfeature.name)
cbar = plt.colorbar(cs2, orientation = 'vertical')
cbar.set_label('Probability', rotation=270, labelpad=20)
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.0, wspace=0.2, hspace=0.2)
plt.show()
```
#### Set the working directory
I always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time).
```python
os.chdir("c:/PGE383") # set the working directory
```
You will have to update the part in quotes with your own working directory and the format is different on a Mac (e.g. "~/PGE").
#### Read the data table
First copy the "unconv_MV.csv" comma delimited file from https://github.com/GeostatsGuy/GeoDataSets to your working directory, then run this command to read the file into a DataFrame object (part of Pandas package).
```python
my_data = pd.read_csv("unconv_MV_v4.csv") # load the comma delimited data file
```
Let's visualize the first several rows of our data stored in a DataFrame so we can make sure we successfully loaded the data file.
```python
my_data.head(n=13) # preview the first n rows of the DataFrame
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Well</th>
<th>Por</th>
<th>Perm</th>
<th>AI</th>
<th>Brittle</th>
<th>TOC</th>
<th>VR</th>
<th>Prod</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>12.08</td>
<td>2.92</td>
<td>2.80</td>
<td>81.40</td>
<td>1.16</td>
<td>2.31</td>
<td>1695.360819</td>
</tr>
<tr>
<th>1</th>
<td>2</td>
<td>12.38</td>
<td>3.53</td>
<td>3.22</td>
<td>46.17</td>
<td>0.89</td>
<td>1.88</td>
<td>3007.096063</td>
</tr>
<tr>
<th>2</th>
<td>3</td>
<td>14.02</td>
<td>2.59</td>
<td>4.01</td>
<td>72.80</td>
<td>0.89</td>
<td>2.72</td>
<td>2531.938259</td>
</tr>
<tr>
<th>3</th>
<td>4</td>
<td>17.67</td>
<td>6.75</td>
<td>2.63</td>
<td>39.81</td>
<td>1.08</td>
<td>1.88</td>
<td>5288.514854</td>
</tr>
<tr>
<th>4</th>
<td>5</td>
<td>17.52</td>
<td>4.57</td>
<td>3.18</td>
<td>10.94</td>
<td>1.51</td>
<td>1.90</td>
<td>2859.469624</td>
</tr>
<tr>
<th>5</th>
<td>6</td>
<td>14.53</td>
<td>4.81</td>
<td>2.69</td>
<td>53.60</td>
<td>0.94</td>
<td>1.67</td>
<td>4017.374438</td>
</tr>
<tr>
<th>6</th>
<td>7</td>
<td>13.49</td>
<td>3.60</td>
<td>2.93</td>
<td>63.71</td>
<td>0.80</td>
<td>1.85</td>
<td>2952.812773</td>
</tr>
<tr>
<th>7</th>
<td>8</td>
<td>11.58</td>
<td>3.03</td>
<td>3.25</td>
<td>53.00</td>
<td>0.69</td>
<td>1.93</td>
<td>2670.933846</td>
</tr>
<tr>
<th>8</th>
<td>9</td>
<td>12.52</td>
<td>2.72</td>
<td>2.43</td>
<td>65.77</td>
<td>0.95</td>
<td>1.98</td>
<td>2474.048178</td>
</tr>
<tr>
<th>9</th>
<td>10</td>
<td>13.25</td>
<td>3.94</td>
<td>3.71</td>
<td>66.20</td>
<td>1.14</td>
<td>2.65</td>
<td>2722.893266</td>
</tr>
<tr>
<th>10</th>
<td>11</td>
<td>15.04</td>
<td>4.39</td>
<td>2.22</td>
<td>61.11</td>
<td>1.08</td>
<td>1.77</td>
<td>3828.247174</td>
</tr>
<tr>
<th>11</th>
<td>12</td>
<td>16.19</td>
<td>6.30</td>
<td>2.29</td>
<td>49.10</td>
<td>1.53</td>
<td>1.86</td>
<td>5095.810104</td>
</tr>
<tr>
<th>12</th>
<td>13</td>
<td>16.82</td>
<td>5.42</td>
<td>2.80</td>
<td>66.65</td>
<td>1.17</td>
<td>1.98</td>
<td>4091.637316</td>
</tr>
</tbody>
</table>
</div>
Let's remove the well index and check the summary statistics.
```python
my_data = my_data.iloc[:,1:] # remove the well index
my_data.describe().transpose() # calculate summary statistics for the data
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>count</th>
<th>mean</th>
<th>std</th>
<th>min</th>
<th>25%</th>
<th>50%</th>
<th>75%</th>
<th>max</th>
</tr>
</thead>
<tbody>
<tr>
<th>Por</th>
<td>200.0</td>
<td>14.991150</td>
<td>2.971176</td>
<td>6.550000</td>
<td>12.912500</td>
<td>15.070000</td>
<td>17.402500</td>
<td>23.550000</td>
</tr>
<tr>
<th>Perm</th>
<td>200.0</td>
<td>4.330750</td>
<td>1.731014</td>
<td>1.130000</td>
<td>3.122500</td>
<td>4.035000</td>
<td>5.287500</td>
<td>9.870000</td>
</tr>
<tr>
<th>AI</th>
<td>200.0</td>
<td>2.968850</td>
<td>0.566885</td>
<td>1.280000</td>
<td>2.547500</td>
<td>2.955000</td>
<td>3.345000</td>
<td>4.630000</td>
</tr>
<tr>
<th>Brittle</th>
<td>200.0</td>
<td>48.161950</td>
<td>14.129455</td>
<td>10.940000</td>
<td>37.755000</td>
<td>49.510000</td>
<td>58.262500</td>
<td>84.330000</td>
</tr>
<tr>
<th>TOC</th>
<td>200.0</td>
<td>0.990450</td>
<td>0.481588</td>
<td>-0.190000</td>
<td>0.617500</td>
<td>1.030000</td>
<td>1.350000</td>
<td>2.180000</td>
</tr>
<tr>
<th>VR</th>
<td>200.0</td>
<td>1.964300</td>
<td>0.300827</td>
<td>0.930000</td>
<td>1.770000</td>
<td>1.960000</td>
<td>2.142500</td>
<td>2.870000</td>
</tr>
<tr>
<th>Prod</th>
<td>200.0</td>
<td>3864.407081</td>
<td>1553.277558</td>
<td>839.822063</td>
<td>2686.227611</td>
<td>3604.303507</td>
<td>4752.637556</td>
<td>8590.384044</td>
</tr>
</tbody>
</table>
</div>
It is good that we checked the summary statistics, because we have some negative values for brittleness and total organic carbon. This is physically impossible; the values must be in error. We know the lowest possible values are 0.0, so we will truncate at 0.0. We use the *_get_numeric_data()* DataFrame member function to get a shallow copy of the data from the DataFrame. Since it is a shallow copy, any changes we make to the copy are made to the data in the original DataFrame. This allows us to apply this simple conditional statement to all the data values in the DataFrame at once.
Let's also make a categorical variable for production, based on a threshold of 4,000 MCFPD.
* high production > 4,000 MCFPD, cprod = 1
* low production <= 4,000 MCFPD, cprod = 0
```python
num = my_data._get_numeric_data() # get shallow copy of the numerical values from the DataFrame
num[num < 0] = 0 # truncate negative values to 0.0
my_data['cProd'] = np.where(my_data['Prod']>=4000, 1, 0) # conditional statement assign a new feature
my_data.describe().transpose() # calculate summary statistics for the data
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>count</th>
<th>mean</th>
<th>std</th>
<th>min</th>
<th>25%</th>
<th>50%</th>
<th>75%</th>
<th>max</th>
</tr>
</thead>
<tbody>
<tr>
<th>Por</th>
<td>200.0</td>
<td>14.991150</td>
<td>2.971176</td>
<td>6.550000</td>
<td>12.912500</td>
<td>15.070000</td>
<td>17.402500</td>
<td>23.550000</td>
</tr>
<tr>
<th>Perm</th>
<td>200.0</td>
<td>4.330750</td>
<td>1.731014</td>
<td>1.130000</td>
<td>3.122500</td>
<td>4.035000</td>
<td>5.287500</td>
<td>9.870000</td>
</tr>
<tr>
<th>AI</th>
<td>200.0</td>
<td>2.968850</td>
<td>0.566885</td>
<td>1.280000</td>
<td>2.547500</td>
<td>2.955000</td>
<td>3.345000</td>
<td>4.630000</td>
</tr>
<tr>
<th>Brittle</th>
<td>200.0</td>
<td>48.161950</td>
<td>14.129455</td>
<td>10.940000</td>
<td>37.755000</td>
<td>49.510000</td>
<td>58.262500</td>
<td>84.330000</td>
</tr>
<tr>
<th>TOC</th>
<td>200.0</td>
<td>0.991950</td>
<td>0.478264</td>
<td>0.000000</td>
<td>0.617500</td>
<td>1.030000</td>
<td>1.350000</td>
<td>2.180000</td>
</tr>
<tr>
<th>VR</th>
<td>200.0</td>
<td>1.964300</td>
<td>0.300827</td>
<td>0.930000</td>
<td>1.770000</td>
<td>1.960000</td>
<td>2.142500</td>
<td>2.870000</td>
</tr>
<tr>
<th>Prod</th>
<td>200.0</td>
<td>3864.407081</td>
<td>1553.277558</td>
<td>839.822063</td>
<td>2686.227611</td>
<td>3604.303507</td>
<td>4752.637556</td>
<td>8590.384044</td>
</tr>
<tr>
<th>cProd</th>
<td>200.0</td>
<td>0.435000</td>
<td>0.497001</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>1.000000</td>
<td>1.000000</td>
</tr>
</tbody>
</table>
</div>
Let's make sure that we have the new categorical feature for production.
```python
my_data.head() # preview the first n rows of the updated DataFrame
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Por</th>
<th>Perm</th>
<th>AI</th>
<th>Brittle</th>
<th>TOC</th>
<th>VR</th>
<th>Prod</th>
<th>cProd</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>12.08</td>
<td>2.92</td>
<td>2.80</td>
<td>81.40</td>
<td>1.16</td>
<td>2.31</td>
<td>1695.360819</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>12.38</td>
<td>3.53</td>
<td>3.22</td>
<td>46.17</td>
<td>0.89</td>
<td>1.88</td>
<td>3007.096063</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>14.02</td>
<td>2.59</td>
<td>4.01</td>
<td>72.80</td>
<td>0.89</td>
<td>2.72</td>
<td>2531.938259</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>17.67</td>
<td>6.75</td>
<td>2.63</td>
<td>39.81</td>
<td>1.08</td>
<td>1.88</td>
<td>5288.514854</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>17.52</td>
<td>4.57</td>
<td>3.18</td>
<td>10.94</td>
<td>1.51</td>
<td>1.90</td>
<td>2859.469624</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
This dataset has variables from 200 unconventional wells including well average porosity, log transform of permeability (to linearize the relationships with other variables), acoustic impedance (kg/m2s*10^6), brittleness ratio (%), total organic carbon (%), vitrinite reflectance (%), and initial production 90 day average (MCFPD). Note, the dataset is synthetic.
#### Calculate the correlation matrix
For multivariate analysis it is a good idea to check the correlation matrix. We can calculate it and view it in the console with these commands.
```python
corr_matrix = np.corrcoef(my_data.iloc[:,:7], rowvar = False) # correlation matrix without the categorical value
print(np.around(corr_matrix,2)) # print the correlation matrix to 2 decimals
```
[[ 1. 0.76 -0.46 -0.22 0.71 0.11 0.88]
[ 0.76 1. -0.24 -0.12 0.47 0.05 0.71]
[-0.46 -0.24 1. 0.13 -0.53 0.5 -0.37]
[-0.22 -0.12 0.13 1. -0.21 0.32 -0.02]
[ 0.71 0.47 -0.53 -0.21 1. 0.3 0.64]
[ 0.11 0.05 0.5 0.32 0.3 1. 0.23]
[ 0.88 0.71 -0.37 -0.02 0.64 0.23 1. ]]
Note the 1.0 diagonal resulting from the correlation of each variable with themselves.
Let's use our function declared above to make a graphical correlation matrix visualization. This may improve our ability to spot relationships between the features. It relies on the built-in correlation matrix method of pandas DataFrames and Matplotlib for plotting.
```python
plot_corr(my_data.iloc[:,:7],10) # using our correlation matrix visualization function
plt.show()
```
#### Working with Only Two Features
Let's simplify the problem to 2 features, Porosity and Brittleness, to predict the production category. By working with only 2 features, it is very easy to visualize the segmentation of the feature space (it is only 2D and can be shown completely on a single plot).
```python
my_data_subset = my_data.iloc[:,[0,3,7]] # extract just Por, Brittle and the production category cProd
X_train, X_test, y_train, y_test = train_test_split(my_data_subset.iloc[:,[0,1]], my_data_subset.iloc[:,2], test_size=0.25, random_state=73073)
y_train = pd.DataFrame({'cprod':y_train.values})
y_test = pd.DataFrame({'cprod':y_test.values})
```
#### Set the Min and Max for Plotting
x1 and x2 are the predictor features
* x1 on x axis and x2 on the y axis for the plots below
y is the response feature
* used for the color bar
```python
x1min = 5.0; x1max = 25.0
x2min = 0.0; x2max = 100.0
ymin = 0.0; ymax = 9000.0
```
Let's first check the univariate statistics of Porosity, Brittleness and Production.
```python
X_train.describe().transpose() # calculate summary statistics for the data
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>count</th>
<th>mean</th>
<th>std</th>
<th>min</th>
<th>25%</th>
<th>50%</th>
<th>75%</th>
<th>max</th>
</tr>
</thead>
<tbody>
<tr>
<th>Por</th>
<td>150.0</td>
<td>15.005267</td>
<td>2.971274</td>
<td>6.55</td>
<td>12.8975</td>
<td>15.055</td>
<td>17.500</td>
<td>23.55</td>
</tr>
<tr>
<th>Brittle</th>
<td>150.0</td>
<td>47.857067</td>
<td>13.886701</td>
<td>10.94</td>
<td>37.8400</td>
<td>49.150</td>
<td>57.985</td>
<td>81.40</td>
</tr>
</tbody>
</table>
</div>
```python
X_test.describe().transpose() # calculate summary statistics for the data
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>count</th>
<th>mean</th>
<th>std</th>
<th>min</th>
<th>25%</th>
<th>50%</th>
<th>75%</th>
<th>max</th>
</tr>
</thead>
<tbody>
<tr>
<th>Por</th>
<td>50.0</td>
<td>14.9488</td>
<td>3.00064</td>
<td>7.38</td>
<td>13.3525</td>
<td>15.195</td>
<td>16.9325</td>
<td>20.96</td>
</tr>
<tr>
<th>Brittle</th>
<td>50.0</td>
<td>49.0766</td>
<td>14.94183</td>
<td>15.68</td>
<td>37.3550</td>
<td>52.545</td>
<td>60.0575</td>
<td>84.33</td>
</tr>
</tbody>
</table>
</div>
```python
y_train.describe()[:2] # calculate summary statistics for the data
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>cprod</th>
</tr>
</thead>
<tbody>
<tr>
<th>count</th>
<td>150.000000</td>
</tr>
<tr>
<th>mean</th>
<td>0.433333</td>
</tr>
</tbody>
</table>
</div>
```python
y_test.describe()[:2] # calculate summary statistics for the data
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>cprod</th>
</tr>
</thead>
<tbody>
<tr>
<th>count</th>
<td>50.00</td>
</tr>
<tr>
<th>mean</th>
<td>0.44</td>
</tr>
</tbody>
</table>
</div>
Let's check the univariate distributions of Porosity, Brittleness and Production.
```python
plt.subplot(231)
plt.hist(X_train["Por"], alpha = 0.2, color = 'red', edgecolor = 'black', bins=20)
plt.title('Porosity Training Data (%)')
plt.subplot(232)
plt.hist(X_train["Brittle"], alpha = 0.2, color = 'red', edgecolor = 'black', bins=20)
plt.title('Britteness Training Data (%)')
plt.subplot(233)
plt.hist(y_train['cprod'], alpha = 0.2, color = 'red', edgecolor = 'black', bins=20)
plt.title('Production Training Data (MCFPD)')
plt.subplot(234)
plt.hist(X_test["Por"], alpha = 0.2, color = 'red', edgecolor = 'black', bins=20)
plt.title('Porosity Testing Data (%)')
plt.subplot(235)
plt.hist(X_test["Brittle"], alpha = 0.2, color = 'red', edgecolor = 'black', bins=20)
plt.title('Britteness Testing Data (%)')
plt.subplot(236)
plt.hist(y_test['cprod'], alpha = 0.2, color = 'red', edgecolor = 'black', bins=20)
plt.title('Production Testing Data (MCFPD)')
plt.subplots_adjust(left=0.0, bottom=0.0, right=3.0, top=3.2, wspace=0.2, hspace=0.2)
plt.show()
```
The distributions are well behaved, we cannot observe obvious gaps nor truncations. Let's look at a scatter plot of Porosity vs. Brittleness with points colored by Production.
```python
plt.subplot(121)
im = plt.scatter(X_train["Por"],X_train["Brittle"],s=None, c=y_train['cprod'], marker=None, cmap=None, norm=None, vmin=None, vmax=None, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title('Training Production vs. Brittleness and Porosity'); plt.xlabel('Porosity (%)'); plt.ylabel('Brittleness (%)')
cbar = plt.colorbar(im, orientation = 'vertical')
cbar.set_label("Production", rotation=270, labelpad=20)
plt.subplot(122)
im = plt.scatter(X_test["Por"],X_test["Brittle"],s=None, c=y_test['cprod'], marker=None, cmap=None, norm=None, vmin=None, vmax=None, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title('Testing Production vs. Brittleness and Porosity'); plt.xlabel('Porosity (%)'); plt.ylabel('Brittleness (%)')
cbar = plt.colorbar(im, orientation = 'vertical')
cbar.set_label("Production", rotation=270, labelpad=20)
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
```
#### Instantiate, Fit and Predict with Gaussian Naive Bayes
Let's build a Gaussian naive Bayes model.
We select the Gaussian model as it reduces the inference problem to estimating a conditional mean and variance for each feature within each category.
Recall we can set a prior probability of each response category
* We will use the proportions from the training dataset.
* 0.43 for high production (the mean of the binary dataset is the proportion of 1's)
* 0.57 for low production (1 - proportion of high production)
Another option would be to assume a naive, uniform prior, substitute the following:
```python
priors = (0.5,0.5) # naive prior
```
```python
priors = (0.57,0.43) # set the prior probabilities of low and high production to the training proportions
```
Let's build our Gaussian naive Bayes model.
* instantiate it with the priors
* train with the training data, we use the standard fit function
```python
gnb = GaussianNB(priors = priors) # instantiate the Gaussian naive Bayes model
GaussianNB_fit = gnb.fit(X_train,y_train['cprod'].values) # train with the training data
```
Let's predict with our new model over the testing dataset.
* test by predicting with the testing data, we use the standard prediction function
```python
y_pred = GaussianNB_fit.predict(np.c_[X_test['Por'].values,X_test['Brittle'].values]) # predict over the testing data
```
#### Model Checking
Let's check our model. With scikit learn we have great built in tools to evaluate our classification model. Let's try the classification report first.
```python
classification_report(truth, predicted) # build a classification report to check our classification model
```
We get a table with summary metrics for model performance.
```python
from sklearn.metrics import classification_report
print(classification_report(y_test['cprod'].values, y_pred, labels=[0,1]))
```
precision recall f1-score support
0 0.96 0.93 0.95 28
1 0.91 0.95 0.93 22
accuracy 0.94 50
macro avg 0.94 0.94 0.94 50
weighted avg 0.94 0.94 0.94 50
The metrics include:
* recall - the ratio of true positives divided by all cases of the category in the testing dataset
* precision - the ratio of true positives divided by all positives (true positives + false positives)
* f1-score - the harmonic mean of recall and precision
* support - the number of samples of each category in the testing data
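These metrics can also be recovered by hand from the counts in the confusion matrix shown below; a quick sketch for the high-production class:
```python
tp, fp, fn = 21, 2, 1                             # true/false positives and false negatives for high production
precision = tp/(tp + fp)                          # fraction of predicted high-production wells that really are high production
recall = tp/(tp + fn)                             # fraction of true high-production wells that were found
f1 = 2*precision*recall/(precision + recall)      # harmonic mean of precision and recall
print(round(precision,2), round(recall,2), round(f1,2))
```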
I also like to look at the confusion matrix.
* the x axis is the prediction - category 0 or 1
* the y axis is the truth - category 0 or 1
```python
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test['cprod'].values, y_pred))
```
[[26 2]
[ 1 21]]
From above we can observe:
* 26 low production wells classified correctly as low production
* 1 high production well misclassified as low production
* 2 low production wells misclassified as high production
* 21 high production wells classified correctly as high production
#### Visualizing the Classification Model
Let's visualize the model over the entire feature space.
* here's the training data with the classification over the full range of predictor features.
* blue for low production and yellow for high production
Note: naive Bayes provides the posterior probability of high and low production
* the classifications below are based on maximum a posteriori (MAP) selection, i.e., selecting the category with the highest posterior probability
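A quick way to see the link between the two: the hard labels returned by `predict` are the argmax of the `predict_proba` output across the two categories.
```python
proba = GaussianNB_fit.predict_proba(np.c_[X_test['Por'].values,X_test['Brittle'].values])  # columns ordered low, high
labels_from_proba = np.argmax(proba, axis=1)      # pick the category with the highest posterior probability
print(np.array_equal(labels_from_proba, y_pred))  # matches the predict() output used above
```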
Let's visualize the classification model (blue - low production, yellow - high production) over the predictor feature space with the training data plotted (white - low production, black - high production).
```python
visualize_model(GaussianNB_fit,X_train["Por"],x1min,x1max,X_train["Brittle"],x2min,x2max,y_train['cprod'],0.0,1.0,'Training Data and Naive Bayes Model')
```
We could also visualize the posterior probabilities of low and high production.
* here's the posterior probability of low and high production over the predictor feature space
```python
visualize_model_prob(GaussianNB_fit,X_train["Por"],x1min,x1max,X_train["Brittle"],x2min,x2max,y_train['cprod'],'Training Data and Naive Bayes Model')
```
Finally, let's look at the classification model over the predictor feature space (blue - low production, yellow - high production) with the testing data plotted (white - low production, black - high production).
```python
visualize_model(GaussianNB_fit,X_test["Por"],x1min,x1max,X_test["Brittle"],x2min,x2max,y_test['cprod'],0.0,1.0,'Testing Data and Naive Bayes Model')
```
We have a reasonable model to predict well production from porosity and brittleness for an unconventional reservoir.
#### Comments
This was a basic demonstration of naive Bayes for prediction. A lot more could be done, for example, we could have applied variants such as:
* multinomial naive Bayes
* complement naive Bayes
* Bernoulli naive Bayes
We could have worked with more predictor features, but for learning the method, it is nice to be able to visualize the entire classification in one plot!
If you struggled with the basic Python used here check out my other basic demonstrations for DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy.
I hope this was helpful,
*Michael*
#### The Author:
### Michael Pyrcz, Associate Professor, University of Texas at Austin
*Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*
With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development.
For more about Michael check out these links:
#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#### Want to Work Together?
I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate.
* Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you!
* Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems!
* I can be reached at mpyrcz@austin.utexas.edu.
I'm always happy to discuss,
*Michael*
Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
I have other demonstrations on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy.
I hope this was helpful,
*Michael*
Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
```python
```
|
a970d8e9f4775c32871436c2b5e5d49e087403fc
| 388,942 |
ipynb
|
Jupyter Notebook
|
SubsurfaceDataAnalytics_NaiveBayes.ipynb
|
AndrewAnnex/PythonNumericalDemos
|
11a7dec09bf08527b358fa95119811b6f73023b5
|
[
"MIT"
] | 403 |
2017-10-15T02:07:38.000Z
|
2022-03-30T15:27:14.000Z
|
SubsurfaceDataAnalytics_NaiveBayes.ipynb
|
AndrewAnnex/PythonNumericalDemos
|
11a7dec09bf08527b358fa95119811b6f73023b5
|
[
"MIT"
] | 4 |
2019-08-21T10:35:09.000Z
|
2021-02-04T04:57:13.000Z
|
SubsurfaceDataAnalytics_NaiveBayes.ipynb
|
AndrewAnnex/PythonNumericalDemos
|
11a7dec09bf08527b358fa95119811b6f73023b5
|
[
"MIT"
] | 276 |
2018-06-27T11:20:30.000Z
|
2022-03-25T16:04:24.000Z
| 210.808672 | 121,480 | 0.885566 | true | 12,256 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.712232 | 0.737158 | 0.525028 |
__label__eng_Latn
| 0.81897 | 0.058145 |
# Solar-powered pump
A solar-powered pump transports water from a source underground up to a tank. The PV solar panel converts solar energy into electrical power with an efficiency of $\eta_{\text{PV}} = 0.08$ and stores the electrical energy in a battery, which powers the pump continuously. The 24-hour annual average solar flux is SF = 225 W/m$^2$.
The isentropic efficiency of the pump is 0.58, and the pump performance is given as pressure head versus flow rate:
| Head (ft) | Flow rate (gpm) |
| ------------- | --------------- |
| 60 | 0 |
| 60 | 2 |
| 57 | 4 |
| 52 | 6 |
| 44 | 8 |
| 33 | 10 |
| 18 | 12 |
| 0 | 14 |
The water source is 20 ft below ground level (where the tank is located), so the pump must provide a pressure rise equal to the 20 ft of water. There is also head loss due to friction in the pipe; the total head the pump must supply (static lift plus friction loss) between the inlet and outlet is:
\begin{equation}
\Delta P_{\text{pipe}} = 20 \left[ \text{ft H}_2 \text{O} \right] + 0.085 \left[ \frac{ \text{ft H}_2 \text{O} }{\text{gpm}^2} \right] \dot{V}^2
\end{equation}
where $\dot{V}$ is the volumetric flow rate.
We can assume water is an incompressible fluid here.
**Problem**:
- Determine the flow rate delivered by the pump
- Determine the area of the solar panel required by the system
```python
import numpy as np
from scipy import optimize
import cantera as ct
from pint import UnitRegistry
ureg = UnitRegistry()
Q_ = ureg.Quantity
```
```python
import matplotlib.pyplot as plt
%matplotlib inline
# these are mostly for making the saved figures nicer
import matplotlib_inline.backend_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('pdf', 'png')
plt.rcParams['figure.dpi']= 150
plt.rcParams['savefig.dpi'] = 150
```
## Determine the flow rate of water delivered
First, specify the input information:
```python
solar_flux = Q_(225, 'W/m^2')
efficiency_pump = 0.58
efficiency_solar = 0.08
```
Next, to model the pump, we can fit the performance data to a third-order polynomial:
\begin{equation}
\Delta P_{\text{pump}} = a_0 + a_1 \dot{V} + a_2 \dot{V}^2 + a_3 \dot{V}^3
\end{equation}
We can find this fit using the NumPy function `polyfit()`, and place the result in a `poly1d` object so we can easily evaluate the polynomial:
```python
# Construct NumPy arrays with the pressure head and
# flow rate from the table
head = Q_(np.asarray([60, 60, 57, 52, 44, 33, 18, 0]), 'ft')
flow_rate = Q_(np.asarray([0, 2, 4, 6, 8, 10, 12, 14]), 'gallon per minute')
# Perform the fit with degree 3, and place in a poly1d object
pump_fit = np.poly1d(np.polyfit(flow_rate.magnitude, head.magnitude, 3))
fig, ax = plt.subplots(figsize=(5, 3))
# Plot the measurements
ax.plot(flow_rate.magnitude, head.magnitude, 'o')
# Generate a dense sample of flow rate values, then plot the polynomial fit
flow_rate_dense = np.linspace(flow_rate[0].magnitude, flow_rate[-1].magnitude, 100)
ax.plot(flow_rate_dense, pump_fit(flow_rate_dense))
plt.xlabel(f'Flow rate ({flow_rate.units})')
plt.ylabel(f'Pressure head ({head.units})')
plt.grid(True)
plt.legend(['Pump data', 'Best fit curve'])
fig.tight_layout()
plt.show()
```
To find the volumetric flow rate delivered by the pump, we need to find the condition where the pressure head of the pump matches the pressure drop of the pipe between the inlet and outlet. We can do this by setting the two $\Delta P$ expressions equal to each other and then finding the root of this equation:
\begin{equation}
\Delta P_{\text{pump}} \left(\dot{V}\right) = \Delta P_{\text{pipe}} \left(\dot{V}\right)
\end{equation}
To find the root, we can use the [SciPy function `root_scalar`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.root_scalar.html), part of its `optimize` module. This requires us to provide a function that rearranges the above expression to equal zero and returns that value. In other words, the function should return:
\begin{equation}
\Delta P_{\text{pump}} \left(\dot{V}\right) - \Delta P_{\text{pipe}} \left(\dot{V}\right)
\end{equation}
which will equal zero when the root is found.
```python
def pump_performance(volume_flow_rate, pump_fit):
'''Find root of pump performance equations'''
deltaP_pump = pump_fit(volume_flow_rate) * ureg.ft
volume_flow_rate *= ureg('gal/min')
deltaP_pipe = (
Q_(20, 'ft') +
Q_(0.085, 'ft/((gal/min)**2)') * volume_flow_rate**2
)
return (deltaP_pump - deltaP_pipe).to('ft').magnitude
```
Then, we call the `root_scalar` function, giving it the function we just created along with a range of possible values and the extra argument our function needs:
```python
sol = optimize.root_scalar(
pump_performance, bracket=[0, 14],
args=(pump_fit,),
)
volume_flow_rate = Q_(sol.root, 'gal/min')
print(f'Pump flow rate: {volume_flow_rate: .2f}')
```
Pump flow rate: 10.51 gallon / minute
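As a quick sanity check (not part of the original solution, and reusing only objects already defined above), we can confirm that at this flow rate the pump head matches the pipe head loss:
```python
# Sanity check: at the operating point, the pump head should equal the pipe head loss
head_pump = Q_(pump_fit(volume_flow_rate.magnitude), 'ft')
head_pipe = Q_(20, 'ft') + Q_(0.085, 'ft/((gal/min)**2)') * volume_flow_rate**2
print(f'Pump head: {head_pump: .2f}, pipe head loss: {head_pipe.to("ft"): .2f}')
```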
## Determine area of solar panels required
To determine the area of solar panels required, we first need to calculate the work needed for the pump:
$$
\dot{W}_p = \frac{\dot{m} v \Delta P_{\text{pump}} }{\eta_p} = \frac{\dot{V} \Delta P_{\text{pump}} }{\eta_p}
$$
First, however, we'll need to convert the pressure head (given in feet of water) to an actual pressure difference, using $h = \frac{P}{\rho g}$.
```python
pressure_head = Q_(pump_fit(volume_flow_rate.magnitude), 'ft')
water = ct.Water()
pressure_drop = pressure_head * Q_(water.density, 'kg/m^3') * ureg.gravity
work_pump = (
volume_flow_rate * pressure_drop.to('Pa') /
efficiency_pump
)
print(f'Pump work: {work_pump.to(ureg.watt): .2f}')
```
Pump work: 100.13 watt
Finally, we can calculate the area required by relating the power generated by the solar panels to the rate of work required by the pump:
$$
\text{SF} \, A \eta_{\text{PV}} = \dot{W}_p
$$
```python
area_solar = work_pump / (solar_flux * efficiency_solar)
print(f'Area of solar panels required: {area_solar.to("m^2"): .2f}')
```
Area of solar panels required: 5.56 meter ** 2
|
5fa432e7922744a64e48d8af231d87d6853212d9
| 75,861 |
ipynb
|
Jupyter Notebook
|
book/content/second-law/solar-powered-pump.ipynb
|
kyleniemeyer/computational-thermo
|
3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | 13 |
2020-04-01T05:52:06.000Z
|
2022-03-27T20:25:59.000Z
|
book/content/second-law/solar-powered-pump.ipynb
|
kyleniemeyer/computational-thermo
|
3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | 1 |
2020-04-28T04:02:05.000Z
|
2020-04-29T17:49:52.000Z
|
book/content/second-law/solar-powered-pump.ipynb
|
kyleniemeyer/computational-thermo
|
3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | 6 |
2020-04-03T14:52:24.000Z
|
2022-03-29T02:29:43.000Z
| 252.0299 | 44,728 | 0.916123 | true | 1,754 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.914901 | 0.824462 | 0.754301 |
__label__eng_Latn
| 0.978328 | 0.590826 |
# Quiz 1
# Machine Learning 2018-1
A logistic regression model is a statistical classification method that uses a generalized linear regression model to estimate $P(C=1 | \mathbf{x})$, the probability of the sample $\mathbf{x}\in\mathbb{R}^2$ belonging to class $C_1$.
\begin{equation}
y=P(C=1|\mathbf{x},\mathbf{w})=\sigma(w_0+w_1x_0 + w_2x_1)
\end{equation}
where
\begin{equation}
\sigma(x)=\frac{1}{1+e^{-x}}
\end{equation}
### 1.
Write a function that implements a logistic regression model.
```python
import numpy as np
def ro(x):
return 1 / (1 + np.exp(-x))
def f_1(w, x):
'''
w: weight vector with shape (3,)
x: input sample with shape (1,2)
returns: P(C=1|x,w)
'''
### Your code here
y = ro(w[0] + w[1]*x[0] + w[2]*x[1])
return y
print(f_1(np.array([0, 1, 2]), np.array([1, 2])))
lst = []
for x1 in np.linspace(-2, 1.5, 4):
for x2 in np.linspace(-2, 1.5, 4):
lst.append([x1, x2])
X = np.array(lst)
print(X[0, :])
```
0.9933071490757153
[-2. -2.]
```python
```
### 2.
Assume that the cost of a false positive (predicting class $C_1$ when the real class is $C_0$) is $L_0$ and the cost of a false negative is $L_1$. Write a function that calculates the risk of classifying a sample $\mathbf{x}$ in class $y \in \{0,1\}$.
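Concretely, the conditional risks of the two possible predictions (the standard definition, spelled out here for clarity) are
\begin{equation}
R(y=0|\mathbf{x}) = L_1 \, P(C=1|\mathbf{x},\mathbf{w}), \qquad
R(y=1|\mathbf{x}) = L_0 \, \left(1 - P(C=1|\mathbf{x},\mathbf{w})\right)
\end{equation}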
```python
def f_2(w, L, x, y):
'''
w: weight vector with shape (3,)
L: loss vector with shape (2,)
x: input sample with shape (2,1)
y: class value {0, 1}
returns: R(y|x,w)
'''
### Your code here
p = f_1(w, x)
if y == 0:
R = L[1]*p
else:
R = L[0]*(1-p)
return R
W1 = np.array([0, 1, 2])
L1 = np.array([0.9, 2, 0.3])
X = np.array([-2, -2])
y = 0
f_2(W1, L1, X, y)
```
0.004945246313269549
### 3.
Write a function that implements a classifier that makes the prediction that minimizes the risk.
```python
def f_3(w, L, x):
'''
w: weight vector with shape (3,)
L: loss vector with shape (2,)
x: input sample with shape (2,1)
returns: predicted class {0, 1}
'''
### Your code here
r0 = f_2(w, L, x, 0)
r1 = f_2(w, L, x, 1)
# Choose the minimum risk
    if r0 <= r1:
        return 0
    else:
        return 1
```
### 4.
Write a function that implements a classifier that makes the prediction that minimizes the risk, but that can also reject the sample. The cost of rejection is $L_2$.
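With the rejection option, the minimum-risk decision rule compares the two risks above against the fixed rejection cost and picks whichever action is cheapest:
\begin{equation}
\hat{y} = \underset{y \in \{0,1,2\}}{\arg\min} \left\{ R(0|\mathbf{x}),\; R(1|\mathbf{x}),\; L_2 \right\}
\end{equation}
where $\hat{y}=2$ denotes rejecting the sample.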
```python
def f_4(w, L, x):
'''
w: weight vector with shape (3,)
L: loss vector with shape (3,)
x: input sample with shape (2,1)
returns: predicted class {0, 1, 2}. Rejection is 2.
'''
### Your code here
rejectRisk = L[2]
risk0 = f_2(w, L, x, 0)
risk1 = f_2(w, L, x, 1)
r = [risk0, risk1, rejectRisk]
    # Choose the action with the minimum risk: 0, 1, or 2 (reject)
    return int(np.argmin(r))
# Test the function
C2= [ 0., 0., 2., 1., 0., 0., 2., 1., 0., 2., 1., 1., 0., 2., 1., 1.]
for i in range(len(lst)):
print(C2[i], "=", f_4(W1, L1, X[i, :]))
```
0.0 = 0
0.0 = 0
2.0 = 2
1.0 = 1
0.0 = 0
0.0 = 0
2.0 = 2
1.0 = 1
0.0 = 0
2.0 = 2
1.0 = 1
1.0 = 1
0.0 = 0
2.0 = 2
1.0 = 1
1.0 = 1
### Grader
Run the following cell to grade your quiz.
```python
def compare(val1, val2, error):
if abs(val1 - val2) > error:
return False
return True
lst = []
for x1 in np.linspace(-2, 1.5, 4):
for x2 in np.linspace(-2, 1.5, 4):
lst.append([x1, x2])
X = np.array(lst)
W1 = np.array([0, 1, 2])
L1 = np.array([0.9, 2, 0.3])
W2 = np.array([-0.3, 1, -0.5])
Y1= [ 0.00247262, 0.02492443, 0.20860853, 0.73105858, 0.00789708, 0.07585818,
0.45842952, 0.89721598, 0.02492443, 0.20860853, 0.73105858, 0.9655548,
0.07585818, 0.45842952, 0.89721598, 0.98901306]
R10= [ 0.00494525, 0.04984885, 0.41721705, 1.46211716, 0.01579417, 0.15171636,
0.91685903, 1.79443195, 0.04984885, 0.41721705, 1.46211716, 1.93110961,
0.15171636, 0.91685903, 1.79443195, 1.97802611]
R11= [ 0.89777464, 0.87756802, 0.71225233, 0.24204728, 0.89289262, 0.83172764,
0.48741343, 0.09250562, 0.87756802, 0.71225233, 0.24204728, 0.03100068,
0.83172764, 0.48741343, 0.09250562, 0.00988825]
C1= [ 0., 0., 0., 1., 0., 0., 1., 1., 0., 0., 1., 1., 0., 1., 1., 1.]
C2= [ 0., 0., 2., 1., 0., 0., 2., 1., 0., 2., 1., 1., 0., 2., 1., 1.]
Y2= [ 0.21416502, 0.13200647, 0.07822826, 0.04521747, 0.46671596, 0.32812743,
0.21416502, 0.13200647, 0.73756162, 0.61063923, 0.46671596, 0.32812743,
0.90024951, 0.83433491, 0.73756162, 0.61063923]
def test1():
for i in range(len(lst)):
if not compare(Y1[i], f_1(W1, X[i, :]), 0.0001):
return False
if not compare(Y2[i], f_1(W2, X[i, :]), 0.0001):
return False
return True
def test2():
for i in range(len(lst)):
if not compare(R10[i], f_2(W1, L1, X[i, :], 0), 0.0001):
return False
if not compare(R11[i], f_2(W1, L1, X[i, :], 1), 0.0001):
return False
return True
def test3():
for i in range(len(lst)):
if not compare(C1[i], f_3(W1, L1[:2], X[i, :]), 0.0001):
return False
return True
def test4():
for i in range(len(lst)):
if not compare(C2[i], f_4(W1, L1, X[i, :]), 0.0001):
return False
return True
def evaluate():
score = 0
for test in [test1, test2, test3, test4]:
if test():
score += 1
return score
print ("Score: ", evaluate(), "/ 4")
```
Score: 4 / 4
```python
```
|
cf488e63024efda50e49f69c13e32fee3048b438
| 9,818 |
ipynb
|
Jupyter Notebook
|
quizzes/quiz1.ipynb
|
ingJSNA/machineLearning2018
|
63102dceb06c97d62b0ebcaee19363b72d3fdf13
|
[
"MIT"
] | null | null | null |
quizzes/quiz1.ipynb
|
ingJSNA/machineLearning2018
|
63102dceb06c97d62b0ebcaee19363b72d3fdf13
|
[
"MIT"
] | null | null | null |
quizzes/quiz1.ipynb
|
ingJSNA/machineLearning2018
|
63102dceb06c97d62b0ebcaee19363b72d3fdf13
|
[
"MIT"
] | null | null | null | 26.89863 | 263 | 0.438684 | true | 2,456 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.924142 | 0.828939 | 0.766057 |
__label__eng_Latn
| 0.720464 | 0.618139 |
# Least Squares Method (Método dos Mínimos Quadrados, MMQ)
## License
All content can be freely used and adapted under the terms of the
[Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
## Imports
Put **all** the `import` statements in the cell below. Don't forget `%matplotlib inline` so the plots appear in the notebook.
```python
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
```
## IMPORTANT
Now that you know defensive programming techniques, I expect all of the code you write below to use them. Create docstrings for your functions, check the inputs (whenever possible), and check the outputs. **Don't forget the comments**.
## Fabricating test data
To know whether our code is working, we need to fabricate some data with known parameters. We will generate data that follow the equation of a straight line:
$$
d_i = a x_i + b
$$
**IMPORTANT**: I will use the numpy library to generate the data below.
You **may not** use numpy to compute your solution.
Use of numpy must be earned with ~~blood~~effort.
The code below serves as an example of what you will be able to do with Python in your own work (outside class).
```python
a = 10
b = 50
N = 50
# Vou utilizar a função linspace do numpy para facilitar a vida
# Essa função cria N valores igualmente espaçados entre dois números (5 e 50)
x = np.linspace(5, 50, N)
# Agora podemos usar os valores de x, a e b acima para simular dados observados
dados_obs = a*x + b
# Vamos adicionar erro aleatório aos dados para ficar mais interessante
# O erro seguirá uma distribuição normal com os seguintes parâmetros
media_erro = 0
std_erro = 20
# A linha abaixo faz com que os valores aleatórios não sejam verdadeiramente aleatórios
# veja https://en.wikipedia.org/wiki/Pseudorandom_number_generator
np.random.seed(42)
# Gera a lista de numéros aleatórios
erro = np.random.normal(loc=media_erro, scale=std_erro, size=len(dados_obs))
# Agora podemos adicionar o erro aos dados observados
dados_obs += erro
```
Use the cell below to generate a plot of your data as black circles (`ok`).
```python
# Cria um gráfico.
plt.figure()
# Plota os valores das coordenadas x, y em bolas pretas.
plt.plot(x, dados_obs, "ok")
# Coloca título no gráfico.
plt.title("Gráfico 1")
# Dá título ao eixo x.
plt.xlabel("Valores")
# Dá título ao eixo y.
plt.ylabel("Dados Observados")
```
## Matrix form of the line equation and the Jacobian matrix
We have one line equation for each value of $x_i$:
$$
\begin{align}
d_1 &= ax_1 + b \\
d_2 &= ax_2 + b \\
\vdots \\
d_N &= ax_N + b \\
\end{align}
$$
This system can be written in matrix form, with the parameters being $a$ and $b$:
$$
\begin{bmatrix}
d_1 \\ d_2 \\ \vdots \\ d_N
\end{bmatrix} =
\begin{bmatrix}
x_1 & 1 \\
x_2 & 1 \\
\vdots & \vdots \\
x_N & 1
\end{bmatrix}
\begin{bmatrix}
a \\ b
\end{bmatrix}
$$
$$
\bar{d} = \bar{\bar{A}}\bar{p}
$$
## Task
Write a function called `jacobiana` that computes and returns the Jacobian matrix ($\bar{\bar{A}}$).
**Something to think about**: what should this function receive as an argument? (**Hint**: it only needs 1)
```python
# Criamos uma função que retorna a matriz Jacobiana
def jacobiana(x):
"""
Cria uma matriz Jacobiana:
Ex: [[x1, 1], [x2, 1]]
"""
# Criamos esta lista vazia para fazermos a matriz Jacobiana
jacobiana = []
# Este loop adiciona linhas no formato x,1 de x1 até xN.
for i in range(N):
jacobiana.append([x[i], 1])
return jacobiana
```
```python
# Atribui a uma variavél a matriz jacobiana
jac = jacobiana(x)
```
### Expected result
The cell below tests your Jacobian against one produced by numpy.
```python
assert np.allclose(jacobiana(x), np.transpose([x, np.ones_like(x)]))
```
## Task
Compute predicted data for the parameter vector defined below **using the matrix form of the equation**. Store the result in a variable called `preditos`.
Make a plot of the observed data (generated above) as black points and the predicted data you computed as a red line.
**Hint**: use the functions you created in the previous class.
```python
p = [5, 15]
```
```python
# Cria uma funçao que multiplica matriz por vetor.
def vmult(m, v):
# Docstring
"""
Multiplica uma matriz por um vetor
"""
assert len(m[0]) == len(v), 'Número de colunas da matriz diferente do número de linhas do vetor.'
# Cria uma lista vazia.
U = []
# Faz um loop que percorre as linhas da matriz.
for i in range(len(m)):
# Cria uma variavel que recebe o valor zero.
soma = 0
# Faz um loop que percorre as colunas da matriz.
for k in range(len(m[0])):
# Realiza a multiplicaçao de matriz por vetor.
soma = soma + (m[i][k] * v[k])
# Adiciona os resultados a lista nova gerada.
U.append(soma)
return U
```
```python
# Atribui a uma variável a multiplicação da matriz transposta pelo vetor.
preditos = vmult(jac, p)
```
```python
# Cria um gráfico.
plt.figure()
# Plota os valores das coordenadas x, y em bolas pretas.
plt.plot(x, dados_obs, "ok")
# Plota os valores das coordenadas x, y em uma linha vermelha.
plt.plot(x, preditos, "-r")
# Coloca título no gráfico.
plt.title("Gráfico 2")
```
### Expected result
The cell below tests your results against one calculated with numpy.
```python
assert np.allclose(preditos, np.dot(jacobiana(x), p))
```
The plot should look like the one below:
## System of normal equations
The least-squares solution is the vector $\bar{p}$ that solves the linear system below (called the system of normal equations):
$$
\bar{\bar{A}}^T\bar{\bar{A}}\bar{p} = \bar{\bar{A}}^T\bar{d}^o
$$
To solve this system, we first need to compute the system matrix $\bar{\bar{A}}^T\bar{\bar{A}}$ and the right-hand-side vector $\bar{\bar{A}}^T\bar{d}^o$.
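As a reminder of where this system comes from: the least-squares solution minimizes the sum of squared residuals, and setting its gradient with respect to $\bar{p}$ to zero yields exactly the normal equations:
$$
\phi(\bar{p}) = \left\|\bar{d}^o - \bar{\bar{A}}\bar{p}\right\|^2
\quad\Rightarrow\quad
\bar{\bar{A}}^T\left(\bar{d}^o - \bar{\bar{A}}\bar{p}\right) = \bar{0}
\quad\Rightarrow\quad
\bar{\bar{A}}^T\bar{\bar{A}}\bar{p} = \bar{\bar{A}}^T\bar{d}^o
$$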
## Task
Write a function called `eqnormais_sistema` that computes and returns the matrix $\bar{\bar{A}}^T\bar{\bar{A}}$ given the Jacobian matrix.
Use the functions created in the previous class.
**Hint**: It is possible to know how many rows and columns the system must have. Check that your result has that number.
```python
def transpm(m):#matriz m
#docstring
"""
Pega uma matriz qualquer e retorna sua transposta.
Exemplos:
M = [[1, 2], [3, 4]]
transpm(M) = [[1, 3], [2, 4]]
"""
for i in range(1, len(m), 1):
#garante que todas as linhas da matriz estão completas com o mesmo número de elementos
assert len(m[i]) == len(m[i - 1]), "Alguma linha da matriz não apresenta o mesmo número de elementos das outras"
Mt = [] #lista vazia representando a matriz Mt transposta de m
for i in range(len(m[0])):#i variando no número de colunas de m
U = []
for j in range(len(m)):# j variando no número de linhas de m
L = m[j][i] #pega o elemento da posição i da linha j
U.append(L) #adiciona a U o elemento na posição i de cada coluna
Mt.append(U) #adiciona a Mt cada lista U
return(Mt)
# Define a função multiplicaçao de matrizes.
def mmult(m1, m2):
# Docstring
"""
Multiplica duas matrizes
"""
# Se certifica de que o número de colunas de uma matriz é igual ao numero de linhas de outra.
assert len(m1[0]) == len(m2), "Número de Colunas de A != Número de Linhas de B"
# Cria uma lista vazia.
C = []
# Faz um loop que percorre todas as linhas da matriz.
for i in range(len(m1)):
# Adiciona valores a lista C.
C.append([])
# Faz um loop que percorre todas as colunas da matriz 2.
for j in range(len(m2[0])):
# Cria uma variavel que recebe o valor zero.
soma = 0
# Faz um loop nas colunas da matriz 1.
for k in range(len(m1[0])):
# Realiza o somatorio da muitiplicacao das matrizes.
soma = soma + (m1[i][k]*m2[k][j])
# Adiciona o resultado a matriz criada.
C[i].append(soma)
# Retorna o valor de C.
return C
```
```python
# Define uma função eqnormais_sistema.
def eqnormais_sistema(x):
# Docstring
"""
Realiza a multiplicação da matriz jacobiana transposta pela matriz jacobiana
"""
# Retorna o valor da multiplicação da nova função.
return mmult(transpm(x), x)
```
```python
# Função que multiplica a jacobiana transposta pela jacobiana.
resultado = eqnormais_sistema(jac)
```
### Expected result
The cell below tests your results against one calculated with numpy.
```python
assert np.allclose(eqnormais_sistema(jacobiana(x)), np.transpose(jacobiana(x)).dot(jacobiana(x)))
```
## Task
Write a function called `eqnormais_lado_direito` that computes and returns the right-hand-side vector of the system of normal equations.
**Hints**:
* This function should receive 2 arguments.
* Should this function return a vector or a matrix?
* It is possible to know how many elements the result must contain. Check that number.
```python
# Cria uma função eqnormais_lado_direito.
def eqnormais_lado_direito(x,y):
#docstring
"""
Realiza a multiplicação da matriz jacobiana transposta por um vetor
"""
# Retorna a multiplicação da jacobiana transposta pelos dados observados.
return vmult(transpm(x), y)
```
```python
jact = transpm(jac)
```
```python
resultado = eqnormais_lado_direito(jac, dados_obs)
```
```python
assert len(resultado) == len(jact), "Número de linhas do resultado é diferente do número de linhas da transposta."
```
### Expected result
The cell below tests your results against one calculated with numpy.
```python
assert np.allclose(eqnormais_lado_direito(jacobiana(x), dados_obs), np.transpose(jacobiana(x)).dot(dados_obs))
```
## Least-squares solution
Now that we have the system of normal equations, we can solve it numerically to find the values of $a$ and $b$ that produce the line that best fits our data.
## Task
Write a function called `elim_gauss` that solves a system of equations using Gaussian elimination. It should receive the system matrix and the right-hand-side vector as arguments and return the solution vector.
**Hints**:
* Check the number of elements in the matrix and in the vector.
* The matrix must be square.
```python
def elim_gauss(A, x):
"""
Realiza o escalonamento e resolve o sistema linear
"""
# Feito um loop para rodar a cada linha, sendo k o indice delas. A ultima linha não é utilizada pois não há nada além
for k in range (0, len(A)-1, 1):
# Feito um loop que começa a partir da segunda linha, o indice i indica as colunas.
for i in range (k+1, len(A[0]), 1):
# Criada uma variável para guardar os valores que escalonam a matriz.
temp = -A[i][k]/A[k][k]
# Feito um loop para pegar todas as linhas da matriz. 'k' não é usada pois ignora a última linha. Aqui acontece o escalonamento.
for j in range (k, len(A), 1):
# Operação para escalonar a matriz sistema
A[i][j] = A[i][j] + A[k][j] * temp
# Operação para gerar o novo resultado do vetor lado direito
x[i] = x[i] + x[k] * temp
vetor_y = [0]*len(A)
# Loop para linhas
for k in range(len(A)-1, -1, -1):
# Loop para colunas que o código usará. Elas vão aumentando a cada vez que roda o programa
for i in range(len(A[0])-1, k, -1):
# Y é somatório do produto de cada solução por cada elemento correspondente da matriz sistema
vetor_y[k] = vetor_y[k] + (vetor_y[i]*A[k][i])
# Segunda parte da operação. lado_direito[k] está aqui pq só é usado 1 vez para cada linha.
vetor_y[k] = (x[k] - vetor_y[k]) / A[k][k]
return vetor_y
```
### Expected result
The cell below tests your results against one calculated with numpy.
```python
np.random.seed(42)
A_teste = np.random.uniform(10, 50, size=(21, 21))
x_teste = np.random.uniform(5, 20, size=21)
y_teste = A_teste.dot(x_teste)
assert np.allclose(elim_gauss(A_teste, y_teste), x_teste)
```
## Task
Write a function `ajuste_reta` that receives a vector of x values and a vector of observed data and returns the least-squares solution $\bar{p}$ (a vector with the estimated values of $a$ and $b$).
Apply this function to the observed data simulated above. Check that the solution matches the expected value (you can do this with an `assert`).
Make a plot of the observed data (black points) together with the data predicted by the solution you just obtained (red line). The plot must contain a legend. The legend for the predicted data should have the form "y = 234x + 244" (replacing the numbers with the values you estimated).
**Hints**:
* How many elements should the returned vector have?
* To insert numbers into a string (text): `"y = {}".format(123.1)` $\to$ `"y = 123.1"`
* To format the numbers you want to insert into a string: `"y = {:.4f}".format(123.242524536362446353436335)` $\to$ `"y = 123.2425"`
```python
# Define a função ajuste_reta
def ajuste_reta(x, y):
"""
Retorna a solução dos mínimos quadrados, dando o vetor de parâmetros
"""
vetor = eqnormais_lado_direito(jacobiana(x), y)
matriz = eqnormais_sistema(jacobiana(x))
p = elim_gauss(matriz, vetor)
return p
# O Arthur sugeriu a seguinte linha de código para a função:
# return elim_gauss(eqnormais_sistema(jacobiana(x)),eqnormais_lado_direito(jacobiana(x),y)). Mas ele foi minoria no grupo.
```
```python
d = ajuste_reta(x, dados_obs)
print(d)
```
[9.7422960022585752, 52.577381832766335]
```python
# Use the fitted parameters from d (not the true a and b used to fabricate the data)
a_fit, b_fit = d
y = a_fit*x + b_fit
```
```python
# Cria um gráfico.
plt.figure()
# Plota os valores das coordenadas x, y em bolas pretas.
plt.plot(x, dados_obs, "ok", label='Dados observados')
plt.plot(x, y, "-r", linewidth=3, label='y = 9.742x + 52.577')
#legenda
legend = plt.legend(loc='upper left', shadow=True, fontsize='large')
# Dá título ao eixo x.
plt.xlabel("x")
# Dá título ao eixo y.
plt.ylabel("y")
# Colocamos legenda bege porque somos diferenciados
legend.get_frame().set_facecolor('#F5F5DC')
```
### Expected result
The estimated values for $\bar{p}$ should be approximately:
[9.742296, 52.57738183]
The plot should look like the one below:
## Bonus Task
We can use the least-squares method to fit any equation that is linear with respect to the parameters ($a$ and $b$ in the case of the straight line). This means we can also fit a parabola:
$$
d_i = ax_i^2 + bx_i + c
$$
This time, the parameters we want to estimate are $a$, $b$ and $c$. Note that we now have 3 parameters, not 2. Because of this, the Jacobian will have 3 columns instead of 2.
Write at least the following functions:
* `jacobiana_parabola`: computes and returns the Jacobian matrix for the parabola case. It should receive only the vector of x coordinates as an argument.
* `ajuste_parabola`: computes the least-squares solution for the parabola case. It should receive the vector of x coordinates and the data vector as arguments, and return the estimated parameter vector $\bar{p}$ (containing the values of $a$, $b$ and $c$).
Test your functions with the data generated below. Note that we are using the same x vector. Generate plots of the fabricated data and also of the data predicted by the estimate (like the ones made above).
What happens if you try to fit a straight line to the parabola data? And a parabola to the line data? (A quick check of the first question is sketched after the bonus solution below.)
**Hints**:
* Do you need to create other functions to assemble the system of normal equations and compute the solution of the system?
```python
# The way of solving the system of equations and computing its solution does not change between the line and the parabola cases.
# The reasoning is the same, just with more variables, rows and columns.
# We can reuse the functions created previously.
# What changes is the Jacobian matrix and the parameter vector, which is now a, b, c.
```
```python
a_par, b_par, c_par = 2, 20, 200
dados_parabola = a_par*x**2 + b_par*x + c_par + erro
```
```python
# Cria um gráfico.
plt.figure()
# Plota os valores das coordenadas x, y em bolas pretas.
plt.plot(x, dados_parabola, "ok")
# Coloca título no gráfico.
plt.title("Gráfico de Dados")
# Dá título ao eixo x.
plt.xlabel("Valores")
# Dá título ao eixo y.
plt.ylabel("Dados Observados")
```
```python
# Criamos uma função que retorna a matriz Jacobiana para caso de parábola
def jacobiana_parabola(x):
"""
Cria uma matriz Jacobiana para parábola:
Ex: [[x1², x1, 1], [x2², x2, 1]] ... [[xn², xn, 1]]
"""
# Criamos esta lista vazia para fazermos a matriz Jacobiana
jacobiana = []
# Este loop adiciona linhas no formato x,1 de x1 até xN.
for i in range(N):
jacobiana.append([x[i]*x[i], x[i], 1])
return jacobiana
```
```python
#varíavel com a matriz gerada pela função
jac_par = jacobiana_parabola(x)
```
```python
# Define a função ajuste_parabola
def ajuste_parabola(x,y):
"""
Retorna a solução dos mínimos quadrados, dando o vetor de parâmetros
"""
vetor = eqnormais_lado_direito(jacobiana_parabola(x), y)
matriz = eqnormais_sistema(jacobiana_parabola(x))
p = elim_gauss(matriz, vetor)
return p
```
```python
l = ajuste_parabola(x, dados_parabola)
print(l)
```
[2.0211512867558628, 18.578975230686243, 214.85807791856433]
```python
# Use the fitted parameters from l (not the true a_par, b_par, c_par used to fabricate the data)
a_fit, b_fit, c_fit = l
y = a_fit*x*x + b_fit*x + c_fit
```
```python
# Cria um gráfico.
plt.figure()
# Plota os valores das coordenadas x, y em bolas pretas.
plt.plot(x, dados_parabola, "ok", label='Dados observados')
plt.plot(x, y, "-r", linewidth=3, label='y = 2.021x² + 18.57x + 214.85')
#legenda
legend = plt.legend(loc='upper left', shadow=True, fontsize='large')
# Dá título ao eixo x.
plt.xlabel("x")
# Dá título ao eixo y.
plt.ylabel("y")
```
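To answer the first bonus question above (what happens if you try to fit a straight line to the parabola data?), here is a quick sketch that only reuses functions already defined in this notebook; the exact numbers depend on the simulated noise:
```python
# Fitting a straight line to the parabola data: the code runs without error,
# but two parameters cannot capture the curvature, so the fit is poor.
p_reta = ajuste_reta(x, dados_parabola)
print(p_reta)
plt.figure()
plt.plot(x, dados_parabola, "ok", label="Observed data (parabola)")
plt.plot(x, vmult(jacobiana(x), p_reta), "-r", label="Straight-line fit")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
```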
### Expected result
The plots you generate should look like the ones below:
|
3fe6b3529491c980d680ed55c7edad1cb3e8d263
| 108,126 |
ipynb
|
Jupyter Notebook
|
minimos-quadrados.ipynb
|
mat-esp-uerj/minimos-quadrados-221b-baker-street-london
|
839ef430eb383c3fa39ce362f867ce72e6fa0c94
|
[
"CC-BY-4.0"
] | 2 |
2015-11-28T18:39:51.000Z
|
2015-11-28T18:39:56.000Z
|
minimos-quadrados.ipynb
|
mat-esp-uerj/minimos-quadrados-221b-baker-street-london
|
839ef430eb383c3fa39ce362f867ce72e6fa0c94
|
[
"CC-BY-4.0"
] | null | null | null |
minimos-quadrados.ipynb
|
mat-esp-uerj/minimos-quadrados-221b-baker-street-london
|
839ef430eb383c3fa39ce362f867ce72e6fa0c94
|
[
"CC-BY-4.0"
] | null | null | null | 100.116667 | 21,838 | 0.840196 | true | 5,479 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.689306 | 0.904651 | 0.623581 |
__label__por_Latn
| 0.99826 | 0.287117 |
# $H_{\rm SS}$, up to and including third post-Newtonian order
## This notebook constructs the spin-spin coupling terms in the Hamiltonian up to 3 post-Newtonian order
**Notebook Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** All expressions in this notebook were transcribed twice by hand on separate occasions, and expressions were corrected as needed to ensure consistency with published PN expressions. In addition, this tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). **Additional validation tests may have been performed, but are as yet, undocumented.**
## Author: Zach Etienne
### This notebook exists as the following Python module:
1. [PN_Hamiltonian_SS.py](../../edit/NRPyPN/PN_Hamiltonian_SS.py)
### This notebook & corresponding Python module depend on the following NRPy+/NRPyPN Python modules:
1. [indexedexp.py](../../edit/indexedexp.py): [**documentation+tutorial**](../Tutorial-Indexed_Expressions.ipynb)
1. [NRPyPN_shortcuts.py](../../edit/NRPyPN/NRPyPN_shortcuts.py): [**documentation**](NRPyPN_shortcuts.ipynb)
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
1. Part 1: [$H_{S_1,S_2,{\rm 2PN}}+H_{S_1^2,{\rm 2PN}}+H_{S_2^2,{\rm 2PN}}$](#twopn), as summarized in [Buonanno, Chen, and Damour (2006)](https://arxiv.org/abs/gr-qc/0508067) (see references therein for sources)
1. Part 2: [$H_{S_1,S_2,{\rm 3PN}}$](#s1s2threepn), as derived by [Steinhoff, Hergt, and Schäfer (2008a)](https://arxiv.org/abs/0712.1716)
1. Part 3: [$H_{S_1^2,{\rm 3PN}}+H_{S_2^2,{\rm 3PN}}$](#s1squaredthreepn), as derived in [Steinhoff, Hergt, and Schäfer (2008b)](https://arxiv.org/abs/0809.2200)
1. Part 4: [Validation against second transcription and corresponding Python module](#code_validation)
1. Part 5: [LaTeX PDF output](#latex_pdf_output): $\LaTeX$ PDF Output
<a id='twopn'></a>
# Part 1: $H_{\rm SS, 2PN}$, as summarized in [Buonanno, Chen, and Damour (2006)](https://arxiv.org/abs/gr-qc/0508067) (see references therein for sources) \[Back to [top](#toc)\]
$$\label{twopn}$$
As described in the [nonspinning Hamiltonian notebook](PN-Hamiltonian-Nonspinning.ipynb), the basic physical system assumes two point particles of mass $m_1$ and $m_2$ with corresponding momentum vectors $\mathbf{P}_1$ and $\mathbf{P}_2$, and displacement vectors $\mathbf{X}_1$ and $\mathbf{X}_2$ with respect to the center of mass. Here we also consider the spin vectors of each point mass $\mathbf{S}_1$ and $\mathbf{S}_2$, respectively.
[Steinhoff, Hergt, and Schäfer (2008a)](https://arxiv.org/abs/0712.1716) and [Steinhoff, Hergt, and Schäfer (2008b)](https://arxiv.org/abs/0809.2200) adopt the additional notation
\begin{align}
\mathbf{r}_{12} &= (\mathbf{X}_1-\mathbf{X}_2)\\
r_{12} = r_{21} &= |\mathbf{r}_{12}|\\
\mathbf{n}_{12} &= \frac{\mathbf{r}_{12}}{r_{12}},
\end{align}
and when the numbers in subscripts are flipped, the particles are interchanged.
The complete $H_{\rm SS, 2PN}$ expression is given in Eqs. 2.18 and 2.19 of [Buonanno, Chen, and Damour (2006)](https://arxiv.org/abs/gr-qc/0508067):
\begin{align}
M &= m_1+m_2\\
\mu &= \frac{m_1 m_2}{M}\\
\mathbf{S_0} &= \left(1+\frac{m_2}{m_1}\right) \mathbf{S_1} + \left(1+\frac{m_1}{m_2}\right) \mathbf{S_2}\\
H_{SS,\rm 2PN} = H_{S_1,S_2,{\rm 2PN}}+H_{S_1^2,{\rm 2PN}}+H_{S_2^2,{\rm 2PN}} &= \frac{1}{2 q^3} \frac{\mu}{M} \left[3(\mathbf{S_0}\cdot\mathbf{n})^2-\mathbf{S_0}^2\right]\\
\end{align}
```python
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys # Standard Python modules for multiplatform OS-level functions
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
from NRPyPN_shortcuts import div,dot,cross # NRPyPN: shortcuts for e.g., vector operations
# 2PN spin-spin term, from Eqs. 2.18 and 2.19 of
# Buonanno, Chen, and Damour (2006):
# https://arxiv.org/abs/gr-qc/0508067
def f_H_SS_2PN(m1,m2, S1U,S2U, nU, q):
S0U = ixp.zerorank1()
for i in range(3):
S0U[i] = (1 + m2/m1)*S1U[i] + (1 + m1/m2)*S2U[i]
global H_SS_2PN
mu = m1*m2 / (m1 + m2)
H_SS_2PN = mu/(m1 + m2) * (3*dot(S0U,nU)**2 - dot(S0U,S0U)) / (2*q**3)
```
```python
# Second version, for validation purposes only.
def f_H_SS_2PNv2(m1,m2, S1U,S2U, nU, q):
S_0U = ixp.zerorank1()
for i in range(3):
S_0U[i] = (1 + m2/m1)*S1U[i] + (1 + m1/m2)*S2U[i]
mu = m1*m2 / (m1+m2)
global H_SS_2PNv2
H_SS_2PNv2 = div(1,2)*mu/(m1+m2)*( 3*dot(S_0U,nU)**2 - dot(S_0U,S_0U) )/q**3
```
<a id='s1s2threepn'></a>
# Part 2: $H_{S_1,S_2,{\rm 3PN}}$, as derived by [Steinhoff, Hergt, and Schäfer (2008a)](https://arxiv.org/abs/0712.1716) \[Back to [top](#toc)\]
$$\label{s1s2threepn}$$
To reduce possibility of copying error, equations are taken directly from the arXiv LaTeX source code of Eq 2.11 in [Steinhoff, Hergt, and Schäfer (2008a)](https://arxiv.org/abs/0712.1716), and only mildly formatted to (1) improve presentation in Jupyter notebooks and (2) to ensure some degree of consistency in notation across different terms in other Hamiltonian notebooks:
\begin{align}
H_{S_1,S_2, 3PN} &=
\frac{1}{2 m_1 m_2 r_{1 2}^3} [
\tfrac{3}{2} ((\mathbf{p}_1 \times \mathbf{S}_1) \cdot \mathbf{n}_{1 2}) ((\mathbf{p}_2 \times \mathbf{S}_2) \cdot \mathbf{n}_{1 2})
+ 6 ((\mathbf{p}_2 \times \mathbf{S}_1) \cdot \mathbf{n}_{1 2}) ((\mathbf{p}_1 \times \mathbf{S}_2) \cdot \mathbf{n}_{1 2}) \\
& - 15 (\mathbf{S}_1 \cdot \mathbf{n}_{1 2}) (\mathbf{S}_2 \cdot \mathbf{n}_{1 2}) (\mathbf{p}_1 \cdot \mathbf{n}_{1 2}) (\mathbf{p}_2 \cdot \mathbf{n}_{1 2})
- 3 (\mathbf{S}_1 \cdot \mathbf{n}_{1 2}) (\mathbf{S}_2 \cdot \mathbf{n}_{1 2}) (\mathbf{p}_1 \cdot \mathbf{p}_2) \\
& + 3 (\mathbf{S}_1 \cdot \mathbf{p}_2) (\mathbf{S}_2 \cdot \mathbf{n}_{1 2}) (\mathbf{p}_1 \cdot \mathbf{n}_{1 2})
+ 3 (\mathbf{S}_2 \cdot \mathbf{p}_1) (\mathbf{S}_1 \cdot \mathbf{n}_{1 2}) (\mathbf{p}_2 \cdot \mathbf{n}_{1 2}) + 3 (\mathbf{S}_1 \cdot \mathbf{p}_1) (\mathbf{S}_2 \cdot \mathbf{n}_{1 2}) (\mathbf{p}_2 \cdot \mathbf{n}_{1 2}) \\
& + 3 (\mathbf{S}_2 \cdot \mathbf{p}_2) (\mathbf{S}_1 \cdot \mathbf{n}_{1 2}) (\mathbf{p}_1 \cdot \mathbf{n}_{1 2}) - \tfrac{1}{2} (\mathbf{S}_1 \cdot \mathbf{p}_2) (\mathbf{S}_2 \cdot \mathbf{p}_1)
+ (\mathbf{S}_1 \cdot \mathbf{p}_1) (\mathbf{S}_2 \cdot \mathbf{p}_2) \\
& - 3 (\mathbf{S}_1 \cdot \mathbf{S}_2) (\mathbf{p}_1 \cdot \mathbf{n}_{1 2}) (\mathbf{p}_2 \cdot \mathbf{n}_{1 2}) + \tfrac{1}{2} (\mathbf{S}_1 \cdot \mathbf{S}_2) (\mathbf{p}_1 \cdot \mathbf{p}_2)
] \\
& + \frac{3}{2 m_1^2 r_{1 2}^3} [
- ((\mathbf{p}_1 \times \mathbf{S}_1) \cdot \mathbf{n}_{1 2}) ((\mathbf{p}_1 \times \mathbf{S}_2) \cdot \mathbf{n}_{1 2})
+ (\mathbf{S}_1 \cdot \mathbf{S}_2) (\mathbf{p}_1 \cdot \mathbf{n}_{1 2})^2 - (\mathbf{S}_1 \cdot \mathbf{n}_{1 2}) (\mathbf{S}_2 \cdot \mathbf{p}_1) (\mathbf{p}_1 \cdot \mathbf{n}_{1 2})
] \\
& + \frac{3}{2 m_2^2 r_{1 2}^3} [
- ((\mathbf{p}_2 \times \mathbf{S}_2) \cdot \mathbf{n}_{1 2}) ((\mathbf{p}_2 \times \mathbf{S}_1) \cdot \mathbf{n}_{1 2})
+ (\mathbf{S}_1 \cdot \mathbf{S}_2) (\mathbf{p}_2 \cdot \mathbf{n}_{1 2})^2 - (\mathbf{S}_2 \cdot \mathbf{n}_{1 2}) (\mathbf{S}_1 \cdot \mathbf{p}_2) (\mathbf{p}_2 \cdot \mathbf{n}_{1 2})
] \\
& + \frac{6 ( m_1 + m_2 )}{r_{1 2}^4} [ (\mathbf{S}_1 \cdot \mathbf{S}_2) - 2 (\mathbf{S}_1 \cdot \mathbf{n}_{1 2}) (\mathbf{S}_2 \cdot \mathbf{n}_{1 2}) ] \,,
\end{align}
```python
# 3PN spin-spin S_1,S_2 coupling term, from Eq. 2.11 of
# Steinhoff, Hergt, and Schäfer (2008a)
# https://arxiv.org/abs/0712.1716
def f_H_SS_S1S2_3PN(m1,m2, n12U, S1U,S2U, p1U,p2U, r12):
global H_SS_S1S2_3PN
H_SS_S1S2_3PN = (+div(3,2)*(dot(cross(p1U,S1U),n12U)*dot(cross(p2U,S2U),n12U))
+ 6*(dot(cross(p2U,S1U),n12U)*dot(cross(p1U,S2U),n12U))
-15*dot(S1U,n12U)*dot(S2U,n12U)*dot(p1U,n12U)*dot(p2U,n12U)
-3*dot(S1U,n12U)*dot(S2U,n12U)*dot(p1U,p2U)
+3*dot(S1U,p2U)*dot(S2U,n12U)*dot(p1U,n12U)
+3*dot(S2U,p1U)*dot(S1U,n12U)*dot(p2U,n12U)
+3*dot(S1U,p1U)*dot(S2U,n12U)*dot(p2U,n12U)
+3*dot(S2U,p2U)*dot(S1U,n12U)*dot(p1U,n12U)
-div(1,2)*dot(S1U,p2U)*dot(S2U,p1U)
+dot(S1U,p1U)*dot(S2U,p2U)
-3*dot(S1U,S2U)*dot(p1U,n12U)*dot(p2U,n12U)
+div(1,2)*dot(S1U,S2U)*dot(p1U,p2U))/(2*m1*m2*r12**3)
H_SS_S1S2_3PN+= (-dot(cross(p1U,S1U),n12U)*dot(cross(p1U,S2U),n12U)
+dot(S1U,S2U)*dot(p1U,n12U)**2
-dot(S1U,n12U)*dot(S2U,p1U)*dot(p1U,n12U))*3/(2*m1**2*r12**3)
H_SS_S1S2_3PN+= (-dot(cross(p2U,S2U),n12U)*dot(cross(p2U,S1U),n12U)
+dot(S1U,S2U)*dot(p2U,n12U)**2
-dot(S2U,n12U)*dot(S1U,p1U)*dot(p2U,n12U))*3/(2*m2**2*r12**3)
H_SS_S1S2_3PN+= (+dot(S1U,S2U)-2*dot(S1U,n12U)*dot(S2U,n12U))*6*(m1+m2)/r12**4
```
```python
# Second version, for validation purposes only.
def f_H_SS_S1S2_3PNv2(m1,m2, n12U, S1U,S2U, p1U,p2U, q):
def SHS2008a_HS1S2_3PNv2_pt1(m1,m2, n12U, S1U,S2U, p1U,p2U, q):
Hpt1 = ( +div(3,2)*(dot(cross(p1U,S1U),n12U)*dot(cross(p2U,S2U),n12U)) # line 1
+6 *dot(cross(p2U,S1U),n12U)*dot(cross(p1U,S2U),n12U) # line 1
-15*dot(S1U,n12U)*dot(S2U,n12U)*dot(p1U,n12U)*dot(p2U,n12U) # line 2
-3*dot(S1U,n12U)*dot(S2U,n12U)*dot(p1U,p2U) # line 2
+3*dot(S1U,p2U)*dot(S2U,n12U)*dot(p1U,n12U) # line 3
+3*dot(S2U,p1U)*dot(S1U,n12U)*dot(p2U,n12U) # line 3
+3*dot(S1U,p1U)*dot(S2U,n12U)*dot(p2U,n12U) # line 3
+3*dot(S2U,p2U)*dot(S1U,n12U)*dot(p1U,n12U) # line 4
-div(1,2)*dot(S1U,p2U)*dot(S2U,p1U) # line 4
+dot(S1U,p1U)*dot(S2U,p2U) # line 4
-3*dot(S1U,S2U)*dot(p1U,n12U)*dot(p2U,n12U) # line 5
+div(1,2)*dot(S1U,S2U)*dot(p1U,p2U) )/(2*m1*m2*q**3) # line 5
return Hpt1
def SHS2008a_HS1S2_3PNv2_pt2(m1,m2, n12U, S1U,S2U, p1U,p2U, q):
Hpt2 = ( -dot(cross(p1U,S1U),n12U)*dot(cross(p1U,S2U),n12U) # line 6
+dot(S1U,S2U)*dot(p1U,n12U)**2 # line 6
-dot(S1U,n12U)*dot(S2U,p1U)*dot(p1U,n12U) )*div(3,2)/(m1**2*q**3) # line 6
return Hpt2
def SHS2008a_HS1S2_3PNv2_pt3(m1,m2, n12U, S1U,S2U, p1U,p2U, q):
Hpt3 = ( -dot(cross(p2U,S2U),n12U)*dot(cross(p2U,S1U),n12U) # line 7
+dot(S1U,S2U)*dot(p2U,n12U)**2 # line 7
-dot(S2U,n12U)*dot(S1U,p1U)*dot(p2U,n12U) )*div(3,2)/(m2**2*q**3) # line 7
return Hpt3
def SHS2008a_HS1S2_3PNv2_pt4(m1,m2, n12U, S1U,S2U, p1U,p2U, q):
Hpt4 = ( dot(S1U,S2U) - 2*dot(S1U,n12U)*dot(S2U,n12U) ) * 6*(m1+m2)/q**4 # line 8
return Hpt4
global H_SS_S1S2_3PNv2
H_SS_S1S2_3PNv2 = ( +SHS2008a_HS1S2_3PNv2_pt1(m1,m2, n12U, S1U,S2U, p1U,p2U, q)
+SHS2008a_HS1S2_3PNv2_pt2(m1,m2, n12U, S1U,S2U, p1U,p2U, q)
+SHS2008a_HS1S2_3PNv2_pt3(m1,m2, n12U, S1U,S2U, p1U,p2U, q)
+SHS2008a_HS1S2_3PNv2_pt4(m1,m2, n12U, S1U,S2U, p1U,p2U, q) )
```
<a id='s1squaredthreepn'></a>
# Part 3: $H_{S_1^2,{\rm 3PN}}+H_{S_2^2,{\rm 3PN}}$, as derived in [Steinhoff, Hergt, and Schäfer (2008b)](https://arxiv.org/abs/0809.2200) \[Back to [top](#toc)\]
$$\label{s1squaredthreepn}$$
To reduce possibility of copying error, equations are taken directly from the arXiv LaTeX source code of Eq 9 in [Steinhoff, Hergt, and Schäfer (2008b)](https://arxiv.org/abs/0809.2200), and only mildly formatted to (1) improve presentation in Jupyter notebooks and (2) to ensure some degree of consistency in notation across different terms in other Hamiltonian notebooks:
\begin{align}
H_{S_1^2,{\rm 3PN}}+H_{S_2^2,{\rm 3PN}}&=
\frac{1}{r_{12}^3}\bigg[
\frac{m_{2}}{4m_{1}^3}\left(\mathbf{P}_{1}\cdot\mathbf{S}_{1}\right)^2
+\frac{3m_{2}}{8m_{1}^3}\left(\mathbf{P}_{1}\cdot\mathbf{n}_{12}\right)^{2}\mathbf{S}_{1}^{2}
-\frac{3m_{2}}{8m_{1}^3}\mathbf{P}_{1}^{2}\left(\mathbf{S}_{1}\cdot\mathbf{n}_{12}\right)^2 \\
& -\frac{3m_{2}}{4m_{1}^3}\left(\mathbf{P}_{1}\cdot\mathbf{n}_{12}\right)\left(\mathbf{S}_{1}\cdot\mathbf{n}_{12}\right)\left(\mathbf{P}_{1}\cdot\mathbf{S}_{1}\right)
-\frac{3}{4m_{1}m_{2}}\mathbf{P}_{2}^{2}\mathbf{S}_{1}^{2}\\
& +\frac{9}{4m_{1}m_{2}}\mathbf{P}_{2}^{2}\left(\mathbf{S}_{1}\cdot\mathbf{n}_{12}\right)^2
+\frac{3}{4m_{1}^2}\left(\mathbf{P}_{1}\cdot\mathbf{P}_{2}\right)\mathbf{S}_{1}^2
-\frac{9}{4m_{1}^2}\left(\mathbf{P}_{1}\cdot\mathbf{P}_{2}\right)\left(\mathbf{S}_{1}\cdot\mathbf{n}_{12}\right)^2 \\
& -\frac{3}{2m_{1}^2}\left(\mathbf{P}_{1}\cdot\mathbf{n}_{12}\right)\left(\mathbf{P}_{2}\cdot\mathbf{S}_{1}\right)\left(\mathbf{S}_{1}\cdot\mathbf{n}_{12}\right)
+\frac{3}{m_{1}^2}\left(\mathbf{P}_{2}\cdot\mathbf{n}_{12}\right)\left(\mathbf{P}_{1}\cdot\mathbf{S}_{1}\right)\left(\mathbf{S}_{1}\cdot\mathbf{n}_{12}\right) \\
& +\frac{3}{4m_{1}^2}\left(\mathbf{P}_{1}\cdot\mathbf{n}_{12}\right)\left(\mathbf{P}_{2}\cdot\mathbf{n}_{12}\right)\mathbf{S}_{1}^2
-\frac{15}{4m_{1}^2}\left(\mathbf{P}_{1}\cdot\mathbf{n}_{12}\right)\left(\mathbf{P}_{2}\cdot\mathbf{n}_{12}\right)\left(\mathbf{S}_{1}\cdot\mathbf{n}_{12}\right)^2\bigg] \\
& - \frac{G^2 m_2}{r_{12}^4} \bigg[
\frac{9}{2} (\mathbf{S}_1 \cdot \mathbf{n}_{12})^2 - \frac{5}{2} \mathbf{S}_1^2
+ \frac{7 m_2}{m_1} (\mathbf{S}_1 \cdot \mathbf{n}_{12})^2
- \frac{3 m_2}{m_1} \mathbf{S}_1^2 \bigg]
+ (1\leftrightarrow2)\,.
\end{align}
```python
# 3PN spin-orbit coupling term, from Eq. 9 of
# Steinhoff, Hergt, and Schäfer (2008b)
# https://arxiv.org/abs/0809.2200
def f_H_SS_S1sq_S2sq_3PN(m1,m2, n12U,n21U, S1U,S2U, p1U,p2U, r12):
def f_H_SS_particle(m1,m2, n12U, S1U,S2U, p1U,p2U, r12):
H_SS_S1sq_S2sq_3PN_particle = (
+ m2/(4*m1**3)*dot(p1U,S1U)**2
+3*m2/(8*m1**3)*dot(p1U,n12U)**2*dot(S1U,S1U)
-3*m2/(8*m1**3)*dot(p1U,p1U)*dot(S1U,n12U)**2
-3*m2/(4*m1**3)*dot(p1U,n12U)*dot(S1U,n12U)*dot(p1U,S1U)
-3/(4*m1*m2)*dot(p2U,p2U)*dot(S1U,S1U)
+9/(4*m1*m2)*dot(p2U,p2U)*dot(S1U,n12U)**2
+3/(4*m1**2)*dot(p1U,p2U)*dot(S1U,S1U)
-9/(4*m1**2)*dot(p1U,p2U)*dot(S1U,n12U)**2
-3/(2*m1**2)*dot(p1U,n12U)*dot(p2U,S1U)*dot(S1U,n12U)
+3/(m1**2) *dot(p2U,n12U)*dot(p1U,S1U)*dot(S1U,n12U)
+3/(4*m1**2)*dot(p1U,n12U)*dot(p2U,n12U)*dot(S1U,S1U)
-15/(4*m1**2)*dot(p1U,n12U)*dot(p2U,n12U)*dot(S1U,n12U)**2)/r12**3
H_SS_S1sq_S2sq_3PN_particle+= -(+div(9,2)*dot(S1U,n12U)**2
-div(5,2)*dot(S1U,S1U)
+7*m2/m1*dot(S1U,n12U)**2
-3*m2/m1*dot(S1U,S1U))*m2/r12**4
return H_SS_S1sq_S2sq_3PN_particle
global H_SS_S1sq_S2sq_3PN
H_SS_S1sq_S2sq_3PN = (+f_H_SS_particle(m1,m2, n12U, S1U,S2U, p1U,p2U, r12)
+f_H_SS_particle(m2,m1, n21U, S2U,S1U, p2U,p1U, r12))
```
```python
# Second version, transcribed on a separate occasion. For validation purposes only.
def f_H_SS_S1sq_S2sq_3PNv2(m1,m2, n12U,n21U, S1U,S2U, p1U,p2U, q):
def SHS2008b_HSsq_3PNv2_pt(m1,m2, n12U, S1U, p1U,p2U, q):
H = ( +div(1,4)*m2/m1**3*dot(p1U,S1U)**2
+div(3,8)*m2/m1**3*dot(p1U,n12U)**2*dot(S1U,S1U) # line 1
-div(3,8)*m2/m1**3*dot(p1U,p1U)*dot(S1U,n12U)**2 # line 1
-div(3,4)*m2/m1**3*dot(p1U,n12U)*dot(S1U,n12U)*dot(p1U,S1U) # line 2
-div(3,4)/(m1*m2) *dot(p2U,p2U)*dot(S1U,S1U) # line 2
+div(9,4)/(m1*m2) *dot(p2U,p2U)*dot(S1U,n12U)**2 # line 3
+div(3,4)/m1**2 *dot(p1U,p2U)*dot(S1U,S1U) # line 3
-div(9,4)/m1**2 *dot(p1U,p2U)*dot(S1U,n12U)**2 # line 3
-div(3,2)/m1**2 *dot(p1U,n12U)*dot(p2U,S1U)*dot(S1U,n12U) # line 4
+ 3/m1**2 *dot(p2U,n12U)*dot(p1U,S1U)*dot(S1U,n12U) # line 4
+div(3,4)/m1**2 *dot(p1U,n12U)*dot(p2U,n12U)*dot(S1U,S1U) # line 5
-div(15,4)/m1**2 *dot(p1U,n12U)*dot(p2U,n12U)*dot(S1U,n12U)**2 )/q**3 \
-( +div(9,2)*dot(S1U,n12U)**2 # line 6
-div(5,2)*dot(S1U,S1U) # line 6
+ 7*m2/m1*dot(S1U,n12U)**2 # line 6
- 3*m2/m1*dot(S1U,S1U) )*m2/q**4 # line 6
return H
global H_SS_S1sq_S2sq_3PNv2
H_SS_S1sq_S2sq_3PNv2 = ( +SHS2008b_HSsq_3PNv2_pt(m1,m2, n12U, S1U, p1U,p2U, q) # S_1^2 term
+SHS2008b_HSsq_3PNv2_pt(m2,m1, n21U, S2U, p2U,p1U, q) ) # S_2^2 term
```
<a id='code_validation'></a>
# Part 4: Validation against second transcription and corresponding Python module \[Back to [top](#toc)\]
$$\label{code_validation}$$
As a code validation check, we verify agreement between
* the SymPy expressions transcribed from the cited published work on two separate occasions, and
* the SymPy expressions generated in this notebook, and the corresponding Python module.
```python
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
from NRPyPN_shortcuts import m1,m2, nU,n12U,n21U, S1U,S2U, p1U,p2U, q # NRPyPN: Import needed input variables
f_H_SS_2PN(m1,m2, S1U,S2U, nU, q)
f_H_SS_S1S2_3PN( m1,m2, n12U, S1U,S2U, p1U,p2U, q)
f_H_SS_S1sq_S2sq_3PN(m1,m2, n12U,n21U, S1U,S2U, p1U,p2U, q)
def error(varname):
print("ERROR: When comparing Python module & notebook, "+varname+" was found not to match.")
sys.exit(1)
# Validation against second transcription of the expressions:
f_H_SS_2PNv2(m1,m2, S1U,S2U, nU, q)
f_H_SS_S1S2_3PNv2( m1,m2, n12U, S1U,S2U, p1U,p2U, q)
f_H_SS_S1sq_S2sq_3PNv2(m1,m2, n12U,n21U, S1U,S2U, p1U,p2U, q)
if sp.simplify(H_SS_2PN - H_SS_2PNv2) != 0: error("H_SS_2PNv2")
if sp.simplify(H_SS_S1S2_3PN - H_SS_S1S2_3PNv2) != 0: error("H_SS_S1S2_3PNv2")
if sp.simplify(H_SS_S1sq_S2sq_3PN - H_SS_S1sq_S2sq_3PNv2) != 0: error("H_SS_S1sq_S2sq_3PNv2")
# Validation against corresponding Python module:
import PN_Hamiltonian_SS as HSS
HSS.f_H_SS_2PN(m1,m2, S1U,S2U, nU, q)
HSS.f_H_SS_S1S2_3PN( m1,m2, n12U, S1U,S2U, p1U,p2U, q)
HSS.f_H_SS_S1sq_S2sq_3PN(m1,m2, n12U,n21U, S1U,S2U, p1U,p2U, q)
if sp.simplify(H_SS_2PN - HSS.H_SS_2PN) != 0: error("H_SS_2PN")
if sp.simplify(H_SS_S1S2_3PN - HSS.H_SS_S1S2_3PN) != 0: error("H_SS_S1S2_3PN")
if sp.simplify(H_SS_S1sq_S2sq_3PN - HSS.H_SS_S1sq_S2sq_3PN) != 0: error("H_SS_S1sq_S2sq_3PN")
print("ALL TESTS PASS")
```
ALL TESTS PASS
<a id='latex_pdf_output'></a>
# Part 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[PN-Hamiltonian-Spin-Spin.pdf](PN-Hamiltonian-Spin-Spin.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
import os,sys # Standard Python modules for multiplatform OS-level functions
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("PN-Hamiltonian-Spin-Spin",location_of_template_file=os.path.join(".."))
```
Created PN-Hamiltonian-Spin-Spin.tex, and compiled LaTeX file to PDF file
PN-Hamiltonian-Spin-Spin.pdf
|
6a460a71e3a2e6595dcc890d11a749b18daceca4
| 25,756 |
ipynb
|
Jupyter Notebook
|
NRPyPN/PN-Hamiltonian-Spin-Spin.ipynb
|
ksible/nrpytutorial
|
4ca6e9da22def2a9c9bcbcad75847fd1db159f4b
|
[
"BSD-2-Clause"
] | null | null | null |
NRPyPN/PN-Hamiltonian-Spin-Spin.ipynb
|
ksible/nrpytutorial
|
4ca6e9da22def2a9c9bcbcad75847fd1db159f4b
|
[
"BSD-2-Clause"
] | null | null | null |
NRPyPN/PN-Hamiltonian-Spin-Spin.ipynb
|
ksible/nrpytutorial
|
4ca6e9da22def2a9c9bcbcad75847fd1db159f4b
|
[
"BSD-2-Clause"
] | null | null | null | 57.235556 | 455 | 0.539564 | true | 8,659 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.849971 | 0.715424 | 0.60809 |
__label__eng_Latn
| 0.277991 | 0.251127 |
# Importing Packages
```python
import numpy as np
import matplotlib.pyplot as plt
import sympy as sp
from numpy.linalg import inv
import time
from matplotlib.animation import FuncAnimation
from IPython import display
from matplotlib import animation
from IPython.display import HTML
```
```python
import warnings # Serve para ignorar mensagens de alerta que aparecem
warnings.simplefilter(action='ignore', category=FutureWarning)
```
```python
plt.rcParams['font.size'] = 14
```
# Objective
- Solve the Kuramoto-Sivashinsky equation with a compact finite difference scheme
# Kuramoto-Sivashinsky Equation (KSE)
Formula:
\begin{equation}
\frac{\partial u}{\partial t} = -\beta\frac{\partial^4 u}{\partial x^4} - \alpha\frac{\partial^2 u}{\partial x^2} - u\frac{\partial u}{\partial x}
\end{equation}
where $\beta$ and $\alpha$ are constants.
## Creating the domain
Based on reference [2], we consider a spatial domain $\Omega_x = [a,b]$ and a time domain $\Omega_T = [0,T]$
Defining the constant values:
$b = 200$
$a = 0$
$\beta = 1$
$\alpha = 1$
Uniform grids must be formed by a set of nodes $x$ and $t$. Following reference [1], we define the spatial spacing $\Delta_x$ from the length of the domain and the number of nodes $N$:
$\Delta_x = \frac{b-a}{N-1}$
The time step $\Delta_t$ is defined by the user.
```python
N = 256 # Number of nodes
a = 0
b = 200
dx = (b-a)/(N-1)
dt = 0.05
T = 2000
# Creating the domains
x = np.linspace(a,b,N)
t = np.arange(0,T+dt,dt)
```
## Initial condition
The initial periodic condition of reference [2] is defined as:
\begin{equation}
u_0 = \cos \left(\frac{\pi x}{20}\right) \left(1+\sin\left(\frac{\pi x}{20}\right)\right)
\end{equation}
```python
u0 = np.cos(np.pi*x/20)*(1+np.sin(np.pi*x/20))
plt.figure(figsize=(12, 4))
plt.plot(x,u0)
plt.grid()
plt.xlabel('x')
plt.ylabel('Initial condition')
```
```python
u_num = np.zeros((len(t),len(x))) # Matrix where I will save the solution at each time step as a row.
# Applying initial condition
u_num[0,:] = u0
u_num
```
array([[1. , 1.11437737, 1.20634371, ..., 0.73325016, 0.87046365,
1. ],
[0. , 0. , 0. , ..., 0. , 0. ,
0. ],
[0. , 0. , 0. , ..., 0. , 0. ,
0. ],
...,
[0. , 0. , 0. , ..., 0. , 0. ,
0. ],
[0. , 0. , 0. , ..., 0. , 0. ,
0. ],
[0. , 0. , 0. , ..., 0. , 0. ,
0. ]])
# Compact Finite Difference:
Using [3] as the main reference, note that it does not consider periodic boundary conditions, while [2] does. Comparing their matrices, we can see that all of them are tridiagonal; however, the terms close to the boundary are different.
Therefore, we can use the same scheme as in [3], but apply the periodic boundary conditions in the same way as in [2].
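For reference, the interior stencils encoded by the matrices constructed below (written out here directly from the coefficients used in the code) are
\begin{equation}
u'_{i-1} + 3u'_{i} + u'_{i+1} = \frac{-u_{i-2} - 28u_{i-1} + 28u_{i+1} + u_{i+2}}{12\Delta x}
\end{equation}
\begin{equation}
u''_{i-1} + 10u''_{i} + u''_{i+1} = \frac{12\left(u_{i-1} - 2u_{i} + u_{i+1}\right)}{\Delta x^{2}}
\end{equation}
and the fourth derivative is obtained by applying the second-derivative operator twice.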
## First derivative
```python
# Creating the matrix of numbers
A = np.zeros((N,N))
i,j = np.indices(A.shape,dtype=int)
A[i==j-1] = 1
A[i==j] = 3
A[i==j+1] = 1
# Correcting boundary conditions for periodic case
A[0,-1] = A[0,1]
A[-1,0] = A[-1,-2]
print(A)
A2 = inv(A)
```
[[3. 1. 0. ... 0. 0. 1.]
[1. 3. 1. ... 0. 0. 0.]
[0. 1. 3. ... 0. 0. 0.]
...
[0. 0. 0. ... 3. 1. 0.]
[0. 0. 0. ... 1. 3. 1.]
[1. 0. 0. ... 0. 1. 3.]]
```python
#A[0,:]
```
```python
matrixpsi = np.zeros((N,N))
i,j = np.indices(matrixpsi.shape,dtype=int)
matrixpsi[i==j-2] = 1/(12*dx)
matrixpsi[i==j-1] = 28/(12*dx)
matrixpsi[i==j+1] = -28/(12*dx)
matrixpsi[i==j+2] = -1/(12*dx)
# Correcting boundary conditions for periodic case
matrixpsi[0,-1] = -matrixpsi[0,1]
matrixpsi[0,-2] = -matrixpsi[0,2]
matrixpsi[1,-1] = -matrixpsi[0,2]
matrixpsi[1,0] = -matrixpsi[0,1]
matrixpsi[-1,0] = -matrixpsi[-1,N-2]
matrixpsi[-1,1] = -matrixpsi[-1,N-3]
matrixpsi[-2,0] = -matrixpsi[-2,-4]
matrixpsi.view()
```
array([[ 0. , 2.975 , 0.10625, ..., 0. , -0.10625, -2.975 ],
[-2.975 , 0. , 2.975 , ..., 0. , 0. , -0.10625],
[-0.10625, -2.975 , 0. , ..., 0. , 0. , 0. ],
...,
[ 0. , 0. , 0. , ..., 0. , 2.975 , 0.10625],
[ 0.10625, 0. , 0. , ..., -2.975 , 0. , 2.975 ],
[ 2.975 , 0.10625, 0. , ..., -0.10625, -2.975 , 0. ]])
```python
def u_x(u):
psi = np.matmul(matrixpsi,u)
ux = np.matmul(A2,psi)
return ux
```
## Second derivative
```python
B = np.zeros((N,N))
i,j = np.indices(A.shape,dtype=int)
B[i==j-1] = 1
B[i==j] = 10
B[i==j+1] = 1
# Correcting boundary conditions for periodic case
B[0,-1] = B[0,1]
B[-1,0] = B[-1,-2]
B2 = inv(B)
B
```
array([[10., 1., 0., ..., 0., 0., 1.],
[ 1., 10., 1., ..., 0., 0., 0.],
[ 0., 1., 10., ..., 0., 0., 0.],
...,
[ 0., 0., 0., ..., 10., 1., 0.],
[ 0., 0., 0., ..., 1., 10., 1.],
[ 1., 0., 0., ..., 0., 1., 10.]])
```python
matrixphi = np.zeros((N,N))
i,j = np.indices(matrixphi.shape,dtype=int)
matrixphi[i==j-1] = 12/(dx**2)
matrixphi[i==j] = -(12*2)/(dx**2)
matrixphi[i==j+1] = 12/(dx**2)
# Correcting boundary conditions for periodic case
matrixphi[0,-1] = matrixphi[0,1]
matrixphi[-1,0] = matrixphi[-1,-2]
matrixphi
```
array([[-39.015 , 19.5075, 0. , ..., 0. , 0. , 19.5075],
[ 19.5075, -39.015 , 19.5075, ..., 0. , 0. , 0. ],
[ 0. , 19.5075, -39.015 , ..., 0. , 0. , 0. ],
...,
[ 0. , 0. , 0. , ..., -39.015 , 19.5075, 0. ],
[ 0. , 0. , 0. , ..., 19.5075, -39.015 , 19.5075],
[ 19.5075, 0. , 0. , ..., 0. , 19.5075, -39.015 ]])
```python
def u_xx(u):
phi = np.matmul(matrixphi,u)
uxx = np.matmul(B2,phi)
return uxx
```
## Fourth derivative
```python
def u_xxxx(u):
phi = np.matmul(matrixphi,u_xx(u))
uxxxx = np.matmul(B2,phi)
return uxxxx
```
# Plugging in the derivative values
```python
beta=1
alpha=1
```
```python
def Kuramoto(u):
"""
Function to compute the right-hand side of the system.
Parameters
----------
u : numpy.ndarray
Solution of time step t as a 1D array of floats
Returns
----------
u_t : numpy.ndarray
The right-hand side of the system as a 1D array of floats
"""
u_t = -beta*u_xxxx(u)-alpha*u_xx(u)-u*u_x(u)
return u_t
```
# Solving the equation
```python
def SSP_RK43(u):
u1 = u + (dt/2)*Kuramoto(u)
u2 = u1 + (dt/2)*Kuramoto(u1)
u3 = (2/3)*u + u2/3 + (dt/6)*Kuramoto(u2)
u_tdt = u3 + (dt/2)*Kuramoto(u3)
return u_tdt # solution at time step t+dt
```
```python
u_hist = u_num.copy()
u_hist.shape
```
(40001, 256)
```python
from tqdm.notebook import trange, tqdm
from time import sleep
for i in tqdm(range(len(t)-1)):
u_num[i+1,:] = SSP_RK43(u_num[i,:])
```
0%| | 0/40000 [00:00<?, ?it/s]
# Plots
```python
fig, ax = plt.subplots(figsize=(9, 5))
Xm, Tm = np.meshgrid(x,t)
surf = plt.contourf(Xm, Tm, u_num,15, cmap=plt.get_cmap("seismic"))
plt.colorbar()
plt.xlabel('x')
plt.ylabel('T')
plt.tight_layout()
```
```python
# Forward slashes keep the paths portable (and avoid invalid escape sequences like '\K')
np.save('Kuramoto_dataset/Kuramoto_X', Xm)
np.save('Kuramoto_dataset/Kuramoto_T', Tm)
np.save('Kuramoto_dataset/Kuramoto_U', u_num)
```
# References
[1] - Backpropagation algorithms and reservoir computing in recurrent neural networks for the forecasting of complex spatiotemporal dynamics
[2] - A COMPACT FOURTH-ORDER IMPLICIT-EXPLICIT RUNGE-KUTTA TYPE SCHEME FOR NUMERICAL SOLUTION OF THE KURAMOTO-SIVASHINSKY EQUATION
[3] - A note on solving the fourth-order Kuramoto-Sivanshinsky equation by the compact finite difference scheme
[4] - A Reduced High-order Compact Finite Difference Scheme Based on Proper Orthogonal Decomposition for the Generalized Kuramoto-Sivashinsky Equation
|
c6c49c09024712a324f2bbc197d96f47d721ef97
| 170,326 |
ipynb
|
Jupyter Notebook
|
Kuramoto_Dataset_generation.ipynb
|
pirao/Kuramoto_PINN
|
79fc0dd537ee9424dfc18c5d8c50e3474aed1980
|
[
"MIT"
] | null | null | null |
Kuramoto_Dataset_generation.ipynb
|
pirao/Kuramoto_PINN
|
79fc0dd537ee9424dfc18c5d8c50e3474aed1980
|
[
"MIT"
] | null | null | null |
Kuramoto_Dataset_generation.ipynb
|
pirao/Kuramoto_PINN
|
79fc0dd537ee9424dfc18c5d8c50e3474aed1980
|
[
"MIT"
] | null | null | null | 187.997792 | 109,072 | 0.90608 | true | 2,953 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.868827 | 0.828939 | 0.720204 |
__label__eng_Latn
| 0.632472 | 0.511607 |
<a href="https://colab.research.google.com/github/kmjohnson3/Intro-to-MRI/blob/master/NoteBooks/Selective_RF_Excitation.ipynb" target="_parent"></a>
# Spatially selective excitation
This module will explore slice selection in which we aim to excite a slice (2D imaging) or slab (3D imaging).
First we will need to run some code to import libraries and define a Bloch solver. This time the Bloch solver has some code to make it faster but slightly less accurate.
```
# This is comment, Python will ignore this line
# Import libraries (load libraries which provide some functions)
%matplotlib inline
import numpy as np # array library
import math
import cmath
from scipy import interpolate
import numba
# For interactive plotting
from ipywidgets import interact, interactive, FloatSlider, ToggleButton
from IPython.display import clear_output, display, HTML
# for plotting modified style for better visualization
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['lines.linewidth'] = 4
mpl.rcParams['axes.titlesize'] = 24
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 16
mpl.rcParams['ytick.labelsize'] = 16
mpl.rcParams['legend.fontsize'] = 16
@numba.jit(nopython=True)
def bloch_solver( B, time, freq=0, T1=2000, T2=2000, GAM=42.58e6*2*math.pi):
# This is simple Rk4 solution to the Bloch Equations.
#
# Inputs:
# B(array) -- Magentic Field [N x 3] (T)
# time(array) -- Time of each point in waveforms (s)
# freq -- Frequency [Hz]
# T1 -- Longitudinal relaxation times (s)
# T2 -- Transverse relaxation times (s)
#     (The magnetization is initialized internally to equilibrium, M = [0, 0, 1].)
# Outputs:
# MOutput -- Magnetization for each position in time
# Convert frequency to rads/s
act_freq = 2*math.pi*freq
    # Convert to rotation rates (gamma*B)
Bx = GAM*B[:,0]
By = GAM*B[:,1]
Bz = GAM*B[:,2] + act_freq
# Double the resolution using linear interpolation (this is faster than splines)
Bx2 = np.zeros( 2*len(Bz)+2)
By2 = np.zeros( 2*len(Bz)+2)
Bz2 = np.zeros( 2*len(Bz)+2)
Bx2[:-4:2] = Bx[:-1]
By2[:-4:2] = By[:-1]
Bz2[:-4:2] = Bz[:-1]
    # Odd samples: midpoints obtained by linear interpolation
Bx2[1:-3:2] = 0.5*Bx[:-1] + 0.5*Bx[1:]
By2[1:-3:2] = 0.5*By[:-1] + 0.5*By[1:]
Bz2[1:-3:2] = 0.5*Bz[:-1] + 0.5*Bz[1:]
#Initialize
Mag = np.array([[0.0],[0.0],[1.0]])
# Output storage
MOutput = np.zeros_like(B)
MOutput = np.expand_dims(MOutput,-1)
#Runge-Kutta PDE Solution
dt = time[2] - time[1]
for count, t1 in enumerate(time):
m1 = Mag
bx = Bx2[count*2]
by = By2[count*2]
bz = Bz2[count*2]
rhs = np.array([[ -1/T2, bz, -by],
[ -bz ,-1/T2, bx],
[ by , -bx, -1/T1]])
k1 = np.dot(rhs, m1) + np.array([[0.0],[0.0],[1.0/T1]])
t2 = t1 + dt/2
bx = Bx2[count*2+1]
by = By2[count*2+1]
bz = Bz2[count*2+1]
m2 = Mag + k1*dt/2
k2 = np.dot(np.array([[ -1/T2, bz, -by],
[ -bz ,-1/T2, bx],
[ by , -bx, -1/T1]]), m2) + np.array([[0.0],[0.0],[1.0/T1]])
t3 = t1 + dt/2
bx = Bx2[count*2+1]
by = By2[count*2+1]
bz = Bz2[count*2+1]
m3 = Mag + k2*dt/2
k3 = np.dot(np.array([[ -1/T2, bz, -by],
[ -bz ,-1/T2, bx],
[ by , -bx, -1/T1]]), m3) + np.array([[0.0],[0.0],[1.0/T1]])
t4 = t1 + dt
bx = Bx2[count*2+2]
by = By2[count*2+2]
bz = Bz2[count*2+2]
m4 = Mag + k3*dt
k4 = np.dot(np.array([[ -1/T2, bz, -by],
[ -bz ,-1/T2, bx],
[ by , -bx, -1/T1]]), m4) + np.array([[0.0],[0.0],[1.0/T1]])
# Runge-Kutta averages the above terms
Mag = Mag + dt/6*(k1 + 2*k2 + 2*k3 + k4);
# Save to an array
MOutput[count,:]= Mag
return MOutput
```
# Sinc Pulses
Not all pulses in MRI are sinc pulses, but we will consider pulses that are. Our pulses will have several parameters:
* **TBW** [unitless]: The time bandwidth product. This is effectively how many of the sinc lobes we include. More lobes means higher selectivity
* **T** [s]: The time length of the RF pulse, this will control the total time the RF pulse takes. It will set *BW* [Hz], the bandwidth of the pulse in Hz
* **Window function** : A function that rolls off the ends of the sinc so that there is a smoother transition when lobes are cut off. For this exercise a Hamming window is used, but other options exist.
The code below generates the pulse envelope. In this code, $B_1$ is aligned along $x$ so that it rotates the magnetization into the $y$ direction. The sinc can also be modulated by a frequency to excite at a different center frequency.
```
def generate_sinc(T, TBW=4, window=True, dt=4e-6, GAM=42.58e6, flip=10, freq=0):
# Number of points in waveform
Nt = int(T/dt)
# Time normalized to the time bandwidth product
t = np.linspace(-TBW,TBW, Nt)
# Get the pulse shape
B1 = np.sinc(t)
# To deal with the truncation we can apply a window function to taper the RF profile
if window:
B1 *= np.hamming(Nt)
# Normalize to the flip angle
B1 = B1 * (flip/360) / (GAM*np.sum(B1*dt))
# Get actual time
time = dt*np.arange(Nt)
# Convert to complex with frequency
B1 = B1*np.exp(2j*math.pi*time*freq)
return time, B1
def simulate_rf(time, B1):
B = np.zeros( (len(B1),3))
B[:,0] = np.real(B1)
B[:,1] = np.imag(B1)
Mout = bloch_solver( B, time, T1=2000, T2=2000, GAM=42.58e6*2*math.pi)
return Mout
def plot_rf(T, TBW, flip, freq, window):
# Create Sinc
time, B1 = generate_sinc(T/1e3, TBW, flip=flip, window=window, freq=freq)
# Simulate
Mout = simulate_rf( time, B1)
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.plot(1e3*time,np.real(B1),label='$B_x$')
plt.plot(1e3*time,np.imag(B1),label='$B_y$')
plt.xlabel('Time [ms]')
plt.ylabel('$B_1$ [T]')
plt.legend()
plt.subplot(122)
plt.plot(1e3*time,Mout[:,2], label=r'$M_z$')
plt.plot(1e3*time,Mout[:,0], label=r'$M_x$')
plt.plot(1e3*time,Mout[:,1], label=r'$M_y$')
plt.xlabel('Time [ms]')
plt.ylabel('Magnetization [a.u.]')
plt.legend()
plt.show()
```
# Sinc scaling parameters without gradients
Below is a simulation using a standard Bloch simulator. The parameters are set to maintain a constant flip angle for on-resonant spins. Try the following perturbations, first thinking about what the effect might be on the peak $B_1$, which is often limited on real systems.
* Change the flip angle
* Change the TBW and T
* Sweep the frequency, does the flip angle change? Is this different for a short and long pulse?
```
w = interactive(plot_rf,
TBW=FloatSlider(min=1, max=12, step=1, value=2, description='TBW '),
T=FloatSlider(min=0.5, max=10, step=0.5, value=2, description='T [ms]'),
flip=FloatSlider(min=1, max=90, step=1, value=20, description='Flip [deg.]'),
freq=FloatSlider(min=-2000, max=2000, step=100, value=0, description='RF Freq [Hz]'),
window=ToggleButton(value=True,description='Toggle Window'))
display(w)
```
interactive(children=(FloatSlider(value=2.0, description='T [ms]', max=10.0, min=0.5, step=0.5), FloatSlider(v…
# Adding a slice select gradient
Now we will add a slice select gradient. In this code, the time of the pulse scales with the $TBW$ by:
\begin{equation}
T = TBW*0.25 \times 10^{-3}
\end{equation}
This means that the bandwidth of the pulse ($BW$) is fixed to:
\begin{equation}
BW=\frac{TBW}{T} = \frac{TBW}{TBW*0.25 \times 10^{-3} [s]}= 4000 [Hz]
\end{equation}
This makes seeing many of the effects much easier. Some questions to consider:
* What is the effect of changing the center frequency? Does it depend on the gradient strength?
* How does changing the gradient amplitude affect the slice thickness? (a rough estimate is sketched just after this list)
* Does toggling the window function alter the slice profile?
* What might be the practical need for the rephasing gradient?
* Does the profile look the same for a 90 degree flip angle as for a 15 degree flip angle? [the small tip angle approximation will be violated and the response will no longer be a Fourier transform]
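As a back-of-the-envelope sketch (added here for illustration, not part of the original notebook): since the pulse bandwidth is fixed at $BW = 4000$ Hz in this section, the nominal slice thickness is roughly $\Delta z \approx BW / (\gamma G_{sel})$ with $\gamma = 42.58\times10^{6}$ Hz/T, so a stronger gradient gives a thinner slice.
```
# Rough slice-thickness estimate dz = BW / (gamma * Gsel),
# assuming the fixed 4000 Hz bandwidth used in this section.
GAM_HZ_PER_T = 42.58e6   # gyromagnetic ratio [Hz/T]
BW = 4000.0              # pulse bandwidth [Hz]
for Gsel in [3e-3, 10e-3, 20e-3]:        # slice-select gradient [T/m]
    dz = BW / (GAM_HZ_PER_T * Gsel)      # slice thickness [m]
    print('Gsel = %4.0f mT/m -> slice thickness ~ %.1f mm' % (1e3 * Gsel, 1e3 * dz))
```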
```
def simulate_rf_g(time, G, z, B1, T1, T2):
B = np.zeros( (len(B1),3))
B[:,0] = np.real(B1)
B[:,1] = np.imag(B1)
B[:,2] = G*z
Mout = bloch_solver( B, time)
return Mout
def generate_sinc_and_grad( Gsel=1e-3, TBW=4, flip=10, window=True, freq=0, rephase=True):
T = 0.25e-3*TBW
# Generate RF
time, B1 = generate_sinc( T, TBW, flip=flip, window=window, freq=freq)
# Get delta time
dt = time[1] - time[0]
# Gradient for slice select (T/m)
gselect = Gsel*np.ones_like(time)
if rephase:
# Generate rephaser
Grephase = 20e-3 # amplitude of rephaser
T_rephase = (Gsel * T * 0.5) / Grephase # Area / Gradient strength
Nrephase = int(np.ceil( T_rephase / dt )) # number of points
grephase = -Grephase*np.ones((Nrephase,)) #actual
grephase = -grephase*0.5*np.sum(gselect)/np.sum(grephase)
# Pad with zeros
pad = np.zeros((20,))
G = np.concatenate( (pad, gselect, grephase, pad))
B1 = np.concatenate( (pad, B1, 0*grephase, pad))
time = dt*np.arange(len(B1))
else:
# Pad with zeros
pad = np.zeros((20,))
G = np.concatenate( (pad, gselect, pad))
B1 = np.concatenate( (pad, B1, pad))
time = dt*np.arange(len(B1))
return time, B1, G
def plot_rf_g(TBW, flip, Gsel, freq, window, rephase):
# Essentially ignore T1/T2
T1=1000
T2=1000
# Convert to si units
Gsel=Gsel/1e3 # mT/m to T/m
# Create Sinc
time, B1, G = generate_sinc_and_grad( Gsel=Gsel, TBW=TBW, flip=flip, window=window, freq=freq, rephase=rephase)
## Simulate
zsim = np.linspace(-0.05,0.05,501) # Z values to simulate
Mall = []
for z in zsim:
Mout = simulate_rf_g(time, G, z, B1, T1, T2)
Mall.append(Mout)
Mall = np.stack(Mall,axis=0)
# Plots
fig=plt.figure(figsize=(12,8))
# Plot gradients
plt.subplot(221)
plt.plot(1e3*time, 1e3*G, color='b')
plt.ylim([-25, 25])
plt.xlabel('Time [ms]')
plt.ylabel('G [mT/m]', color='b')
# Plot B1
plt.subplot(223)
plt.plot(1e3*time,np.real(B1),label='$B_x$')
plt.plot(1e3*time,np.imag(B1),label='$B_y$')
plt.xlabel('Time [ms]')
plt.ylabel('$B_1$ [T]')
plt.legend()
plt.ylim([-1.2*np.max(np.abs(B1)), 1.2*np.max(np.abs(B1))])
plt.xlabel('Time [ms]')
# Plot of Mz
plt.subplot(222)
plt.plot(zsim*1e3,Mall[:,-1,2],label='$M_z$')
plt.xlabel('Position [mm]')
plt.ylabel('M [a.u.]')
# Plot of Mxy
plt.subplot(224)
plt.plot(zsim*1e3,Mall[:,-1,1],label='$M_y$')
plt.plot(zsim*1e3,Mall[:,-1,0],label='$M_x$')
plt.xlabel('Position [mm]')
plt.ylabel('M [a.u.]')
plt.legend()
plt.tight_layout(pad=0.4, w_pad=4.0, h_pad=1.0)
plt.show()
```
```
w = interactive(plot_rf_g,
TBW=FloatSlider(min=1, max=12, step=1, value=6, description='TBW',continuous_update=False),
flip=FloatSlider(min=1, max=90, step=1, value=10, description='Flip [deg.]',continuous_update=False),
freq=FloatSlider(min=-5000, max=5000, step=100, value=0, description='Freq [Hz]',continuous_update=False),
Gsel=FloatSlider(min=3, max=20, step=1, value=10,description='Gsel [mT/m]', continuous_update=False),
window=ToggleButton(value=True,description='Toggle Window',continuous_update=False),
rephase=ToggleButton(value=True,description='Toggle Rephaser',continuous_update=False),
)
display(w)
```
interactive(children=(FloatSlider(value=6.0, continuous_update=False, description='TBW', max=12.0, min=1.0, st…
| 91b3b1eec89c162163197f41c2c2c78b0b50f613 | 176,567 | ipynb | Jupyter Notebook | NoteBooks/Selective_RF_Excitation.ipynb | kmjohnson3/Intro-to-MRI | 19f9b06e7f6b29ce01b9d156b56912f78dfeabe7 | ["MIT"] | 12 | 2021-04-14T21:19:25.000Z | 2022-02-14T21:17:12.000Z | NoteBooks/Selective_RF_Excitation.ipynb | kmjohnson3/Intro-to-MRI | 19f9b06e7f6b29ce01b9d156b56912f78dfeabe7 | ["MIT"] | null | null | null | NoteBooks/Selective_RF_Excitation.ipynb | kmjohnson3/Intro-to-MRI | 19f9b06e7f6b29ce01b9d156b56912f78dfeabe7 | ["MIT"] | 1 | 2021-04-15T17:05:42.000Z | 2021-04-15T17:05:42.000Z | 93.076964 | 63,122 | 0.776725 | true | 3,861 | Qwen/Qwen-72B | 1. YES 2. YES | 0.782662 | 0.699254 | 0.54728 | __label__eng_Latn | 0.790954 | 0.109845 |
# Symbolic mathematics with Sympy
[Sympy](http://www.sympy.org/en/index.html) is described as a:
> "... Python library for symbolic mathematics."
This means it can be used to:
- Manipulate symbolic expressions;
- Solve symbolic equations;
- Carry out symbolic Calculus;
- Plot symbolic function.
It has other capabilities that we will not go in to in this handbook. But you can read more about it here: http://www.sympy.org/en/index.html
## Manipulating symbolic expressions
Before we can start using the library to manipulate expressions, we need to import it.
```python
import sympy as sym
```
The above imports the library and gives us access to its commands using the shorthand `sym`, which is conventionally used.
If we wanted to get Python to check that $x - x = 0$ we would get an error if we did not tell Python what $x$ was.
This is where Sympy comes in: we can tell Python to create $x$ as a symbolic variable:
```python
x = sym.symbols('x')
```
Now we can calculate $x - x$:
```python
x - x
```
0
We can create and manipulate expressions in Sympy. Let us for example verify:
$$(a + b) ^ 2 = a ^ 2 + 2ab + b ^2$$
First, we create the symbolic variables $a, b$:
```python
a, b = sym.symbols('a, b')
```
Now let's create our expression:
```python
expr = (a + b) ** 2
expr
```
(a + b)**2
**Note** we can get Sympy to use LaTeX so that the output looks nice in a notebook:
```python
sym.init_printing()
```
```python
expr
```
Let us expand our expression:
```python
expr.expand()
```
Note that we can also get Sympy to produce the LaTeX code for future use:
```python
sym.latex(expr.expand())
```
'a^{2} + 2 a b + b^{2}'
---
**EXERCISE** Use Sympy to verify the following expressions:
- $(a - b) ^ 2 = a ^ 2 - 2 a b + b^2$
- $a ^ 2 - b ^ 2 = (a - b) (a + b)$ (instead of using `expand`, try `factor`; a quick sketch follows below)
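For reference, here is one possible way to check the second identity with `factor` (a sketch, not the only approach):
```python
import sympy as sym

a, b = sym.symbols('a, b')  # re-declared so this cell stands alone

# factor rewrites the difference of squares as the product of the two binomials
sym.factor(a ** 2 - b ** 2)  # gives (a - b)*(a + b)
```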
## Solving symbolic equations
We can use Sympy to solve symbolic equations. For example, let us find the solution in $x$ of the quadratic equation:
$$a x ^ 2 + b x + c = 0$$
```python
# We only really need to define `c` but doing them all again.
a, b, c, x = sym.symbols('a, b, c, x')
```
The Sympy command for solving equations is `solveset`. The first argument is an expression whose roots will be found. The second argument is the variable we are solving for.
```python
sym.solveset(a * x ** 2 + b * x + c, x)
```
---
**EXERCISE** Use Sympy to find the solutions to the generic cubic equation:
$$a x ^ 3 + b x ^ 2 + c x + d = 0$$
---
It is possible to pass more arguments to `solveset`, for example to constrain the solution space. Let us see what the solution of the following is in $\mathbb{R}$:
$$x^2=-1$$
```python
sym.solveset(x ** 2 + 1, x, domain=sym.S.Reals)
```
---
**EXERCISE** Use Sympy to find the solutions to the following equations:
- $x ^ 2 == 2$ in $\mathbb{N}$;
- $x ^ 3 + 2 x = 0$ in $\mathbb{R}$.
---
## Symbolic calculus
We can use Sympy to compute limits. Let us calculate:
$$\lim_{x\to 0^+}\frac{1}{x}$$
```python
sym.limit(1/x, x, 0, dir="+")
```
---
**EXERCISE** Compute the following limits:
1. $\lim_{x\to 0^-}\frac{1}{x}$
2. $\lim_{x\to 0}\frac{1}{x^2}$
---
We can also use Sympy to differentiate and integrate. Let us experiment with differentiating the following expression:
$$x ^ 2 - \cos(x)$$
```python
sym.diff(x ** 2 - sym.cos(x), x)
```
Similarly we can integrate:
```python
sym.integrate(x ** 2 - sym.cos(x), x)
```
We can also carry out definite integrals:
```python
sym.integrate(x ** 2 - sym.cos(x), (x, 0, 5))
```
---
**EXERCISE** Use Sympy to calculate the following:
1. $\frac{d\sin(x ^2)}{dx}$
2. $\frac{d(x ^2 + xy - \ln(y))}{dy}$
3. $\int e^x \cos(x)\;dx$
4. $\int_0^5 e^{2x}\;dx$
## Plotting with Sympy
Finally, Sympy can be used to plot functions. Note that this makes use of another Python library called [matplotlib](http://matplotlib.org/). Whilst Sympy saves us from having to call matplotlib directly, matplotlib is worth learning in its own right, as it is a very powerful and versatile library.
Before plotting in Jupyter we need to run a command to tell it to display the plots directly in the notebook:
```python
%matplotlib inline
```
Let us plot $x^2$:
```python
expr = x ** 2
p = sym.plot(expr);
```
We can directly save that plot to a file if we wish to:
```python
p.save("x_squared.pdf");
```
---
**EXERCISE** Plot the following functions:
- $y=x + \cos(x)$
- $y=x ^ 2 - e^x$ (you might find `ylim` helpful as an argument)
Experiment with saving your plots to a file.
---
## Summary
This section has discussed using Sympy to:
- Manipulate symbolic expressions;
- Calculate limits, derivates and integrals;
- Plot a symbolic expression.
This just touches the surface of what Sympy can do.
Let us move on to using [Numpy](02 - Linear algebra with Numpy.ipynb) to do Linear Algebra.
| 236091c13b28fd2b599095ab15dca8bc798693dd | 55,927 | ipynb | Jupyter Notebook | 01-Symbolic-mathematics-with-Sympy.ipynb | sierxue/Python-Mathematics-Handbook | 447502fe05503ccc588c2b9c8abb3207668d6f07 | ["MIT"] | 147 | 2017-02-14T20:07:00.000Z | 2022-03-01T10:41:28.000Z | 01-Symbolic-mathematics-with-Sympy.ipynb | sierxue/Python-Mathematics-Handbook | 447502fe05503ccc588c2b9c8abb3207668d6f07 | ["MIT"] | 8 | 2017-02-14T20:07:53.000Z | 2018-11-16T11:11:40.000Z | 01-Symbolic-mathematics-with-Sympy.ipynb | sierxue/Python-Mathematics-Handbook | 447502fe05503ccc588c2b9c8abb3207668d6f07 | ["MIT"] | 87 | 2017-02-15T02:16:16.000Z | 2022-03-01T10:41:21.000Z | 84.609682 | 17,448 | 0.843689 | true | 1,452 | Qwen/Qwen-72B | 1. YES 2. YES | 0.903294 | 0.927363 | 0.837682 | __label__eng_Latn | 0.992651 | 0.784549 |
### SAYANTAN RAHA
## Roll # : BAI09056
### IIMB - BAI09 - Assignment 4
```python
from IPython.display import HTML
HTML('''
<form action="javascript:code_toggle()"><input type="submit" value="Toggle on/off Code"></form>''')
```
```python
import warnings
warnings.filterwarnings('ignore')
%load_ext rpy2.ipython
```
```python
import pandas as pd
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
```python
from pulp import *
import pyomo.environ as pe
from scipy import linalg as slin
from scipy import stats as stats
```
# Q-1-1
- Calculating the Aggregate TPM
The **MLE** estimate of the probability $P_{ij}$ (probability of moving from state i to state j) in one step is given by:
\begin{equation*} \hat{P_{ij}} = \frac {N_{ij}}{ \sum_{k=1}^m {N_{ik}}} \end{equation*}
where $N_{ij}$ is the number of observed transitions with $X_n = i$ (state i at time n) and $X_{n+1} = j$ (state j at time n+1). For the aggregate matrix we pool the $N_{ij}$ counts from all the monthly data.
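As a minimal sketch of this estimator (it performs the same row-normalisation that the next cell applies directly to the aggregated counts; the helper name is ours):
```python
import numpy as np

def counts_to_tpm(counts):
    """MLE of the TPM: divide each row of the transition-count matrix by its row sum."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)
```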
```python
agg = np.array([
[1737.3333333333 ,0,0,0,0],
[ 457.00,1118.67,614.00,0,0],
[ 272.00,0,715.33,895.00 ,0],
[ 859.00,0,0,570.67,561.33 ],
[0,0,0,0, 543.67 ]
])
rowsum = np.sum(agg, axis = 1).reshape(-1,1)
agg1 = agg / rowsum
tpmdf = pd.DataFrame(index=['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won'], data=agg1)
tpmdf.columns = ['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won']
tpmdf
```
|         | Lost     | Stage A  | Stage B  | Stage C  | Won      |
|---------|----------|----------|----------|----------|----------|
| Lost    | 1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| Stage A | 0.208707 | 0.510885 | 0.280408 | 0.000000 | 0.000000 |
| Stage B | 0.144502 | 0.000000 | 0.380024 | 0.475475 | 0.000000 |
| Stage C | 0.431441 | 0.000000 | 0.000000 | 0.286625 | 0.281934 |
| Won     | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 |
- Month 1 TPM
```python
m1=np.array([
[2005 ,0,0,0,0],
[400, 1002, 602,0,0],
[294,0, 793, 903,0],
[913,0,0, 597, 486],
[0,0,0,0,548 ]
])
rowsum = np.sum(m1, axis = 1).reshape(-1,1)
agg1 = m1 / rowsum
tpmdf = pd.DataFrame(index=['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won'], data=agg1)
tpmdf.columns = ['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won']
tpmdf
```
|         | Lost     | Stage A | Stage B  | Stage C  | Won      |
|---------|----------|---------|----------|----------|----------|
| Lost    | 1.000000 | 0.0     | 0.000000 | 0.000000 | 0.000000 |
| Stage A | 0.199601 | 0.5     | 0.300399 | 0.000000 | 0.000000 |
| Stage B | 0.147739 | 0.0     | 0.398492 | 0.453769 | 0.000000 |
| Stage C | 0.457415 | 0.0     | 0.000000 | 0.299098 | 0.243487 |
| Won     | 0.000000 | 0.0     | 0.000000 | 0.000000 | 1.000000 |
- Month 2 TPM
```python
m2=np.array([
[1607,0,0,0,0],
[450,1106,570,0,0],
[279,0,808, 974,0],
[871,0,0,601,597],
[0,0,0,0,486 ]
])
rowsum = np.sum(m2, axis = 1).reshape(-1,1)
agg1 = m2 / rowsum
tpmdf = pd.DataFrame(index=['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won'], data=agg1)
tpmdf.columns = ['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won']
tpmdf
```
|         | Lost     | Stage A  | Stage B  | Stage C  | Won      |
|---------|----------|----------|----------|----------|----------|
| Lost    | 1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| Stage A | 0.211665 | 0.520226 | 0.268109 | 0.000000 | 0.000000 |
| Stage B | 0.135371 | 0.000000 | 0.392043 | 0.472586 | 0.000000 |
| Stage C | 0.420976 | 0.000000 | 0.000000 | 0.290478 | 0.288545 |
| Won     | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 |
- Month 3 TPM
```python
m3=np.array([
[1600,0,0,0,0],
[521 , 1248,670,0,0],
[243,0,545,808,0],
[793,0,0,514,601],
[0,0,0,0,597]
])
rowsum = np.sum(m3, axis = 1).reshape(-1,1)
agg1 = m3 / rowsum
tpmdf = pd.DataFrame(index=['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won'], data=agg1)
tpmdf.columns = ['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won']
tpmdf
```
|         | Lost     | Stage A  | Stage B  | Stage C  | Won     |
|---------|----------|----------|----------|----------|---------|
| Lost    | 1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.00000 |
| Stage A | 0.213612 | 0.511685 | 0.274703 | 0.000000 | 0.00000 |
| Stage B | 0.152256 | 0.000000 | 0.341479 | 0.506266 | 0.00000 |
| Stage C | 0.415618 | 0.000000 | 0.000000 | 0.269392 | 0.31499 |
| Won     | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.00000 |
# Q-1-2
In order to show that the aggregate data follows a (first-order) Markov chain we will perform the following tests:
- Test the time homogeneity of the transition matrix using the **Likelihood Ratio test**
- Perform the **Anderson Goodman Test** on the monthly TPMs and the aggregated TPM to test whether successive transitions are dependent
### Anderson Goodman Test
$H_O$: The sequence of transitions is independent
$H_A$: The sequence of transitions is dependent
The corresponding Test Statistic is:
\begin{equation}
\chi^2=\sum_{i=1}^{n} \sum_{j=1}^{n} \frac{(O_{ij} - E_{ij})^2}{E_{ij}}
\end{equation}
where
- $ O_{ij}$ = Observed number of transitions from state i to state j
- $ E_{ij}$ = Expected number of transitions from state i to state j
Alpha / Significance = 0.05
```python
def anderson_goodman_first_order_markov_chain(obsfreq):
assert(type(obsfreq) == np.ndarray), "Object passed is not an Numpy Array."
csum = obsfreq.sum(axis = 0)
rsum = obsfreq.sum(axis = 1)
total = csum.sum()
observed = obsfreq.reshape(-1)
expected = np.array([csum[j] * rsum[i] / total for i in range(csum.shape[0])
for j in range(rsum.shape[0])]).reshape(-1)
data = pd.DataFrame({"observed": list(observed), "expected": list(expected)})
data
data['difference'] = (data["observed"] - data["expected"]) ** 2 / data["expected"]
data
chisq = data.difference.sum()
chisq
df = (rsum.shape[0] -1) * (csum.shape[0] - 1)
df
chicrit = stats.chi2.ppf(0.95, df)
pval = 1 - stats.chi2.cdf(chisq, df)
ho = "ho = Distributions are independent"
ha = "ha = Distributions are dependent"
print("Null Hypothesis :: {}".format(ho))
print("Alternate Hypothesis :: {}".format(ha))
print("\n")
if pval < alpha:
print("Chisq {} is more than critical Chisq value {} for significance alpha {} and df {}".format(
chisq, chicrit, alpha, df))
print("Corresponding p-value {} is below alpha / significance {} for df {} ".format(pval, alpha, df))
print("Hence we reject the NULL Hypothesis, the distributions are dependent")
else:
print("Chisq {} is below critical Chisq value {} for significance alpha {} and df {}".format(chisq, chicrit, alpha, df))
print("Corresponding p-value {} is more than alpha / significance {}".format(pval, alpha))
print("Hence we retain the NULL Hypothesis")
```
```python
alpha = 0.05
print("Aggregate TPM- Test")
anderson_goodman_first_order_markov_chain(agg)
```
Aggregate TPM- Test
Null Hypothesis :: ho = Distributions are independent
Alternate Hypothesis :: ha = Distributions are dependent
Chisq 12987.24859805583 is more than critical Chisq value 26.29622760486423 for significance alpha 0.05 and df 16
Corresponding p-value 0.0 is below alpha / significance 0.05 for df 16
Hence we reject the NULL Hypothesis, the distributions are dependent
```python
alpha = 0.05
print("Month 1 - Test")
anderson_goodman_first_order_markov_chain(m1)
```
Month 1 - Test
Null Hypothesis :: ho = Distributions are independent
Alternate Hypothesis :: ha = Distributions are dependent
Chisq 13648.629894861813 is more than critical Chisq value 26.29622760486423 for significance alpha 0.05 and df 16
Corresponding p-value 0.0 is below alpha / significance 0.05 for df 16
Hence we reject the NULL Hypothesis, the distributions are dependent
```python
alpha = 0.05
print("Month 2 - Test")
anderson_goodman_first_order_markov_chain(m2)
```
Month 2 - Test
Null Hypothesis :: ho = Distributions are independent
Alternate Hypothesis :: ha = Distributions are dependent
Chisq 12764.088812408034 is more than critical Chisq value 26.29622760486423 for significance alpha 0.05 and df 16
Corresponding p-value 0.0 is below alpha / significance 0.05 for df 16
Hence we reject the NULL Hypothesis, the distributions are dependent
```python
alpha = 0.05
print("Month 3 - Test")
anderson_goodman_first_order_markov_chain(m3)
```
Month 3 - Test
Null Hypothesis :: ho = Distributions are independent
Alternate Hypothesis :: ha = Distributions are dependent
Chisq 12576.629383030904 is more than critical Chisq value 26.29622760486423 for significance alpha 0.05 and df 16
Corresponding p-value 0.0 is below alpha / significance 0.05 for df 16
Hence we reject the NULL Hypothesis, the distributions are dependent
```python
def anderson_goodman_homogeniety_test_for_first_order_markov_chain(Freqlist, nt):
PTM12 = np.zeros(Freqlist[0].shape)
PTMlist = list([PTM12.copy(), PTM12.copy(), PTM12.copy()]) ## Need to make this dynamic
for x in np.arange(len(PTMlist)):
for i in np.arange(nt.shape[1]):
PTMlist[x][i,:] = Freqlist[x][i,:] / nt[x,i]
#print(PTMlist )
totalElements = nt.sum(axis = 0).reshape(Freqlist[0].shape[1],1)
#print(totalElements)
intermediate = np.zeros(Freqlist[0].shape)
for i in Freqlist:
intermediate += i
PTMA = intermediate / totalElements
#print(PTMA)
results = []
for t in np.arange(nt.shape[0]):
for i in np.arange(PTMA.shape[0]):
for j in np.arange(PTMA.shape[0]):
if PTMA[i,j] == 0:
results.append(0)
else:
results.append(nt[t,i] * ((PTMlist[t][i, j] - PTMA[i,j]) ** 2)/ PTMA[i,j])
#print(results)
ho = "ho = PTMx elements == PTMAggregate elements"
ha = "ha = PTMx elements != PTMAggregate elements"
print("Null Hypothesis :: {}".format(ho))
print("Alternate Hypothesis :: {}".format(ha))
print("\n")
chisq = sum(results)
df = (len(Freqlist) - 1) * (PTMA.shape[0]) * (PTMA.shape[0] - 1)
#print(df)
chicrit = stats.chi2.ppf(0.95, df)
pval = 1 - stats.chi2.cdf(chisq, df)
if pval < alpha:
print("Chisq {} is more than critical Chisq value {} for significance alpha {} and df {}".format(
chisq, chicrit, alpha, df))
print("Corresponding p-value {} is below alpha / significance {}".format(pval, alpha))
print("Hence we reject the NULL Hypothesis")
else:
print("Chisq {} is below critical Chisq value {} for significance alpha {} and df {}".format(chisq, chicrit, alpha, df))
print("Corresponding p-value {} is more than alpha / significance {}".format(pval, alpha))
print("Hence we retain the NULL Hypothesis")
```
```python
Freqlist = list([m1,m2, m3])
#Freqlist
```
```python
nt = np.array([np.sum(m1, axis = 1),np.sum(m2, axis = 1),np.sum(m3, axis = 1)])
nt
```
array([[2005, 2004, 1990, 1996, 548],
[1607, 2126, 2061, 2069, 486],
[1600, 2439, 1596, 1908, 597]])
### Likelihood Ratio test
$ H_O$: $ P_{ij}(t)$ = $ P_{ij}$ for t = 1,2,3
$ H_A$: $ P_{ij}(t) \neq P_{ij}$ for t = 1,2,3
where $P_{ij}(t)$ is the estimated transition probability
The corresponding Test Statistic is:
\begin{equation}
\lambda=\prod_{t} \prod_{i,j} \left[\frac{\hat P_{ij}}{\hat P_{ij}(t)}\right]^{n_{ij}(t)}
\end{equation}
which (via $-2\ln\lambda$) is asymptotically equivalent to
\begin{equation}
\chi^2=\sum_{t} \sum_{i} \sum_{j} \frac{n_i(t)[\hat P_{ij}(t) - \hat P_{ij}]^2}{\hat P_{ij}}
\end{equation}
where $n_i(t)$ is the number of customers in state i at time t.
Alpha / Significance = 0.05
```python
print("Time Homogeneity Test / Likelihood Ratio Test - for Aggreagted TPM")
anderson_goodman_homogeniety_test_for_first_order_markov_chain(Freqlist, nt)
```
Time Homogeneity Test / Likelihood Ratio Test - for Aggreagted TPM
Null Hypothesis :: ho = PTMx elements == PTMAggregate elements
Alternate Hypothesis :: ha = PTMx elements != PTMAggregate elements
Chisq 48.381890967159826 is below critical Chisq value 55.75847927888702 for significance alpha 0.05 and df 40
Corresponding p-value 0.17053157682892028 is more than alpha / significance 0.05
Hence we retain the NULL Hypothesis
From the test results above we can conclude that the process is a **First Order Markov Chain**
# Q-1-3
- We will be using the following aggregate TPM to answer the question. It is derived from the aggregate matrix of transition counts/frequencies provided in the Excel file.
```python
def get_Absorbing_State_Markov_data(P1):
assert(type(P1) == np.ndarray), "Object passed is not an Numpy Array."
index = np.array(np.where(P1[:,:] == 1)).T
index[:, 0]
listRows = np.array(list(set(np.arange(P1.shape[0])) - set(index[:,0]))).reshape(-1,1)
listCols = np.array(list(set(np.arange(P1.shape[0])) - set(index[:,1])))
Q = P1[listRows, listCols]
F = slin.inv(np.identity(Q.shape[0]) - Q)
listCols = np.array(list(set(index[:,1])))
R = P1[listRows, listCols]
FR = slin.inv(np.identity(Q.shape[0]) - Q).dot(R)
time2churn = slin.inv(np.identity(Q.shape[0]) - Q).sum(axis=1).reshape(-1,1)
return Q, R, F, FR, time2churn
agg = np.array([
[1737.3333333333 ,0,0,0,0],
[ 457.00,1118.67,614.00,0,0],
[ 272.00,0,715.33,895.00 ,0],
[ 859.00,0,0,570.67,561.33 ],
[0,0,0,0, 543.67 ]
])
rowsum = np.sum(agg, axis = 1).reshape(-1,1)
agg1 = agg / rowsum
tpmdf = pd.DataFrame(index=['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won'], data=agg1)
tpmdf.columns = ['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won']
tpmdf
```
|         | Lost     | Stage A  | Stage B  | Stage C  | Won      |
|---------|----------|----------|----------|----------|----------|
| Lost    | 1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| Stage A | 0.208707 | 0.510885 | 0.280408 | 0.000000 | 0.000000 |
| Stage B | 0.144502 | 0.000000 | 0.380024 | 0.475475 | 0.000000 |
| Stage C | 0.431441 | 0.000000 | 0.000000 | 0.286625 | 0.281934 |
| Won     | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 |
- Since the TPM has two absorbing states (Lost and Won), the canonical-form quantities F and R would give times and probabilities of absorption into either of the two absorbing states, not specifically into Won.
- We therefore remove the unwanted absorbing state (Lost) from the matrix and recompute the TPM (normalizing each row with respect to the omitted state), and use that TPM to find the time to absorption from Stage B to the Won state.
- The corresponding new TPM is:
```python
TPM = np.array([
[0.510885 / (1-0.208707), 0.280408/ (1-0.208707), 0, 0],
[0, 0.380024/(1-0.144502), 0.475475/(1-0.144502), 0],
[0, 0, 0.286625/(1-0.431441), 0.281934/(1-0.431441)],
[0., 0, 0, 1],
])
tpmdf = pd.DataFrame(index=['Stage A', 'Stage B', 'Stage C', 'Won'], data=TPM)
tpmdf.columns = ['Stage A', 'Stage B', 'Stage C', 'Won']
tpmdf
```
|         | Stage A  | Stage B  | Stage C  | Won      |
|---------|----------|----------|----------|----------|
| Stage A | 0.645633 | 0.354367 | 0.000000 | 0.000000 |
| Stage B | 0.000000 | 0.444214 | 0.555787 | 0.000000 |
| Stage C | 0.000000 | 0.000000 | 0.504125 | 0.495875 |
| Won     | 0.000000 | 0.000000 | 0.000000 | 1.000000 |
- To solve the problem we will represent the Matrix in its canonical form:
P= $\begin{bmatrix}
I & O \\
R & Q
\end{bmatrix}$
where:
- I = Identity Matrix
- O = Zero matrix
- R = Probability of absorption from a transient state to an absorbing state
- Q = Probability of transition between transient states
To calculate the eventual probability of absorption we compute the limiting (long-run) probability. Raising **P** to the power n gives a matrix of the following form:
$P^n $ = $\begin{bmatrix}
I & O \\
\sum_{k=0}^{n-1}{(Q^k)}R & Q^n
\end{bmatrix}$
as $n\to\infty$ $\sum_{k=0}^{n-1}{(Q^k)}$ = F = $ (I-Q)^{-1}$
- F = Fundamental Matrix
- Expected time to absorption = Fc, where c = unit vector
Solving the problem we get:
```python
Q, R, F, FR, time2churn = get_Absorbing_State_Markov_data(TPM)
```
```python
print("F Matrix::")
print(F)
```
F Matrix::
[[2.0445098 0.92470628 0.61632968]
[0. 1.61296487 1.07506365]
[0. 0. 1.40178691]]
```python
print("Time to Churn: Fc = ")
time2churn
```
Time to Churn: Fc =
array([[3.58554576],
[2.68802852],
[1.40178691]])
- On average it takes approximately 3 months for an opportunity in **Stage B** to be converted into **Contract Signing** (Won)
# Q-1-4
- Revenue can only be realised once a Opportunity reaches the Won state
- Multiplying TPM with the Initial distribution will be used to find the revenue
```python
TPM = agg1
tpmdf = pd.DataFrame(index=['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won'], data=TPM)
tpmdf.columns = ['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won']
tpmdf
```
|         | Lost     | Stage A  | Stage B  | Stage C  | Won      |
|---------|----------|----------|----------|----------|----------|
| Lost    | 1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| Stage A | 0.208707 | 0.510885 | 0.280408 | 0.000000 | 0.000000 |
| Stage B | 0.144502 | 0.000000 | 0.380024 | 0.475475 | 0.000000 |
| Stage C | 0.431441 | 0.000000 | 0.000000 | 0.286625 | 0.281934 |
| Won     | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 |
From the TPM above we can see that only 0.281934 (about 28%) of the Stage C pipeline has a probability of being converted (Won) after one month. Hence: Revenue = 0.281934 * 1.6
```python
PI = np.array([0, 1, 1.8, 1.6, 0])
margin = np.array([0, 0, 0, 0, 1])
print('Expected Revenue after a Month {} Billion'.format(0.281934 * 1.6))
```
Expected Revenue after a Month 0.45109440000000006 Billion
```python
TPM = agg1.dot(agg1)
tpmdf = pd.DataFrame(index=['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won'], data=TPM)
tpmdf.columns = ['Lost', 'Stage A', 'Stage B', 'Stage C', 'Won']
tpmdf
```
|         | Lost     | Stage A  | Stage B  | Stage C  | Won      |
|---------|----------|----------|----------|----------|----------|
| Lost    | 1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| Stage A | 0.355852 | 0.261004 | 0.249818 | 0.133327 | 0.000000 |
| Stage B | 0.404555 | 0.000000 | 0.144418 | 0.316974 | 0.134052 |
| Stage C | 0.555103 | 0.000000 | 0.000000 | 0.082154 | 0.362743 |
| Won     | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 |
For 2 months the TPM is given by $P^2$.
From the TPM above we can see that 0.134052 (about 13%) of Stage B and 0.362743 (about 36%) of Stage C have a probability of being converted after two months. Hence: Revenue = 0.1340523 * 1.8 + 0.3627429 * 1.6
```python
print('Expected Revenue after two Months (end of March) {} Billion'.format(0.1340523 * 1.8 + 0.3627429*1.6))
```
Expected Revenue after two Months (end of March) 0.8216827800000001 Billion
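As a quick cross-check (a sketch reusing the `PI` revenue vector and the aggregate TPM `agg1` defined above; not part of the original workings), the same one- and two-month figures can be read off the Won component of the propagated revenue vector:
```python
from numpy.linalg import matrix_power

# Won is the last state; PI carries the revenue at stake in each pipeline stage
for months in (1, 2):
    expected = PI.dot(matrix_power(agg1, months))[-1]
    print("Expected revenue after {} month(s): {:.4f} Billion".format(months, expected))
```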
# Q-2-1
- Possible States = (NN, YN, NY, YY)
- NN = No accidents in the past 2 years
- YN = No Accident in last year, but had accident in prior year
- NY = Accident in last year, but had no accident in prior year
- YY = Accident in last year, and had accident in prior year
$P_I = Initial State = [1,0,0,0]$
$Cost = [-200,300,600,1000]$
The corresponding TPM is given by:
```python
TPM = np.array([
[.9, 0, 0.1,0],
[.9, 0, .1, 0],
[0, .9,0, .1],
[0, .9,0, .1],
])
TPM
tpmdf = pd.DataFrame(index=['NN', 'YN', 'NY', 'YY'], data=TPM)
tpmdf.columns = ['NN', 'YN', 'NY', 'YY']
tpmdf
```
|    | NN  | YN  | NY  | YY  |
|----|-----|-----|-----|-----|
| NN | 0.9 | 0.0 | 0.1 | 0.0 |
| YN | 0.9 | 0.0 | 0.1 | 0.0 |
| NY | 0.0 | 0.9 | 0.0 | 0.1 |
| YY | 0.0 | 0.9 | 0.0 | 0.1 |
```python
from numpy.linalg import matrix_power
def get_state_after_n_periods(PTM, periods, initital_state, revenue = None):
assert(type(PTM) == np.ndarray), "Object passed is not an Numpy Array."
P1 = PTM.copy()
#for i in np.arange(periods -1):
# P1 = P1.dot(PTM)
P1 = matrix_power(P1, periods)
if revenue is None:
return (P1, initital_state.dot(P1))
else:
return (P1, initital_state.dot(P1).dot(revenue))
```
# Q-2-2
State after n periods is given by $P_I \times P^n$
where:
- $P_I = Initial Distribution$
- P = Transition Probability Matrix
- n = Number of periods
```python
C1=np.array([1,0,0,0])
get_state_after_n_periods(TPM, 3, C1, revenue = [-200,300,600,1000])
```
(array([[0.81, 0.09, 0.09, 0.01],
[0.81, 0.09, 0.09, 0.01],
[0.81, 0.09, 0.09, 0.01],
[0.81, 0.09, 0.09, 0.01]]), -70.99999999999999)
His premium will possibly reduce by INR 70.99
# Q-3
- To solve the problem we will represent the TPM Matrix in its canonical form:
P= $\begin{bmatrix}
I & O \\
R & Q
\end{bmatrix}$
where:
- I = Identity Matrix
- O = Zero matrix
- R = Probability of absorption from a transient state to an absorbing state
- Q = Probability of transition between transient states
To calculate the eventual probability of absorption we compute the limiting (long-run) probability. Raising **P** to the n-th power gives a matrix of the following form:
$P^n $ = $\begin{bmatrix}
I & O \\
\sum_{k=0}^{n-1}{(Q^k)}R & Q^n
\end{bmatrix}$
as $n\to\infty$ $\sum_{k=0}^{n-1}{(Q^k)}$ = F = $ (I-Q)^{-1}$
- F = Fundamental Matrix
- Expected time to absorption = Fc, where c = unit vector
Hence Fc is the matrix we will derive from the provided data points.
Solving the problem we get:
```python
TPM = np.array([
[1, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0.05, 0.05, 0.90, 0, 0, 0],
[0.10, 0.05, 0, 0.80, 0.05, 0],
[0.20, 0.10, 0, 0.05, 0.60, 0.05],
[0.10, 0.20, 0, 0, 0, 0.70],
])
tpmdf = pd.DataFrame(index=['1', '2', '3', '4', '5', '6'], data=TPM)
tpmdf.columns = ['1', '2', '3', '4', '5', '6']
tpmdf
```
|   | 1    | 2    | 3   | 4    | 5    | 6    |
|---|------|------|-----|------|------|------|
| 1 | 1.00 | 0.00 | 0.0 | 0.00 | 0.00 | 0.00 |
| 2 | 0.00 | 1.00 | 0.0 | 0.00 | 0.00 | 0.00 |
| 3 | 0.05 | 0.05 | 0.9 | 0.00 | 0.00 | 0.00 |
| 4 | 0.10 | 0.05 | 0.0 | 0.80 | 0.05 | 0.00 |
| 5 | 0.20 | 0.10 | 0.0 | 0.05 | 0.60 | 0.05 |
| 6 | 0.10 | 0.20 | 0.0 | 0.00 | 0.00 | 0.70 |
```python
def get_Absorbing_State_Markov_data(P1):
assert(type(P1) == np.ndarray), "Object passed is not an Numpy Array."
index = np.array(np.where(P1[:,:] == 1)).T
index[:, 0]
listRows = np.array(list(set(np.arange(P1.shape[0])) - set(index[:,0]))).reshape(-1,1)
listCols = np.array(list(set(np.arange(P1.shape[0])) - set(index[:,1])))
Q = P1[listRows, listCols]
F = slin.inv(np.identity(Q.shape[0]) - Q)
listCols = np.array(list(set(index[:,1])))
R = P1[listRows, listCols]
FR = slin.inv(np.identity(Q.shape[0]) - Q).dot(R)
time2churn = slin.inv(np.identity(Q.shape[0]) - Q).sum(axis=1).reshape(-1,1)
return Q, R, F, FR, time2churn
```
```python
Q, R, F, FR, time2churn = get_Absorbing_State_Markov_data(TPM)
print("Fundamental Matrix :: ")
F
```
Fundamental Matrix ::
array([[10. , 0. , 0. , -0. ],
[ 0. , 5.16129032, 0.64516129, 0.10752688],
[ 0. , 0.64516129, 2.58064516, 0.43010753],
[ 0. , 0. , 0. , 3.33333333]])
```python
print("R Matrix :: ")
R
```
R Matrix ::
array([[0.05, 0.05],
[0.1 , 0.05],
[0.2 , 0.1 ],
[0.1 , 0.2 ]])
```python
print("Q Matrix ::")
Q
```
Q Matrix ::
array([[0.9 , 0. , 0. , 0. ],
[0. , 0.8 , 0.05, 0. ],
[0. , 0.05, 0.6 , 0.05],
[0. , 0. , 0. , 0.7 ]])
```python
print("Time to Churn = Fc ::")
tpmdf = pd.DataFrame(index=['3', '4', '5', '6'], data=time2churn)
tpmdf.columns = ['Time2Absorption']
tpmdf
```
Time to Churn = Fc ::
|   | Time2Absorption |
|---|-----------------|
| 3 | 10.000000       |
| 4 | 5.913978        |
| 5 | 3.655914        |
| 6 | 3.333333        |
# Q-3.1
- From the calculations above we can see that time to churn is highest from State 3
```python
print("Probability of Absorption to Absorbing State = FR ::")
tpmdf = pd.DataFrame(index=['3', '4', '5', '6'], data=FR)
tpmdf.columns = ['1', '2']
tpmdf
```
Probability of Absorption to Absorbing State = FR ::
|   | 1        | 2        |
|---|----------|----------|
| 3 | 0.500000 | 0.500000 |
| 4 | 0.655914 | 0.344086 |
| 5 | 0.623656 | 0.376344 |
| 6 | 0.333333 | 0.666667 |
# Q-3.2
- From the FR matrix above we can see that, starting from State 6, the eventual probability of absorption into State 2 is 0.666, or approximately 67%
# Q-3.3
The CLV for N periods is given by (Pfeifer and Carraway):
CLV = $\sum_{t=0}^{N} \frac{P_I\, P^t R}{(1+i)^t} = \sum_{t=0}^{N} d^{\,t}\, P_I\, P^t R$, where R is the margin (reward) vector
where
- i = Interest rate
- d = 1/(1+i) = Discount factor = .99
Initial Distribution : $P_I$ = [0, 0, 0, 0, 0, 1]
Margin = [0, 200, 300, 400, 600, 800]
```python
def get_clv_after_n_periods(PTM, periods, initital_state, revenue , discount = None):
assert(type(PTM) == np.ndarray), "Object passed is not an Numpy Array."
P1 = PTM.copy()
collectAmount = []
if discount is None:
#discount_factor = 1/discount - 1
for i in np.arange(periods + 1):
#print(i)
if i == 0:
collectAmount.append(initital_state.dot(revenue))
elif i == 1:
collectAmount.append(initital_state.dot(P1).dot(revenue))
else:
P1 = P1.dot(PTM)
collectAmount.append(initital_state.dot(P1).dot(revenue))
else:
#discount_factor = 1/discount - 1
for i in np.arange(periods + 1):
#print(i)
if i == 0:
collectAmount.append(initital_state.dot(revenue))
elif i == 1:
collectAmount.append(initital_state.dot(P1).dot(revenue)* discount ** i)
else:
P1 = P1.dot(PTM)
collectAmount.append(initital_state.dot(P1).dot(revenue) * discount ** i)
#print(collectAmount)
return sum(collectAmount)
```
```python
PI = np.array([0, 0, 0, 0, 0, 1])
margin = np.array([0, 200, 300, 400, 600, 800])
print("If Expected Duration is assumed to be 3 (Actual 3.33) then CLV :: {}".format(get_clv_after_n_periods(TPM, 3, PI, margin, 0.99)))
print("If Expected Duration is assumed to be 4 (Actual 3.33) then CLV :: {}".format(get_clv_after_n_periods(TPM, 4, PI, margin, 0.99)))
```
If Expected Duration is assumed to be 3 (Actual 3.33) then CLV :: 2196.0942379999997
If Expected Duration is assumed to be 4 (Actual 3.33) then CLV :: 2477.9331073339995
# Q-4.1
We will use Markov Decision Process to solve the following problem.
- Discount factor is 0.95
- State Space = [1,2,3,4]
- Action set = [1,2,3]
- TPMs for each state-action pair are provided in the question (to save space they are not displayed again in the answers)
In an MDP we have an initial state ($S_0$) as shown in the diagram below; we take an action ($A_0$), then $S_1$ is the state in the next time period, and this continues. Such a sequence is shown for 4 stages in the diagram below.
```python
import pygraphviz as pgv
import pandas as pd
from IPython.display import Image
def draw(dot):
return Image(pgv.AGraph(dot).draw(format='png', prog='dot'))
graph = pd.DataFrame(np.array([[.1,.9],[.1,.9],[.1,.9]]))
g1 = """digraph top {
rankdir=LR;
S0 -> S1 [label = A0]
S1 -> S2 [label = A1]
S2 -> S3 [label = A2];
}"""
draw(g1)
```
The Objective is to **Maximise** the expected return obtained over a period of return. Based on State and Reward the Reward generated can be represented as:
$R(S_0, a_0) + \beta R(S_1, a_1) + \beta^2 R(S_2, a_2) + \beta^3 R(S_3, a_3) + ...$
where :
- $R(S_0, a_0)$ = Reward generated from Initial Stage where initial State = $S_0$ and action taken = $a_0$. The rewards obtained from future states is discounted by factor $\beta$
Objective :: $Maximise_{a_i \epsilon A}[R(S_0, a_0) + \beta R(S_1, a_1) + \beta^2 R(S_2, a_2) + \beta^3 R(S_3, a_3) + ...]$
We will use the **POLICY ITERATION ALGORITHM** to obtain the long-term revenue generated from policy (2,2,1,3)
The **Value Function** for a Policy $\Pi$ starting at State $S_i$ is given by:
$V^\Pi(i) = R^\Pi(i) + \beta \sum_{j\in S} P_{ij}^\Pi \, V^\Pi(j)$
where:
- $R^\Pi(i)$ :: Immediate Reward
- $\beta \sum_{j\in S} P_{ij}^\Pi \, V^\Pi(j)$ :: Discounted Future Reward
We will develop a system of linear equations by applying the above equation to each state and the action prescribed for it by the policy.
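For example, for State 1 the policy prescribes action 2; the immediate reward is 160 and the transition probabilities (read off the coefficient matrix constructed below) are (0.4, 0.3, 0.15, 0.15), so $v_1 = 160 + 0.95\,(0.4\,v_1 + 0.3\,v_2 + 0.15\,v_3 + 0.15\,v_4)$, which rearranges to the first equation listed below. The remaining equations are obtained in the same way.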
**Equations**:
- 0.62 v1 - 0.285 v2 - 0.1425 v3 - 0.1425 v4 = 160
- 0. v1 + 0.2875 v2 - 0.1 v3 - 0.1425 v4 = 200
- 0. v1 - 0.095 v2 + 0.24 v3 - 0.095 v4 = 270
- 0. v1 + 0. v2 - 0.285 v3 + 0.335 v4 = 500
The Matrix of the coefficients of the system of Equations are :
```python
A = np.array([[1-.95*.4, -.95*.3, -.95*.15, -.95*.15],[0,1-.95*.75,-.1,-.95*.15],[0,-.95*.1,1-.95*.8,-.95*.1],
[0,0,-.95*.3,1-.95*.7]])
A
```
array([[ 0.62 , -0.285 , -0.1425, -0.1425],
[ 0. , 0.2875, -0.1 , -0.1425],
[ 0. , -0.095 , 0.24 , -0.095 ],
[ 0. , 0. , -0.285 , 0.335 ]])
The Immediate Reward Matrix for the State Actions provided:
```python
B = np.array([160,200,270,500]).reshape(-1,1)
B
```
array([[160],
[200],
[270],
[500]])
Policy to evaluate = (2, 2, 1, 3)
Solving the Above set of **Linear Equations** we get the following Values for the Policy:
```python
x = slin.solve(A,B)
tpmdf = pd.DataFrame(index=['V12', 'V22', 'V31', 'V43'], data=x)
tpmdf.columns = ['Policy Value']
tpmdf
```
|     | Policy Value |
|-----|--------------|
| V12 | 6222.414034  |
| V22 | 6335.801092  |
| V31 | 6368.248847  |
| V43 | 6910.301258  |
Where $V_{ij}$ = Policy Value when Initial State was i and action taken was j
The overall policy value is:
```python
np.sum(slin.solve(A,B))
```
25836.765229462104
# Q-4.2
To check if Policy (2, 2, 1, 2) is better than the previous Policy (2, 2, 1, 3) we will use the **Policy Evaluation Step**. The objective is to check whether the value function obtained in Q-4-1 is less than that of the new policy.
The Policy improvement evaluation step includes the following:
$T^{\Pi^{new}}(i) = \max_{a_{i,new}}\left[R(i, a_{i,new}) + \beta \sum_{j\in S} P(j|i,a_{i,new})\, V^\Pi(j)\right]$
where :
- $i \epsilon S$
- $a_{i,new}$ = A new action chosen for state i
- $T^{\Pi^{new}}(i)$ = Value function when the current policy for state i is changed to $a_{i,new}$
- $R(i, a_{i,new})$ = Reward when the action for state i is replaced with the new action $a_{i,new}$
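Concretely, for State 4 the candidate new action is 2, with immediate reward 400 and transition probabilities 0.1 to State 3 and 0.9 back to State 4 (read off the coefficient matrix used below). Treating $T^{\Pi^{new}}(4)$ as the unknown on both sides gives $T^{\Pi^{new}}(4) = 400 + 0.95\,[0.1\,V^{\Pi}(3) + 0.9\,T^{\Pi^{new}}(4)]$, i.e. $0.145\,T^{\Pi^{new}}(4) = 400 + 0.095\times 6368.25$, which is exactly what the next cell computes.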
Solving the new equation $T^{\Pi^{new}}(4)$ =
```python
(.095*6368.24884654+400)/.145
```
6930.921658077932
Since $T^{\Pi^{new}}(4) > V^{\Pi}(4)$, this is a better policy.
We will re-solve the problem for the new policy. The matrix of coefficients of the new system of equations is:
```python
A = np.array([[1-.95*.4, -.95*.3, -.95*.15, -.95*.15],[0,1-.95*.75,-.1,-.95*.15],[0,-.95*.1,1-.95*.8,-.95*.1],
[0,0,-.95*.1,1-.95*.9]])
A
```
array([[ 0.62 , -0.285 , -0.1425, -0.1425],
[ 0. , 0.2875, -0.1 , -0.1425],
[ 0. , -0.095 , 0.24 , -0.095 ],
[ 0. , 0. , -0.095 , 0.145 ]])
The Immediate Reward Matrix for the New State Actions provided:
```python
B = np.array([160,200,270,400]).reshape(-1,1)
B
```
array([[160],
[200],
[270],
[400]])
Solving the Above set of **Linear Equations** we get the following Values for the New Policy:
```python
x = slin.solve(A,B)
tpmdf = pd.DataFrame(index=['V12', 'V22', 'V31', 'V42'], data=x)
tpmdf.columns = ['Policy Value']
tpmdf
```
|     | Policy Value |
|-----|--------------|
| V12 | 6249.595436  |
| V22 | 6363.327540  |
| V31 | 6393.980092  |
| V42 | 6947.780060  |
Where $V_{ij}$ = Policy Value when Initial State was i and action taken was j
The overall policy value is:
```python
sum(slin.solve(A,B))
```
array([25954.68312784])
# Q-4.3
The optimal policy can be obtained by solving the following Linear Program:
**Decision Variables**: x_si, i = 1,2,3,4, the optimal value function for state i
**Minimize Objective** : x_s1 + x_s2 + x_s3 + x_s4
**Subject To**:
**Constraint_for_State_S1_Policy_1**: 0.525 x_s1 - 0.2375 x_s2 - 0.1425 x_s3 - 0.095 x_s4 >= 180
**Constraint_for_State_S1_Policy_2**: 0.62 x_s1 - 0.285 x_s2 - 0.1425 x_s3 - 0.1425 x_s4 >= 160
**Constraint_for_State_S1_Policy_3**: 0.43 x_s1 - 0.19 x_s2 - 0.095 x_s3 - 0.095 x_s4 >= 200
**Constraint_for_State_S2_Policy_1**: 0.2875 x_s2 - 0.1425 x_s3 - 0.095 x_s4 >= 225
**Constraint_for_State_S2_Policy_2**: 0.2875 x_s2 - 0.095 x_s3 - 0.1425 x_s4 >= 200
**Constraint_for_State_S2_Policy_3**: - 0.095 x_s1 + 0.335 x_s2 - 0.095 x_s3 - 0.095 x_s4 >= 250
**Constraint_for_State_S3_Policy_1**: - 0.095 x_s2 + 0.24 x_s3 - 0.095 x_s4 >= 270
**Constraint_for_State_S3_Policy_2**: - 0.0475 x_s2 + 0.24 x_s3 - 0.1425 x_s4 >= 240
**Constraint_for_State_S3_Policy_3**: - 0.19 x_s2 + 0.335 x_s3 - 0.095 x_s4 >= 300
**Constraint_for_State_S4_Policy_1**: - 0.19 x_s3 + 0.24 x_s4 >= 450
**Constraint_for_State_S4_Policy_2**: - 0.095 x_s3 + 0.145 x_s4 >= 400
**Constraint_for_State_S4_Policy_3**: - 0.285 x_s3 + 0.335 x_s4 >= 500
**Non-negativity Constraint**: all decision variables x_si >= 0
```python
# initialize the model
prob = LpProblem("MDPPolicy", LpMinimize)
#List of decision variables
vehicles = ['s1', 's2', 's3', 's4']
# create a dictionary of pulp variables with keys from ingredients
# the default lower bound is -inf
x = pulp.LpVariable.dict('x_%s', vehicles, lowBound = 0)
# Objective function
prob += sum([x[i] for i in vehicles]), "Objective"
# Constraints
prob += x['s1'] >= 200*.9 +0.95*(0.5*x['s1'] + .25*x['s2'] + .15*x['s3'] + .1*x['s4']), "Constraint for State S1 Policy 1"
prob += x['s1'] >= 200*.8 +0.95*(0.4*x['s1'] + .3*x['s2'] + .15*x['s3'] + .15*x['s4']), "Constraint for State S1 Policy 2"
prob += x['s1'] >= 200 +0.95*(0.6*x['s1'] + .2*x['s2'] + .1*x['s3'] + .1*x['s4']), "Constraint for State S1 Policy 3"
prob += x['s2'] >= 250*.9 +0.95*(0.*x['s1'] + .75*x['s2'] + .15*x['s3'] + .1*x['s4']), "Constraint for State S2 Policy 1"
prob += x['s2'] >= 250*.8 +0.95*(0.*x['s1'] + .75*x['s2'] + .1*x['s3'] + .15*x['s4']), "Constraint for State S2 Policy 2"
prob += x['s2'] >= 250 +0.95*(0.1*x['s1'] + .7*x['s2'] + .1*x['s3'] + .1*x['s4']), "Constraint for State S2 Policy 3"
prob += x['s3'] >= 300*.9 +0.95*(0.*x['s1'] + .1*x['s2'] + .8*x['s3'] + .1*x['s4']), "Constraint for State S3 Policy 1"
prob += x['s3'] >= 300*.8 +0.95*(0.*x['s1'] + .05*x['s2'] + .8*x['s3'] + .15*x['s4']), "Constraint for State S3 Policy 2"
prob += x['s3'] >= 300 +0.95*(0.*x['s1'] + .2*x['s2'] + .7*x['s3'] + .1*x['s4']), "Constraint for State S3 Policy 3"
prob += x['s4'] >= 500*.9 +0.95*(0.*x['s1'] + 0*x['s2'] + .2*x['s3'] + .8*x['s4']), "Constraint for State S4 Policy 1"
prob += x['s4'] >= 500*.8 +0.95*(0.*x['s1'] + 0*x['s2'] + .1*x['s3'] + .9*x['s4']), "Constraint for State S4 Policy 2"
prob += x['s4'] >= 500 +0.95*(0.*x['s1'] + 0*x['s2'] + .3*x['s3'] + .7*x['s4']), "Constraint for State S4 Policy 3"
#prob.writeLP("tomatoMix.lp")
status = prob.solve(GLPK(options=["--ranges","MDPPolicy.sen"]))
#print(status)
#print the result
for vehicle in vehicles:
print(' {} :: {} ::'.format(vehicle,
x[vehicle].value()))
print("Objective", value(prob.objective))
prob.writeLP("MDPPolicy.lp")
```
s1 :: 6285.43 ::
s2 :: 6358.91 ::
s3 :: 6491.42 ::
s4 :: 7015.09 ::
Objective 26150.850000000002
# %load MDPPolicy.sen
GLPK 4.65 - SENSITIVITY ANALYSIS REPORT Page 1
Problem:
Objective: Objective = 26150.85187 (MINimum)
No. Row name St Activity Slack Lower bound Activity Obj coef Obj value at Limiting
Marginal Upper bound range range break point variable
------ ------------ -- ------------- ------------- ------------- ------------- ------------- ------------- ------------
1 Constraint_for_State_S1_Policy_1
BS 198.14905 -18.14905 180.00000 +Inf -6.28153 24906.17364 Constraint_for_State_S1_Policy_2
. +Inf 198.14905 +Inf +Inf
2 Constraint_for_State_S1_Policy_2
NL 160.00000 . 160.00000 143.88421 -5.34746 26064.67331 Constraint_for_State_S1_Policy_3
5.34746 +Inf +Inf +Inf +Inf
3 Constraint_for_State_S1_Policy_3
BS 211.42385 -11.42385 200.00000 +Inf -7.54374 24555.92470 Constraint_for_State_S1_Policy_2
. +Inf 211.42385 +Inf +Inf
4 Constraint_for_State_S2_Policy_1
BS 236.72520 -11.72520 225.00000 +Inf -26.21858 19944.25242 Constraint_for_State_S2_Policy_3
. +Inf 195.32355 338.22724 106217.76397 Constraint_for_State_S3_Policy_3
5 Constraint_for_State_S2_Policy_2
BS 211.85095 -11.85095 200.00000 +Inf -25.11468 20830.28398 Constraint_for_State_S2_Policy_3
. +Inf 197.31463 218.95940 72537.61038 Constraint_for_State_S4_Policy_3
6 Constraint_for_State_S2_Policy_3
NL 250.00000 . 250.00000 246.92242 -24.37290 26075.84219 Constraint_for_State_S4_Policy_1
24.37290 +Inf +Inf +Inf +Inf
7 Constraint_for_State_S3_Policy_1
BS 287.41130 -17.41130 270.00000 392.08067 -37.88866 15261.22234 Constraint_for_State_S3_Policy_3
. +Inf 287.41130 +Inf +Inf
8 Constraint_for_State_S3_Policy_2
BS 256.24270 -16.24270 240.00000 354.82678 -40.22741 15842.87140 Constraint_for_State_S3_Policy_3
. +Inf 243.00060 195.95670 76363.32403 Constraint_for_State_S4_Policy_3
9 Constraint_for_State_S3_Policy_3
NL 300.00000 . 300.00000 297.99286 -29.68893 26091.26207 Constraint_for_State_S4_Policy_1
29.68893 +Inf 433.57783 +Inf 30116.63439 Constraint_for_State_S2_Policy_1
10 Constraint_for_State_S4_Policy_1
BS 450.25150 -.25150 450.00000 548.60557 -26.38302 14271.85677 Constraint_for_State_S4_Policy_3
. +Inf 450.25150 +Inf +Inf
GLPK 4.65 - SENSITIVITY ANALYSIS REPORT Page 2
Problem:
Objective: Objective = 26150.85187 (MINimum)
No. Row name St Activity Slack Lower bound Activity Obj coef Obj value at Limiting
Marginal Upper bound range range break point variable
------ ------------ -- ------------- ------------- ------------- ------------- ------------- ------------- ------------
11 Constraint_for_State_S4_Policy_2
BS 400.50301 -.50301 400.00000 471.18937 -36.70973 11448.49379 Constraint_for_State_S4_Policy_3
. +Inf 400.50301 +Inf +Inf
12 Constraint_for_State_S4_Policy_3
NL 500.00000 . 500.00000 499.67775 -20.59071 26144.21646 Constraint_for_State_S4_Policy_1
20.59071 +Inf 626.02178 +Inf 28745.72946 Constraint_for_State_S2_Policy_2
GLPK 4.65 - SENSITIVITY ANALYSIS REPORT Page 3
Problem:
Objective: Objective = 26150.85187 (MINimum)
No. Column name St Activity Obj coef Lower bound Activity Obj coef Obj value at Limiting
Marginal Upper bound range range break point variable
------ ------------ -- ------------- ------------- ------------- ------------- ------------- ------------- ------------
1 x_s1 BS 6285.43074 1.00000 . +Inf -1.11226 12874.36722 Constraint_for_State_S1_Policy_2
. +Inf 6285.43074 +Inf +Inf
2 x_s2 BS 6358.90913 1.00000 . +Inf -2.16109 6049.78880 Constraint_for_State_S2_Policy_3
. +Inf 6358.90913 +Inf +Inf
3 x_s3 BS 6491.42180 1.00000 . 7671.88260 -2.35952 4342.78573 Constraint_for_State_S3_Policy_3
. +Inf 6491.42180 +Inf +Inf
4 x_s4 BS 7015.09019 1.00000 . 7875.46343 -2.01599 4993.40589 Constraint_for_State_S4_Policy_3
. +Inf 7015.09019 +Inf +Inf
End of report
**Optimal Policy: (2, 3, 3, 3)**
The optimal policy is read off the sensitivity report: for each state, the binding (zero-slack, status NL) constraint identifies the action whose Bellman equation holds with equality, and hence the best policy value. A quick programmatic check of which constraints are binding is sketched below.
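The following snippet is not part of the original solution; it is a sketch that inspects the solved PuLP model prob from the cell above, assuming PuLP's LpConstraint.value(), which evaluates the constraint expression (LHS minus RHS) at the optimal variable values.
```python
# Identify binding constraints of the solved model `prob` from the cell above.
# LpConstraint inherits from LpAffineExpression, so .value() evaluates
# LHS - RHS at the current variable values; a value near 0 means the constraint is tight.
for name, con in prob.constraints.items():
    if abs(con.value()) < 1e-6:
        print(name, "is binding")
```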
# Q-4.4
The number of time steps in an MDP defines the number of stages in the process. Since the number of time periods here is 4, this problem has 4 stages.
The Dynamic Programming recursive equation for the Value Iteration Algorithm is given by:
\begin{equation*}
V^*_t(i) = \max_{a \in A}\left[R(i,a) + \beta \sum_{j \in S} P_{ij}(a)\, V^*_{t+1}(j)\right]
\end{equation*}
where $V^*_t(i)$ is the optimal value when the current period is $t$ and the current state is $S_i$.
We also assume that if the duration of the planning horizon is $n$, then $V^*_{n+1}(i) = 0$ for all states $S_i$.
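Before writing out the equations by hand, here is a compact backward-induction sketch of the recursion above. It is not part of the original solution; the per-action transition rows and the reward multipliers (0.9, 0.8, 1.0) are reconstructed from the value equations used below and from the LP constraints of Q-4, so treat them as assumptions.
```python
import numpy as np

base_reward = np.array([200, 250, 300, 500])   # base reward for states S1..S4
multiplier = [0.9, 0.8, 1.0]                   # reward scaling for actions 1, 2, 3
beta = 0.95                                    # discount factor

P = [  # P[a][i, j] = P(next state j | current state i, action a+1)
    np.array([[.5, .25, .15, .1], [0, .75, .15, .1], [0, .1, .8, .1], [0, 0, .2, .8]]),
    np.array([[.4, .3, .15, .15], [0, .75, .1, .15], [0, .05, .8, .15], [0, 0, .1, .9]]),
    np.array([[.6, .2, .1, .1], [.1, .7, .1, .1], [0, .2, .7, .1], [0, 0, .3, .7]]),
]

V = np.zeros(4)                    # boundary condition V_5(i) = 0
for t in [4, 3, 2, 1]:             # backward induction over the 4 stages
    # Q[a, i]: value of taking action a+1 in state i at stage t
    Q = np.array([multiplier[a] * base_reward + beta * P[a].dot(V) for a in range(3)])
    V = Q.max(axis=0)
    best = Q.argmax(axis=0) + 1    # 1-based action labels
    print("t =", t, " V* =", np.round(V, 4), " best actions =", best)
```
The printed values reproduce the stage-by-stage maxima computed manually below.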
Variable notation that will be used to solve the problem:
\begin{equation*} V\_t\_i\_j \end{equation*}
\begin{equation*} \text{where } t = 1..4 \text{ indexes the time period (stage) of the MDP,} \end{equation*}
\begin{equation*} i = 1..4 \text{ indexes the state of the MDP, and} \end{equation*}
\begin{equation*} j = 1..3 \text{ indexes the action of the MDP.} \end{equation*}
- e.g. Variable V_1_1_1 implies Value of Policy for Stage 1, State 1 and Action Taken is 1
- e.g. Variable Vo_1_1_1 implies Optimal Value of Policy for Stage 1, State 1 and Action Taken is 1
Following are the sets of Dynamic Programming Equations and their solutions:
V511 = 0
V512 = 0
V513 = 0
V521 = 0
V522 = 0
V523 = 0
V531 = 0
V532 = 0
V533 = 0
V541 = 0
V542 = 0
V543 = 0
V411 = 200 * .9 + 0
V412 = 200 * .8 + 0
V413 = 200 * 1 + 0
V421 = 250 * .9 + 0
V422 = 250 * .8 + 0
V423 = 250 * 1 + 0
V431 = 300 * .9 + 0
V432 = 300 * .8 + 0
V433 = 300 * 1 + 0
V441 = 500 * .9 + 0
V442 = 500 * .8 + 0
V443 = 500 * 1 + 0
```python
V41M = max(V411, V412, V413)
print("Vmax for t=4, State = 1 :: {}".format(V41M))
print("Policy :: 3")
```
Vmax for t=4, State = 1 :: 200
Policy :: 3
```python
V42M = max(V421, V422, V423)
print("Vmax for t=4, State = 2 :: {}".format(V42M))
print("Policy :: 3")
```
Vmax for t=4, State = 2 :: 250
Policy :: 3
```python
V43M = max(V431, V432, V433)
print("Vmax for t=4, State = 3 :: {}".format(V43M))
print("Policy :: 3")
```
Vmax for t=4, State = 3 :: 300
Policy :: 3
```python
V44M = max(V441, V442, V443)
print("Vmax for t=4, State = 4 :: {}".format(V44M))
print("Policy :: 3")
```
Vmax for t=4, State = 4 :: 500
Policy :: 3
V311 = 200 * .9 + 0.95 * (.5*V41M + .25*V42M + .15 *V43M + .1 * V44M)
V312 = 200 * .8 + 0.95 * (.4*V41M + .3*V42M + .15 *V43M + .15 * V44M)
V313 = 200 * 1 + 0.95 * (.6*V41M + .2*V42M + .1 *V43M + .1 * V44M)
```python
V31M = max(V311, V312, V313)
print("V311 :: {} V312 :: {} V313 :: {}".format(V311, V312, V313))
print("Vmax for t=3, State = 1 :: {}".format(V31M))
print("Policy :: 3")
```
V311 :: 424.625 V312 :: 421.25 V313 :: 437.5
Vmax for t=3, State = 1 :: 437.5
Policy :: 3
V321 = 250 * .9 + 0.95 * (.0*V41M + .75*V42M + .15 *V43M + .1 * V44M)
V322 = 250 * .8 + 0.95 * (.0*V41M + .75*V42M + .1 *V43M + .15 * V44M)
V323 = 250 * 1 + 0.95 * (.1*V41M + .7*V42M + .1 *V43M + .1 * V44M)
```python
V32M = max(V321, V322, V323)
print("V321 :: {} V322 :: {} V323 :: {}".format(V321, V322, V323))
print("Vmax for t=3, State = 2 :: {}".format(V32M))
print("Policy :: 3")
```
V321 :: 493.375 V322 :: 477.875 V323 :: 511.25
Vmax for t=3, State = 2 :: 511.25
Policy :: 3
V331 = 300 * .9 + 0.95 * (.0*V41M + .1*V42M + .8 *V43M + .1 * V44M)
V332 = 300 * .8 + 0.95 * (.0*V41M + .05*V42M + .8 *V43M + .15 * V44M)
V333 = 300 * 1 + 0.95 * (.0*V41M + .2*V42M + .7 *V43M + .1 * V44M)
```python
V33M = max(V331, V332, V333)
print("V331 :: {} V332 :: {} V333 :: {}".format(V331, V332, V333))
print("Vmax for t=3, State = 3 :: {}".format(V33M))
print("Policy :: 3")
```
V331 :: 569.25 V332 :: 551.125 V333 :: 594.5
Vmax for t=3, State = 3 :: 594.5
Policy :: 3
V341 = 500 * .9 + 0.95 * (.0*V41M + .0*V42M + .2 *V43M + .8 * V44M)
V342 = 500 * .8 + 0.95 * (.0*V41M + .0*V42M + .1 *V43M + .9 * V44M)
V343 = 500 * 1 + 0.95 * (.0*V41M + .0*V42M + .3 *V43M + .7 * V44M)
```python
V34M = max(V341, V342, V343)
print("V341 :: {} V342 :: {} V343 :: {}".format(V341, V342, V343))
print("Vmax for t=3, State = 4 :: {}".format(V34M))
print("Policy :: 3")
```
V341 :: 887.0 V342 :: 856.0 V343 :: 918.0
Vmax for t=3, State = 4 :: 918.0
Policy :: 3
V211 = 200 * .9 + 0.95 * (.5*V31M + .25*V32M + .15 *V33M + .1 * V34M)
V212 = 200 * .8 + 0.95 * (.4*V31M + .3*V32M + .15 *V33M + .15 * V34M)
V213 = 200 * 1 + 0.95 * (.6*V31M + .2*V32M + .1 *V33M + .1 * V34M)
```python
V21M = max(V211, V212, V213)
print("V211 :: {} V212 :: {} V213 :: {}".format(V211, V212, V213))
print("Vmax for t=2, State = 1 :: {}".format(V21M))
print("Policy :: 3")
```
V211 :: 681.160625 V212 :: 687.4875 V213 :: 690.2
Vmax for t=2, State = 1 :: 690.2
Policy :: 3
V221 = 250 * .9 + 0.95 * (.0*V31M + .75*V32M + .15 *V33M + .1 * V34M)
V222 = 250 * .8 + 0.95 * (.0*V31M + .75*V32M + .1 *V33M + .15 * V34M)
V223 = 250 * 1 + 0.95 * (.1*V31M + .7*V32M + .1 *V33M + .1 * V34M)
```python
V22M = max(V221, V222, V223)
print("V221 :: {} V222 :: {} V223 :: {}".format(V221, V222, V223))
print("Vmax for t=2, State = 2 :: {}".format(V22M))
print("Policy :: 3")
```
V221 :: 761.191875 V222 :: 751.5581249999999 V223 :: 775.2312499999999
Vmax for t=2, State = 2 :: 775.2312499999999
Policy :: 3
V231 = 300 * .9 + 0.95 * (.0*V31M + .1*V32M + .8 *V33M + .1 * V34M)
V232 = 300 * .8 + 0.95 * (.0*V31M + .05*V32M + .8 *V33M + .15 * V34M)
V233 = 300 * 1 + 0.95 * (.0*V31M + .2*V32M + .7 *V33M + .1 * V34M)
```python
V23M = max(V231, V232, V233)
print("V231 :: {} V232 :: {} V233 :: {}".format(V231, V232, V233))
print("Vmax for t=2, State = 3 :: {}".format(V23M))
print("Policy :: 3")
```
V231 :: 857.5987500000001 V232 :: 846.919375 V233 :: 879.69
Vmax for t=2, State = 3 :: 879.69
Policy :: 3
V241 = 500 * .9 + 0.95 * (.0*V31M + .0*V32M + .2 *V33M + .8 * V34M)
V242 = 500 * .8 + 0.95 * (.0*V31M + .0*V32M + .1 *V33M + .9 * V34M)
V243 = 500 * 1 + 0.95 * (.0*V31M + .0*V32M + .3 *V33M + .7 * V34M)
```python
V24M = max(V241, V242, V243)
print("V241 :: {} V242 :: {} V243 :: {}".format(V241, V242, V243))
print("Vmax for t=2, State = 4 :: {}".format(V24M))
print("Policy :: 3")
```
V241 :: 1260.635 V242 :: 1241.3675 V243 :: 1279.9025
Vmax for t=2, State = 4 :: 1279.9025
Policy :: 3
V111 = 200 * .9 + 0.95 * (.5*V21M + .25*V22M + .15 *V23M + .1 * V24M)
V112 = 200 * .8 + 0.95 * (.4*V21M + .3*V22M + .15 *V23M + .15 * V24M)
V113 = 200 * 1 + 0.95 * (.6*V21M + .2*V22M + .1 *V23M + .1 * V24M)
```python
V11M = max(V111, V112, V113)
print("V111 :: {} V112 :: {} V113 :: {}".format(V111, V112, V113))
print("Vmax for t=1, State = 1 :: {}".format(V11M))
print("Policy :: 2")
```
V111 :: 938.9089843749999 V112 :: 950.9588375 V113 :: 945.869225
Vmax for t=1, State = 1 :: 950.9588375
Policy :: 2
V121 = 250 * .9 + 0.95 * (.0*V21M + .75*V22M + .15 *V23M + .1 * V24M)
V122 = 250 * .8 + 0.95 * (.0*V21M + .75*V22M + .1 *V23M + .15 * V24M)
V123 = 250 * 1 + 0.95 * (.1*V21M + .7*V22M + .1 *V23M + .1 * V24M)
```python
V12M = max(V121, V122, V123)
print("V221 :: {} V222 :: {} V223 :: {}".format(V121, V122, V123))
print("Vmax for t=1, State = 2 :: {}".format(V12M))
print("Policy :: 3")
```
V121 :: 1024.298828125 V122 :: 1018.308921875 V123 :: 1036.2590687499999
Vmax for t=1, State = 2 :: 1036.2590687499999
Policy :: 3
V131 = 300 * .9 + 0.95 * (.0*V21M + .1*V22M + .8 *V23M + .1 * V24M)
V132 = 300 * .8 + 0.95 * (.0*V21M + .05*V22M + .8 *V23M + .15 * V24M)
V133 = 300 * 1 + 0.95 * (.0*V21M + .2*V22M + .7 *V23M + .1 * V24M)
```python
V13M = max(V131, V132, V133)
print("V131 :: {} V132 :: {} V133 :: {}".format(V131, V132, V133))
print("Vmax for t=1, State = 3 :: {}".format(V13M))
print("Policy :: 3")
```
V131 :: 1133.8021062500002 V132 :: 1127.773990625 V133 :: 1153.878525
Vmax for t=1, State = 3 :: 1153.878525
Policy :: 3
V141 = 500 * .9 + 0.95 * (.0*V21M + .0*V22M + .2 *V23M + .8 * V24M)
V142 = 500 * .8 + 0.95 * (.0*V21M + .0*V22M + .1 *V23M + .9 * V24M)
V143 = 500 * 1 + 0.95 * (.0*V21M + .0*V22M + .3 *V23M + .7 * V24M)
```python
V14M = max(V141, V142, V143)
print("V141 :: {} V142 :: {} V143 :: {}".format(V141, V142, V143))
print("Vmax for t=1, State = 4 :: {}".format(V14M))
print("Policy :: 3")
```
V141 :: 1589.867 V142 :: 1577.8871874999998 V143 :: 1601.8468125
Vmax for t=1, State = 4 :: 1601.8468125
Policy :: 3
**The optimal actions (policy) for each stage and state are**:
```python
data = np.array([
[2,3,3,3],
[3,3,3,3],
[3,3,3,3],
[3,3,3,3]
]).T
tpmdf = pd.DataFrame(index=['State 1', 'State 2', 'State 3', 'State 4'], data=data)
tpmdf.columns = ['t=1 Optimal Action', 't=2 Optimal Action', 't=3 Optimal Action', 't=4 Optimal Action']
tpmdf
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>t=1 Optimal Action</th>
<th>t=2 Optimal Action</th>
<th>t=3 Optimal Action</th>
<th>t=4 Optimal Action</th>
</tr>
</thead>
<tbody>
<tr>
<th>State 1</th>
<td>2</td>
<td>3</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<th>State 2</th>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<th>State 3</th>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<th>State 4</th>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
# Q-5-1
Flipkart's customer analytics churn problem deals with a dynamic phenomenon, namely the shift in customer behavior/preference from the current period to the next. The shift in an individual customer's preferences can be described as a sequence of states, so this is a stochastic process recorded over discrete time periods.
A Markovian stochastic process has the memoryless property, which means that the future state can be predicted from knowledge of the present state alone. The **First Order Markov Chain** property is given by:
$P[X_{n+1}|X_0=i_0, X_1 = i_1, ... , X_n = i_n] = P[X_{n+1}|X_n=i_n]$
The customer behavior state space (defined by frequency of purchase) is discrete and the process is observed over discrete time periods. Moreover, the current state depends only on the prior state. Hence the process satisfies the Markov property and the problem can be modelled as a **Discrete Time Markov Chain**.
Assumptions:
1. The current state is dependent only on the prior state
2. The transition probability matrix is time-homogeneous (it does not change across periods)
# Q-5-2
In this problem, churn is defined as a period of inactivity (not buying from Flipkart), with the recency horizon set at 13 months. If a customer does not buy in the 12 prior months, he/she is considered churned; this is depicted by state 13.
# Q-5-3
- To solve the problem we represent the TPM in its canonical form:
P= $\begin{bmatrix}
I & O \\
R & Q
\end{bmatrix}$
where:
- I = Identity Matrix
- O = Zero matrix
- R = Probabilities of absorption from a transient state into an absorbing state
- Q = Probability of transition between transient states
To calculate the eventual probability of absorption we compute the limiting (long-run) probabilities. Multiplying **P** by itself n times gives a matrix of the following form:
$P^n $ = $\begin{bmatrix}
I & O \\
\sum_{k=0}^{n-1}{(Q^k)}R & Q^n
\end{bmatrix}$
As $n\to\infty$, $\sum_{k=0}^{n-1}{Q^k} \to F = (I-Q)^{-1}$
- F = Fundamental Matrix
- Expected time to absorption = $Fc$, where $c$ is a column vector of ones
Hence $Fc$ is the quantity we derive from the provided data.
TPM:
```python
TPMdf = pd.read_csv("../Assgn-4/exhibit-7.csv", index_col="States")
```
```python
TPM = TPMdf.values
tpmdf = pd.DataFrame(index=['1', '2', '3', '4', '5', '6', '7','8','9','10','11','12','13'], data=TPM)
tpmdf.columns = ['1', '2', '3', '4', '5', '6', '7','8','9','10','11','12','13']
tpmdf
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
<th>10</th>
<th>11</th>
<th>12</th>
<th>13</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>0.511</td>
<td>0.489</td>
<td>0.000</td>
<td>0.0</td>
<td>0.000</td>
<td>0.000</td>
<td>0.00</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
</tr>
<tr>
<th>2</th>
<td>0.365</td>
<td>0.000</td>
<td>0.635</td>
<td>0.0</td>
<td>0.000</td>
<td>0.000</td>
<td>0.00</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
</tr>
<tr>
<th>3</th>
<td>0.300</td>
<td>0.000</td>
<td>0.000</td>
<td>0.7</td>
<td>0.000</td>
<td>0.000</td>
<td>0.00</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
</tr>
<tr>
<th>4</th>
<td>0.244</td>
<td>0.000</td>
<td>0.000</td>
<td>0.0</td>
<td>0.756</td>
<td>0.000</td>
<td>0.00</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
</tr>
<tr>
<th>5</th>
<td>0.205</td>
<td>0.000</td>
<td>0.000</td>
<td>0.0</td>
<td>0.000</td>
<td>0.795</td>
<td>0.00</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
</tr>
<tr>
<th>6</th>
<td>0.180</td>
<td>0.000</td>
<td>0.000</td>
<td>0.0</td>
<td>0.000</td>
<td>0.000</td>
<td>0.82</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
</tr>
<tr>
<th>7</th>
<td>0.153</td>
<td>0.000</td>
<td>0.000</td>
<td>0.0</td>
<td>0.000</td>
<td>0.000</td>
<td>0.00</td>
<td>0.847</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
</tr>
<tr>
<th>8</th>
<td>0.137</td>
<td>0.000</td>
<td>0.000</td>
<td>0.0</td>
<td>0.000</td>
<td>0.000</td>
<td>0.00</td>
<td>0.000</td>
<td>0.863</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
</tr>
<tr>
<th>9</th>
<td>0.105</td>
<td>0.000</td>
<td>0.000</td>
<td>0.0</td>
<td>0.000</td>
<td>0.000</td>
<td>0.00</td>
<td>0.000</td>
<td>0.000</td>
<td>0.895</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
</tr>
<tr>
<th>10</th>
<td>0.103</td>
<td>0.000</td>
<td>0.000</td>
<td>0.0</td>
<td>0.000</td>
<td>0.000</td>
<td>0.00</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.897</td>
<td>0.000</td>
<td>0.000</td>
</tr>
<tr>
<th>11</th>
<td>0.091</td>
<td>0.000</td>
<td>0.000</td>
<td>0.0</td>
<td>0.000</td>
<td>0.000</td>
<td>0.00</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.909</td>
<td>0.000</td>
</tr>
<tr>
<th>12</th>
<td>0.079</td>
<td>0.000</td>
<td>0.000</td>
<td>0.0</td>
<td>0.000</td>
<td>0.000</td>
<td>0.00</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.921</td>
</tr>
<tr>
<th>13</th>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.0</td>
<td>0.000</td>
<td>0.000</td>
<td>0.00</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>0.000</td>
<td>1.000</td>
</tr>
</tbody>
</table>
</div>
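Before extracting the absorbing-chain quantities, a quick sanity check (a sketch added here, not part of the original analysis) that TPM is a valid transition matrix, i.e. every row sums to 1:
```python
import numpy as np

# Every row of a transition probability matrix must sum to 1
assert np.allclose(TPM.sum(axis=1), 1.0), "each row of the TPM should sum to 1"
```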
```python
Q, R, F, FR, time2churn = get_Absorbing_State_Markov_data(TPM)
```
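The helper get_Absorbing_State_Markov_data is not defined in this excerpt. Below is a minimal sketch consistent with how it is used here, assuming the absorbing churn state occupies the last row/column of the matrix (as it does in the 13-state TPM).
```python
import numpy as np

def get_Absorbing_State_Markov_data(P, n_absorbing=1):
    """Split a TPM whose absorbing states occupy the last rows/columns and return
    Q, R, the fundamental matrix F, FR, and the expected time to absorption Fc."""
    P = np.asarray(P, dtype=float)
    t = P.shape[0] - n_absorbing                  # number of transient states
    Q = P[:t, :t]                                 # transient -> transient
    R = P[:t, t:]                                 # transient -> absorbing
    F = np.linalg.inv(np.eye(t) - Q)              # fundamental matrix (I - Q)^{-1}
    FR = F.dot(R)                                 # eventual absorption probabilities
    time_to_absorption = F.dot(np.ones((t, 1)))   # Fc, with c a column vector of ones
    return Q, R, F, FR, time_to_absorption
```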
```python
print("Time to Churn = Fc ::")
tpmdf = pd.DataFrame(index=['1', '2', '3', '4', '5', '6', '7','8','9','10','11','12'], data=time2churn)
tpmdf.columns = ['Time2Absorption']
tpmdf
```
Time to Churn = Fc ::
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Time2Absorption</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>52.787206</td>
</tr>
<tr>
<th>2</th>
<td>50.742216</td>
</tr>
<tr>
<th>3</th>
<td>47.991946</td>
</tr>
<tr>
<th>4</th>
<td>44.508264</td>
</tr>
<tr>
<th>5</th>
<td>40.513473</td>
</tr>
<tr>
<th>6</th>
<td>36.090686</td>
</tr>
<tr>
<th>7</th>
<td>31.206084</td>
</tr>
<tr>
<th>8</th>
<td>26.127086</td>
</tr>
<tr>
<th>9</th>
<td>20.736082</td>
</tr>
<tr>
<th>10</th>
<td>15.858576</td>
</tr>
<tr>
<th>11</th>
<td>10.503338</td>
</tr>
<tr>
<th>12</th>
<td>5.170189</td>
</tr>
</tbody>
</table>
</div>
# Q-5-4
The state distribution after $n$ periods is given by $P_I \times P^n$
where:
- $P_I$ = initial distribution
- P = Transition Probability Matrix
- n = 4 = Number of periods
```python
from numpy.linalg import matrix_power

def get_state_after_n_periods(PTM, periods, initial_state, revenue=None):
    """Return (P^n, distribution after n periods), and optionally the expected revenue."""
    assert type(PTM) == np.ndarray, "Object passed is not a Numpy array."
    P1 = matrix_power(PTM.copy(), periods)        # P^n
    if revenue is None:
        return (P1, initial_state.dot(P1))
    else:
        return (P1, initial_state.dot(P1).dot(revenue))
```
```python
initial_state = np.array([1000, 1000, 1000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
x = pd.DataFrame(get_state_after_n_periods(TPM, 4, initial_state)[1], index=TPMdf.index)
```
```python
x.columns = ['Number Customers']
x
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Number Customers</th>
</tr>
<tr>
<th>States</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>1074.388975</td>
</tr>
<tr>
<th>2</th>
<td>539.320687</td>
</tr>
<tr>
<th>3</th>
<td>354.210981</td>
</tr>
<tr>
<th>4</th>
<td>255.615948</td>
</tr>
<tr>
<th>5</th>
<td>164.324538</td>
</tr>
<tr>
<th>6</th>
<td>267.153390</td>
</tr>
<tr>
<th>7</th>
<td>344.985480</td>
</tr>
<tr>
<th>8</th>
<td>0.000000</td>
</tr>
<tr>
<th>9</th>
<td>0.000000</td>
</tr>
<tr>
<th>10</th>
<td>0.000000</td>
</tr>
<tr>
<th>11</th>
<td>0.000000</td>
</tr>
<tr>
<th>12</th>
<td>0.000000</td>
</tr>
<tr>
<th>13</th>
<td>0.000000</td>
</tr>
</tbody>
</table>
</div>
# Q-5-5
The long-run CLV is given by:
**CLV** = $\lim_{t\to\infty} CLV_t = (I - dP)^{-1} R$
Where:
- I = Identity Matrix
- P = PTM
- R = Reward Matrix = [1000, -200, -200,-200,-200,-200,-200,-200,-200,-200,-200,-200,0]
- d = discount = (1-.2) = 0.8
```python
margin = np.array([1000, -200, -200,-200,-200,-200,-200,-200,-200,-200,-200,-200,0])
```
```python
import scipy as sp
import scipy.linalg   # make sp.linalg available (assumed imported earlier in the notebook)
CLV = sp.linalg.inv(np.identity(TPM.shape[0]) - .8*TPM).dot(margin)
CLV
tpmdf = pd.DataFrame(index=['1', '2', '3', '4', '5', '6', '7','8','9','10','11','12', '13'], data=CLV)
tpmdf.columns = ['CLV']
tpmdf
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>CLV</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>2149.629668</td>
</tr>
<tr>
<th>2</th>
<td>692.385122</td>
</tr>
<tr>
<th>3</th>
<td>521.049722</td>
</tr>
<tr>
<th>4</th>
<td>366.318932</td>
</tr>
<tr>
<th>5</th>
<td>242.578076</td>
</tr>
<tr>
<th>6</th>
<td>141.570457</td>
</tr>
<tr>
<th>7</th>
<td>48.816745</td>
</tr>
<tr>
<th>8</th>
<td>-21.100835</td>
</tr>
<tr>
<th>9</th>
<td>-82.126661</td>
</tr>
<tr>
<th>10</th>
<td>-87.563622</td>
</tr>
<tr>
<th>11</th>
<td>-90.152044</td>
</tr>
<tr>
<th>12</th>
<td>-64.143405</td>
</tr>
<tr>
<th>13</th>
<td>0.000000</td>
</tr>
</tbody>
</table>
</div>
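As a sanity check (a sketch added here, using the TPM array and margin vector defined above), the same long-run CLV can be obtained by iterating $CLV_{t+1} = R + d\,P\,CLV_t$ until convergence:
```python
import numpy as np

# Iterative computation of CLV; converges to (I - dP)^{-1} R since d = 0.8 < 1
clv = np.zeros(TPM.shape[0])
for _ in range(500):                  # plenty of iterations for d = 0.8
    clv = margin + 0.8 * TPM.dot(clv)
print(np.round(clv, 2))               # should match the CLV column above
```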
# Q-5-6
- Both customers are equivalent, since both are in state 1 at the end of September. The expected time to churn is about 53 months (see the time-to-absorption table computed above).
# Q-5-7
TPM for Oct 2013 is:
```python
TPM = pd.read_excel('./Customer Analytics_ Flipkart Data.xlsx', index_col=0)
TPM
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
<th>10</th>
<th>...</th>
<th>64</th>
<th>65</th>
<th>66</th>
<th>67</th>
<th>68</th>
<th>69</th>
<th>70</th>
<th>71</th>
<th>72</th>
<th>73</th>
</tr>
<tr>
<th>TPM 1</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>0.148977</td>
<td>0.098666</td>
<td>0.130860</td>
<td>0.089315</td>
<td>0.075125</td>
<td>0.082522</td>
<td>0.374536</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>2</th>
<td>0.071390</td>
<td>0.096742</td>
<td>0.161302</td>
<td>0.116978</td>
<td>0.084461</td>
<td>0.084992</td>
<td>0.000000</td>
<td>0.384136</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>3</th>
<td>0.037220</td>
<td>0.058843</td>
<td>0.155777</td>
<td>0.135224</td>
<td>0.106183</td>
<td>0.101329</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.405423</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>4</th>
<td>0.022045</td>
<td>0.035943</td>
<td>0.103449</td>
<td>0.132588</td>
<td>0.118100</td>
<td>0.125870</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.462004</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>5</th>
<td>0.016725</td>
<td>0.024355</td>
<td>0.075653</td>
<td>0.105075</td>
<td>0.121460</td>
<td>0.144371</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>6</th>
<td>0.012655</td>
<td>0.018166</td>
<td>0.052100</td>
<td>0.078913</td>
<td>0.108510</td>
<td>0.158960</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>7</th>
<td>0.038378</td>
<td>0.039901</td>
<td>0.091342</td>
<td>0.070000</td>
<td>0.081312</td>
<td>0.089105</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>8</th>
<td>0.031331</td>
<td>0.047041</td>
<td>0.088130</td>
<td>0.080580</td>
<td>0.064245</td>
<td>0.082553</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>9</th>
<td>0.022114</td>
<td>0.032076</td>
<td>0.088292</td>
<td>0.095641</td>
<td>0.085781</td>
<td>0.095442</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>10</th>
<td>0.014880</td>
<td>0.019548</td>
<td>0.064006</td>
<td>0.089855</td>
<td>0.093334</td>
<td>0.102373</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>11</th>
<td>0.012086</td>
<td>0.015493</td>
<td>0.046713</td>
<td>0.079734</td>
<td>0.094546</td>
<td>0.120323</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>12</th>
<td>0.008103</td>
<td>0.011395</td>
<td>0.035222</td>
<td>0.052263</td>
<td>0.085796</td>
<td>0.131776</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>13</th>
<td>0.036615</td>
<td>0.025722</td>
<td>0.057987</td>
<td>0.061645</td>
<td>0.053156</td>
<td>0.091498</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>14</th>
<td>0.024974</td>
<td>0.037029</td>
<td>0.056548</td>
<td>0.058001</td>
<td>0.057887</td>
<td>0.075090</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>15</th>
<td>0.020251</td>
<td>0.020355</td>
<td>0.056187</td>
<td>0.069626</td>
<td>0.071345</td>
<td>0.088516</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>16</th>
<td>0.011048</td>
<td>0.015392</td>
<td>0.054849</td>
<td>0.068390</td>
<td>0.075628</td>
<td>0.095819</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>17</th>
<td>0.009678</td>
<td>0.011185</td>
<td>0.036651</td>
<td>0.058248</td>
<td>0.080963</td>
<td>0.113236</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>18</th>
<td>0.007839</td>
<td>0.009602</td>
<td>0.028281</td>
<td>0.046069</td>
<td>0.063663</td>
<td>0.116859</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>19</th>
<td>0.032727</td>
<td>0.026537</td>
<td>0.038821</td>
<td>0.048967</td>
<td>0.049382</td>
<td>0.064102</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>20</th>
<td>0.022617</td>
<td>0.018165</td>
<td>0.045809</td>
<td>0.041448</td>
<td>0.044705</td>
<td>0.071578</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>21</th>
<td>0.016271</td>
<td>0.021240</td>
<td>0.050948</td>
<td>0.045912</td>
<td>0.057802</td>
<td>0.070804</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>22</th>
<td>0.011354</td>
<td>0.018140</td>
<td>0.038926</td>
<td>0.059803</td>
<td>0.055409</td>
<td>0.091033</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>23</th>
<td>0.009844</td>
<td>0.008807</td>
<td>0.029932</td>
<td>0.053384</td>
<td>0.063165</td>
<td>0.088705</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>24</th>
<td>0.005657</td>
<td>0.009608</td>
<td>0.023047</td>
<td>0.033312</td>
<td>0.051370</td>
<td>0.096860</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>25</th>
<td>0.008844</td>
<td>0.006035</td>
<td>0.040530</td>
<td>0.028641</td>
<td>0.032940</td>
<td>0.062849</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>26</th>
<td>0.031763</td>
<td>0.029316</td>
<td>0.042026</td>
<td>0.056701</td>
<td>0.055596</td>
<td>0.057518</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>27</th>
<td>0.009682</td>
<td>0.016984</td>
<td>0.041078</td>
<td>0.052383</td>
<td>0.040507</td>
<td>0.073169</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>28</th>
<td>0.009180</td>
<td>0.011238</td>
<td>0.033369</td>
<td>0.049322</td>
<td>0.051598</td>
<td>0.078945</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>29</th>
<td>0.007011</td>
<td>0.005838</td>
<td>0.024930</td>
<td>0.034825</td>
<td>0.050412</td>
<td>0.076224</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>30</th>
<td>0.006357</td>
<td>0.006947</td>
<td>0.017955</td>
<td>0.026996</td>
<td>0.042998</td>
<td>0.085079</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>44</th>
<td>0.010408</td>
<td>0.022934</td>
<td>0.026150</td>
<td>0.027944</td>
<td>0.028578</td>
<td>0.044131</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>45</th>
<td>0.006371</td>
<td>0.016682</td>
<td>0.015604</td>
<td>0.028233</td>
<td>0.029970</td>
<td>0.036419</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>46</th>
<td>0.005157</td>
<td>0.010176</td>
<td>0.019815</td>
<td>0.038687</td>
<td>0.033024</td>
<td>0.053367</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>47</th>
<td>0.003620</td>
<td>0.006867</td>
<td>0.020282</td>
<td>0.029435</td>
<td>0.036276</td>
<td>0.054734</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>48</th>
<td>0.003415</td>
<td>0.003915</td>
<td>0.009023</td>
<td>0.017479</td>
<td>0.030725</td>
<td>0.057304</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>49</th>
<td>0.011111</td>
<td>0.026474</td>
<td>0.036674</td>
<td>0.026897</td>
<td>0.020803</td>
<td>0.027214</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>50</th>
<td>0.003953</td>
<td>0.000000</td>
<td>0.027466</td>
<td>0.007905</td>
<td>0.004132</td>
<td>0.038480</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>51</th>
<td>0.007038</td>
<td>0.002970</td>
<td>0.022499</td>
<td>0.017766</td>
<td>0.022214</td>
<td>0.028625</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>52</th>
<td>0.004429</td>
<td>0.001874</td>
<td>0.012862</td>
<td>0.019643</td>
<td>0.018954</td>
<td>0.044645</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>53</th>
<td>0.002873</td>
<td>0.002363</td>
<td>0.010974</td>
<td>0.021125</td>
<td>0.026538</td>
<td>0.048210</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>54</th>
<td>0.005390</td>
<td>0.001352</td>
<td>0.008703</td>
<td>0.012110</td>
<td>0.023731</td>
<td>0.050262</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>55</th>
<td>0.012879</td>
<td>0.014234</td>
<td>0.003953</td>
<td>0.043667</td>
<td>0.021641</td>
<td>0.033360</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>56</th>
<td>0.005051</td>
<td>0.010781</td>
<td>0.008838</td>
<td>0.024851</td>
<td>0.008838</td>
<td>0.011905</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>57</th>
<td>0.004648</td>
<td>0.011274</td>
<td>0.024897</td>
<td>0.028392</td>
<td>0.033134</td>
<td>0.019872</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>58</th>
<td>0.004758</td>
<td>0.003197</td>
<td>0.013485</td>
<td>0.020187</td>
<td>0.027115</td>
<td>0.040592</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.890667</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>59</th>
<td>0.000751</td>
<td>0.002830</td>
<td>0.016329</td>
<td>0.017115</td>
<td>0.017284</td>
<td>0.044030</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.90166</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>60</th>
<td>0.003686</td>
<td>0.001865</td>
<td>0.008667</td>
<td>0.012957</td>
<td>0.021385</td>
<td>0.049761</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.901678</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>61</th>
<td>0.009091</td>
<td>0.020022</td>
<td>0.020238</td>
<td>0.014069</td>
<td>0.000000</td>
<td>0.028211</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.908369</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>62</th>
<td>0.005051</td>
<td>0.010732</td>
<td>0.028706</td>
<td>0.039470</td>
<td>0.005682</td>
<td>0.015251</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.895108</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>63</th>
<td>0.000000</td>
<td>0.013242</td>
<td>0.009644</td>
<td>0.014461</td>
<td>0.019641</td>
<td>0.027330</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.915682</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>64</th>
<td>0.002797</td>
<td>0.005084</td>
<td>0.010247</td>
<td>0.019226</td>
<td>0.018129</td>
<td>0.032927</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.91159</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>65</th>
<td>0.004553</td>
<td>0.004568</td>
<td>0.014347</td>
<td>0.020216</td>
<td>0.017341</td>
<td>0.031431</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.907544</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>66</th>
<td>0.002390</td>
<td>0.001359</td>
<td>0.011291</td>
<td>0.012139</td>
<td>0.019185</td>
<td>0.043169</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.910467</td>
<td>0.000000</td>
</tr>
<tr>
<th>67</th>
<td>0.000000</td>
<td>0.010101</td>
<td>0.015449</td>
<td>0.005348</td>
<td>0.017677</td>
<td>0.023529</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.927897</td>
</tr>
<tr>
<th>68</th>
<td>0.011841</td>
<td>0.005682</td>
<td>0.014758</td>
<td>0.016711</td>
<td>0.028016</td>
<td>0.022334</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.900659</td>
</tr>
<tr>
<th>69</th>
<td>0.001855</td>
<td>0.007005</td>
<td>0.021047</td>
<td>0.007559</td>
<td>0.016117</td>
<td>0.031069</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.915347</td>
</tr>
<tr>
<th>70</th>
<td>0.004611</td>
<td>0.008425</td>
<td>0.014012</td>
<td>0.015338</td>
<td>0.015760</td>
<td>0.026578</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.915276</td>
</tr>
<tr>
<th>71</th>
<td>0.001990</td>
<td>0.004218</td>
<td>0.005469</td>
<td>0.010458</td>
<td>0.023384</td>
<td>0.028805</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.925677</td>
</tr>
<tr>
<th>72</th>
<td>0.001754</td>
<td>0.003724</td>
<td>0.007960</td>
<td>0.010953</td>
<td>0.019647</td>
<td>0.033428</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.922535</td>
</tr>
<tr>
<th>73</th>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>...</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>1.000000</td>
</tr>
</tbody>
</table>
<p>73 rows × 73 columns</p>
</div>
Initial customer distribution:
```python
PI = pd.read_excel('./Customer Analytics_ Flipkart Data.xlsx', index_col=0,sheet_name='Oct13')
PI
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
<th>10</th>
<th>...</th>
<th>64</th>
<th>65</th>
<th>66</th>
<th>67</th>
<th>68</th>
<th>69</th>
<th>70</th>
<th>71</th>
<th>72</th>
<th>73</th>
</tr>
</thead>
<tbody>
<tr>
<th>Counts-10/13</th>
<td>315</td>
<td>425</td>
<td>1013</td>
<td>1265</td>
<td>1381</td>
<td>1624</td>
<td>107</td>
<td>162</td>
<td>396</td>
<td>592</td>
<td>...</td>
<td>68</td>
<td>132</td>
<td>253</td>
<td>9</td>
<td>16</td>
<td>42</td>
<td>56</td>
<td>122</td>
<td>258</td>
<td>11299</td>
</tr>
</tbody>
</table>
<p>1 rows × 73 columns</p>
</div>
The customer distribution in Nov 2013 is given by $P_I \times P$, and the expected revenue per state by multiplying this distribution with the revenue vector $R$,
where:
- P = TPM
- $P_I$ = initial state distribution (the Oct 2013 counts)
- R = revenue vector = [22032, 6977, 3114, 1423, 720, 304], repeated 12 times (once per frequency band); revenue = 0 for state 73
```python
pd.options.display.max_rows = 75
pd.options.display.float_format = '{:.2f}'.format
X = PI.dot(TPM.values).T
R = np.array([22032, 6977, 3114, 1423, 720, 304]).reshape(-1,1)
R2 = np.repeat(R.T,12, axis=0).reshape(-1,1)
R3 = np.zeros((73,1))
R3[:72] = R2
X.columns = ['Counts-11/13']
X.index = np.arange(74)[1:]
X['Revenue']=X*R3
#X.iloc[np.arange(73, step=6)].sum()
X
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Counts-11/13</th>
<th>Revenue</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>305.49</td>
<td>6730654.27</td>
</tr>
<tr>
<th>2</th>
<td>388.81</td>
<td>2712755.11</td>
</tr>
<tr>
<th>3</th>
<td>995.10</td>
<td>3098738.97</td>
</tr>
<tr>
<th>4</th>
<td>1216.59</td>
<td>1731206.59</td>
</tr>
<tr>
<th>5</th>
<td>1349.80</td>
<td>971856.92</td>
</tr>
<tr>
<th>6</th>
<td>1807.95</td>
<td>549617.69</td>
</tr>
<tr>
<th>7</th>
<td>117.98</td>
<td>2599306.55</td>
</tr>
<tr>
<th>8</th>
<td>163.26</td>
<td>1139048.85</td>
</tr>
<tr>
<th>9</th>
<td>410.69</td>
<td>1278900.10</td>
</tr>
<tr>
<th>10</th>
<td>584.43</td>
<td>831650.75</td>
</tr>
<tr>
<th>11</th>
<td>707.57</td>
<td>509451.52</td>
</tr>
<tr>
<th>12</th>
<td>926.81</td>
<td>281750.44</td>
</tr>
<tr>
<th>13</th>
<td>63.13</td>
<td>1390791.85</td>
</tr>
<tr>
<th>14</th>
<td>98.19</td>
<td>685080.18</td>
</tr>
<tr>
<th>15</th>
<td>229.94</td>
<td>716030.32</td>
</tr>
<tr>
<th>16</th>
<td>364.67</td>
<td>518931.90</td>
</tr>
<tr>
<th>17</th>
<td>510.56</td>
<td>367606.11</td>
</tr>
<tr>
<th>18</th>
<td>673.42</td>
<td>204718.84</td>
</tr>
<tr>
<th>19</th>
<td>47.81</td>
<td>1053345.04</td>
</tr>
<tr>
<th>20</th>
<td>56.62</td>
<td>395028.45</td>
</tr>
<tr>
<th>21</th>
<td>188.64</td>
<td>587430.58</td>
</tr>
<tr>
<th>22</th>
<td>291.92</td>
<td>415396.02</td>
</tr>
<tr>
<th>23</th>
<td>382.28</td>
<td>275242.78</td>
</tr>
<tr>
<th>24</th>
<td>530.48</td>
<td>161266.78</td>
</tr>
<tr>
<th>25</th>
<td>40.67</td>
<td>896053.01</td>
</tr>
<tr>
<th>26</th>
<td>65.74</td>
<td>458695.83</td>
</tr>
<tr>
<th>27</th>
<td>131.93</td>
<td>410821.33</td>
</tr>
<tr>
<th>28</th>
<td>192.94</td>
<td>274552.15</td>
</tr>
<tr>
<th>29</th>
<td>285.03</td>
<td>205224.39</td>
</tr>
<tr>
<th>30</th>
<td>491.49</td>
<td>149413.74</td>
</tr>
<tr>
<th>31</th>
<td>27.89</td>
<td>614373.04</td>
</tr>
<tr>
<th>32</th>
<td>39.26</td>
<td>273933.42</td>
</tr>
<tr>
<th>33</th>
<td>113.40</td>
<td>353118.51</td>
</tr>
<tr>
<th>34</th>
<td>155.57</td>
<td>221374.38</td>
</tr>
<tr>
<th>35</th>
<td>198.59</td>
<td>142983.56</td>
</tr>
<tr>
<th>36</th>
<td>403.58</td>
<td>122688.11</td>
</tr>
<tr>
<th>37</th>
<td>21.58</td>
<td>475442.15</td>
</tr>
<tr>
<th>38</th>
<td>32.71</td>
<td>228227.99</td>
</tr>
<tr>
<th>39</th>
<td>72.22</td>
<td>224900.69</td>
</tr>
<tr>
<th>40</th>
<td>116.36</td>
<td>165581.50</td>
</tr>
<tr>
<th>41</th>
<td>161.76</td>
<td>116470.76</td>
</tr>
<tr>
<th>42</th>
<td>314.63</td>
<td>95647.86</td>
</tr>
<tr>
<th>43</th>
<td>18.91</td>
<td>416734.10</td>
</tr>
<tr>
<th>44</th>
<td>28.05</td>
<td>195684.86</td>
</tr>
<tr>
<th>45</th>
<td>53.47</td>
<td>166505.53</td>
</tr>
<tr>
<th>46</th>
<td>86.32</td>
<td>122832.06</td>
</tr>
<tr>
<th>47</th>
<td>130.23</td>
<td>93765.83</td>
</tr>
<tr>
<th>48</th>
<td>278.85</td>
<td>84769.51</td>
</tr>
<tr>
<th>49</th>
<td>11.84</td>
<td>260751.62</td>
</tr>
<tr>
<th>50</th>
<td>18.48</td>
<td>128912.99</td>
</tr>
<tr>
<th>51</th>
<td>49.40</td>
<td>153841.23</td>
</tr>
<tr>
<th>52</th>
<td>68.02</td>
<td>96794.83</td>
</tr>
<tr>
<th>53</th>
<td>112.04</td>
<td>80668.59</td>
</tr>
<tr>
<th>54</th>
<td>230.95</td>
<td>70208.98</td>
</tr>
<tr>
<th>55</th>
<td>11.06</td>
<td>243690.45</td>
</tr>
<tr>
<th>56</th>
<td>16.53</td>
<td>115296.09</td>
</tr>
<tr>
<th>57</th>
<td>49.44</td>
<td>153952.72</td>
</tr>
<tr>
<th>58</th>
<td>65.52</td>
<td>93241.02</td>
</tr>
<tr>
<th>59</th>
<td>82.58</td>
<td>59455.04</td>
</tr>
<tr>
<th>60</th>
<td>191.37</td>
<td>58176.53</td>
</tr>
<tr>
<th>61</th>
<td>14.79</td>
<td>325953.32</td>
</tr>
<tr>
<th>62</th>
<td>19.52</td>
<td>136222.16</td>
</tr>
<tr>
<th>63</th>
<td>48.28</td>
<td>150337.84</td>
</tr>
<tr>
<th>64</th>
<td>84.61</td>
<td>120404.82</td>
</tr>
<tr>
<th>65</th>
<td>126.23</td>
<td>90887.29</td>
</tr>
<tr>
<th>66</th>
<td>272.31</td>
<td>82781.28</td>
</tr>
<tr>
<th>67</th>
<td>18.17</td>
<td>400263.90</td>
</tr>
<tr>
<th>68</th>
<td>17.01</td>
<td>118658.23</td>
</tr>
<tr>
<th>69</th>
<td>32.96</td>
<td>102651.58</td>
</tr>
<tr>
<th>70</th>
<td>61.99</td>
<td>88209.08</td>
</tr>
<tr>
<th>71</th>
<td>119.80</td>
<td>86253.02</td>
</tr>
<tr>
<th>72</th>
<td>230.35</td>
<td>70025.85</td>
</tr>
<tr>
<th>73</th>
<td>11762.41</td>
<td>0.00</td>
</tr>
</tbody>
</table>
</div>
# Q-5-8
From the TPM (**EXHIBIT 14**) we see that:
1. In state 5, 29% of customers move to the inactive state
2. In state 6, 30% of customers move to the inactive state
3. In state 7, 30% of customers move to the inactive state
4. In state 8, 38% of customers move to the inactive state
Also, customers in states 5, 6 and 7 are high revenue generators for Flipkart.
Once in the inactive state, a high proportion of customers remain inactive and may eventually churn. Hence, if Flipkart intervenes in states 5, 6, 7 and 8 and the intervention is successful, it may be able to retain these customers and generate more revenue.
# Q-6-1
We will use a Markov Decision Process (MDP) to solve the following problem.
- Discount factor is 0.9
- State Space = [1,2]
- Action set = [1,2]
- TPMs for each state-action pair are provided in the question (to save space they are not reproduced here)
In an MDP we start in an initial state ($S_0$), take an action ($A_0$), move to state $S_1$ in the next time period, and so on. Such a sequence is shown for 4 stages in the diagram below.
```python
import pygraphviz as pgv
import pandas as pd
from IPython.display import Image
def draw(dot):
return Image(pgv.AGraph(dot).draw(format='png', prog='dot'))
graph = pd.DataFrame(np.array([[.1,.9],[.1,.9],[.1,.9]]))
g1 = """digraph top {
rankdir=LR;
S0 -> S1 [label = A0]
S1 -> S2 [label = A1]
S2 -> S3 [label = A2];
}"""
draw(g1)
```
The objective is to **maximise** the expected return obtained over the planning horizon. Based on the state and action at each stage, the discounted return can be represented as:
$R(S_0, a_0) + \beta R(S_1, a_1) + \beta^2 R(S_2, a_2) + \beta^3 R(S_3, a_3) + ...$
where :
- $R(S_0, a_0)$ = reward generated in the initial stage, where the initial state is $S_0$ and the action taken is $a_0$. Rewards obtained in future stages are discounted by the factor $\beta$
Objective :: $\max_{a_i \in A}\left[R(S_0, a_0) + \beta R(S_1, a_1) + \beta^2 R(S_2, a_2) + \beta^3 R(S_3, a_3) + \ldots\right]$
We will use the **POLICY ITERATION ALGORITHM** to obtain the long-term value generated by a given policy; the policy (1, 2) is evaluated first below.
The **Value Function** for a Policy $\Pi$ starting at State $S_i$ is given by:
$V^\Pi(i) = R^\Pi(i) + \beta \sum_{j\in S} P_{ij}^\Pi \, V^\Pi(j)$
where:
- $R^\Pi(i)$ :: Immediate Reward
- $\beta \sum_{j\in S} P_{ij}^\Pi \, V^\Pi(j)$ :: discounted future reward
We develop a system of linear equations by writing the above equation for each state under the action prescribed by the policy. The matrix of coefficients of this system is:
```python
A = np.array([
[1-.9*.6, -.9*.4],
[-.9*.3, 1-.9*.7]
])
A
```
array([[ 0.46, -0.36],
[-0.27, 0.37]])
The Immediate Reward Matrix for the State Actions provided:
```python
B = np.array([20, -2]).reshape(-1,1)
B
```
array([[20],
[-2]])
Policy to evaluate = (1, 2)
Solving the Above set of **Linear Equations** we get the following Values for the Policy:
```python
import scipy.linalg as slin   # linear-system solver (assumed imported earlier in the notebook)
x = slin.solve(A, B)
tpmdf = pd.DataFrame(index=['V11', 'V22'], data=x)
tpmdf.columns = ['Policy Value']
tpmdf
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Policy Value</th>
</tr>
</thead>
<tbody>
<tr>
<th>V11</th>
<td>91.51</td>
</tr>
<tr>
<th>V22</th>
<td>61.37</td>
</tr>
</tbody>
</table>
</div>
Where $V_{ij}$ = Policy Value when Initial State was i and action taken was j
The overall policy value is:
```python
np.sum(slin.solve(A,B))
```
152.87671232876724
To check whether policy (1, 1) is better than the previous policy (1, 2) we will use the **policy improvement step**. The objective is to check whether the value function obtained above is smaller than that of the new policy.
The Policy improvement evaluation step includes the following:
$T^{\Pi^{new}}(i) = \max_{a_{i,new}}\big[R(i, a_{i,new}) + \beta \sum_{j \in S} P(j|i,a_{i,new}) \, V^\Pi(j)\big]$
where :
- $i \epsilon S$
- $a_{i,new}$ = A new action chosen for state i
- $T^{\Pi^{new}}(i)$ = Value function when the current policy for state i is changed to $a_{i,new}$
- $R(i, a_{i,new})$ = Reward when the action for state i is replaced with the new action $a_{i,new}$
Solving the new equation for state 2 with the new action (keeping $V^{\Pi}(1) = 91.51$ from the previous policy):
```python
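# T = -12 + 0.9*(0.8*91.51 + 0.2*T)  =>  0.82*T = 0.72*91.51 - 12
# (reward and transition probabilities as given in the question)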
(.72*91.51-12)/.82
```
65.71609756097563
Since $T^{\Pi^{new}}(2) > V^{\Pi}(2)$, changing the action in state 2 improves the value. Hence this is a better policy.
**(1, 1) is better than (1, 2)**
We will re-solve the problem for the new policy. The matrix of coefficients of the new system of equations is:
```python
A = np.array([
[1-.9*.6, -.9*.4],
[-.9*.8, 1-.9*.2]
])
A
```
array([[ 0.46, -0.36],
[-0.72, 0.82]])
The Immediate Reward Matrix for the New State Actions provided:
```python
B = np.array([20, -12]).reshape(-1,1)
B
```
array([[ 20],
[-12]])
Policy to evaluate = (1, 1)
Solving the Above set of **Linear Equations** we get the following Values for the Policy:
```python
x = slin.solve(A,B)
tpmdf = pd.DataFrame(index=['V11', 'V22'], data=x)
tpmdf.columns = ['Policy Value']
tpmdf
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Policy Value</th>
</tr>
</thead>
<tbody>
<tr>
<th>V11</th>
<td>102.37</td>
</tr>
<tr>
<th>V22</th>
<td>75.25</td>
</tr>
</tbody>
</table>
</div>
Where $V_{ij}$ = Policy Value when Initial State was i and action taken was j
The overall policy value is:
```python
np.sum(slin.solve(A,B))
```
177.627118644068
# Q-6-2
The optimal policy can be obtained by solving a linear program:
**Decision Variables**: x_si, i = 1, 2 are the optimal value functions of the two states
**Minimize Objective**: x_s1 + x_s2
**Subject To**:
**Constraint_for_State_S1_Policy_1**: 0.46 x_s1 - 0.36 x_s2 >= 20
**Constraint_for_State_S1_Policy_2**: 0.28 x_s1 - 0.18 x_s2 >= 30
**Constraint_for_State_S2_Policy_1**: - 0.27 x_s1 + 0.37 x_s2 >= -2
**Constraint_for_State_S2_Policy_2**: - 0.72 x_s1 + 0.82 x_s2 >= -12
**Non-negativity Constraint**: all decision variables x_si >= 0
```python
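# Assumes pulp and its helpers were imported earlier in the notebook, e.g.:
# import pulp; from pulp import LpProblem, LpMinimize, GLPK, value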
# initialize the model
prob = LpProblem("MDPPolicy2", LpMinimize)
#List of decision variables
vehicles = ['s1', 's2']
# create a dictionary of pulp variables with keys from ingredients
# the default lower bound is -inf
x = pulp.LpVariable.dict('x_%s', vehicles, lowBound = 0)
# Objective function
prob += sum([x[i] for i in vehicles]), "Objective"
# Constraints
prob += x['s1'] >= 20 + 0.9*(0.6*x['s1'] + .4*x['s2']), "Constraint for State S1 Policy 1"
prob += x['s1'] >= 30 + 0.9*(0.8*x['s1'] + .2*x['s2']), "Constraint for State S1 Policy 2"
prob += x['s2'] >= -2 + 0.9*(0.3*x['s1'] + .7 * x['s2']), "Constraint for State S2 Policy 1"
prob += x['s2'] >= -12 +0.9*(0.8*x['s1'] + .2*x['s2']), "Constraint for State S2 Policy 2"
status = prob.solve(GLPK(options=["--ranges","MDPPolicy2.sen"]))
#print(status)
#print the result
for vehicle in vehicles:
print(' {} :: {} ::'.format(vehicle,
x[vehicle].value()))
print("Objective", value(prob.objective))
prob.writeLP("MDPPolicy2.lp")
```
s1 :: 224.4 ::
s2 :: 182.4 ::
Objective 406.8
# %load MDPPolicy2.sen
GLPK 4.65 - SENSITIVITY ANALYSIS REPORT Page 1
Problem:
Objective: Objective = 332.3636364 (MINimum)
No. Row name St Activity Slack Lower bound Activity Obj coef Obj value at Limiting
Marginal Upper bound range range break point variable
------ ------------ -- ------------- ------------- ------------- ------------- ------------- ------------- ------------
1 Constraint_for_State_S1_Policy_1
BS 40.47273 -20.47273 20.00000 +Inf -8.76712 -22.46575 Constraint_for_State_S1_Policy_2
. +Inf -Inf 25.55556 1366.66667 Constraint_for_State_S2_Policy_1
2 Constraint_for_State_S1_Policy_2
NL 30.00000 . 30.00000 14.57534 -11.63636 152.87671 Constraint_for_State_S1_Policy_1
11.63636 +Inf +Inf +Inf +Inf
3 Constraint_for_State_S2_Policy_1
NL -2.00000 . -2.00000 -20.21739 -8.36364 180.00000 Constraint_for_State_S2_Policy_2
8.36364 +Inf 60.55556 +Inf 855.55556 Constraint_for_State_S1_Policy_1
4 Constraint_for_State_S2_Policy_2
BS 3.23636 -15.23636 -12.00000 55.55556 -10.00000 300.00000 Constraint_for_State_S2_Policy_1
. +Inf 3.23636 +Inf +Inf
GLPK 4.65 - SENSITIVITY ANALYSIS REPORT Page 2
Problem:
Objective: Objective = 332.3636364 (MINimum)
No. Column name St Activity Obj coef Lower bound Activity Obj coef Obj value at Limiting
Marginal Upper bound range range break point variable
------ ------------ -- ------------- ------------- ------------- ------------- ------------- ------------- ------------
1 x_s1 BS 195.27273 1.00000 . +Inf -.72973 -5.40541 Constraint_for_State_S1_Policy_2
. +Inf 195.27273 +Inf +Inf
2 x_s2 BS 137.09091 1.00000 . 455.55556 -.64286 107.14286 Constraint_for_State_S2_Policy_1
. +Inf 137.09091 +Inf +Inf
End of report
The optimal policy is (2, 1).
The optimal policy is obtained by checking the sensitivity report: the binding constraints identify the best policy values.
# Q-7-1
For a Poisson Distribution the number of events by time t, *N(t)* is given by:
$P[N(t) = n] = \frac{e^{-\lambda t} (\lambda t)^n}{n!}$
Expected Value: $E[N(t)] = \lambda t$
Variance: $Var[N(t)] = \lambda t$
Data Points:
```python
x = np.array([9, 11, 8, 12, 6, 4, 14, 11, 6, 10, 16, 8, 6, 7, 4, 6, 7, 13, 8, 16])
x
```
array([ 9, 11, 8, 12, 6, 4, 14, 11, 6, 10, 16, 8, 6, 7, 4, 6, 7,
13, 8, 16])
```python
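# Assumption: the data points are the observed gaps (in months) between successive demands,
# so the demand rate per month is estimated as 1 / (mean gap).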
lambda_hat = 1/(np.sum(x)/x.shape[0])
print("lambda :: {}".format(lambda_hat))
```
lambda :: 0.10989010989010989
```python
mu = lambda_hat * 12
print("Expected Demand for 12 Months :: {}".format(mu))
```
Expected Demand for 12 Months :: 1.3186813186813187
# Q-7-2
To ensure that the Demand is met 90% of time:
$\sum_{i=0}^k \frac{e^{-\lambda t} (\lambda t)^i}{i!} \geq 0.90$
Here t = 24 (2 years, measured in months).
The table below shows density and cumulative distribution.
```python
from scipy.stats import poisson
mu = lambda_hat * 24
print("Expected Number of Demand for 2 years:: {}".format(mu))
```
Expected Number of Demand for 2 years:: 2.6373626373626373
```python
k = []
poipmf = []
poicdf = []
for i in np.arange(10):
k.append(i)
poipmf.append(poisson.pmf(i,mu))
poicdf.append(poisson.cdf(i,mu))
tmpdf = pd.DataFrame({'K':k})
tmpdf["Poisson Density"] = poipmf
tmpdf["Cumulative Density"] = poicdf
tmpdf
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>K</th>
<th>Poisson Density</th>
<th>Cumulative Density</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>0.07</td>
<td>0.07</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>0.19</td>
<td>0.26</td>
</tr>
<tr>
<th>2</th>
<td>2</td>
<td>0.25</td>
<td>0.51</td>
</tr>
<tr>
<th>3</th>
<td>3</td>
<td>0.22</td>
<td>0.73</td>
</tr>
<tr>
<th>4</th>
<td>4</td>
<td>0.14</td>
<td>0.87</td>
</tr>
<tr>
<th>5</th>
<td>5</td>
<td>0.08</td>
<td>0.95</td>
</tr>
<tr>
<th>6</th>
<td>6</td>
<td>0.03</td>
<td>0.98</td>
</tr>
<tr>
<th>7</th>
<td>7</td>
<td>0.01</td>
<td>0.99</td>
</tr>
<tr>
<th>8</th>
<td>8</td>
<td>0.00</td>
<td>1.00</td>
</tr>
<tr>
<th>9</th>
<td>9</td>
<td>0.00</td>
<td>1.00</td>
</tr>
</tbody>
</table>
</div>
From the table we see that the **smallest k for which the cumulative probability exceeds 0.90 is k = 5**. Hence 5 parts need to be stocked to meet demand over the 24 months.
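A quick cross-check (a small sketch, not part of the original answer): `poisson.ppf` returns the smallest k whose CDF is at least the requested level, reusing `mu` from above.
```python
print(poisson.ppf(0.90, mu))   # expected to print 5.0 for mu = lambda_hat * 24
```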
```python
```
[Source notebook: IIMB-Assignments/Assgn-4/Assignment-4.ipynb — rahasayantan/Work-For-Reference (MIT license)]
# Time Series - III (MA Models)
Moving average models are very similar to autoregressive models, the difference being that instead of using past observations of the time series, we estimate the current value using past error terms.
\begin{align}
x_{t} &= w_{t} + \beta_{1} w_{t-1} + \beta_{2} w_{t-2} + \dots
\end{align}
In the above equation, $x$ is the value of the time series that we want to estimate, the $w$'s are the error terms (white noise, assumed normally distributed) and the $\beta$'s are the coefficients. In other words, we try to explain the current value of the series using the previous noise terms.
For MA(q) models, ACF should be zero for lags > q by definition.
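For example, for an MA(1) process the only non-zero autocorrelation is at lag 1, where $\rho(1) = \frac{\beta_1}{1+\beta_1^2}$ (a standard result, stated here for reference).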
```python
#Importing the required packages
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
import statsmodels.tsa.api as smt
import statsmodels.api as sm
import scipy.stats as scs
import statsmodels.stats as sms
import warnings
warnings.filterwarnings('ignore')
```
```python
#Defining a function to visualize and analyze the time series
def tsplot(y, lags=None, figsize=(15, 10), style='bmh'):
'''
Prepares a (3,2) dimensional plot for the visualization of time series values, autocorrelation and partial
    autocorrelation plots, and QQ and probability plots for comparison with the normal distribution.
Args:
y: time series values
lags: How many lagging values are to be considered.
'''
if not isinstance(y, pd.Series):
y = pd.Series(y)
with plt.style.context(style):
fig = plt.figure(figsize=figsize)
layout = (3, 2)
ts_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
acf_ax = plt.subplot2grid(layout, (1, 0))
pacf_ax = plt.subplot2grid(layout, (1, 1))
qq_ax = plt.subplot2grid(layout, (2, 0))
pp_ax = plt.subplot2grid(layout, (2, 1))
y.plot(ax=ts_ax)
ts_ax.set_title('Time Series Analysis Plots')
smt.graphics.plot_acf(y, lags=lags, ax=acf_ax, alpha=0.05)
smt.graphics.plot_pacf(y, lags=lags, ax=pacf_ax, alpha=0.05)
sm.qqplot(y, line='s', ax=qq_ax)
qq_ax.set_title('QQ Plot')
scs.probplot(y, sparams=(y.mean(), y.std()), plot=pp_ax)
plt.tight_layout()
return
```
**Simulating an MA(1) process using beta = 0.6 and specifying the AR(p) alphas = 0**
```python
n = int(1000)
#Setting AR(p) alphas = 0
alphas = np.array([0.])
betas = np.array([0.6])
#Adding zero-lag and negating alphas
ar = np.r_[1, -alphas]
ma = np.r_[1, betas]
ma1 = smt.arma_generate_sample(ar = ar, ma = ma, nsample = n)
#print(ma1.shape)
_ = tsplot(ma1, lags = 30)
```
Since this is a first-order model, i.e. q = 1, there is a peak at lag 1 in the ACF plot and the rest of the peaks are insignificant. By looking at the ACF of the series we can see how many sequential non-zero lags exist. If there are q such lags, then we can say that an MA(q) model fits the data well.
This is similar to looking at the PACF plot for AR(p) models.
The above plot shows that MA(1) model could be an appropriate fit for the series.
**Fitting an MA(1) model on the above simulated data**
```python
max_lag = 30
model = smt.ARMA(ma1, order = (0,1)).fit(maxlag = max_lag,
method = 'mle',
trend = 'nc')
print(model.summary())
```
ARMA Model Results
==============================================================================
Dep. Variable: y No. Observations: 1000
Model: ARMA(0, 1) Log Likelihood -1459.048
Method: mle S.D. of innovations 1.041
Date: Sat, 23 Jun 2018 AIC 2922.095
Time: 02:01:19 BIC 2931.911
Sample: 0 HQIC 2925.826
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ma.L1.y 0.6090 0.025 24.219 0.000 0.560 0.658
Roots
=============================================================================
Real Imaginary Modulus Frequency
-----------------------------------------------------------------------------
MA.1 -1.6421 +0.0000j 1.6421 0.5000
-----------------------------------------------------------------------------
The lag coefficient estimated by the model is 0.61, which is very close to the true value of 0.6.
```python
from statsmodels.stats.stattools import jarque_bera
score, pvalue, _, _ = jarque_bera(model.resid)
if pvalue < 0.10:
print('We have reason to suspect the residuals are not normally distributed.')
else:
print('The residuals seem normally distributed.')
```
The residuals seem normally distributed.
**Simulating and fitting a MA(3) process to obtain the correct betas where betas1,2,3 are equal to 0.3, 0.2 and 0.1.**
```python
#We should be expecting peaks at lag 1,2,3 and the insignificant peaks beyond 3 lags in ACF plot
n = int(500)
alphas = np.array([0.])
betas = np.array([0.3, 0.2, 0.1])
ar = np.r_[1, -alphas]
ma = np.r_[1, betas]
ma3 = smt.arma_generate_sample(ar = ar, ma = ma, nsample = n)
#print(ma3.shape)
_ = tsplot(ma3, lags = 30)
```
**Fitting the MA(3) simulated time series **
```python
max_lag = 30
model = smt.ARMA(ma3, order=(0, 3)).fit(maxlag=max_lag,
method='mle',
trend='nc')
print(model.summary())
```
ARMA Model Results
==============================================================================
Dep. Variable: y No. Observations: 500
Model: ARMA(0, 3) Log Likelihood -724.881
Method: mle S.D. of innovations 1.031
Date: Sat, 23 Jun 2018 AIC 1457.762
Time: 02:01:21 BIC 1474.621
Sample: 0 HQIC 1464.377
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ma.L1.y 0.2972 0.045 6.647 0.000 0.210 0.385
ma.L2.y 0.1626 0.047 3.447 0.001 0.070 0.255
ma.L3.y 0.0358 0.042 0.855 0.393 -0.046 0.118
Roots
=============================================================================
Real Imaginary Modulus Frequency
-----------------------------------------------------------------------------
MA.1 -0.1913 -2.5844j 2.5915 -0.2618
MA.2 -0.1913 +2.5844j 2.5915 0.2618
MA.3 -4.1625 -0.0000j 4.1625 -0.5000
-----------------------------------------------------------------------------
```python
from statsmodels.stats.stattools import jarque_bera
score, pvalue, _, _ = jarque_bera(model.resid)
if pvalue < 0.10:
print('We have reason to suspect the residuals are not normally distributed.')
else:
print('The residuals seem normally distributed.')
```
We have reason to suspect the residuals are not normally distributed.
The model was able to recover the error coefficients approximately.
## Application - Fitting a MA(1) model on Tesla log returns
In this section, we apply the above technique to fit a MA(1) model on the log returns of TSLA (Tesla Inc.) stock data and see how good can we explain the changes. We use last 5 years EOD stock price data of the TSLA stock which could be downloaded from Yahoo Finance.
```python
#Downloading the data from the directory
tsla_market_data = pd.read_csv("C:\\Users\ku.kulshrestha\Downloads\TSLA.csv")
print(tsla_market_data.head(5))
```
Date Open High Low Close Adj Close \
0 2013-06-20 104.650002 107.129997 99.449997 100.650002 100.650002
1 2013-06-21 103.699997 103.699997 97.500000 99.550003 99.550003
2 2013-06-24 96.500000 102.870003 95.300003 101.489998 101.489998
3 2013-06-25 103.099998 104.199997 100.550003 102.400002 102.400002
4 2013-06-26 103.800003 105.870003 102.660004 105.720001 105.720001
Volume
0 10106500
1 11718600
2 7119800
3 5848700
4 6602600
Since we only deal with the end-of-day price, we only need the Close column of the dataset, from which we then calculate the log returns of the closing price.
```python
tsla_ts_data = tsla_market_data[['Date', 'Close']]
#Adding a new col in the dataframe
tsla_ts_data['Log_return'] = np.nan
tsla_ts_data.head()
for i in range(1, len(tsla_ts_data)):
tsla_ts_data.loc[i, 'Log_return'] = np.log(np.divide(tsla_ts_data.loc[i,'Close'], tsla_ts_data.loc[i-1, 'Close']))
#removing the first row as Log_return cannot be defined for it
tsla_ts_data = tsla_ts_data.dropna()
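# Note: an equivalent vectorized alternative (not in the original) would be
# tsla_ts_data['Log_return'] = np.log(tsla_ts_data['Close'] / tsla_ts_data['Close'].shift(1))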
tsla_ts_data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Date</th>
<th>Close</th>
<th>Log_return</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>2013-06-21</td>
<td>99.550003</td>
<td>-0.010989</td>
</tr>
<tr>
<th>2</th>
<td>2013-06-24</td>
<td>101.489998</td>
<td>0.019300</td>
</tr>
<tr>
<th>3</th>
<td>2013-06-25</td>
<td>102.400002</td>
<td>0.008926</td>
</tr>
<tr>
<th>4</th>
<td>2013-06-26</td>
<td>105.720001</td>
<td>0.031907</td>
</tr>
<tr>
<th>5</th>
<td>2013-06-27</td>
<td>109.250000</td>
<td>0.032845</td>
</tr>
</tbody>
</table>
</div>
```python
#Plotting the distribution of the log returns
tsla_ts_data.Log_return.plot(figsize = (10,8))
```
Now we have our data ready, we will fit a simple MA(1) model on the Log_return data.
```python
# Fitting a MA(1) model on the TSLA log returns
max_lag = 30
model = smt.ARMA(tsla_ts_data.Log_return, order = (0,1)).fit(maxlag = max_lag,
method = 'mle',
trend = 'nc')
print(model.summary())
_ = tsplot(model.resid, lags = max_lag)
```
We can see some peaks in ACF plot at k = 8,9,12,16,19. Trying MA(2) now.
```python
# Fitting MA(2) model on TSLA log returns
model2 = smt.ARMA(tsla_ts_data.Log_return, order = (0,2)).fit(maxlag = max_lag,
method = 'mle',
trend = 'nc')
print(model2.summary())
_ = tsplot(model2.resid, lags = max_lag)
```
We see very marginal peaks at k = 16, 19. This suggests that the MA(2) model captures a lot of the autocorrelation, but not all of the long-memory effects. Even if we keep increasing the order of the model, we would still see these peaks: we would be adding new parameters to a model that has already explained away much of the correlation at shorter lags, which has little effect on the longer-term lags.
It is highly unlikely that a simple MA(q) model explains all the serial correlation for this data.
```python
from statsmodels.stats.stattools import jarque_bera
score, pvalue, _, _ = jarque_bera(model2.resid)
if pvalue < 0.10:
print('We have reason to suspect the residuals are not normally distributed.')
else:
print('The residuals seem normally distributed.')
```
We have reason to suspect the residuals are not normally distributed.
**Conclusion: **
We have analyzed fitting Tesla log returns with AR(p) and MA(q) models, where p and q are the orders of the AR and MA models respectively. Both of these models are capable of explaining some of the autocorrelation in the residuals, but some long-memory effects still remain and there is still scope for improvement.
[Source notebook: Time Series - III (MA models).ipynb — kushkul/Time-Series (MIT license)]
# Note on GAN/ODE
## Background
After reading the following paper interpreting the gradient descent in terms of ODE solver:
Training Generative Adversarial Networks by Solving Ordinary Differential Equations by Qin, Wu, et al, DeepMind, https://arxiv.org/abs/2010.15040
As an illustrative example, they consider:
\begin{equation}
\frac{\mathrm{d}x}{\mathrm{d}t} = -
\left(
\begin{array}{cc}
\epsilon & -1 \\ 1 & 0
\end{array}
\right)
x
\end{equation}
where $x = (\theta, \phi)$.
Then, they say:
"...when $\epsilon = 0.1$. When we choose $\Delta t = 0.2$ for 200 timestep, Euler's method diverges while RK2 converges".
Interestingly, the paper does not mention the classical concept of the ODE stability region to explain this observation. The purpose of this notebook is to illustrate it.
For example, see http://web.mit.edu/course/16/16.90/OldFiles/BackUp/www/pdfs/Chapter10.pdf for the stability region.
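As a quick reminder: applied to the scalar test problem $\dot{x} = \lambda x$, one explicit Euler step gives $x_{n+1} = (1 + \lambda \Delta t)\,x_n$, so the method is stable only when the amplification factor satisfies $|1 + \lambda \Delta t| \le 1$; RK2 replaces this factor by $1 + z + \tfrac{1}{2}z^2$ with $z = \lambda \Delta t$.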
***
© 2020 Youngsuk Lee (lee.youngsuk@gmail.com)
```python
%matplotlib inline
import os, sys
import matplotlib.pyplot as plt
import numpy as np
```
## Find the eigenvalues of the matrix and plot against the stability regions
### Find eigenvalues
```python
# eigenvalues
eps = 0.1
A = -np.array([[eps, -1.0],[1.0, 0]])
eigvs = np.linalg.eigvals(A)
eigvs
```
array([-0.05+0.99874922j, -0.05-0.99874922j])
### Amplification factor and functions
```python
# amp = lambda * dt where lambda is effectively eigenvalue
dt = 0.2
z_eigvs = eigvs * dt
```
```python
# growth rate functions
af_euler = lambda z: 1 + z
af_rk2 = lambda z: 1 + z + 0.5 * z**2
af_rk4 = lambda z: 1 + z + 0.5 * z**2 + (1.0/6) * z**3 + (1.0/24) * z**4
kw_af = {'Euler':af_euler, 'RK2':af_rk2, 'RK4':af_rk4}
```
### Amplification factors
If the absolute value of the amplification factor is less than 1, the scheme converges; if it is larger than 1, it diverges.
```python
print('amplification factor: ')
for n, af in kw_af.items():
aaf = np.abs(af(z_eigvs[0]))
print(n + ':' + str(np.abs(af(z_eigvs[0]))) + ', ' + ('to converge ' if aaf < 1.0 else 'to diverge'))
```
amplification factor:
Euler:1.0099504938362078, to diverge
RK2:0.9900505037623081, to converge
RK4:0.9900500559736024, to converge
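A quick numerical check (a sketch, not taken from the paper): integrate the ODE for 200 steps of dt = 0.2 with explicit Euler and with RK2 (midpoint), reusing `A` and `dt` defined above, and compare the final norms.
```python
x_euler = np.array([1.0, 1.0])
x_rk2 = np.array([1.0, 1.0])
for _ in range(200):
    x_euler = x_euler + dt * (A @ x_euler)        # explicit Euler step
    k1 = A @ x_rk2                                # RK2 (midpoint) step
    k2 = A @ (x_rk2 + 0.5 * dt * k1)
    x_rk2 = x_rk2 + dt * k2
print(np.linalg.norm(x_euler), np.linalg.norm(x_rk2))
# Euler's norm grows while RK2's shrinks, consistent with the factors above.
```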
### Plotting stability regions
```python
# translated from the matlab codes in the above note
v_x = np.linspace(-3, 1, 301)
v_y = np.linspace(-3, 3, 301)
x, y = np.meshgrid(v_x, v_y)
# calculate z = lambda * dt
z = x + (1.0j)*y
def draw_amplification_factors():
ctr_colors = ['r', 'b', 'g']
for idx, n in enumerate(kw_af):
af = kw_af[n]
plt.contour(x, y, np.abs(af(z)), [1], colors=[ctr_colors[idx]])
plt.plot(z_eigvs.real, z_eigvs.imag, 'o', ms=8, label='dt * eig')
plt.grid()
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
draw_amplification_factors()
plt.axis('equal')
plt.title('stability regions: Euler (red), RK2 (blue), RK4 (green)')
plt.subplot(1,2,2)
draw_amplification_factors()
plt.xlim([-0.1, 0.05])
plt.ylim([-0.4, 0.4])
plt.title('zoomed-in')
plt.legend()
plt.show()
```
## Conclusion
The convergence of RK2 and the divergence of Euler for the example in the paper can be explained in terms of the amplification factors and stability regions.
# END
```python
```
[Source notebook: notebook/gan_ode/nb_note_gan_ode_stability_region.ipynb — xyise/xyise (Apache-2.0 license)]
## Knowing the what, but not the where in Bayesian optimization. Vu Nguyen, Michael Osborne. ICML 2020
# Black-box optimization given the knowledge of the optimum value
\begin{align}
f^* = argmax_{x \in \mathcal{X}} f(x)
\end{align}
# Contact: Vu Nguyen, vu@ieee.org
```python
import sys
sys.path.insert(0,'..')
sys.path.insert(0,'../..')
from bayes_opt import BayesOpt_KnownOptimumValue,BayesOpt
import numpy as np
from bayes_opt import vis_ERM,functions
import warnings
warnings.filterwarnings("ignore")
```
# Transforming the surrogate function to stay below the optimum value $f^*$
```python
myfunction=functions.fourier(sd=0)
# myfunction.func: contains the black-box function
# myfunction.bounds: contains the SearchSpace
# myfunction.fstar: contains the known optimum value
# four initial points
x0=[3.1,4.4,8,9]
init_X=np.reshape(x0,(len(x0),1))
init_Y=myfunction.func(init_X)
# create an empty object for BO using GP
acq_name='ei'
bo=BayesOpt(myfunction.func,myfunction.bounds,acq_name=acq_name,verbose=0)
bo.init_with_data(init_X=init_X,init_Y=init_Y)
# create an empty object for BO using transformed GP
acq_name='erm'
IsTGP=1 # using transformed GP
bo_tgp=BayesOpt_KnownOptimumValue(myfunction.func,myfunction.bounds,fstar=myfunction.fstar, \
acq_name=acq_name,IsTGP=1)
bo_tgp.init_with_data(init_X=init_X,init_Y=init_Y)
vis_ERM.plot_1d_Fourier_GP_TGP(bo,bo_tgp,fstar=myfunction.fstar)
```
# Transforming the surrogate function closer to the optimum value $f^*$
```python
myfunction=functions.forrester(sd=0)
# myfunction.func: contains the black-box function
# myfunction.bounds: contains the SearchSpace
# myfunction.fstar: contains the known optimum value
# initial 3 points
x0=[0.1,0.46,0.91]
init_X=np.reshape(x0,(len(x0),1))
init_Y=myfunction.func(init_X)
# create an empty object for BO using vanilla GP
acq_name='ei'
bo=BayesOpt(myfunction.func,myfunction.bounds,acq_name=acq_name,verbose=0)
bo.init_with_data(init_X=init_X,init_Y=init_Y)
# create an empty object for BO using transformed GP
acq_name='erm'
IsTGP=1 # using TransformedGP
bo_tgp=BayesOpt_KnownOptimumValue(myfunction.func,myfunction.bounds,fstar=myfunction.fstar, \
acq_name=acq_name,IsTGP=1,verbose=0)
bo_tgp.init_with_data(init_X=init_X,init_Y=init_Y)
vis_ERM.plot_1d_Forrester_GP_TGP(bo,bo_tgp,fstar=myfunction.fstar)
```
# demonstrating different acquisition functions given known optimum value
```python
myfunction=functions.fourier(sd=0)
# myfunction.func: contains the black-box function
# myfunction.bounds: contains the SearchSpace
# myfunction.fstar: contains the known optimum value
x0=[3.2,4.4,8,9]
init_X=np.reshape(x0,(len(x0),1))
init_Y=myfunction.func(init_X)
# create an empty object for BO using transformed GP
acq_name='erm'
IsTGP=1
bo_tgp=BayesOpt_KnownOptimumValue(myfunction.func,myfunction.bounds,fstar=myfunction.fstar, \
acq_name=acq_name,IsTGP=IsTGP)
bo_tgp.init_with_data(init_X=init_X,init_Y=init_Y)
NN=1*myfunction.input_dim
for index in range(0,NN):
vis_ERM.plot_acq_bo_1d_tgp(bo_tgp,fstar=myfunction.fstar)
```
# Running multiple iterations
# using vanilla GP
```python
myfunction=functions.forrester(sd=0)
x0=[0.1,0.46,0.91]
init_X=np.reshape(x0,(len(x0),1))
init_Y=myfunction.func(init_X)
# create an empty object for BO using vanilla GP
acq_name='ei'
bo=BayesOpt(myfunction.func,myfunction.bounds,acq_name=acq_name,verbose=1)
bo.init_with_data(init_X=init_X,init_Y=init_Y)
vis_ERM.plot_1d_Forrester_EI_ERM(bo,fstar=myfunction.fstar)
# number of recommended parameters
NN=5*myfunction.input_dim
for index in range(0,NN):
xt=bo.select_next_point()
vis_ERM.plot_1d_Forrester_EI_ERM(bo,fstar=myfunction.fstar)
```
# Running multiple iterations
# using Transformed GP and ERM
```python
myfunction=functions.forrester(sd=0)
x0=[0.1,0.46,0.91]
init_X=np.reshape(x0,(len(x0),1))
init_Y=myfunction.func(init_X)
# create an empty object for BO using transformed GP
acq_name='erm'
IsTGP=1 # using Transformed GP
bo_tgp=BayesOpt_KnownOptimumValue(myfunction.func,myfunction.bounds,fstar=myfunction.fstar, \
acq_name=acq_name,IsTGP=IsTGP,verbose=1)
bo_tgp.init_with_data(init_X=init_X,init_Y=init_Y)
vis_ERM.plot_1d_tgp_Forrester_EI_ERM(bo_tgp,fstar=myfunction.fstar)
NN=5*myfunction.input_dim
for index in range(0,NN):
xt=bo_tgp.select_next_point()
print(bo_tgp.X_ori[-1])
vis_ERM.plot_1d_tgp_Forrester_EI_ERM(bo_tgp,fstar=myfunction.fstar)
```
[Source notebook: demo_visualization_knowing_the_what_but_not_the_where_BO.ipynb — ntienvu/KnowingOptimumValue_BO (MIT license)]
<a href="https://www.bigdatauniversity.com"></a>
<h1><center>Non Linear Regression Analysis</center></h1>
If the data shows a curvy trend, then linear regression will not produce very accurate results when compared to a non-linear regression because, as the name implies, linear regression presumes that the data is linear.
Let's learn about non-linear regressions and apply an example in Python. In this notebook, we fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014.
<h2 id="importing_libraries">Importing required libraries</h2>
```python
import numpy as np
import math as m
import matplotlib.pyplot as plt
%matplotlib inline
```
Though linear regression is very good for many problems, it cannot be used for all datasets. First, recall how linear regression models a dataset: it models a linear relation between a dependent variable y and an independent variable x with a simple equation of degree 1, for example $y = 2x + 3$.
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 2*(x) + 3
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
#plt.figure(figsize=(8,6))
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
Non-linear regression models a relationship between independent variables $x$ and a dependent variable $y$ using a non-linear function. Essentially, any relationship that is not linear can be termed non-linear, and it is often represented by a polynomial of degree $k$ (the maximum power of $x$).
$$ \ y = a x^3 + b x^2 + c x + d \ $$
Non-linear functions can have elements like exponentials, logarithms, fractions, and others. For example: $$ y = \log(x)$$
Or even, more complicated such as :
$$ y = \log(a x^3 + b x^2 + c x + d)$$
Let's take a look at a cubic function's graph.
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 1*(x**3) + 1*(x**2) + 1*x + 3
y_noise = 20 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
As you can see, this function has $x^3$ and $x^2$ terms. Also, the graph of this function is not a straight line over the 2D plane, so this is a non-linear function.
Some other types of non-linear functions are:
### Quadratic
$$ Y = X^2 $$
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = np.power(x,2)
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
### Exponential
An exponential function with base c is defined by $$ Y = a + b c^X$$ where b ≠0, c > 0 , c ≠1, and x is any real number. The base, c, is constant and the exponent, x, is a variable.
```python
X = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
Y= np.exp(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
### Logarithmic
The response $y$ is the result of applying a logarithmic map from the input $x$ to the output variable $y$. In its simplest form: $$ y = \log(x)$$
Please consider that instead of $x$, we can use $X$, which can be polynomial representation of the $x$'s. In general form it would be written as
\begin{equation}
y = \log(X)
\end{equation}
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = np.log(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
### Sigmoidal/Logistic
$$ Y = a + \frac{b}{1+ c^{(X-d)}}$$
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = 1/(1+np.power(m.e, -X))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
<a id="ref2"></a>
# Non-Linear Regression example
For an example, we're going to try and fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. We download a dataset with two columns, the first, a year between 1960 and 2014, the second, China's corresponding annual gross domestic income in US dollars for that year.
```python
import numpy as np
import pandas as pd
#downloading dataset
!wget -nv -O china_gdp.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv
df = pd.read_csv("china_gdp.csv")
df.head(10)
```
2020-04-14 14:10:36 URL:https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv [1218/1218] -> "china_gdp.csv" [1]
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Year</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1960</td>
<td>5.918412e+10</td>
</tr>
<tr>
<th>1</th>
<td>1961</td>
<td>4.955705e+10</td>
</tr>
<tr>
<th>2</th>
<td>1962</td>
<td>4.668518e+10</td>
</tr>
<tr>
<th>3</th>
<td>1963</td>
<td>5.009730e+10</td>
</tr>
<tr>
<th>4</th>
<td>1964</td>
<td>5.906225e+10</td>
</tr>
<tr>
<th>5</th>
<td>1965</td>
<td>6.970915e+10</td>
</tr>
<tr>
<th>6</th>
<td>1966</td>
<td>7.587943e+10</td>
</tr>
<tr>
<th>7</th>
<td>1967</td>
<td>7.205703e+10</td>
</tr>
<tr>
<th>8</th>
<td>1968</td>
<td>6.999350e+10</td>
</tr>
<tr>
<th>9</th>
<td>1969</td>
<td>7.871882e+10</td>
</tr>
</tbody>
</table>
</div>
__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
### Plotting the Dataset ###
This is what the datapoints look like. The curve resembles either a logistic or an exponential function. The growth starts off slow, then from 2005 onward it becomes very significant, and finally it decelerates slightly in the 2010s.
```python
plt.figure(figsize=(8,5))
x_data, y_data = (df["Year"].values, df["Value"].values)
plt.plot(x_data, y_data, 'ro')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
```
### Choosing a model ###
From an initial look at the plot, we determine that the logistic function could be a good approximation,
since it has the property of starting with a slow growth, increasing growth in the middle, and then decreasing again at the end; as illustrated below:
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = 1.0 / (1.0 + np.exp(-X))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
The formula for the logistic function is the following:
$$ \hat{Y} = \frac{1}{1+e^{-\beta_1(X-\beta_2)}}$$
$\beta_1$: Controls the curve's steepness,
$\beta_2$: Slides the curve on the x-axis.
### Building The Model ###
Now, let's build our regression model and initialize its parameters.
```python
def sigmoid(x, Beta_1, Beta_2):
y = 1 / (1 + np.exp(-Beta_1*(x-Beta_2)))
return y
```
Lets look at a sample sigmoid line that might fit with the data:
```python
beta_1 = 0.10
beta_2 = 1990.0
#logistic function
Y_pred = sigmoid(x_data, beta_1 , beta_2)
#plot initial prediction against datapoints
plt.plot(x_data, Y_pred*15000000000000.)
plt.plot(x_data, y_data, 'ro')
```
Our task here is to find the best parameters for our model. Let's first normalize our x and y:
```python
# Lets normalize our data
xdata =x_data/max(x_data)
ydata =y_data/max(y_data)
```
#### How do we find the best parameters for our fit?
We can use __curve_fit__, which uses non-linear least squares to fit our sigmoid function to the data. It finds optimal values for the parameters so that the sum of the squared residuals of sigmoid(xdata, *popt) - ydata is minimized.
popt are our optimized parameters.
```python
from scipy.optimize import curve_fit
popt, pcov = curve_fit(sigmoid, xdata, ydata)
#print the final parameters
print(" beta_1 = %f, beta_2 = %f" % (popt[0], popt[1]))
```
beta_1 = 690.447527, beta_2 = 0.997207
Now we plot our resulting regression model.
```python
x = np.linspace(1960, 2015, 55)
x = x/max(x)
plt.figure(figsize=(8,5))
y = sigmoid(x, *popt)
plt.plot(xdata, ydata, 'ro', label='data')
plt.plot(x,y, linewidth=3.0, label='fit')
plt.legend(loc='best')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
```
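Once we have popt, we can also use the fitted curve to extrapolate. A small usage sketch (not part of the original lab; it assumes x_data, y_data and popt from the cells above, and remembers that inputs/outputs were scaled by their maxima):
```python
year = 2016
gdp_pred = sigmoid(year / max(x_data), *popt) * max(y_data)   # scale in, then back out
print("Predicted GDP for %d: %.3e USD" % (year, gdp_pred))
```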
## Practice
Can you calculate what is the accuracy of our model?
```python
# write your code here
# split data into train/test
msk = np.random.rand(len(df)) < 0.8
train_x = xdata[msk]
test_x = xdata[~msk]
train_y = ydata[msk]
test_y = ydata[~msk]
# build the model using train set
popt, pcov = curve_fit(sigmoid, train_x, train_y)
# predict using test set
y_hat = sigmoid(test_x, *popt)
# evaluation
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.2f" % r2_score(y_hat , test_y) )
```
Mean absolute error: 0.03
Residual sum of squares (MSE): 0.00
R2-score: 0.91
Double-click __here__ for the solution.
<!-- Your answer is below:
# split data into train/test
msk = np.random.rand(len(df)) < 0.8
train_x = xdata[msk]
test_x = xdata[~msk]
train_y = ydata[msk]
test_y = ydata[~msk]
# build the model using train set
popt, pcov = curve_fit(sigmoid, train_x, train_y)
# predict using test set
y_hat = sigmoid(test_x, *popt)
# evaluation
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.2f" % r2_score(y_hat , test_y) )
-->
<h2>Want to learn more?</h2>
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="http://cocl.us/ML0101EN-SPSSModeler">SPSS Modeler</a>
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://cocl.us/ML0101EN_DSX">Watson Studio</a>
<h3>Thanks for completing this lesson!</h3>
<h4>Author: <a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a></h4>
<p><a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a>, PhD is a Data Scientist in IBM with a track record of developing enterprise level applications that substantially increases clients’ ability to turn data into actionable knowledge. He is a researcher in data mining field and expert in developing advanced analytic methods like machine learning and statistical modelling on large datasets.</p>
<hr>
<p>Copyright © 2018 <a href="https://cocl.us/DX0108EN_CC">Cognitive Class</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.</p>
[Source notebook: ML0101EN-Reg-NoneLinearRegression-py-v1.ipynb — netomenoci/Machine-Learning-With-Python-IBM (BSD-4-Clause-UC license)]
<a href="https://colab.research.google.com/github/tallywiesenberg/DS-Unit-1-Sprint-1-Dealing-With-Data/blob/master/LS_DS12_133_High_Dimensional_Data_Assignment.ipynb" target="_parent"></a>
# Vertical Line Test
## 1.1 Create two graphs, one that passes the vertical line test and one that does not.
```
import matplotlib.pyplot as plt
```
```
fig, ax = plt.subplots(figsize = (8,8))
ax.grid()
ax.set_xlim(-10,10)
ax.set_ylim(-10,10)
ax.axhline(0, color = 'k')
ax.axvline(0, color = 'k')
ax.arrow(0,0,9,9)
ax.arrow(0,0,-7.5,-2.5)
ax.arrow(-7.5,-2.5,5,-2.5)
ax.arrow(-2.5,-5,-5,-2.5)
ax.axvline(-5, color = 'r')
ax.axvline(5, color='r')
plt.show()
```
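A second, cleaner pair of examples (a sketch added for illustration): $y = x$ passes the vertical line test, while the unit circle $x^2 + y^2 = 1$ does not, since a vertical line such as $x = 0.5$ crosses it twice.
```
import numpy as np

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
xs = np.linspace(-1, 1, 50)
ax1.plot(xs, xs)
ax1.set_title('Passes: y = x')
t = np.linspace(0, 2 * np.pi, 200)
ax2.plot(np.cos(t), np.sin(t))               # unit circle: not a function of x
ax2.axvline(0.5, color='r', linestyle='--')  # this vertical line crosses it twice
ax2.set_title('Fails: unit circle')
plt.show()
```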
## 1.2 Why are graphs that don't pass the vertical line test not considered "functions?"
Because a function is a mapping from a set of inputs to a set of outputs in which **no input is mapped to multiple outputs**. A curve that fails the vertical line test assigns more than one output to at least one input, so it cannot be a function.
# Functions as Relations
## 2.1 Which of the following relations are functions? Why?
\begin{align}
\text{Relation 1: } \{(1, 2), (3, 2), (1, 3)\}
\\
\text{Relation 2: } \{(1, 3), (2, 3), (6, 7)\}
\\
\text{Relation 3: } \{(9, 4), (2, 1), (9, 6)\}
\\
\text{Relation 4: } \{(6, 2), (8, 3), (6, 4)\}
\\
\text{Relation 5: } \{(2, 6), (2, 7), (2, 4)\}
\end{align}
Only Relation 2 is a function, because no input value is mapped to multiple output values. In each of the other relations, some input appears with two or more different outputs (e.g. 1 in Relation 1, 9 in Relation 3, 6 in Relation 4 and 2 in Relation 5).
# Functions as a mapping between dimensions
## 3.1 for the following functions what is the dimensionality of the domain (input) and codomain (range/output)?
\begin{align}
m(𝑥_1,𝑥_2,𝑥_3)=(x_1+x_2, x_1+x_3, x_2+x_3)
\\
n(𝑥_1,𝑥_2,𝑥_3,𝑥_4)=(x_2^2 + x_3, x_2x_4)
\end{align}
For $m$: the domain is 3-dimensional and the codomain is 3-dimensional.
For $n$: the domain is 4-dimensional and the codomain is 2-dimensional.
## 3.2 Do you think it's possible to create a function that maps from a lower dimensional space to a higher dimensional space? If so, provide an example.
Yes, it is possible as long as the extra coordinates are defined in terms of the existing inputs — for example, $f(x) = (x, x^2)$ maps the 1-dimensional real line into 2-dimensional space. What you cannot do is recover genuinely new, independent information (like adding depth to a square to get a cube) from the lower-dimensional data alone.
# Vector Transformations
## 4.1 Plug the corresponding unit vectors into each function. Use the output vectors to create a transformation matrix.
\begin{align}
p(\begin{bmatrix}x_1 \\ x_2 \end{bmatrix}) = \begin{bmatrix} x_1 + 3x_2 \\2 x_2 - x_1 \\ \end{bmatrix}
\\
\\
q(\begin{bmatrix}x_1 \\ x_2 \\ x_3\end{bmatrix}) = \begin{bmatrix} 4x_1 + x_2 + 2x_3 \\2 x_2 - x_1 + 3x_3 \\ 5x_1 - 2x_3 + x_2 \end{bmatrix}
\end{align}
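Plugging in the unit vectors (worked out here directly from the definitions above), the columns of each transformation matrix are the images of $e_1, e_2, \dots$:
\begin{align}
P = \begin{bmatrix} 1 & 3 \\ -1 & 2 \end{bmatrix}, \qquad
Q = \begin{bmatrix} 4 & 1 & 2 \\ -1 & 2 & 3 \\ 5 & 1 & -2 \end{bmatrix}
\end{align}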
## 4.2 Verify that your transformation matrices are correct by choosing an input matrix and calculating the result both via the traditional functions above and also via vector-matrix multiplication.
```
import numpy as np

# Transformation matrix for p (columns are the images of the unit vectors)
P = np.array([[1, 3],
              [-1, 2]])

# Choose an input vector and compare direct evaluation with matrix multiplication
x = np.array([[4],
              [5]])
print('P @ x:')
print(P @ x)                       # -> [[19], [6]]
print([4 + 3*5, 2*5 - 4])          # p evaluated directly -> [19, 6]
```
# Eigenvalues and Eigenvectors
## 5.1 In your own words, give an explanation for the intuition behind eigenvalues and eigenvectors.
Eigenvectors are the vectors that don't change the direction they point in when the transformation is applied; they are only stretched or shrunk along the same line.
Eigenvalues are the corresponding scaling factors: they tell you by how much each eigenvector is stretched or shrunk.
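A tiny numerical check (a sketch): for every eigenpair, $Av = \lambda v$.
```
import numpy as np

A = np.array([[2, 0],
              [0, 3]])
vals, vecs = np.linalg.eig(A)      # eigenvalues 2 and 3, eigenvectors e1 and e2
for lam, v in zip(vals, vecs.T):
    print(lam, np.allclose(A @ v, lam * v))   # prints True for each pair
```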
# The Curse of Dimensionality
## 6.1 What are some of the challenges of working with high dimensional spaces?
Distances between data points become less meaningful: as dimensions are added the points are pulled apart and become nearly equidistant, the space becomes extremely sparse, and far more observations are needed to cover it.
## 6.2 What is the rule of thumb for how many observations you should have compared to parameters in your model?
A common rule of thumb is to have at least 5 times as many observations as parameters in the model.
# Principal Component Analysis
## 7.1 Code for loading and cleaning the 2013 national dataset from the [Housing Affordability Data System (HADS)](https://www.huduser.gov/portal/datasets/hads/hads.html) --housing data, can be found below.
## Perform PCA on the processed dataset `national_processed` (Make sure you standardize your data!) and then make a scatterplot of PC1 against PC2. Some of our discussion and work around PCA with this dataset will continue during tomorrow's lecture and assignment.
Not only does this dataset have decent amount columns to begin with (99), but in preparing the data for PCA we have also [one-hot-encoded](https://hackernoon.com/what-is-one-hot-encoding-why-and-when-do-you-have-to-use-it-e3c6186d008f#targetText=One%20hot%20encoding%20is%20a,the%20entry%20in%20the%20dataset.) all of the categorical variables. This has the effect of creating a new column for each individual category of each categorical variable. After processing this dataset has 64738 columns. --Das a lot of columns.
Don't worry too much about the mechanics of one-hot encoding right now, you will learn and experiment with a whole bunch of categorical encoding approaches in unit 2.
The code below will read in the dataset and perform the one-hot encoding of the categorical variables. Start adding your PCA code at the bottom of the provided code.
```
from urllib.request import urlopen
from zipfile import ZipFile
from io import BytesIO
import os.path
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Read Natinal Data
national_url = 'https://www.huduser.gov/portal/datasets/hads/hads2013n_ASCII.zip'
national_file = 'thads2013n.txt'
if os.path.exists(national_file):
national = pd.read_csv(national_file)
else:
z_national = urlopen(national_url)
zip_national = ZipFile(BytesIO(z_national.read())).extract(national_file)
national = pd.read_csv(zip_national)
print(national.shape)
national.head()
```
(64535, 99)
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>CONTROL</th>
<th>AGE1</th>
<th>METRO3</th>
<th>REGION</th>
<th>LMED</th>
<th>FMR</th>
<th>L30</th>
<th>L50</th>
<th>L80</th>
<th>IPOV</th>
<th>BEDRMS</th>
<th>BUILT</th>
<th>STATUS</th>
<th>TYPE</th>
<th>VALUE</th>
<th>VACANCY</th>
<th>TENURE</th>
<th>NUNITS</th>
<th>ROOMS</th>
<th>WEIGHT</th>
<th>PER</th>
<th>ZINC2</th>
<th>ZADEQ</th>
<th>ZSMHC</th>
<th>STRUCTURETYPE</th>
<th>OWNRENT</th>
<th>UTILITY</th>
<th>OTHERCOST</th>
<th>COST06</th>
<th>COST12</th>
<th>COST08</th>
<th>COSTMED</th>
<th>TOTSAL</th>
<th>ASSISTED</th>
<th>GLMED</th>
<th>GL30</th>
<th>GL50</th>
<th>GL80</th>
<th>APLMED</th>
<th>ABL30</th>
<th>...</th>
<th>COST08RELPOVCAT</th>
<th>COST08RELFMRPCT</th>
<th>COST08RELFMRCAT</th>
<th>COST12RELAMIPCT</th>
<th>COST12RELAMICAT</th>
<th>COST12RELPOVPCT</th>
<th>COST12RELPOVCAT</th>
<th>COST12RELFMRPCT</th>
<th>COST12RELFMRCAT</th>
<th>COSTMedRELAMIPCT</th>
<th>COSTMedRELAMICAT</th>
<th>COSTMedRELPOVPCT</th>
<th>COSTMedRELPOVCAT</th>
<th>COSTMedRELFMRPCT</th>
<th>COSTMedRELFMRCAT</th>
<th>FMTZADEQ</th>
<th>FMTMETRO3</th>
<th>FMTBUILT</th>
<th>FMTSTRUCTURETYPE</th>
<th>FMTBEDRMS</th>
<th>FMTOWNRENT</th>
<th>FMTCOST06RELPOVCAT</th>
<th>FMTCOST08RELPOVCAT</th>
<th>FMTCOST12RELPOVCAT</th>
<th>FMTCOSTMEDRELPOVCAT</th>
<th>FMTINCRELPOVCAT</th>
<th>FMTCOST06RELFMRCAT</th>
<th>FMTCOST08RELFMRCAT</th>
<th>FMTCOST12RELFMRCAT</th>
<th>FMTCOSTMEDRELFMRCAT</th>
<th>FMTINCRELFMRCAT</th>
<th>FMTCOST06RELAMICAT</th>
<th>FMTCOST08RELAMICAT</th>
<th>FMTCOST12RELAMICAT</th>
<th>FMTCOSTMEDRELAMICAT</th>
<th>FMTINCRELAMICAT</th>
<th>FMTASSISTED</th>
<th>FMTBURDEN</th>
<th>FMTREGION</th>
<th>FMTSTATUS</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>'100003130103'</td>
<td>82</td>
<td>'3'</td>
<td>'1'</td>
<td>73738</td>
<td>956</td>
<td>15738</td>
<td>26213</td>
<td>40322</td>
<td>11067</td>
<td>2</td>
<td>2006</td>
<td>'1'</td>
<td>1</td>
<td>40000</td>
<td>-6</td>
<td>'1'</td>
<td>1</td>
<td>6</td>
<td>3117.394239</td>
<td>1</td>
<td>18021</td>
<td>'1'</td>
<td>533</td>
<td>1</td>
<td>'1'</td>
<td>169.000000</td>
<td>213.750000</td>
<td>648.588189</td>
<td>803.050535</td>
<td>696.905247</td>
<td>615.156712</td>
<td>0</td>
<td>-9</td>
<td>73738</td>
<td>15738</td>
<td>26213</td>
<td>40322</td>
<td>51616.6</td>
<td>20234.571429</td>
<td>...</td>
<td>4</td>
<td>72.898038</td>
<td>2</td>
<td>48.402635</td>
<td>2</td>
<td>290.250487</td>
<td>4</td>
<td>84.001102</td>
<td>2</td>
<td>37.077624</td>
<td>2</td>
<td>222.339102</td>
<td>4</td>
<td>64.346936</td>
<td>2</td>
<td>'1 Adequate'</td>
<td>'-5'</td>
<td>'2000-2009'</td>
<td>'1 Single Family'</td>
<td>'2 2BR'</td>
<td>'1 Owner'</td>
<td>'4 200%+ Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'3 150-200% Poverty'</td>
<td>'2 50.1 - 100% FMR'</td>
<td>'2 50.1 - 100% FMR'</td>
<td>'2 50.1 - 100% FMR'</td>
<td>'2 50.1 - 100% FMR'</td>
<td>'1 LTE 50% FMR'</td>
<td>'2 30 - 50% AMI'</td>
<td>'2 30 - 50% AMI'</td>
<td>'2 30 - 50% AMI'</td>
<td>'2 30 - 50% AMI'</td>
<td>'2 30 - 50% AMI'</td>
<td>'.'</td>
<td>'2 30% to 50%'</td>
<td>'-5'</td>
<td>'-5'</td>
</tr>
<tr>
<th>1</th>
<td>'100006110249'</td>
<td>50</td>
<td>'5'</td>
<td>'3'</td>
<td>55846</td>
<td>1100</td>
<td>17165</td>
<td>28604</td>
<td>45744</td>
<td>24218</td>
<td>4</td>
<td>1980</td>
<td>'1'</td>
<td>1</td>
<td>130000</td>
<td>-6</td>
<td>'1'</td>
<td>1</td>
<td>6</td>
<td>2150.725544</td>
<td>4</td>
<td>122961</td>
<td>'1'</td>
<td>487</td>
<td>1</td>
<td>'1'</td>
<td>245.333333</td>
<td>58.333333</td>
<td>1167.640781</td>
<td>1669.643405</td>
<td>1324.671218</td>
<td>1058.988479</td>
<td>123000</td>
<td>-9</td>
<td>55846</td>
<td>17165</td>
<td>28604</td>
<td>45744</td>
<td>55846.0</td>
<td>19911.400000</td>
<td>...</td>
<td>4</td>
<td>120.424656</td>
<td>3</td>
<td>103.094063</td>
<td>6</td>
<td>275.768999</td>
<td>4</td>
<td>151.785764</td>
<td>3</td>
<td>65.388468</td>
<td>4</td>
<td>174.909320</td>
<td>3</td>
<td>96.271680</td>
<td>2</td>
<td>'1 Adequate'</td>
<td>'-5'</td>
<td>'1980-1989'</td>
<td>'1 Single Family'</td>
<td>'4 4BR+'</td>
<td>'1 Owner'</td>
<td>'3 150-200% Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'3 150-200% Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'3 GT FMR'</td>
<td>'3 GT FMR'</td>
<td>'3 GT FMR'</td>
<td>'2 50.1 - 100% FMR'</td>
<td>'3 GT FMR'</td>
<td>'4 60 - 80% AMI'</td>
<td>'4 60 - 80% AMI'</td>
<td>'6 100 - 120% AMI'</td>
<td>'4 60 - 80% AMI'</td>
<td>'7 120% AMI +'</td>
<td>'.'</td>
<td>'1 Less than 30%'</td>
<td>'-5'</td>
<td>'-5'</td>
</tr>
<tr>
<th>2</th>
<td>'100006370140'</td>
<td>53</td>
<td>'5'</td>
<td>'3'</td>
<td>55846</td>
<td>1100</td>
<td>13750</td>
<td>22897</td>
<td>36614</td>
<td>15470</td>
<td>4</td>
<td>1985</td>
<td>'1'</td>
<td>1</td>
<td>150000</td>
<td>-6</td>
<td>'1'</td>
<td>1</td>
<td>7</td>
<td>2213.789404</td>
<td>2</td>
<td>27974</td>
<td>'1'</td>
<td>1405</td>
<td>1</td>
<td>'1'</td>
<td>159.000000</td>
<td>37.500000</td>
<td>1193.393209</td>
<td>1772.627006</td>
<td>1374.582175</td>
<td>1068.025168</td>
<td>28000</td>
<td>-9</td>
<td>55846</td>
<td>13750</td>
<td>22897</td>
<td>36614</td>
<td>44676.8</td>
<td>19937.500000</td>
<td>...</td>
<td>4</td>
<td>124.962016</td>
<td>3</td>
<td>109.452905</td>
<td>6</td>
<td>458.339239</td>
<td>4</td>
<td>161.147910</td>
<td>3</td>
<td>65.946449</td>
<td>4</td>
<td>276.153890</td>
<td>4</td>
<td>97.093197</td>
<td>2</td>
<td>'1 Adequate'</td>
<td>'-5'</td>
<td>'1980-1989'</td>
<td>'1 Single Family'</td>
<td>'4 4BR+'</td>
<td>'1 Owner'</td>
<td>'4 200%+ Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'3 150-200% Poverty'</td>
<td>'3 GT FMR'</td>
<td>'3 GT FMR'</td>
<td>'3 GT FMR'</td>
<td>'2 50.1 - 100% FMR'</td>
<td>'2 50.1 - 100% FMR'</td>
<td>'4 60 - 80% AMI'</td>
<td>'5 80 - 100% AMI'</td>
<td>'6 100 - 120% AMI'</td>
<td>'4 60 - 80% AMI'</td>
<td>'4 60 - 80% AMI'</td>
<td>'.'</td>
<td>'3 50% or More'</td>
<td>'-5'</td>
<td>'-5'</td>
</tr>
<tr>
<th>3</th>
<td>'100006520140'</td>
<td>67</td>
<td>'5'</td>
<td>'3'</td>
<td>55846</td>
<td>949</td>
<td>13750</td>
<td>22897</td>
<td>36614</td>
<td>13964</td>
<td>3</td>
<td>1985</td>
<td>'1'</td>
<td>1</td>
<td>200000</td>
<td>-6</td>
<td>'1'</td>
<td>1</td>
<td>6</td>
<td>2364.585097</td>
<td>2</td>
<td>32220</td>
<td>'1'</td>
<td>279</td>
<td>1</td>
<td>'1'</td>
<td>179.000000</td>
<td>70.666667</td>
<td>1578.857612</td>
<td>2351.169341</td>
<td>1820.442900</td>
<td>1411.700224</td>
<td>0</td>
<td>-9</td>
<td>55846</td>
<td>13750</td>
<td>22897</td>
<td>36614</td>
<td>44676.8</td>
<td>17875.000000</td>
<td>...</td>
<td>4</td>
<td>191.827492</td>
<td>3</td>
<td>161.926709</td>
<td>7</td>
<td>673.494512</td>
<td>4</td>
<td>247.752301</td>
<td>3</td>
<td>97.224801</td>
<td>5</td>
<td>404.382763</td>
<td>4</td>
<td>148.756610</td>
<td>3</td>
<td>'1 Adequate'</td>
<td>'-5'</td>
<td>'1980-1989'</td>
<td>'1 Single Family'</td>
<td>'3 3BR'</td>
<td>'1 Owner'</td>
<td>'4 200%+ Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'3 GT FMR'</td>
<td>'3 GT FMR'</td>
<td>'3 GT FMR'</td>
<td>'3 GT FMR'</td>
<td>'2 50.1 - 100% FMR'</td>
<td>'6 100 - 120% AMI'</td>
<td>'7 120% AMI +'</td>
<td>'7 120% AMI +'</td>
<td>'5 80 - 100% AMI'</td>
<td>'4 60 - 80% AMI'</td>
<td>'.'</td>
<td>'1 Less than 30%'</td>
<td>'-5'</td>
<td>'-5'</td>
</tr>
<tr>
<th>4</th>
<td>'100007130148'</td>
<td>26</td>
<td>'1'</td>
<td>'3'</td>
<td>60991</td>
<td>737</td>
<td>14801</td>
<td>24628</td>
<td>39421</td>
<td>15492</td>
<td>2</td>
<td>1980</td>
<td>'1'</td>
<td>1</td>
<td>-6</td>
<td>-6</td>
<td>'2'</td>
<td>100</td>
<td>4</td>
<td>2314.524902</td>
<td>2</td>
<td>96874</td>
<td>'1'</td>
<td>759</td>
<td>5</td>
<td>'2'</td>
<td>146.000000</td>
<td>12.500000</td>
<td>759.000000</td>
<td>759.000000</td>
<td>759.000000</td>
<td>759.000000</td>
<td>96900</td>
<td>0</td>
<td>60991</td>
<td>14801</td>
<td>24628</td>
<td>39421</td>
<td>48792.8</td>
<td>16651.125000</td>
<td>...</td>
<td>3</td>
<td>102.985075</td>
<td>3</td>
<td>55.308707</td>
<td>3</td>
<td>195.972115</td>
<td>3</td>
<td>102.985075</td>
<td>3</td>
<td>55.308707</td>
<td>3</td>
<td>195.972115</td>
<td>3</td>
<td>102.985075</td>
<td>3</td>
<td>'1 Adequate'</td>
<td>'Central City'</td>
<td>'1980-1989'</td>
<td>'5 50+ units'</td>
<td>'2 2BR'</td>
<td>'2 Renter'</td>
<td>'3 150-200% Poverty'</td>
<td>'3 150-200% Poverty'</td>
<td>'3 150-200% Poverty'</td>
<td>'3 150-200% Poverty'</td>
<td>'4 200%+ Poverty'</td>
<td>'3 GT FMR'</td>
<td>'3 GT FMR'</td>
<td>'3 GT FMR'</td>
<td>'3 GT FMR'</td>
<td>'3 GT FMR'</td>
<td>'3 50 - 60% AMI'</td>
<td>'3 50 - 60% AMI'</td>
<td>'3 50 - 60% AMI'</td>
<td>'3 50 - 60% AMI'</td>
<td>'7 120% AMI +'</td>
<td>'0 Not Assisted'</td>
<td>'1 Less than 30%'</td>
<td>'-5'</td>
<td>'-5'</td>
</tr>
</tbody>
</table>
<p>5 rows × 99 columns</p>
</div>
```
# Look at datatypes
# a lot of object datatypes even though they seem to be strings of numbers.
national.dtypes
```
CONTROL object
AGE1 int64
METRO3 object
REGION object
LMED int64
...
FMTINCRELAMICAT object
FMTASSISTED object
FMTBURDEN object
FMTREGION object
FMTSTATUS object
Length: 99, dtype: object
```
# check for null values
national.isnull().sum().any()
```
False
```
# check for number of categorical vs numeric columns
cat_cols = national.columns[national.dtypes=='object']
num_cols = national.columns[national.dtypes!='object']
print(f'{len(cat_cols)} categorical columns')
print(f'{len(num_cols)} numerical columns')
```
32 categorical columns
67 numerical columns
```
# We're making a copy of our data in case we mess something up.
national_processed = national.copy()
# Categorically Encode our Variables:
# They need to all be numeric before we do PCA.
# https://pbpython.com/categorical-encoding.html
# Cast categorical columns to "category" data type
national_processed[cat_cols] = national_processed[cat_cols].astype('category')
national_processed.dtypes
```
CONTROL category
AGE1 int64
METRO3 category
REGION category
LMED int64
...
FMTINCRELAMICAT category
FMTASSISTED category
FMTBURDEN category
FMTREGION category
FMTSTATUS category
Length: 99, dtype: object
```
# Replace all category cell values with their numeric category codes
for col in cat_cols:
national_processed[col] = national_processed[col].cat.codes
print(national_processed.shape)
national_processed.head(10)
```
(64535, 99)
[HTML preview of `national_processed.head(10)` omitted: 10 rows × 99 columns]
```
# Now we only have numeric columns (ints and floats)
national_processed.dtypes
```
CONTROL int32
AGE1 int64
METRO3 int8
REGION int8
LMED int64
...
FMTINCRELAMICAT int8
FMTASSISTED int8
FMTBURDEN int8
FMTREGION int8
FMTSTATUS int8
Length: 99, dtype: object
```
### Your Code Here
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import numpy as np
```
```
# Separate out the feature matrix: two features (ROOMS, BEDRMS) stored as rows
X = np.array([national_processed['ROOMS'], national_processed['BEDRMS']])
# A third feature, e.g. national_processed['COSTMED'], could be appended to the list above
```
```
X
```
array([[6, 6, 7, ..., 5, 3, 3],
[2, 4, 4, ..., 3, 1, 1]])
```
# First (loop-based) attempt at standardizing each feature by hand
centered1 = []
for i in range(len(X[0])):
    centr = X[0][i] - X[0].mean()
    centered1.append(centr)

std1 = np.std(X[0])  # X[0] is 1-D, so no axis argument is needed

standardized1 = []
for i in range(len(centered1)):
    stndrzed = centered1[i] / std1
    standardized1.append(stndrzed)

# Repeat the same centering and scaling for the second feature (BEDRMS)
standardized2 = [(X[1][i] - X[1].mean()) / np.std(X[1]) for i in range(len(X[1]))]

Z = np.array([standardized1, standardized2])
```
```
# Vectorized version: center and scale each feature (each row of X) separately
means = np.mean(X, axis=1, keepdims=True)
centered_data = X - means
stand_devs = np.std(X, axis=1, keepdims=True)
standard_data = centered_data / stand_devs
```
```
Z = np.array(standard_data)  # keep features as rows: np.cov treats each row of its input as a variable
```
```
covariance_mat = np.cov(Z)
```
```
vals, vects = np.linalg.eig(covariance_mat)
print("eigenvalues:", vals)
print("eigenvectors:")
print(vects)
```
```
projected_d = vects.T.dot(standard_data)  # project the standardized data onto the eigenvector basis
pd.DataFrame(projected_d)
```
```
covariance_matrix = np.cov(Z)
evalues, evectors = np.linalg.eig(covariance_matrix)
print("eigenvalues:", evalues)
print("eigenvectors:")
print(evectors)
```
```
projected = evectors.T.dot(Z)  # project onto the eigenvector basis, as above
projected_df = pd.DataFrame(projected.T)
projected_df.head()
```
```
x = projected_d[0]
y = projected_d[1]
fig, ax = plt.subplots(figsize=(8,8))
ax.grid()
ax.set_xlim(-10,10)
ax.set_ylim(-10,10)
ax.axhline(0,color = 'k')
ax.axvline(0,color = 'k')
plt.scatter(x,y)
plt.show()
```
```
# Principal Component Analysis
from numpy import array
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
# define a matrix
print("Data: \n", X)
# Standardize the Data
# Instantiate a Standard Scaler object
scaler = StandardScaler()
# Use the object to fit_transform our data
Z = scaler.fit_transform(X)
print("\n Standardized Data: \n", Z)
# create the PCA instance
pca = PCA(2)
# fit on data
pca.fit(Z)
# access values and vectors
print("\n Eigenvectors: \n", pca.components_)
print("\n Eigenvalues: \n",pca.explained_variance_)
# transform data
B = pca.transform(Z)
print("\n Projected Data: \n", B)
```
Data:
[[6 6 7 ... 5 3 3]
[2 4 4 ... 3 1 1]]
Standardized Data:
[[ 1. 1. 1. ... 1. 1. 1.]
[-1. -1. -1. ... -1. -1. -1.]]
Eigenvectors:
[[ 3.93645878e-03 3.93645878e-03 3.93645878e-03 ... 3.93645878e-03
3.93645878e-03 3.93645878e-03]
[ 9.99992252e-01 -1.54958277e-05 -1.54958277e-05 ... -1.54958277e-05
-1.54958277e-05 -1.54958277e-05]]
Eigenvalues:
[1.29068000e+05 2.67203605e-21]
Projected Data:
[[ 2.54035431e+02 9.74548766e-11]
[-2.54035431e+02 -9.74548766e-11]]
# Stretch Goals
## 1) Perform further data exploration on the HADS national dataset (the version before we one-hot encoded it) Make scatterplots and see if you can see any resemblance between the original scatterplots and the plot of the principal components that you made in 7.1.
(You may or may not see very much resemblance depending on the variables you choose, and that's ok!)
## 2) Study "Scree Plots" and then try and make one for your PCA dataset. How many principal components do you need to retain in order for your PCs to contain 90% of the explained variance?
We will present this topic formally at the beginning of tomorrow's lecture, so if you figure this stretch goal out, you're ahead of the game.
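If you want a head start, here is one possible sketch of a scree plot. The choice of columns is purely illustrative (an assumption on my part): swap in whichever numeric columns you actually analyzed.
```
# A possible scree plot for stretch goal 2 (illustrative column choice)
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import numpy as np

cols = ['ROOMS', 'BEDRMS', 'LMED', 'FMR', 'L30', 'L50', 'L80', 'IPOV', 'VALUE', 'COSTMED']
X_scree = StandardScaler().fit_transform(national_processed[cols])  # rows = households, columns = features

pca_scree = PCA().fit(X_scree)               # keep every component
var = pca_scree.explained_variance_ratio_

plt.plot(np.arange(1, len(var) + 1), var, 'o-')
plt.xlabel('principal component number')
plt.ylabel('fraction of variance explained')
plt.show()

# how many PCs are needed to reach 90% of the explained variance?
print(np.argmax(np.cumsum(var) >= 0.90) + 1, "components reach 90% of the variance")
```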
## 3) Explore further the intuition behind eigenvalues and eigenvectors by creating your very own eigenfaces:
Prioritize self-study over this stretch goal if you are not semi-comfortable with the topics of PCA, Eigenvalues, and Eigenvectors.
You don't necessarily have to use this resource, but this will get you started:
[Eigenface Tutorial](https://sandipanweb.wordpress.com/2018/01/06/eigenfaces-and-a-simple-face-detector-with-pca-svd-in-python/)
|
f86e8c5aba85313a039de447f9c99015c054b9e1
| 126,382 |
ipynb
|
Jupyter Notebook
|
LS_DS12_133_High_Dimensional_Data_Assignment.ipynb
|
tallywiesenberg/DS-Unit-1-Sprint-1-Dealing-With-Data
|
59091c7ae4ce53610206b001f531e55bf89638e8
|
[
"MIT"
] | null | null | null |
LS_DS12_133_High_Dimensional_Data_Assignment.ipynb
|
tallywiesenberg/DS-Unit-1-Sprint-1-Dealing-With-Data
|
59091c7ae4ce53610206b001f531e55bf89638e8
|
[
"MIT"
] | null | null | null |
LS_DS12_133_High_Dimensional_Data_Assignment.ipynb
|
tallywiesenberg/DS-Unit-1-Sprint-1-Dealing-With-Data
|
59091c7ae4ce53610206b001f531e55bf89638e8
|
[
"MIT"
] | null | null | null | 48.814986 | 20,802 | 0.482735 | true | 17,133 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.661923 | 0.867036 | 0.573911 |
__label__kor_Hang
| 0.277522 | 0.171717 |
# Dimensional Reduction
G. Richards
(2016, 2018, 2020)
based on materials from Connolly, Leighly, VanderPlas, Geron, and Ivezic Ch. 7.0-7.4
**This SHOULD NOT be necessary anymore, but I'm leaving it here for now (2020) just in case anyone runs into problems. Before class starts, you may need to do the following:**
> find . -name “sdss_corrected_spectra.py” -print
> ./anaconda3/lib/python3.8/site-packages/astroML/datasets/sdss_corrected_spectra.py
> emacs -nw ./anaconda3/lib/python3.8/site-packages/astroML/datasets/sdss_corrected_spectra.py
> #DATA_URL = 'http://www.astro.washington.edu/users/vanderplas/spec4000.npz'
> DATA_URL = 'http://staff.washington.edu/jakevdp/spec4000.npz'
Just in case that doesn't work, I've put "spec4000.npz" in PHYS_440_540/data. Copy this to your "astroML_data" directory.
## Curse of Dimensionality
You want to buy a car. Right now--you don't want to wait. But you are picky and have certain things that you would like it to have. Each of those things has a probability between 0 and 1 of being on the the car dealer's lot. You want a red car which has a probability of being on the lot of $p_{\rm red}$; you want good gas mileage, $p_{\rm gas}$; you want leather seats, $p_{\rm leather}$; and you want a sunroof, $p_{\rm sunroof}$. The probability that the dealer has a car on the lot that meets all of those requirements is
$$p_{\rm red} \, p_{\rm gas} \, p_{\rm leather} \, p_{\rm sunroof},$$
or $p^n$ where $n$ is the number of features (assuming equal probability for each).
If the probability of each of these is 50%, then the probability of you driving off with your car of choice is only $0.5*0.5*0.5*0.5 = 0.0625$. Not very good. Imagine if you also wanted other things. This is the [Curse of Dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality).
Let's illustrate the curse of dimensionality with two figures from [here.](https://medium.freecodecamp.org/the-curse-of-dimensionality-how-we-can-save-big-data-from-itself-d9fa0f872335)
In the first example we are trying to find which box hold some treasure, which gets harder and harder with more dimensions, despite there just being 5 boxes in each dimension:
In the next example we inscribe a circle in a square. The area outside of the circle grows larger and larger as the number of dimensions increase:
We can also think about the longest linear distance across the space (its diagonal): it grows by 41% when we add a 2nd dimension, and is 73% larger in 3-D than in 1-D.
Mathematically we can describe this as: the more dimensions that your data span, the more points needed to uniformly sample the space.
For $D$ dimensions with coordinates $[-1,1]$, the fraction of points in a unit hypersphere (with radius $r$, as illustrated above) is
$$f_D = \frac{V_D(r)}{(2r)^D} = \frac{\pi^{D/2}}{D2^{D-1}\Gamma(D/2)}$$
which goes to $0$ as $D$ goes to infinity! Actually, as you can see from the plot below, it is effectively 0 much earlier than that!
```python
# Execute this cell
# from Andy Connolly
%matplotlib inline
import numpy as np
import scipy.special as sp
from matplotlib import pyplot as plt
def unitVolume(dimension, radius=1.):
return 2*(radius**dimension *np.pi**(dimension/2.))/(dimension*sp.gamma(dimension/2.))
dim = np.linspace(1,100)
#------------------------------------------------------------
# Plot the results
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(dim,unitVolume(dim)/2.**dim)
ax.set_yscale('log')
ax.set_xlabel('$Dimension$')
ax.set_ylabel('$Volume$')
plt.show()
```
Note that this works in the opposite direction too: let's say you want to find "rare" objects in 10 dimensions, where we'll define rare as <1% of the population. Then you'll need to accept objects from 63% of the distribution in all 10 dimensions! So are those really "rare" or are they just a particular 1% of the population?
```python
import numpy as np
#p^10 = 0.01, solve for p
p = 10**(np.log10(0.01)/10.0)
print(p)
```
What fraction of each dimension do you need to cover to split your data 50-50 in 2D? Try it.
```python
import numpy as np
p = 10**(np.log10(____)/____)
print(p)
```
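One possible completion of that exercise (try it yourself first): to keep half of the data in 2-D you need $p^2 = 0.5$, i.e. you must cover about 71% of each dimension.
```python
# Possible solution: p^2 = 0.5, solve for p
import numpy as np
p = 10**(np.log10(0.5)/2.0)   # equivalently 0.5**0.5
print(p)                      # ~0.707
```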
N.B. Dimensionality isn't just measuring $D$ parameters for $N$ objects. It could be a spectrum with $D$ values or an image with $D$ pixels, etc. In the book the examples used just happen to be spectra of galaxies from the SDSS project. But we can insert the data of our choice instead.
For example: the SDSS comprises a sample of 357 million sources:
- each source has 448 measured attributes
- selecting just 30 (e.g., magnitude, size) and normalizing the data range $-1$ to $1$
yields a probability of having one of the 357 million sources reside within a unit hypersphere of 1 in 1.4$\times 10^5$.
See also [this article](https://towardsdatascience.com/the-curse-of-dimensionality-50dc6e49aa1e).
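As a quick sanity check of that "1 in 1.4$\times 10^5$" figure, here is a small sketch (the `unitVolume` helper from the earlier cell is repeated so the snippet stands alone):
```python
# Rough check: expected number of the 357 million sources inside the 30-D unit hypersphere
import numpy as np
import scipy.special as sp

def unitVolume(dimension, radius=1.):
    return 2*(radius**dimension * np.pi**(dimension/2.))/(dimension*sp.gamma(dimension/2.))

D = 30                         # number of normalized attributes
f30 = unitVolume(D) / 2.**D    # fraction of the [-1, 1]^30 cube that lies inside the unit hypersphere
print(357e6 * f30)             # ~7e-6, i.e. roughly 1 chance in 1.4e5
```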
## Principal Component Analysis (PCA)
In [Principal Component Analysis (PCA)](https://en.wikipedia.org/wiki/Principal_component_analysis) we seek to take a data set like the one shown below and apply a transform to the data such that the new axes are aligned with the maximal variance of the data. As can be seen in the Figure, this is basically just the same as doing regression by minimizing the square of the perpendicular distances to the new axes. Note that we haven't made any changes to the data, we have just defined new axes.
```python
# Execute this cell
# Ivezic, Figure 7.2
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.patches import Ellipse
#------------------------------------------------------------
# Set parameters and draw the random sample
np.random.seed(42)
r = 0.9
sigma1 = 0.25
sigma2 = 0.08
rotation = np.pi / 6
s = np.sin(rotation)
c = np.cos(rotation)
X = np.random.normal(0, [sigma1, sigma2], size=(100, 2)).T
R = np.array([[c, -s],[s, c]])
X = np.dot(R, X) #Same data, now rotated by R matrix.
#------------------------------------------------------------
# Plot the diagram
fig = plt.figure(figsize=(5, 5), facecolor='w')
ax = plt.axes((0, 0, 1, 1), xticks=[], yticks=[], frameon=False)
# draw axes
ax.annotate(r'$x$', (-r, 0), (r, 0),
ha='center', va='center',
arrowprops=dict(arrowstyle='<->', color='k', lw=1))
ax.annotate(r'$y$', (0, -r), (0, r),
ha='center', va='center',
arrowprops=dict(arrowstyle='<->', color='k', lw=1))
# draw rotated axes
ax.annotate(r'$x^\prime$', (-r * c, -r * s), (r * c, r * s),
ha='center', va='center',
arrowprops=dict(color='k', arrowstyle='<->', lw=1))
ax.annotate(r'$y^\prime$', (r * s, -r * c), (-r * s, r * c),
ha='center', va='center',
arrowprops=dict(color='k', arrowstyle='<->', lw=1))
# scatter points
ax.scatter(X[0], X[1], s=25, lw=0, c='k', zorder=2)
# draw lines
vnorm = np.array([s, -c])
for v in (X.T):
d = np.dot(v, vnorm)
v1 = v - d * vnorm
ax.plot([v[0], v1[0]], [v[1], v1[1]], '-k')
# draw ellipses
for sigma in (1, 2, 3):
ax.add_patch(Ellipse((0, 0), 2 * sigma * sigma1, 2 * sigma * sigma2,
rotation * 180. / np.pi,
ec='k', fc='gray', alpha=0.2, zorder=1))
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
plt.show()
```
Note that the points are correlated along a particular direction which doesn't align with the initial choice of axes. So, we should rotate our axes to align with this correlation.
We'll choose the rotation to maximize the ability to discriminate between the data points:
* the first axis, or **principal component**, is direction of maximal variance
* the second principal component is orthogonal to the first component and maximizes the residual variance
* ...
PCA is a dimensional reduction process because we can generally account for nearly "all" of the variance in the data set with fewer than the original $K$ dimensions. See more below.
We start with a data set $\{x_i\}$ which consists of $N$ objects for which we measure $K$ features. We subtract the mean for each feature in $\{x_i\}$ and write $X$ as an $N\times K$ matrix.
The covariance of this matrix is
$$C_X=\frac{1}{N-1}X^TX.$$
There are off-diagonal terms if there are correlations between the measurements (e.g., maybe two of the features are temperature dependent and the measurements were taken at the same time).
If $R$ is a projection of the data that is aligned with the maximal variance, then we have $Y= X R$ with covariance
$$ C_{Y} = R^T X^T X R = R^T C_X R.$$
$r_1$ is the first principal component of $R$, which can be derived using Lagrange multipliers with the following cost function:
$$ \phi(r_1,\lambda_1) = r_1^TC_X r_1 - \lambda_1(r_1^Tr_1-1). $$
If we take derivative of $\phi(r_1,\lambda)$ with respect to $r_1$ and set it to 0, then we have
$$ C_Xr_1 - \lambda_1 r_1 = 0. $$
$\lambda_1$ (the largest eigenvalue of the matrix) is the root of the equation $\det(C_X - \lambda_1 {\bf I})=0$, for which the eigenvalue is
$$ \lambda_1 = r_1^T C_X r_1.$$
The columns of the full matrix $R$ are the eigenvectors (known here as principal components).
The diagonal values of $C_Y$ are the variance contained within each component.
We aren't going to go through the linear algebra more than that here. But it would be a good group project for someone. See the end of 7.3.1 starting at the bottom on page 294 or go through [Karen Leighly's PCA lecture notes](http://seminar.ouml.org/lectures/principal-components-analysis/) if you want to walk through the math in more detail.
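If you want a quick numerical sanity check of the linear algebra (my own sketch with synthetic 2-D data, not part of the derivation above): the eigendecomposition of the covariance matrix reproduces what Scikit-Learn's PCA reports.
```python
# Check that eigenvectors/eigenvalues of the covariance matrix match sklearn's PCA
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.multivariate_normal([0, 0], [[3.0, 1.2], [1.2, 0.8]], size=2000)

Xc = X - X.mean(axis=0)              # center the data
C = Xc.T @ Xc / (len(Xc) - 1)        # covariance matrix, C_X = X^T X / (N-1)
evals, evecs = np.linalg.eigh(C)     # eigh: symmetric matrix, eigenvalues in ascending order

pca = PCA(n_components=2).fit(X)
print(evals[::-1])                   # should agree with pca.explained_variance_
print(pca.explained_variance_)
# the columns of evecs agree with pca.components_ up to sign and ordering
```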
### Preparing data for PCA
* Subtract the mean of each dimension (to "center" the data)
* Divide by the variance in each dimension (to "whiten" the data)
* (For spectra and images) normalize each row to yield an integral of unity.
Below is a typical call to the PCA algorithm. Note that this example is somewhat backwards. We are starting with `X` and then we are making it higher dimensional--to create a mock high-$D$ data set. Then we are applying PCA as a dimensionality reduction technique.
```python
#Example call from 7.3.2
import numpy as np
from sklearn.decomposition import PCA
X = np.random.normal(size=(100,3)) # 100 points in 3D
R = np.random.random((3,10)) # projection matrix
X = np.dot(X,R) # X is now 10-dim, with 3 intrinsic dims
pca = PCA(n_components=4) # n_components can be optionally set
pca.fit(X)
eigenvalues = pca.transform(X) # compute the subspace projection of X, 4 eigenvalues for each of the 100 samples
mean = pca.mean_ # length 10 mean of the data
eigenvectors = pca.components_ # 4x10 matrix of components, multiply each by respective eigenvalue to reconstruct
#Reconstruction of object1
#Xreconstruct[0] = mean + eigenvectors*eigenvalues[0]
print(eigenvalues.shape)
print(eigenvectors.shape)
```
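The commented-out reconstruction line in that cell can be made concrete. A minimal sketch using the (somewhat confusingly named) variables above; remember that here `eigenvalues` actually holds the projected coefficients of each object:
```python
# Reconstruct the first object from its 4 coefficients and the 4 eigenvectors
X0_reconstructed = mean + np.dot(eigenvalues[0], eigenvectors)
print(np.allclose(X0_reconstructed, X[0]))   # True here, since 4 components >= 3 intrinsic dimensions
```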
To illustrate what is happening, here is a PCA reconstruction of handwritten "3s" from [Hastie et al.](https://web.stanford.edu/~hastie/ElemStatLearn/):
[Scikit-Learn's decomposition module](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.decomposition) has a number of [PCA type implementations](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html#sklearn.decomposition.PCA).
Let's work through an example using spectra of galaxies taken during the Sloan Digital Sky Survey. In this sample there are 4000 spectra with flux measurements in 1000 bins. Fifteen example spectra are shown below, and our example will use half of the spectra, chosen at random.
```python
%matplotlib inline
# Example from Andy Connolly
# See Ivezic, Figure 7.4
import numpy as np
from matplotlib import pyplot as plt
from sklearn.decomposition import PCA
#from sklearn.decomposition import RandomizedPCA
from astroML.datasets import sdss_corrected_spectra
from astroML.utils import pickle_results
#------------------------------------------------------------
# Download data
data = sdss_corrected_spectra.fetch_sdss_corrected_spectra()
spectra = sdss_corrected_spectra.reconstruct_spectra(data)
wavelengths = sdss_corrected_spectra.compute_wavelengths(data)
print(len(spectra), len(wavelengths))
#----------------------------------------------------------------------
# Compute PCA
np.random.seed(500)
nrows = 2000 # We'll just look at 2000 random spectra
n_components = 5 # Do the fit with 5 components, which is the mean plus 4
ind = np.random.randint(spectra.shape[0], size=nrows)
spec_mean = spectra[ind].mean(0) # Compute the mean spectrum, which is the first component
# spec_mean = spectra[:50].mean(0)
# use Randomized PCA for speed
#pca = RandomizedPCA(n_components - 1)
pca = PCA(n_components - 1,svd_solver='randomized')
pca.fit(spectra[ind])
pca_comp = np.vstack([spec_mean,pca.components_]) #Add the mean to the components
evals = pca.explained_variance_ratio_
print(evals)
```
Now let's plot the components (eigenvectors). See also Ivezic, Figure 7.4. The left hand panels are just the first 5 spectra for comparison with the first 5 PCA components, which are shown on the right. They are ordered by the size of their eigenvalues.
```python
#Make plots
fig = plt.figure(figsize=(10, 8))
fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05,
bottom=0.1, top=0.95, hspace=0.05)
titles = 'PCA components'
for j in range(n_components):
# plot the components
ax = fig.add_subplot(n_components, 2, 2*j+2)
ax.yaxis.set_major_formatter(plt.NullFormatter())
ax.xaxis.set_major_locator(plt.MultipleLocator(1000))
if j < n_components - 1:
ax.xaxis.set_major_formatter(plt.NullFormatter())
else:
ax.set_xlabel('wavelength (Angstroms)')
ax.plot(wavelengths, pca_comp[j], '-k', lw=1)
# plot zero line
xlim = [3000, 7999]
ax.plot(xlim, [0, 0], '-', c='gray', lw=1)
ax.set_xlim(xlim)
# adjust y limits
ylim = plt.ylim()
dy = 0.05 * (ylim[1] - ylim[0])
ax.set_ylim(ylim[0] - dy, ylim[1] + 4 * dy)
# plot the first j spectra
ax2 = fig.add_subplot(n_components, 2, 2*j+1)
ax2.yaxis.set_major_formatter(plt.NullFormatter())
ax2.xaxis.set_major_locator(plt.MultipleLocator(1000))
if j < n_components - 1:
ax2.xaxis.set_major_formatter(plt.NullFormatter())
else:
ax2.set_xlabel('wavelength (Angstroms)')
ax2.plot(wavelengths, spectra[j], '-k', lw=1)
# plot zero line
ax2.plot(xlim, [0, 0], '-', c='gray', lw=1)
ax2.set_xlim(xlim)
if j == 0:
ax.set_title(titles, fontsize='medium')
if j == 0:
label = 'mean'
else:
label = 'component %i' % j
# adjust y limits
ylim = plt.ylim()
dy = 0.05 * (ylim[1] - ylim[0])
ax2.set_ylim(ylim[0] - dy, ylim[1] + 4 * dy)
ax.text(0.02, 0.95, label, transform=ax.transAxes,
ha='left', va='top', bbox=dict(ec='w', fc='w'),
fontsize='small')
plt.show()
```
Now let's make "scree" plots. These plots tell us how much of the variance is explained as a function of the each eigenvector. Our plot won't look much like Ivezic, Figure 7.5, so I've shown it below to explain where "scree" comes from.
```python
# Execute this cell
import numpy as np
from matplotlib import pyplot as plt
#----------------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(121)
ax.plot(np.arange(n_components-1), evals)
ax.set_xlabel("eigenvalue number")
ax.set_ylabel("eigenvalue ")
ax = fig.add_subplot(122)
ax.plot(np.arange(n_components-1), evals.cumsum())
ax.set_xlabel("eigenvalue number")
ax.set_ylabel("cumulative eigenvalue")
plt.show()
```
How much of the variance is explained ([explained_variance_ratio_](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html)) by the first two components? How about all of the components?
```python
print("The first component explains {:.3f} of the variance in the data.".format(___.___[0]))
print("The second component explains {:.3f} of the variance in the data.".format(___.___[1]))
print("All components explain {:.3f} of the variance in the data.".format(sum(___.___)))
```
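One possible way to fill in those blanks (left blank above so you can try it first):
```python
# Possible completion: explained_variance_ratio_ holds the fraction of variance per component
print("The first component explains {:.3f} of the variance in the data.".format(pca.explained_variance_ratio_[0]))
print("The second component explains {:.3f} of the variance in the data.".format(pca.explained_variance_ratio_[1]))
print("All components explain {:.3f} of the variance in the data.".format(sum(pca.explained_variance_ratio_)))
```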
This is why PCA enables dimensionality reduction.
How many components would we need to explain 99.5% of the variance?
```python
for num_feats in np.arange(1,20, dtype = int):
pca = PCA(___=___)
pca.___(spectra[ind])
if (sum(___.___)>___):
break
print("{:d} features are needed to explain 99.5% of the variance".format(____))
```
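A possible completion of that loop (inefficient, since it refits the PCA each time, but it mirrors the structure of the blanks; the next cell shows the much simpler built-in way):
```python
# Possible completion: add components until 99.5% of the variance is explained
for num_feats in np.arange(1, 20, dtype=int):
    pca = PCA(n_components=num_feats)
    pca.fit(spectra[ind])
    if sum(pca.explained_variance_ratio_) > 0.995:
        break
print("{:d} features are needed to explain 99.5% of the variance".format(num_feats))
```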
Note that we would need 1000 components to encode *all* of the variance.
There is a MUCH easier way to do this. Just give it a number of components between 0 and 1 and it will interpret that as a percentage of the variance.
```python
pca995 = PCA(n_components=0.995)
pca995.fit(spectra[ind])
print("{:d} features are needed to explain 99.5% of the variance".format(pca995.n_components_))
```
If you ever use sklearn's PCA, note that if you give it a dataset that is too big it won't do the full PCA, but rather an approximate one using `svd_solver="randomized"`; you can force it to use a different solver if you prefer.
## Interpreting the PCA
- The output eigenvectors are ordered by their associated eigenvalues
- The eigenvalues reflect the variance within each eigenvector
- The sum of the eigenvalues is total variance of the system
- Projection of each spectrum onto the first few eigenspectra is a compression of the data
Once we have the eigenvectors, we can try to reconstruct an observed spectrum, ${x}(k)$, in the eigenvector basis, ${e}_i(k)$, as
$$ \begin{equation}
{x}_i(k) = {\mu}(k) + \sum_j^R \theta_{ij} {e}_j(k).
\end{equation}
$$
That would give a full (perfect) reconstruction of the data since it uses all of the eigenvectors. But if we truncate (i.e., $r<R$), then we will have reduced the dimensionality while still reconstructing the data with relatively little loss of information.
For example, we started with 4000x1000 floating point numbers. If we can explain nearly all of the variance with 8 eigenvectors, then we have reduced the problem to 4000x8+8x1000 floating point numbers!
Execute the next cell to see how the reconstruction improves by adding more components.
```python
# Execute this cell
import numpy as np
from matplotlib import pyplot as plt
from sklearn.decomposition import PCA
from astroML.datasets import sdss_corrected_spectra
from astroML.decorators import pickle_results
#------------------------------------------------------------
# Download data
data = sdss_corrected_spectra.fetch_sdss_corrected_spectra()
spectra = sdss_corrected_spectra.reconstruct_spectra(data)
wavelengths = sdss_corrected_spectra.compute_wavelengths(data)
#------------------------------------------------------------
# Compute PCA components
# Eigenvalues can be computed using PCA as in the commented code below:
#from sklearn.decomposition import PCA
#pca = PCA()
#pca.fit(spectra)
#evals = pca.explained_variance_ratio_
#evals_cs = evals.cumsum()
# because the spectra have been reconstructed from masked values, this
# is not exactly correct in this case: we'll use the values computed
# in the file compute_sdss_pca.py
evals = data['evals'] ** 2
evals_cs = evals.cumsum()
evals_cs /= evals_cs[-1]
evecs = data['evecs']
spec_mean = spectra.mean(0)
#------------------------------------------------------------
# Find the coefficients of a particular spectrum
spec = spectra[1]
coeff = np.dot(evecs, spec - spec_mean)
#------------------------------------------------------------
# Plot the sequence of reconstructions
fig = plt.figure(figsize=(8, 8))
fig.subplots_adjust(hspace=0)
for i, n in enumerate([0, 4, 8, 20]):
ax = fig.add_subplot(411 + i)
ax.plot(wavelengths, spec, '-', c='gray')
ax.plot(wavelengths, spec_mean + np.dot(coeff[:n], evecs[:n]), '-k')
if i < 3:
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.set_ylim(-2, 21)
ax.set_ylabel('flux')
if n == 0:
text = "mean"
elif n == 1:
text = "mean + 1 component\n"
text += r"$(\sigma^2_{tot} = %.2f)$" % evals_cs[n - 1]
else:
text = "mean + %i components\n" % n
text += r"$(\sigma^2_{tot} = %.2f)$" % evals_cs[n - 1]
ax.text(0.01, 0.95, text, ha='left', va='top', transform=ax.transAxes)
fig.axes[-1].set_xlabel(r'${\rm wavelength\ (\AA)}$')
plt.show()
```
### Caveats I
PCA is a linear process, whereas the variations in the data may not be. So it may not always be appropriate to use and/or may require a relatively large number of components to fully describe any non-linearity.
Note also that PCA can be very impractical for large data sets which exceed the memory per core, as the computational requirement goes as $\mathscr{O}(D^3)$ and the memory requirement goes as $\mathscr{O}(2D^2)$.
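One common workaround (not used in this notebook, just a sketch) is Scikit-Learn's `IncrementalPCA`, which builds up the decomposition from mini-batches so the full data set never has to sit in memory at once:
```python
# Sketch: out-of-core PCA with mini-batches (re-using the spectra array from above)
from sklearn.decomposition import IncrementalPCA

ipca = IncrementalPCA(n_components=4, batch_size=500)
for start in range(0, spectra.shape[0], 500):
    ipca.partial_fit(spectra[start:start + 500])   # feed one batch at a time

coeffs = ipca.transform(spectra[:10])   # project a few spectra onto the components
print(coeffs.shape)                     # (10, 4)
```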
### Missing Data
We have assumed so far that there is no missing data (e.g., bad pixels in the spectrum, etc.). But often the data set is incomplete. Since PCA encodes the flux correlation with wavelength (or whatever parameters are in your data set), we can actually use it to determine missing values.
An example is shown below. Here, black are the observed spectra. Gray are the regions where we have no data. Blue is the PCA reconstruction, including the regions where there are no data. Awesome, isn't it?
```python
# Execute this cell
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import ticker
from astroML.datasets import fetch_sdss_corrected_spectra
from astroML.datasets import sdss_corrected_spectra
#------------------------------------------------------------
# Get spectra and eigenvectors used to reconstruct them
data = fetch_sdss_corrected_spectra()
spec = sdss_corrected_spectra.reconstruct_spectra(data)
lam = sdss_corrected_spectra.compute_wavelengths(data)
evecs = data['evecs']
mu = data['mu']
norms = data['norms']
mask = data['mask']
#------------------------------------------------------------
# plot the results
i_plot = ((lam > 5750) & (lam < 6350))
lam = lam[i_plot]
specnums = [20, 8, 9]
subplots = [311, 312, 313]
fig = plt.figure(figsize=(8, 10))
fig.subplots_adjust(hspace=0)
for subplot, i in zip(subplots, specnums):
ax = fig.add_subplot(subplot)
# compute eigen-coefficients
spec_i_centered = spec[i] / norms[i] - mu
coeffs = np.dot(spec_i_centered, evecs.T)
# blank out masked regions
spec_i = spec[i]
mask_i = mask[i]
spec_i[mask_i] = np.nan
# plot the raw masked spectrum
ax.plot(lam, spec_i[i_plot], '-', color='k', lw=2,
label='True spectrum')
# plot two levels of reconstruction
for nev in [10]:
if nev == 0:
label = 'mean'
else:
label = 'N EV=%i' % nev
spec_i_recons = norms[i] * (mu + np.dot(coeffs[:nev], evecs[:nev]))
ax.plot(lam, spec_i_recons[i_plot], label=label)
# plot shaded background in masked region
ylim = ax.get_ylim()
mask_shade = ylim[0] + mask[i][i_plot].astype(float) * ylim[1]
plt.fill(np.concatenate([lam[:1], lam, lam[-1:]]),
np.concatenate([[ylim[0]], mask_shade, [ylim[0]]]),
lw=0, fc='k', alpha=0.2)
ax.set_xlim(lam[0], lam[-1])
ax.set_ylim(ylim)
ax.yaxis.set_major_formatter(ticker.NullFormatter())
if subplot == 311:
ax.legend(loc=1, prop=dict(size=14))
ax.set_xlabel('$\lambda\ (\AA)$')
ax.set_ylabel('normalized flux')
plt.show()
```
The example that we have been using above is "spectral" PCA. Some examples from the literature include:
- [Francis et al. 1992](http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1992ApJ...398..476F&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf)
- [Connolly et al. 1995](http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1995AJ....110.1071C&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf)
- [Yip et al. 2004](http://iopscience.iop.org/article/10.1086/425626/meta;jsessionid=31BB5F11B85D2BF4180834DC71BA0B85.c3.iopscience.cld.iop.org)
One can also do PCA on features that aren't ordered (as they were for the spectra). E.g., if you have $D$ different parameters measured for your objects. The classic example in astronomy is
[Boroson & Green 1992](http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1992ApJS...80..109B&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf).
### Caveats II
One of the things that I don't like about PCA is that the eigenvectors are defined relative to the mean. So they can be positive or negative and they often don't look anything like the original data itself. Whereas it is often the case that you might expect that the components would look like, well, the physical components. For example, quasars are fundamentally galaxies. So, part of their flux comes from the galaxy that they live in. But PCA doesn't return any component that looks like a typical galaxy.
## Non-negative Matrix Factorization (NMF)
This is where [Non-negative Matrix Factorization (NMF)](https://en.wikipedia.org/wiki/Non-negative_matrix_factorization) comes in. Here we are treating the data as a linear sum of non-negative components.
NMF assumes any data matrix can be factored into two matrices, $W$ and $Y$, with
$$\begin{equation}
X=W Y,
\end{equation}
$$
where both $W$ and $Y$ are nonnegative.
So, $WY$ is an approximation of $X$. Minimizing the reconstruction error $|| (X - W Y)^2 ||$,
nonnegative bases can be derived through an iterative process.
Note, however, that the iterative process is not guaranteed to find the global minimum (like $K$-means and EM, it can get stuck in local minima), but
random initialization and cross-validation can be used to search for the global minimum.
An example from the literature is [Allen et al. 2008](http://arxiv.org/abs/0810.4231)
In Scikit-Learn the [NMF implementation](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html) looks like:
```python
# Execute this cell
import numpy as np
from sklearn.decomposition import NMF
X = np.random.random((100,10)) # 100 points in 10-D
nmf = NMF(n_components=3)
nmf.fit(X)
proj = nmf.transform(X) # project to 3 dimension
comp = nmf.components_ # 3x10 array of components
err = nmf.reconstruction_err_ # how well 3 components capture the data
```
An example (and comparison to PCA) is given below.
```python
# Execute the next 2 cells
# Example from Figure 7.4
# Author: Jake VanderPlas
# License: BSD
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from sklearn.decomposition import NMF
#from sklearn.decomposition import RandomizedPCA
from sklearn.decomposition import PCA
from astroML.datasets import sdss_corrected_spectra
from astroML.decorators import pickle_results
#------------------------------------------------------------
# Download data
data = sdss_corrected_spectra.fetch_sdss_corrected_spectra()
spectra = sdss_corrected_spectra.reconstruct_spectra(data)
wavelengths = sdss_corrected_spectra.compute_wavelengths(data)
```
```python
#----------------------------------------------------------------------
# Compute PCA, and NMF components
def compute_PCA_NMF(n_components=5):
spec_mean = spectra.mean(0)
# PCA: use randomized PCA for speed
#pca = RandomizedPCA(n_components - 1)
pca = PCA(n_components - 1,svd_solver='randomized')
pca.fit(spectra)
pca_comp = np.vstack([spec_mean, pca.components_])
# NMF requires all elements of the input to be greater than zero
spectra[spectra < 0] = 0
nmf = NMF(n_components)
nmf.fit(spectra)
nmf_comp = nmf.components_
return pca_comp, nmf_comp
n_components = 5
decompositions = compute_PCA_NMF(n_components)
#----------------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(10, 10))
fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05,
bottom=0.1, top=0.95, hspace=0.05)
titles = ['PCA components', 'NMF components']
for i, comp in enumerate(decompositions):
for j in range(n_components):
ax = fig.add_subplot(n_components, 3, 3 * j + 1 + i)
ax.yaxis.set_major_formatter(plt.NullFormatter())
ax.xaxis.set_major_locator(plt.MultipleLocator(1000))
if j < n_components - 1:
ax.xaxis.set_major_formatter(plt.NullFormatter())
else:
ax.set_xlabel('wavelength (Angstroms)')
ax.plot(wavelengths, comp[j], '-k', lw=1)
# plot zero line
xlim = [3000, 7999]
ax.plot(xlim, [0, 0], '-', c='gray', lw=1)
ax.set_xlim(xlim)
if j == 0:
ax.set_title(titles[i])
if titles[i].startswith('PCA') or titles[i].startswith('ICA'):
if j == 0:
label = 'mean'
else:
label = 'component %i' % j
else:
label = 'component %i' % (j + 1)
ax.text(0.03, 0.94, label, transform=ax.transAxes,
ha='left', va='top')
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(2)
# adjust y limits
ylim = plt.ylim()
dy = 0.05 * (ylim[1] - ylim[0])
ax.set_ylim(ylim[0] - dy, ylim[1] + 4 * dy)
plt.show()
```
## Independent Component Analysis (ICA)
For data where the components are statistically independent (or nearly so) [Independent Component Analysis (ICA)](https://en.wikipedia.org/wiki/Independent_component_analysis) has become a popular method for separating mixed components. The classical example is the so-called "cocktail party" problem. This is illustrated in the following figure from Hastie, Tibshirani, and Friedman (Figure 14.27 on page 497 in my copy, so they have clearly added some stuff!). Think of the "source signals" as two voices at a party. You are trying to concentrate on just one voice. What you hear is something like the "measured signals" pattern. You could run the data through PCA and that would do an excellent job of reconstructing the signal with reduced dimensionality, but it wouldn't actually isolate the different physical components (bottom-left panel). ICA on the other hand can (bottom-right panel).
![ICA vs PCA source separation, from Hastie et al.](../images/HastieFigure14_37.png)
[Hastie et al.](https://web.stanford.edu/~hastie/ElemStatLearn/): "ICA applied to multivariate data looks for a sequence of orthogonal projections such that the projected data look as far from Gaussian as possible. With pre-whitened data, this amounts to looking for
components that are as independent as possible."
In short you want to find components that are maximally non-Gaussian since the sum of 2 random variables will be more Gaussian than either of the components (remember the Central Limit Theorem). Hastie et al. illustrate this as follows:
ICA is a good choice for a complex system with relatively independent components. For example, a galaxy is roughly a linear combination of cool stars and hot stars, and a quasar is just a galaxy with additional components from an accretion disk and emission line regions. Ideally we want "eigenvectors" that are aligned with those physical components as opposed to mathematical constructs.
The basic call to the [FastICA algorithm](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.FastICA.html) in Scikit-Learn looks like:
```python
# Execute this cell
import numpy as np
from sklearn.decomposition import FastICA
X = np.random.normal(size=(100,2)) # 100 objects in 2D
R = np.random.random((2,5)) # mixing matrix
X = np.dot(X,R) # Simulation of a 5D data space
ica = FastICA(2) # Now reproject to 2-D
ica.fit(X)
proj = ica.transform(X) # 100x2 projection of the data
comp = ica.components_ # 2x5 matrix of independent components
## sources = ica.sources_ # 100x2 matrix of sources
```
Execute the next 2 cells to produce a plot showing the ICA components.
```python
%matplotlib inline
#Example from Andy Connolly
import numpy as np
from matplotlib import pyplot as plt
from sklearn.decomposition import FastICA
from astroML.datasets import sdss_corrected_spectra
from astroML.decorators import pickle_results
#------------------------------------------------------------
# Download data
data = sdss_corrected_spectra.fetch_sdss_corrected_spectra()
spectra = sdss_corrected_spectra.reconstruct_spectra(data)
wavelengths = sdss_corrected_spectra.compute_wavelengths(data)
#----------------------------------------------------------------------
# Compute PCA
np.random.seed(500)
nrows = 500
n_components = 5
ind = np.random.randint(spectra.shape[0], size=nrows)
spec_mean = spectra[ind].mean(0)
# spec_mean = spectra[:50].mean(0)
ica = FastICA(n_components - 1)
ica.fit(spectra[ind])
ica_comp = np.vstack([spec_mean,ica.components_]) #Add the mean to the components
```
```python
#Make plots
fig = plt.figure(figsize=(10, 8))
fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05,
bottom=0.1, top=0.95, hspace=0.05)
titles = 'ICA components'
for j in range(n_components):
# plot the components
ax = fig.add_subplot(n_components, 2, 2*j+2)
ax.yaxis.set_major_formatter(plt.NullFormatter())
ax.xaxis.set_major_locator(plt.MultipleLocator(1000))
if j < n_components - 1:
ax.xaxis.set_major_formatter(plt.NullFormatter())
else:
ax.set_xlabel(r'wavelength ${\rm (\AA)}$')
ax.plot(wavelengths, ica_comp[j], '-k', lw=1)
# plot zero line
xlim = [3000, 7999]
ax.plot(xlim, [0, 0], '-', c='gray', lw=1)
ax.set_xlim(xlim)
# adjust y limits
ylim = plt.ylim()
dy = 0.05 * (ylim[1] - ylim[0])
ax.set_ylim(ylim[0] - dy, ylim[1] + 4 * dy)
# plot the first j spectra
ax2 = fig.add_subplot(n_components, 2, 2*j+1)
ax2.yaxis.set_major_formatter(plt.NullFormatter())
ax2.xaxis.set_major_locator(plt.MultipleLocator(1000))
if j < n_components - 1:
ax2.xaxis.set_major_formatter(plt.NullFormatter())
else:
ax2.set_xlabel(r'wavelength ${\rm (\AA)}$')
ax2.plot(wavelengths, spectra[j], '-k', lw=1)
# plot zero line
ax2.plot(xlim, [0, 0], '-', c='gray', lw=1)
ax2.set_xlim(xlim)
if j == 0:
ax.set_title(titles, fontsize='medium')
if j == 0:
label = 'mean'
else:
label = 'component %i' % j
# adjust y limits
ylim = plt.ylim()
dy = 0.05 * (ylim[1] - ylim[0])
ax2.set_ylim(ylim[0] - dy, ylim[1] + 4 * dy)
ax.text(0.02, 0.95, label, transform=ax.transAxes,
ha='left', va='top', bbox=dict(ec='w', fc='w'),
fontsize='small')
plt.show()
```
As with PCA and NMF, we can similarly do a reconstruction:
```python
# Execute this cell
#------------------------------------------------------------
# Find the coefficients of a particular spectrum
spec = spectra[1]
evecs = data['evecs']
coeff = np.dot(evecs, spec - spec_mean)
#------------------------------------------------------------
# Plot the sequence of reconstructions
fig = plt.figure(figsize=(8, 8))
fig.subplots_adjust(hspace=0)
for i, n in enumerate([0, 2, 4, 8]):
ax = fig.add_subplot(411 + i)
ax.plot(wavelengths, spec, '-', c='gray')
ax.plot(wavelengths, spec_mean + np.dot(coeff[:n], evecs[:n]), '-k')
if i < 3:
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.set_ylim(-2, 21)
ax.set_ylabel('flux')
if n == 0:
text = "mean"
elif n == 1:
text = "mean + 1 component\n"
#text += r"$(\sigma^2_{tot} = %.2f)$" % evals_cs[n - 1]
else:
text = "mean + %i components\n" % n
#text += r"$(\sigma^2_{tot} = %.2f)$" % evals_cs[n - 1]
ax.text(0.01, 0.95, text, ha='left', va='top', transform=ax.transAxes)
fig.axes[-1].set_xlabel(r'${\rm wavelength\ (\AA)}$')
plt.show()
```
Ivezic, Figure 7.4 compares the components found by the PCA, ICA, and NMF algorithms. Their differences and similarities are quite interesting.
If you think that I was pulling your leg about the cocktail problem, try it yourself!
Load the code instead of running it and see what effect changing some things has.
```python
%load ../code/plot_ica_blind_source_separation.py
```
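If that file isn't available to you, here is a minimal, self-contained sketch of the same cocktail-party idea (my own illustration, not the notebook's original script): mix two known signals, then compare what FastICA and PCA recover.
```python
# Minimal blind source separation demo: mix two independent signals, then unmix with FastICA
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import FastICA, PCA

t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                         # "voice" 1: smooth sinusoid
s2 = np.sign(np.sin(3 * t))                # "voice" 2: square wave
S = np.c_[s1, s2] + 0.02 * np.random.RandomState(0).normal(size=(2000, 2))

A = np.array([[1.0, 0.5], [0.5, 1.0]])     # mixing matrix (the "room")
X = S @ A.T                                # what the two microphones record

S_ica = FastICA(n_components=2, random_state=0).fit_transform(X)   # recovered sources
S_pca = PCA(n_components=2).fit_transform(X)                       # PCA rotation, for comparison

fig, axes = plt.subplots(4, 1, figsize=(8, 6), sharex=True)
for ax, sig, title in zip(axes, [S, X, S_ica, S_pca],
                          ['true sources', 'mixed signals', 'ICA recovered', 'PCA projection']):
    ax.plot(t, sig)
    ax.set_title(title, fontsize=9)
plt.tight_layout()
plt.show()
```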
Let's revisit the digits sample and see what PCA, NMF, and ICA do for it.
```python
## Execute this cell to load the digits sample
%matplotlib inline
import numpy as np
from sklearn.datasets import load_digits
from matplotlib import pyplot as plt
digits = load_digits()
grid_data = np.reshape(digits.data[0], (8,8)) #reshape to 8x8
plt.imshow(grid_data, interpolation = "nearest", cmap = "bone_r")
print(grid_data)
X = digits.data
y = digits.target
```
Do the PCA transform, projecting to 2 dimensions and plot the results.
```python
# PCA
from sklearn.decomposition import ___
pca = PCA(n_components = ___)
pca.___(___)
X_reduced = pca.transform(___)
plt.scatter(X_reduced[:,___], X_reduced[:,___], c=y, cmap="nipy_spectral", edgecolor="None")
plt.colorbar()
```
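One possible completion of that cell (fill in the blanks yourself first if you want the practice):
```python
# Possible completion: project the 64-pixel digits down to 2 dimensions with PCA
from sklearn.decomposition import PCA

pca = PCA(n_components=2)
pca.fit(X)
X_reduced = pca.transform(X)

plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y, cmap="nipy_spectral", edgecolor="None")
plt.colorbar()
```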
Similarly for NMF and ICA
```python
# NMF
from sklearn.decomposition import ___
nmf = NMF(___)
nmf.___(___)
X_reduced = nmf.___(___)
plt.scatter(___, ___, c=y, cmap="nipy_spectral", edgecolor="None")
plt.colorbar()
```
```python
# ICA
from sklearn.decomposition import ___
ica = FastICA(___)
ica.___(___)
X_reduced = ica.___(___)
plt.scatter(___, ___, c=y, cmap="nipy_spectral", edgecolor="None")
plt.colorbar()
```
Take a second to think about what ICA is doing. What if you had digits from digital clocks instead of handwritten?
I wasn't going to introduce [Neural Networks](https://en.wikipedia.org/wiki/Artificial_neural_network) yet, but it is worth noting that Scikit-Learn's [`Bernoulli Restricted Boltzmann Machine (RBM)`](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.BernoulliRBM.html) is discussed in the [(unsupervised) neural network](http://scikit-learn.org/stable/modules/neural_networks_unsupervised.html) part of the User's Guide and is relevant here, since the data input must be either binary or values between 0 and 1, which we can arrange for the digits simply by rescaling the pixel values.
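For completeness, here is a minimal sketch of what that might look like (my own example, not part of the original notebook; the pixel values, which run from 0 to 16, are first rescaled into [0, 1]):
```python
# Sketch: fit a Bernoulli RBM to the digits, treating scaled pixels as "on" probabilities
from sklearn.neural_network import BernoulliRBM

X01 = X / 16.0                             # rescale the 0-16 pixel values into [0, 1]
rbm = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(X01)

hidden = rbm.transform(X01)                # activation of the 16 hidden units for each digit
print(hidden.shape)                        # (1797, 16)
weights = rbm.components_                  # 16 x 64 array; each row can be reshaped to an 8x8 "feature"
```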
We could think about doing dimensional reduction of the digits data set in another way. There are 64 pixels in each of our images. Presumably all of them aren't equally useful. Let's figure out exactly which pixels are the most relevant. We'll use Scikit-Learn's [`RandomForestRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html). We won't get to regression until next week, but you don't need to understand the algorithm to do this, just look at the inputs and outputs. Which pixels are the most important? As a bonus see if you can plot digit images with those pixels highlighted.
```python
from sklearn.ensemble import RandomForestRegressor
RFreg = RandomForestRegressor()# Complete or leave blank as you see fit
RFreg.fit(X,y)# Do Fitting
importances = RFreg.feature_importances_# Determine "importances"
pixelorder = np.argsort(importances)[::-1] #Rank importances (highest to lowest)
print(pixelorder)
plt.figure()
plt.imshow(np.reshape(importances,(8,8)),interpolation="nearest")
plt.show()
```
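One possible way to tackle the bonus question (a sketch that reuses `digits`, `pixelorder`, `np`, and `plt` from the cells above; the choice of the top 10 pixels is arbitrary):
```python
# Highlight the N most important pixels on top of an example digit image.
N = 10
mask = np.zeros(64)
mask[pixelorder[:N]] = 1  # top-N pixels according to the random forest importances
plt.imshow(np.reshape(digits.data[0], (8, 8)), cmap="bone_r", interpolation="nearest")
plt.imshow(np.reshape(mask, (8, 8)), cmap="Reds", alpha=0.4, interpolation="nearest")
plt.title("Most important pixels highlighted on an example digit")
plt.show()
```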
|
2d2dd18c635d4789ea5d9af439d65535ecaf9c6a
| 1,010,359 |
ipynb
|
Jupyter Notebook
|
notebooks/DimensionReduction.ipynb
|
KleinWang/PHYS_440_540
|
0c01d63ca4b068068f24635185663b2564740aeb
|
[
"MIT"
] | 9 |
2020-08-18T04:34:51.000Z
|
2021-12-26T03:41:02.000Z
|
notebooks/DimensionReduction.ipynb
|
KleinWang/PHYS_440_540
|
0c01d63ca4b068068f24635185663b2564740aeb
|
[
"MIT"
] | null | null | null |
notebooks/DimensionReduction.ipynb
|
KleinWang/PHYS_440_540
|
0c01d63ca4b068068f24635185663b2564740aeb
|
[
"MIT"
] | 5 |
2020-09-15T14:55:24.000Z
|
2021-07-07T19:17:25.000Z
| 596.786178 | 363,728 | 0.941154 | true | 10,654 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.782662 | 0.882428 | 0.690643 |
__label__eng_Latn
| 0.961499 | 0.442927 |
## Sample Jupyter / iPython notebook
Louis Moresi
School of Earth Sciences
University of Melbourne
[louis.moresi@unimelb.edu.au](mailto:louis.moresi@unimelb.edu.au)
[www.moresi.info](http://www.moresi.info)
### Docker containers
This notebook
### Quick links
- [Home](/) - the static web pages on this site
- [This Page](/notebooks/Content/Notebooks/StartHere.ipynb)
- [Browse](/notebooks/Content/Notebooks/) the jupyter filesystem, create and edit notebooks
### What is this ?
This is an example of the iPython / Jupyter notebook system. Notebooks are a form of literate programming in which we can mix textbook instruction and explanations with code (in this case, python) that can also be run and edited. The text and mathematics in the notebooks require a little preliminary learning.
The notebook system also includes a [file browser](/notebooks/Content/Notebooks/) which also allows you to add your own notebook, add a text file or start a terminal on the machine running this notebook.
NOTE that this content is ephemeral - it will disappear with the container if you do not capture the output by mounting the volume or copying the data to your local machine.
### Markdown
You can document your iPython notebooks by making some cells into **Markdown** cells. Markdown is a way of formatting text that is supposed to be almost as readable un-rendered as when it is tidied up. You might argue that it looks equally bad either way, but that's tough because the notebooks use it and that's how I want you to produce nice-looking output to hand in as an assignment!
If you look at the **Markdown** cells as source code (by double-clicking on them) you will see how the raw text looks. To get back to the pretty version of the text, hit shift-enter.
### Maths
In a browser, you can render beautiful equations using a javascript tool called **Mathjax**, which is built into the iPython notebooks.
You can embed symbols in your text, such as $\pi$ and $\epsilon$, if you use the \$ signs to indicate where your equations begin and end, and you know enough $\LaTeX$ ([try it here!](http://www.codecogs.com/latex/eqneditor.php)) to get by.
Equations in 'display' mode are written like this (again look at the source for this cell to see what is used)
\\[ e^{i\pi} + 1 = 0 \\]
or even like this
\begin{equation}
%%
\nabla^4 \psi = \frac{\partial T}{\partial x}
%%
\end{equation}
Go back to the rendered form of the cell by 'running' it.
### Links
[Markdown Website](http://daringfireball.net/projects/markdown/)
[Mathjax Website](http://docs.mathjax.org)
[Jupyter Notebooks](http://www.jupyter.org)
```python
## This is a live notebook where you can execute python code
print "Hello world"
```
Hello world
```sh
%%sh
## This cell is now running shell (bash) commands
ls -l
echo "---"
whoami
echo "---"
uname -a
echo "---"
```
total 16
-rw-r--r-- 1 lmoresi staff 5020 27 Jan 11:52 StartHere.ipynb
---
lmoresi
---
Darwin MU00011496 15.0.0 Darwin Kernel Version 15.0.0: Sat Sep 19 15:53:46 PDT 2015; root:xnu-3247.10.11~1/RELEASE_X86_64 x86_64
---
```python
# A blank canvas
## Feel free to run your own code in this cell (then hit shift-return to execute it)
```
|
82657212d9c3d9bddc46551c378209179f838e0a
| 5,172 |
ipynb
|
Jupyter Notebook
|
notebooks/StartHere.ipynb
|
lmoresi/docker-web-notebook-module
|
9836584ea6ee2e1c0807bd59d74325e949917824
|
[
"MIT"
] | null | null | null |
notebooks/StartHere.ipynb
|
lmoresi/docker-web-notebook-module
|
9836584ea6ee2e1c0807bd59d74325e949917824
|
[
"MIT"
] | null | null | null |
notebooks/StartHere.ipynb
|
lmoresi/docker-web-notebook-module
|
9836584ea6ee2e1c0807bd59d74325e949917824
|
[
"MIT"
] | 2 |
2016-02-19T03:56:07.000Z
|
2016-02-22T06:34:37.000Z
| 30.785714 | 397 | 0.573279 | true | 853 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.661923 | 0.879147 | 0.581927 |
__label__eng_Latn
| 0.996581 | 0.190342 |
## Imports
```python
import numpy as np
import pylab as plt
from matplotlib import cm
import seaborn as sns
from tqdm import tqdm
```
## System
> NLSE differential equation
\begin{equation}
\partial_z A + \frac{\alpha}{2}A+\frac{i}{2}\beta_2 \partial_T^2 A = i \gamma \| A\|^2 A
\end{equation}
For now, let's suppose $\alpha=0$
\begin{equation}
\partial_z A = (\hat{D}+\hat{N})A
\end{equation}
with $\hat{D}=-\frac{i}{2}\beta_2 \partial_T^2$ and $\hat{N}=i \gamma \| A\|^2$
The operator $\hat{D}$ will be computed in the fourier domain such that $\hat{D}(\omega)=\frac{i}{2}\omega^2\beta_2$
Finally we have at first glance :
\begin{equation}
A(z+h, T)=e^{h\hat{N}} F^{-1} \cdot e^{h\hat{D}(\omega)} \cdot F \cdot A(z,t)
\end{equation}
Where $F$ is the Fourier operator, $h$ is the propagation step (of a few meters), and $\beta_2=-D\lambda^2/(2\pi c)$ is related to the chromatic dispersion of the optical fiber. The implementation below uses the symmetrized split-step variant, applying half of the nonlinear step before and after the dispersion step.
> Source : doi:10.1109/ICEE.2007.4287333
```python
# Constants
c=3e8 # light celerity
l0 = 1.55e-6 # wavelength
nm = 1e-9 # nanometer
ns = 1e-9
km = 1e3 # kilometer
ps = 1e-12 #picosecond
D = 17*ps/nm/km # Dispersion
b2 = -D*l0**2/(2*np.pi*c) # group velocity dispersion (2nd order)
w0 = 2*np.pi*1e10
# Parameters
alpha=0#0.00005 # Losses
gamma=0.78e-3 # [1]/m/W Nonlinear factor https://ieeexplore.ieee.org/document/7764544
L = 5e3 # Fiber length
Nt= 10000 # Time sampling
Nl = 5000 # Length sampling
# Calculated factors and vectors
h = L/Nl # Lengthstep
dT = 0.1*ps # FWHM
T = np.linspace(-50, 50, Nt)*ps # Pulse local time vector
z = np.arange(0, Nl, 1)*h # Propagation distance vector
P0=120 # W
n = 0.03
dt = T[1]-T[0] # timestep
w = np.fft.fftshift(np.fft.fftfreq(Nt, d=dt)) # frequency vector
Dw = 0.5*1j*w**2*b2 # Calculated dispersion operator
Noise = np.random.randn(1,Nt) # Amplitude noise vector
A =np.asarray(np.zeros((Nl, Nt), dtype=complex)) # System matrix
B =np.asarray(np.zeros((Nl, Nt), dtype=complex))
A[0,:]= n*np.sqrt(P0)*Noise + 0.5*np.sqrt(P0)*(np.exp(-T**2/(10*(dT)**2))) # Initial state
A[0, :] += np.sqrt(P0)*(np.exp(-(T-10*ps)**2/(2*(dT)**2)))
A[0, :] += np.sqrt(P0)*(np.exp(-(T+10*ps)**2/(2*(dT)**2)))
#A[0,:]= n*np.sqrt(P0)*Noise + np.sqrt(P0)*np.sin(w0*T) # Initial state
B[0,:]=np.fft.fftshift(np.fft.fft(A[0,:]))
```
```python
plt.plot(T,np.abs(A[0,:]))
plt.title("Initial pulse")
plt.show()
```
```python
for i in tqdm(range(1,Nl)):
N = 1j*gamma*np.abs(A[i-1,:])**2-0.5*alpha
Ai = np.exp(0.5*h*N)*A[i-1,:] # half Nonlinearity 1
Ai = np.fft.fftshift(np.fft.fft(Ai)) # Fourier domain
Ai = np.exp(h*Dw)*Ai # Dispersion in Fourier domain
Ai = np.fft.ifft(np.fft.ifftshift(Ai)) # Temporal domain
Ai = np.exp(0.5*h*N)*Ai # half Nonlinearity 2
A[i,:] = Ai
B[i,:] = np.fft.fftshift(np.fft.fft(Ai))
```
100%|█████████████████████████████████████████████████████████████████████████████| 4999/4999 [00:30<00:00, 164.10it/s]
```python
f = w/(2*np.pi)
extent = [f[0], f[-1], z[-1], z[0]]
plt.imshow(np.abs(A), aspect=1, cmap='jet')
plt.xlabel(r"f (Hz)")
plt.ylabel(r"L (m)")
plt.tight_layout()
plt.show()
```
```python
plt.plot(T/ps,np.abs(A[-1,:]), label='output')
plt.plot(T/ps,np.abs(A[0,:]), label='input')
plt.grid()
plt.legend()
plt.xlabel("Pulse time (ps)")
plt.ylabel("Pulse Power (W)")
plt.show()
plt.figure()
plt.plot(T/ps,np.real(A[-1,:]), label='output')
plt.plot(T/ps,np.real(A[0,:]), label='input')
plt.grid()
plt.legend()
plt.xlabel("Pulse time (ps)")
plt.ylabel("Pulse Power (W)")
plt.show()
```
```python
w[0]
```
-4999499999998.172
```python
Dw
```
array([-0.-0.05415821j, -0.-0.05413655j, -0.-0.0541149j , ...,
-0.-0.05409324j, -0.-0.0541149j , -0.-0.05413655j])
```python
N
```
array([0.+4.25759166e-08j, 0.+1.08248965e-08j, 0.+7.47240172e-08j, ...,
0.+1.06654191e-07j, 0.+4.60674140e-09j, 0.+9.80428558e-08j])
```python
h
```
50.0
```python
c/l0
```
193548387096774.2
```python
z[-1]
```
4997.5
```python
```
|
6ed80580bc6111b28b34110de61cb77798d01229
| 172,095 |
ipynb
|
Jupyter Notebook
|
NonlinearOptics/NLSE_Split_Step.ipynb
|
ParadiseLab/Photonics_Jnotebooks
|
623068598977a05814beb7434ef0d190ab4cacb8
|
[
"MIT"
] | 1 |
2022-03-22T22:55:38.000Z
|
2022-03-22T22:55:38.000Z
|
NonlinearOptics/NLSE_Split_Step.ipynb
|
Ydeh22/Photonics_Jnotebooks
|
623068598977a05814beb7434ef0d190ab4cacb8
|
[
"MIT"
] | null | null | null |
NonlinearOptics/NLSE_Split_Step.ipynb
|
Ydeh22/Photonics_Jnotebooks
|
623068598977a05814beb7434ef0d190ab4cacb8
|
[
"MIT"
] | 1 |
2022-03-22T20:59:14.000Z
|
2022-03-22T20:59:14.000Z
| 338.104126 | 69,380 | 0.934914 | true | 1,552 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.897695 | 0.699254 | 0.627717 |
__label__eng_Latn
| 0.340601 | 0.296728 |
# Quick Start Examples for `constriction`'s Python API
- **Author:** Robert Bamler, University of Tuebingen
- **Initial Publication Date:** Jan 4, 2022
This is an interactive jupyter notebook.
You can read this notebook [online](https://github.com/bamler-lab/constriction/blob/main/examples/python/01-hello-world.ipynb) but if you want to execute any code, we recommend to [download](https://raw.githubusercontent.com/bamler-lab/constriction/main/examples/python/01-hello-world.ipynb) it.
More examples, tutorials, and reference materials are available at <https://bamler-lab.github.io/constriction/>.
## Install Constriction
Before you start, install `constriction` by executing the following cell, then restart your jupyter kernel:
```python
!pip install --upgrade constriction~=0.2.1 # (this will automatically also install numpy)
```
**Don't forget to restart your jupyter kernel now.**
Then test if you can import `constriction`:
```python
import constriction # This should produce no output (in particular, no error messages).
```
## Example 1: Hello, World
The following cell implements a very simple encoding-decoding round trip using `constriction`'s ANS coder.
We'll explain what's going on and also show how to use a different entropy coder below.
```python
import constriction
import numpy as np
# Define some example message and entropy model:
message = np.array([6, 10, -4, 2, -9, 41, 3, 0, 2 ], dtype=np.int32)
means = np.array([2.5, 13.1, -1.1, -3.0, -6.1, 34.2, 2.8, -6.4, -3.1], dtype=np.float64)
stds = np.array([4.1, 8.7, 6.2, 5.4, 24.1, 12.7, 4.9, 28.9, 4.2], dtype=np.float64)
model_family = constriction.stream.model.QuantizedGaussian(-100, 100) # We'll provide `means` and `stds` when encoding/decoding.
print(f"Original message: {message}")
# Encode the message:
encoder = constriction.stream.stack.AnsCoder()
encoder.encode_reverse(message, model_family, means, stds)
# Get and print the compressed representation:
compressed = encoder.get_compressed()
print(f"compressed representation: {compressed}")
print(f"(in binary: {[bin(word) for word in compressed]})")
# Decode the message:
decoder = constriction.stream.stack.AnsCoder(compressed) # (we could also just reuse `encoder`.)
reconstructed = decoder.decode(model_family, means, stds)
print(f"Reconstructed message: {reconstructed}")
assert np.all(reconstructed == message)
```
Original message: [ 6 10 -4 2 -9 41 3 0 2]
compressed representation: [3436391223 862640052]
(in binary: ['0b11001100110100110010101100110111', '0b110011011010101101011110110100'])
Reconstructed message: [ 6 10 -4 2 -9 41 3 0 2]
### What's Going on Here?
The above example compresses and then decompresses a short example message using one of the entropy coders provided by `constriction`.
All messages in `constriction` are sequences of integers ("symbols"), represented as a rank-1 numpy array with `dtype=np.int32`.
The variables `means` and `stds` define an entropy model (see [explanation below](#Background-Information-on-Entropy-Models)).
In our example, the entropy model for each symbol is a [`QuantizedGaussian`](https://bamler-lab.github.io/constriction/apidoc/python/stream/model.html#constriction.stream.model.QuantizedGaussian) distribution (see [below](#The-Specific-Entropy-Model-Used-Here)), which is a common type of entropy model in novel machine-learning based compression methods.
Other entropy models are supported by `constriction`, including custom models, see section ["API documentation"](https://bamler-lab.github.io/constriction/apidoc/python/stream/model.html) below.
More precisely, the entropy model for the first symbol of the message in the above example is a `QuantizedGaussian` with mean 2.5 and standard deviation 4.1, the model for the second symbol has mean 13.1 and standard deviation 8.7, and so on.
The next few lines of the above example *encode* the message.
We use an [Asymmetric Numeral Systems (ANS)](https://en.wikipedia.org/wiki/Asymmetric_numeral_systems) entropy coder here, but we show [below](#Example-2:-Switching-Out-the-Entropy-Coding-Algorithm) how we can use a different entropy coder just as well.
The actual encoding procedure happens in the method `encode_reverse`.
The suffix "_reverse" is to remind us that ANS operates as a *stack* (i.e., "last in first out").
We therefore encode the symbols in reverse order here so that we can subsequently decode them in forward order.
Next, we obtain the compressed representation and print it.
In `constriction`, compressed data is, by default, represented as an array of unsigned 32-bit integers.
See [below](#Example-4:-Writing-Compressed-Data-to-a-File) for an example that writes compressed data to a file.
The final four lines of code above *decode* the message from the compressed data and verify its integrity.
We pass `compressed` as an argument to `AnsCoder` here, and we then call `decode` on it (without the suffix "_reverse").
### Background Information on Entropy Models
The above example uses an entropy model (defined by `means`, `stds`, and `model_family`) for both encoding and decoding.
The entropy *model* and the entropy *coder* together comprise a lossless compression method on which two parties have to agree before they can meaningfully exchange any compressed data.
The entropy *model* is a probability distribution over all conceivable messages.
The job of an entropy *coder* is to come up with an encoding/decoding scheme that minimizes the *expected* bitrate under the entropy model.
Thus, the coder has to assign short compressed representations to the most probable messages under the model at the cost of having to assign longer compressed representations to less probable messages.
This job is conveniently taken care of by the various entropy coders provided by `constriction`.
### The Specific Entropy Model Used Here
In the above example, we use an entropy model that factorizes over all symbols in the message (if you want to model correlations between symbols, you can use autoregressive models or the bits-back trick, see section ["further reading"](#Further-Reading) below).
The marginal probability distribution for each symbol is a quantized (aka discretized) form of a Gaussian distribution, as it often arises in novel machine-learning based compression methods.
More precisely, we model the probability that the $i$'th symbol $X_i$ of the message takes some integer value $x_i$ as follows,
\begin{align}
P(X_i \! = \! x_i) = \int_{x_i-\frac12}^{x_i+\frac12} f_{\mathcal N}(\xi;\mu_i,\sigma_i^2) \,\text{d}\xi
\quad\forall x_i\in \mathbb Z
\end{align}
where $f_{\mathcal N}(\,\cdot\,;\mu_i,\sigma_i)$ is the probability density function of a normal distribution with mean $\mu_i$ and standard deviation $\sigma_i$.
The means and standard deviations of our entropy models are assigned to variables `means` and `stds` in the above code example.
The entropy coder slightly modifies the model by rounding all probabilities $P(X_i \! = \! x_i)$ to a fixed-point representation with some finite precision, while enforcing three guarantees:
(i) all integers within the range from `-100` to `100` (defined by our arguments to the constructor, `QuantizedGaussian(-100, 100)`) are guaranteed to have a nonzero probability (so that they can be encoded without error);
(ii) the probabilities within this range are guaranteed to sum *exactly* to one (despite the finite precision), and all integers outside of this range have exactly zero probability and cannot be encoded (and also will never be returned when decoding random compressed data with an `AnsCoder`); and
(iii) the model is *exactly* invertible: encoding and decoding internally evaluate the model's cumulative distribution function and the model's quantile function, and `constriction` ensures (via fixed-point arithmetic) that these two functions are the exact inverse of each other since even tiny rounding errors could otherwise have catastrophic effects in an entropy coder.
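To make the model concrete, here is a small sketch (assuming SciPy is available; it is not needed by `constriction` itself) that evaluates the probabilities $P(X_i \! = \! x_i)$ from the integral above, before the fixed-point rounding that `constriction` applies internally:
```python
# Sketch: quantized Gaussian probabilities for the first symbol's model in Example 1.
import numpy as np
from scipy.stats import norm

def quantized_gaussian_pmf(x, mean, std):
    # P(X = x) = integral of the Gaussian density over [x - 1/2, x + 1/2]
    return norm.cdf(x + 0.5, loc=mean, scale=std) - norm.cdf(x - 0.5, loc=mean, scale=std)

xs = np.arange(-100, 101)              # the support used by QuantizedGaussian(-100, 100)
p = quantized_gaussian_pmf(xs, 2.5, 4.1)
print(p.sum())                         # very close to 1; constriction renormalizes it exactly
```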
## Example 2: Switching Out the Entropy Coding Algorithm
The [above example](#Example-1:-Hello,-World) used Asymmetric Numeral Systems (ANS) for entropy coding.
We can also use a [Range Coder](https://en.wikipedia.org/wiki/Range_coding) instead.
Before you look at the modified example below, try writing it yourself:
- Start from [example 1 above](#Example-1:-Hello,-World) and replace `stack.AnsCoder` with `queue.RangeEncoder` for the encoder and with `queue.RangeDecoder` for the decoder (Range Coding uses different data structures for encoding and decoding because, in contrast to ANS, you generally lose the ability to encode any additional symbols once you start decoding with a Range Coder).
- Replace `encode_reverse` with `encode` (i.e., drop the suffix "_reverse") because range coding operates as a queue (i.e., "first in first out").
Your result should look as follows:
```python
import constriction
import numpy as np
# Define some example message and entropy model:
message = np.array([6, 10, -4, 2, -9, 41, 3, 0, 2 ], dtype=np.int32)
means = np.array([2.5, 13.1, -1.1, -3.0, -6.1, 34.2, 2.8, -6.4, -3.1], dtype=np.float64)
stds = np.array([4.1, 8.7, 6.2, 5.4, 24.1, 12.7, 4.9, 28.9, 4.2], dtype=np.float64)
model_family = constriction.stream.model.QuantizedGaussian(-100, 100) # We'll provide `means` and `stds` when encoding/decoding.
print(f"Original message: {message}")
# Encode the message:
encoder = constriction.stream.queue.RangeEncoder()
encoder.encode(message, model_family, means, stds)
# Get and print the compressed representation:
compressed = encoder.get_compressed()
print(f"compressed representation: {compressed}")
print(f"(in binary: {[bin(word) for word in compressed]})")
# Decode the message:
decoder = constriction.stream.queue.RangeDecoder(compressed)
reconstructed = decoder.decode(model_family, means, stds)
print(f"Reconstructed message: {reconstructed}")
assert np.all(reconstructed == message)
```
Original message: [ 6 10 -4 2 -9 41 3 0 2]
compressed representation: [3400499119 1762784004]
(in binary: ['0b11001010101011110111111110101111', '0b1101001000100011111001100000100'])
Reconstructed message: [ 6 10 -4 2 -9 41 3 0 2]
## Example 3: More Complex Entropy Models
In Example 2 above, we changed the entropy coder from ANS to Range Coding but we left the entropy *model* unchanged.
In this example, let's keep the Range Coder but change the entropy model instead.
Rather than modeling each symbol with a Quantized Gaussian distribution, we'll model only the first 6 symbols this way.
For the last 3 symbols, we assume they are all drawn from the *same* categorical distribution (we could also use an individual categorical distribution for each symbol, but we want to demonstrate how to encode and decode i.i.d. symbols in this example):
```python
import constriction
import numpy as np
# Same message as above, but a complex entropy model consisting of two parts:
message = np.array([6, 10, -4, 2, -9, 41, 3, 0, 2 ], dtype=np.int32)
means = np.array([2.5, 13.1, -1.1, -3.0, -6.1, 34.2], dtype=np.float64)
stds = np.array([4.1, 8.7, 6.2, 5.4, 24.1, 12.7], dtype=np.float64)
model_family1 = constriction.stream.model.QuantizedGaussian(-50, 50)
model2 = constriction.stream.model.Categorical(np.array(
[0.2, 0.1, 0.3, 0.4], dtype=np.float64)) # Specifies Probabilities of the symbols 0, 1, 2, 3.
print(f"Original message: {message}")
# Encode both parts of the message:
encoder = constriction.stream.queue.RangeEncoder()
encoder.encode(message[0:6], model_family1, means, stds)
encoder.encode(message[6:9], model2) # No model parameters provided here since `model2` is already fully parameterized.
# Get and print the compressed representation:
compressed = encoder.get_compressed()
print(f"compressed representation: {compressed}")
print(f"(in binary: {[bin(word) for word in compressed]})")
# Decode the message:
decoder = constriction.stream.queue.RangeDecoder(compressed)
reconstructed1 = decoder.decode(model_family1, means, stds)
reconstructed2 = decoder.decode(model2, 3) # (decodes 3 additional symbols)
reconstructed = np.concatenate((reconstructed1, reconstructed2))
print(f"Reconstructed message: {reconstructed}")
assert np.all(reconstructed == message)
```
Original message: [ 6 10 -4 2 -9 41 3 0 2]
compressed representation: [3400506403 2908157178]
(in binary: ['0b11001010101011111001110000100011', '0b10101101010101101111010011111010'])
Reconstructed message: [ 6 10 -4 2 -9 41 3 0 2]
We leave it as an exercise to the reader to change the entropy coder in the above example back to an ANS coder. (**Hint:** since ANS operates as a stack, you'll have to encode `message[6:9]` *before* encoding `message[0:6]`.)
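If you get stuck, here is one possible solution sketch (it assumes, as in Example 1, that `encode_reverse` and `decode` accept a fully parameterized model without additional parameter arrays):
```python
import constriction
import numpy as np

message = np.array([6, 10, -4, 2, -9, 41, 3, 0, 2], dtype=np.int32)
means = np.array([2.5, 13.1, -1.1, -3.0, -6.1, 34.2], dtype=np.float64)
stds = np.array([4.1, 8.7, 6.2, 5.4, 24.1, 12.7], dtype=np.float64)
model_family1 = constriction.stream.model.QuantizedGaussian(-50, 50)
model2 = constriction.stream.model.Categorical(np.array([0.2, 0.1, 0.3, 0.4], dtype=np.float64))

encoder = constriction.stream.stack.AnsCoder()
encoder.encode_reverse(message[6:9], model2)                      # encode the *last* part first ...
encoder.encode_reverse(message[0:6], model_family1, means, stds)  # ... so that we can decode in forward order

compressed = encoder.get_compressed()
decoder = constriction.stream.stack.AnsCoder(compressed)
reconstructed1 = decoder.decode(model_family1, means, stds)
reconstructed2 = decoder.decode(model2, 3)
assert np.all(np.concatenate((reconstructed1, reconstructed2)) == message)
```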
## Example 4: Writing Compressed Data to a File
In `constriction`, compressed data is represented by default as an array of unsigned 32-bit integers.
Such data can trivially be written to a file or network socket.
However, make sure you use a well-defined byte order (["endianness"](https://en.wikipedia.org/wiki/Endianness)) so that data saved on one computer architecture can be read on another computer architecture.
Here's Example 1 from above, but this time divided into two parts that only communicate via a file.
```python
import constriction
import numpy as np
import sys
# Define some example message and entropy model:
message = np.array([6, 10, -4, 2, -9, 41, 9, 69, -6 ], dtype=np.int32)
means = np.array([2.5, 13.1, -1.1, -3.0, -6.1, 34.2, 12.8, 56.4, -3.1], dtype=np.float64)
stds = np.array([4.1, 8.7, 6.2, 5.4, 24.1, 12.7, 4.9, 28.9, 4.2], dtype=np.float64)
model_family = constriction.stream.model.QuantizedGaussian(-100, 100) # We'll provide `means` and `stds` when encoding/decoding.
print(f"Original message: {message}")
# Encode the message:
encoder = constriction.stream.stack.AnsCoder()
encoder.encode_reverse(message, model_family, means, stds)
# Get the compressed representation and save it to a file:
compressed = encoder.get_compressed()
if sys.byteorder != 'little':
# Let's use the convention that we always save data in little-endian byte order.
compressed.byteswap(inplace=True)
compressed.tofile('temporary-demo-file.bin')
print(f'Compressed data saved to file "temporary-demo-file.bin".')
```
Original message: [ 6 10 -4 2 -9 41 9 69 -6]
Compressed data saved to file "temporary-demo-file.bin".
```python
# Read the compressed representation from the file:
compressed_read = np.fromfile('temporary-demo-file.bin', dtype=np.uint32)
print(f'Read {len(compressed_read)} words of data from "temporary-demo-file.bin".')
if sys.byteorder != 'little':
# Turn data into native byte order before passing it to `constriction`
compressed_read.byteswap(inplace=True)
# Decode the message:
decoder = constriction.stream.stack.AnsCoder(compressed_read)
reconstructed = decoder.decode(model_family, means, stds)
print(f"Reconstructed message: {reconstructed}")
assert np.all(reconstructed == message)
```
Read 2 words of data from "temporary-demo-file.bin".
Reconstructed message: [ 6 10 -4 2 -9 41 9 69 -6]
## Further Reading
You now know how to use `constriction`'s Python API for some basic encoding and decoding operations.
The [website](https://bamler-lab.github.io/constriction/) has links to more examples and tutorials.
If you have a specific question, go to `constriction`'s [Python API documentation](https://bamler-lab.github.io/constriction/apidoc/python/).
If you're still new to the concept of entropy coding, check out the [teaching material](https://robamler.github.io/teaching/compress21/).
|
4f5d6f6c458104f1b10ebca4150083e5be96af45
| 20,226 |
ipynb
|
Jupyter Notebook
|
examples/python/01-hello-world.ipynb
|
bamler-lab/constriction
|
81169ee4229d87e3f8afa0768300492ba43c337c
|
[
"BSL-1.0",
"Apache-2.0",
"MIT"
] | 15 |
2021-05-20T12:12:08.000Z
|
2022-03-29T09:12:21.000Z
|
examples/python/01-hello-world.ipynb
|
tongdaxu/constriction
|
8f794bd559be79b364e84fbec80b09d36921f1f3
|
[
"BSL-1.0",
"Apache-2.0",
"MIT"
] | 2 |
2021-09-14T10:35:06.000Z
|
2022-01-07T16:59:57.000Z
|
examples/python/01-hello-world.ipynb
|
tongdaxu/constriction
|
8f794bd559be79b364e84fbec80b09d36921f1f3
|
[
"BSL-1.0",
"Apache-2.0",
"MIT"
] | 2 |
2021-08-23T19:58:05.000Z
|
2022-01-07T05:08:31.000Z
| 49.817734 | 391 | 0.650252 | true | 4,143 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.851953 | 0.70253 | 0.598522 |
__label__eng_Latn
| 0.973639 | 0.228898 |
1. [Create all value pairs for v1 and v2](#AllValuePairs)
* [Replace NaN with mode](#NaN2Mode)
* [Use sample builtin function to create sample from matrix](#sample)
* [Count of Matching Values in two Matrices/Vectors](#MatchingRows)
* [Cross Validation](#CrossValidation)
* [Value-based join of two Matrices](#JoinMatrices)
* [Filter Matrix to include only Frequent Column Values](#FilterMatrix)
* [(Sparse) Matrix to/from (rowIndex, colIndex, values) conversions (i,j,v)](#Construct_sparse_Matrix)
* [Find and remove duplicates in columns or rows](#Find_and_remove_duplicates)
* [Set based Indexing](#Set_based_Indexing)
* [Group by Aggregate using Linear Algebra](#Multi_column_Sorting)
* [Cumulative Summation with Decay Multiplier](#CumSum_Product)
* [Invert Lower Triangular Matrix](#Invert_Lower_Triangular_Matrix)
```python
from systemml import MLContext, dml
ml = MLContext(sc)
print (ml.buildTime())
```
## Create all value pairs for v1 and v2<a id="AllValuePairs" />
```python
prog="""
v1 = matrix ('2 1 8 3 5 6 7', rows = 7, cols = 1 )
v2 = matrix ('80 20 50', rows = 3, cols = 1 )
nv1 = nrow (v1);
nv2 = nrow (v2);
R = cbind (
matrix (v1 %*% matrix(1, 1, nv2), nv1*nv2, 1),
matrix (matrix(1, nv1, 1) %*% t(v2), nv1*nv2, 1))
print(toString(v1));
print(toString(v2));
print(toString(R));
"""
res = ml.execute(dml(prog))
```
2.000
1.000
8.000
3.000
5.000
6.000
7.000
80.000
20.000
50.000
2.000 80.000
2.000 20.000
2.000 50.000
1.000 80.000
1.000 20.000
1.000 50.000
8.000 80.000
8.000 20.000
8.000 50.000
3.000 80.000
3.000 20.000
3.000 50.000
5.000 80.000
5.000 20.000
5.000 50.000
6.000 80.000
6.000 20.000
6.000 50.000
7.000 80.000
7.000 20.000
7.000 50.000
SystemML Statistics:
Total execution time: 0.000 sec.
Number of executed Spark inst: 0.
## Replace NaN with mode<a id="NaN2Mode" />
This function replaces NaN in column i with the mode of column i.
```python
prog="""
# Function for NaN-aware replacement with mode
replaceNaNwithMode = function (matrix[double] X, integer colId)
return (matrix[double] X)
{
Xi = replace (target=X[,colId], pattern=NaN, replacement=-Inf) # replace NaN with -Inf
Xi = replace (target=Xi, pattern=-Inf, replacement=max(Xi)+1) # replace -Inf with largest value + 1
agg = aggregate (target=Xi, groups=Xi, fn="count") # count each distinct value
mode = as.scalar (rowIndexMax(t(agg[1:nrow(agg)-1, ]))) # mode is max frequent value except last value
X[,colId] = replace (target=Xi, pattern=max(Xi), replacement=mode) # fill in mode
}
X = matrix('1 NaN 1 NaN 1 2 2 1 1 2', rows = 5, cols = 2)
Y = replaceNaNwithMode (X, 2)
print ("Before: \n" + toString(X))
print ("After: \n" + toString(Y))
"""
res = ml.execute(dml(prog))
```
Before:
1.000 NaN
1.000 NaN
1.000 2.000
2.000 1.000
1.000 2.000
After:
1.000 2.000
1.000 2.000
1.000 2.000
2.000 1.000
1.000 2.000
SystemML Statistics:
Total execution time: 0.001 sec.
Number of executed Spark inst: 0.
## Use sample builtin function to create sample from matrix<a id="sample" />
Use the sample() function to draw a sample, create a permutation matrix using table(), and pull the sampled rows from X.
```python
prog="""
X = matrix ('2 1 8 3 5 6 7 9 4 4', rows = 5, cols = 2 )
nbrSamples = 2
sv = order (target = sample (nrow (X), nbrSamples, FALSE)) # samples w/o replacement, and order
P = table (seq (1, nbrSamples), sv, nbrSamples, nrow(X)) # permutation matrix
samples = P %*% X; # apply P to perform selection
print ("X: \n" + toString(X))
print ("sv: \n" + toString(sv))
print ("samples: \n" + toString(samples))
"""
res = ml.execute(dml(prog))
```
X:
2.000 1.000
8.000 3.000
5.000 6.000
7.000 9.000
4.000 4.000
sv:
1.000
5.000
samples:
2.000 1.000
4.000 4.000
SystemML Statistics:
Total execution time: 0.000 sec.
Number of executed Spark inst: 0.
## Count of Matching Values in two Matrices/Vectors<a id="MatchingRows" />
Given two matrices/vectors X and Y, get a count of the rows where X and Y have the same value.
```python
prog="""
X = matrix('8 4 5 4 9 10', rows = 6, cols = 1)
Y = matrix('4 9 5 1 9 7 ', rows = 6, cols = 1)
matches = sum (X == Y)
print ("t(X): " + toString(t(X)))
print ("t(Y): " + toString(t(Y)))
print ("Number of Matches: " + matches + "\n")
"""
res = ml.execute(dml(prog))
```
t(X): 8.000 4.000 5.000 4.000 9.000 10.000
t(Y): 4.000 9.000 5.000 1.000 9.000 7.000
Number of Matches: 2.0
SystemML Statistics:
Total execution time: 0.000 sec.
Number of executed Spark inst: 0.
## Cross Validation<a id="CrossValidation" />
Perform k-fold cross validation: create the folds, then run training, testing, and evaluation for each fold in parallel.
```python
prog = """
holdOut = 1/3
kFolds = 1/holdOut
nRows = 6; nCols = 3;
X = matrix(seq(1, nRows * nCols), rows = nRows, cols = nCols) # X data
y = matrix(seq(1, nRows), rows = nRows, cols = 1) # y label data
Xy = cbind (X,y) # Xy Data for CV
sv = rand (rows = nRows, cols = 1, min = 0.0, max = 1.0, pdf = "uniform") # sv selection vector for fold creation
sv = (order(target=sv, by=1, index.return=TRUE)) %% kFolds + 1 # with numbers between 1 .. kFolds
stats = matrix(0, rows=kFolds, cols=1) # stats per kFolds model on test data
parfor (i in 1:kFolds)
{
# Skip empty training data or test data.
if ( sum (sv == i) > 0 & sum (sv == i) < nrow(X) )
{
Xyi = removeEmpty(target = Xy, margin = "rows", select = (sv == i)) # Xyi fold, i.e. 1/k of rows (test data)
Xyni = removeEmpty(target = Xy, margin = "rows", select = (sv != i)) # Xyni data, i.e. (k-1)/k of rows (train data)
    # Skip extreme label imbalance
distinctLabels = aggregate( target = Xyni[,1], groups = Xyni[,1], fn = "count")
if ( nrow(distinctLabels) > 1)
{
w_i = trainAlg (Xyni[ ,1:ncol(Xy)-1], Xyni[ ,ncol(Xy)]) # w_i Model for i-th training data
p_i = testAlg (Xyi [ ,1:ncol(Xy)-1], w_i) # p_i Prediction for i-th test data
e_i = evalPrediction (p_i, Xyi[ ,ncol(Xy)]) # stats[i,] evaluation of prediction of i-th fold
stats[i,] = e_i
print ( "Test data Xyi" + i + "\n" + toString(Xyi)
+ "\nTrain data Xyni" + i + "\n" + toString(Xyni)
+ "\nw_" + i + "\n" + toString(w_i)
+ "\nstats" + i + "\n" + toString(stats[i,])
+ "\n")
}
else
{
print ("Training data for fold " + i + " has only " + nrow(distinctLabels) + " distinct labels. Needs to be > 1.")
}
}
else
{
print ("Training data or test data for fold " + i + " is empty. Fold not validated.")
}
}
print ("SV selection vector:\n" + toString(sv))
trainAlg = function (matrix[double] X, matrix[double] y)
return (matrix[double] w)
{
w = t(X) %*% y
}
testAlg = function (matrix[double] X, matrix[double] w)
return (matrix[double] p)
{
p = X %*% w
}
evalPrediction = function (matrix[double] p, matrix[double] y)
return (matrix[double] e)
{
e = as.matrix(sum (p - y))
}
"""
res = ml.execute(dml(prog))
```
Test data Xyi1
7.000 8.000 9.000 3.000
10.000 11.000 12.000 4.000
Train data Xyni1
1.000 2.000 3.000 1.000
4.000 5.000 6.000 2.000
13.000 14.000 15.000 5.000
16.000 17.000 18.000 6.000
w_1
170.000
184.000
198.000
stats1
10537.000
Test data Xyi2
13.000 14.000 15.000 5.000
16.000 17.000 18.000 6.000
Train data Xyni2
1.000 2.000 3.000 1.000
4.000 5.000 6.000 2.000
7.000 8.000 9.000 3.000
10.000 11.000 12.000 4.000
w_2
70.000
80.000
90.000
stats2
7469.000
Test data Xyi3
1.000 2.000 3.000 1.000
4.000 5.000 6.000 2.000
Train data Xyni3
7.000 8.000 9.000 3.000
10.000 11.000 12.000 4.000
13.000 14.000 15.000 5.000
16.000 17.000 18.000 6.000
w_3
222.000
240.000
258.000
stats3
5109.000
SV selection vector:
3.000
3.000
1.000
1.000
2.000
2.000
SystemML Statistics:
Total execution time: 0.014 sec.
Number of executed Spark inst: 0.
## Value-based join of two Matrices<a id="JoinMatrices"/>
Given matrix M1 and M2, join M1 on column 2 with M2 on column 2, and return matching rows of M1.
```python
prog = """
M1 = matrix ('1 1 2 3 3 3 4 4 5 3 6 4 7 1 8 2 9 1', rows = 9, cols = 2)
M2 = matrix ('1 1 2 8 3 3 4 3 5 1', rows = 5, cols = 2)
I = rowSums (outer (M1[,2], t(M2[,2]), "==")) # I : indicator matrix for M1
M12 = removeEmpty (target = M1, margin = "rows", select = I) # apply filter to retrieve join result
print ("M1 \n" + toString(M1))
print ("M2 \n" + toString(M2))
print ("M1[,2] joined with M2[,2], and return matching M1 rows\n" + toString(M12))
"""
res = ml.execute(dml(prog))
```
M1
1.000 1.000
2.000 3.000
3.000 3.000
4.000 4.000
5.000 3.000
6.000 4.000
7.000 1.000
8.000 2.000
9.000 1.000
M2
1.000 1.000
2.000 8.000
3.000 3.000
4.000 3.000
5.000 1.000
M1[,2] joined with M2[,2], and return matching M1 rows
1.000 1.000
2.000 3.000
3.000 3.000
5.000 3.000
7.000 1.000
9.000 1.000
SystemML Statistics:
Total execution time: 0.001 sec.
Number of executed Spark inst: 0.
## Filter Matrix to include only Frequent Column Values <a id="FilterMatrix"/>
Given a matrix, filter it to include only rows whose values in column 2 appear at least MinFreq times.
```python
prog = """
MinFreq = 3 # minimum frequency of tokens
M = matrix ('1 1 2 3 3 3 4 4 5 3 6 4 7 1 8 2 9 1', rows = 9, cols = 2)
gM = aggregate (target = M[,2], groups = M[,2], fn = "count") # gM: group by and count (grouped matrix)
gv = cbind (seq(1,nrow(gM)), gM) # gv: add group values to counts (group values)
fg = removeEmpty (target = gv * (gv[,2] >= MinFreq), margin = "rows") # fg: filtered groups
I = rowSums (outer (M[,2] ,t(fg[,1]), "==")) # I : indicator of size M with filtered groups
fM = removeEmpty (target = M, margin = "rows", select = I) # FM: filter matrix
print (toString(M))
print (toString(fM))
"""
res = ml.execute(dml(prog))
```
1.000 1.000
2.000 3.000
3.000 3.000
4.000 4.000
5.000 3.000
6.000 4.000
7.000 1.000
8.000 2.000
9.000 1.000
1.000 1.000
2.000 3.000
3.000 3.000
5.000 3.000
7.000 1.000
9.000 1.000
SystemML Statistics:
Total execution time: 0.001 sec.
Number of executed Spark inst: 0.
## (Sparse) Matrix to/from (rowIndex, colIndex, values) conversions (i,j,v) <a id="Construct_sparse_Matrix"></a>
Given rowIndex, colIndex, and values as column vectors, construct (sparse) matrix.
```python
prog = """
I = matrix ("1 3 3 4 5", rows = 5, cols = 1)
J = matrix ("2 3 4 1 6", rows = 5, cols = 1)
V = matrix ("10 20 30 40 50", rows = 5, cols = 1)
IJVs = cbind(I, J, V)
M = table (I, J, V)
print (toString (IJVs))
print (toString (M))
"""
res = ml.execute(dml(prog).output('M')).get('M').toNumPy()
```
1.000 2.000 10.000
3.000 3.000 20.000
3.000 4.000 30.000
4.000 1.000 40.000
5.000 6.000 50.000
0.000 10.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 20.000 30.000 0.000 0.000
40.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000 0.000 0.000 50.000
SystemML Statistics:
Total execution time: 0.001 sec.
Number of executed Spark inst: 0.
Given a sparse matrix, construct ``<i,j,v>`` matrix with 3 columns rowIndex, colIndex, and values.
```python
prog = """
M = matrix ("0 23 10 0 18 0 0 20", rows = 4, cols = 2)
m = nrow(M);
n = ncol(M);
I = matrix((M!=0)*seq(1,m), m*n, 1)
J = matrix((M!=0)*t(seq(1,n)), m*n, 1)
V = matrix(M, m*n, 1)
IJVd = cbind(I, J, V);
IJVs = removeEmpty(target=IJVd, margin="rows");
print ("M:\n" + toString(M))
print ("IJVs:\n" + toString (IJVs))
"""
res = ml.execute(dml(prog).output('M')).get('M').toNumPy()
```
M:
0.000 23.000
10.000 0.000
18.000 0.000
0.000 20.000
IJVs:
1.000 2.000 23.000
2.000 1.000 10.000
3.000 1.000 18.000
4.000 2.000 20.000
SystemML Statistics:
Total execution time: 0.001 sec.
Number of executed Spark inst: 0.
## Find and remove duplicates in columns or rows<a id="Find_and_remove_duplicates"></a>
### Assuming values are sorted.
```python
prog = """
X = matrix ("1 2 3 3 3 4 5 10", rows = 8, cols = 1)
I = rbind (matrix (1,1,1), (X[1:nrow (X)-1,] != X[2:nrow (X),])); # compare current with next value
res = removeEmpty (target = X, margin = "rows", select = I); # select where different
"""
ml.execute(dml(prog).output('res')).get('res').toNumPy()
```
SystemML Statistics:
Total execution time: 0.000 sec.
Number of executed Spark inst: 0.
array([[ 1.],
[ 2.],
[ 3.],
[ 4.],
[ 5.],
[ 10.]])
### No assumptions on values.
```python
prog = """
X = matrix ("3 2 1 3 3 4 5 10", rows = 8, cols = 1)
I = aggregate (target = X, groups = X[,1], fn = "count") # group and count duplicates
res = removeEmpty (target = seq (1, max (X[,1])), margin = "rows", select = (I != 0)); # select groups
"""
ml.execute(dml(prog).output('res')).get('res').toNumPy()
```
SystemML Statistics:
Total execution time: 0.076 sec.
Number of executed Spark inst: 6.
array([[ 1.],
[ 2.],
[ 3.],
[ 4.],
[ 5.],
[ 10.]])
### Order the values and then remove duplicates.
```python
prog = """
X = matrix ("3 2 1 3 3 4 5 10", rows = 8, cols = 1)
X = order (target = X, by = 1) # order values
I = rbind (matrix (1,1,1), (X[1:nrow (X)-1,] != X[2:nrow (X),]));
res = removeEmpty (target = X, margin = "rows", select = I);
"""
ml.execute(dml(prog).output('res')).get('res').toNumPy()
```
SystemML Statistics:
Total execution time: 0.000 sec.
Number of executed Spark inst: 0.
array([[ 1.],
[ 2.],
[ 3.],
[ 4.],
[ 5.],
[ 10.]])
## Set based Indexing<a id="Set_based_Indexing"></a>
Given a matrix X and an indicator matrix J with indices into X,
use J to perform an operation on X, e.g. add the value 10 to the cells in X indicated by J.
```python
prog = """
X = matrix (1, rows = 1, cols = 100)
J = matrix ("10 20 25 26 28 31 50 67 79", rows = 1, cols = 9)
res = X + table (matrix (1, rows = 1, cols = ncol (J)), J, 10)
print (toString (res))
"""
ml.execute(dml(prog).output('res')).get('res').toNumPy()
```
1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 11.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 11.000 1.000 1.000 1.000 1.000 11.000 11.000 1.000 11.000 1.000 1.000 11.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 11.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 11.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 11.000
SystemML Statistics:
Total execution time: 0.001 sec.
Number of executed Spark inst: 0.
array([[ 1., 1., 1., 1., 1., 1., 1., 1., 1., 11., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 11., 1., 1.,
1., 1., 11., 11., 1., 11., 1., 1., 11., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 11., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
11., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 11.]])
## Group by Aggregate using Linear Algebra<a id="Multi_column_Sorting"></a>
Given a matrix PCV as (Position, Category, Value), sort PCV by category and, within each category, by value in descending order. Then create an indicator vector for category changes, extract the distinct categories, and perform the per-category aggregations with linear algebra operations.
```python
prog = """
C = matrix ('50 40 20 10 30 20 40 20 30', rows = 9, cols = 1) # category data
V = matrix ('20 11 49 33 94 29 48 74 57', rows = 9, cols = 1) # value data
PCV = cbind (cbind (seq (1, nrow (C), 1), C), V); # PCV representation
PCV = order (target = PCV, by = 3, decreasing = TRUE, index.return = FALSE);
PCV = order (target = PCV, by = 2, decreasing = FALSE, index.return = FALSE);
# Find all rows of PCV where the category has a new value, in comparison to the previous row
is_new_C = matrix (1, rows = 1, cols = 1);
if (nrow (C) > 1) {
is_new_C = rbind (is_new_C, (PCV [1:nrow(C) - 1, 2] < PCV [2:nrow(C), 2]));
}
# Associate each category with its index
index_C = cumsum (is_new_C); # cumsum
# For each category, compute:
# - the list of distinct categories
# - the maximum value for each category
# - 0-1 aggregation matrix that adds records of the same category
distinct_C = removeEmpty (target = PCV [, 2], margin = "rows", select = is_new_C);
max_V_per_C = removeEmpty (target = PCV [, 3], margin = "rows", select = is_new_C);
C_indicator = table (index_C, PCV [, 1], max (index_C), nrow (C)); # table
sum_V_per_C = C_indicator %*% V
"""
res = ml.execute(dml(prog).output('PCV','distinct_C', 'max_V_per_C', 'C_indicator', 'sum_V_per_C'))
print (res.get('PCV').toNumPy())
print (res.get('distinct_C').toNumPy())
print (res.get('max_V_per_C').toNumPy())
print (res.get('C_indicator').toNumPy())
print (res.get('sum_V_per_C').toNumPy())
```
SystemML Statistics:
Total execution time: 0.002 sec.
Number of executed Spark inst: 0.
[[ 4. 10. 33.]
[ 8. 20. 74.]
[ 3. 20. 49.]
[ 6. 20. 29.]
[ 5. 30. 94.]
[ 9. 30. 57.]
[ 7. 40. 48.]
[ 2. 40. 11.]
[ 1. 50. 20.]]
[[ 10.]
[ 20.]
[ 30.]
[ 40.]
[ 50.]]
[[ 33.]
[ 74.]
[ 94.]
[ 48.]
[ 20.]]
[[ 0. 0. 0. 1. 0. 0. 0. 0. 0.]
[ 0. 0. 1. 0. 0. 1. 0. 1. 0.]
[ 0. 0. 0. 0. 1. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0.]]
[[ 33.]
[ 152.]
[ 151.]
[ 59.]
[ 20.]]
## Cumulative Summation with Decay Multiplier<a id="CumSum_Product"></a>
Given matrix X, compute:
Y[i] = X[i]
+ X[i-1] * C[i]
+ X[i-2] * C[i] * C[i-1]
+ X[i-3] * C[i] * C[i-1] * C[i-2]
+ ...
```python
cumsum_prod_def = """
cumsum_prod = function (Matrix[double] X, Matrix[double] C, double start)
return (Matrix[double] Y)
# Computes the following recurrence in log-number of steps:
# Y [1, ] = X [1, ] + C [1, ] * start;
# Y [i+1, ] = X [i+1, ] + C [i+1, ] * Y [i, ]
{
Y = X; P = C; m = nrow(X); k = 1;
Y [1,] = Y [1,] + C [1,] * start;
while (k < m) {
Y [k + 1:m,] = Y [k + 1:m,] + Y [1:m - k,] * P [k + 1:m,];
P [k + 1:m,] = P [1:m - k,] * P [k + 1:m,];
k = 2 * k;
}
}
"""
```
In this example we use cumsum_prod for cumulative summation with "breaks", that is, multiple cumulative summations in one.
```python
prog = cumsum_prod_def + """
X = matrix ("1 2 3 4 5 6 7 8 9", rows = 9, cols = 1);
#Zeros in C cause "breaks" that restart the cumulative summation from 0
C = matrix ("0 1 1 0 1 1 1 0 1", rows = 9, cols = 1);
Y = cumsum_prod (X, C, 0);
print (toString(Y))
"""
ml.execute(dml(prog))
```
1.000
3.000
6.000
4.000
9.000
15.000
22.000
8.000
17.000
SystemML Statistics:
Total execution time: 0.001 sec.
Number of executed Spark inst: 0.
MLResults
In this example, we copy selected rows downward to all consecutive non-selected rows.
```python
prog = cumsum_prod_def + """
X = matrix ("1 2 3 4 5 6 7 8 9", rows = 9, cols = 1);
# Ones in S represent selected rows to be copied, zeros represent non-selected rows
S = matrix ("1 0 0 1 0 0 0 1 0", rows = 9, cols = 1);
Y = cumsum_prod (X * S, 1 - S, 0);
print (toString(Y))
"""
ml.execute(dml(prog))
```
1.000
1.000
1.000
4.000
4.000
4.000
4.000
8.000
8.000
SystemML Statistics:
Total execution time: 0.001 sec.
Number of executed Spark inst: 0.
MLResults
This is a naive implementation of cumulative summation with decay multiplier.
```python
cumsum_prod_naive_def = """
cumsum_prod_naive = function (Matrix[double] X, Matrix[double] C, double start)
return (Matrix[double] Y)
{
Y = matrix (0, rows = nrow(X), cols = ncol(X));
Y [1,] = X [1,] + C [1,] * start;
for (i in 2:nrow(X))
{
Y [i,] = X [i,] + C [i,] * Y [i - 1,]
}
}
"""
```
There is a significant performance difference between the <b>naive</b> implementation and the <b>cumsum_prod</b> implementation above, which runs in a logarithmic number of steps.
```python
prog = cumsum_prod_def + cumsum_prod_naive_def + """
X = rand (rows = 20000, cols = 10, min = 0, max = 1, pdf = "uniform", sparsity = 1.0);
C = rand (rows = 20000, cols = 10, min = 0, max = 1, pdf = "uniform", sparsity = 1.0);
Y1 = cumsum_prod_naive (X, C, 0.123);
"""
ml.execute(dml(prog))
```
SystemML Statistics:
Total execution time: 6.081 sec.
Number of executed Spark inst: 0.
MLResults
```python
prog = cumsum_prod_def + cumsum_prod_naive_def + """
X = rand (rows = 20000, cols = 10, min = 0, max = 1, pdf = "uniform", sparsity = 1.0);
C = rand (rows = 20000, cols = 10, min = 0, max = 1, pdf = "uniform", sparsity = 1.0);
Y2 = cumsum_prod (X, C, 0.123);
"""
ml.execute(dml(prog))
```
SystemML Statistics:
Total execution time: 0.074 sec.
Number of executed Spark inst: 0.
MLResults
## Invert Lower Triangular Matrix<a id="Invert_Lower_Triangular_Matrix"></a>
In this example, we invert a lower triangular matrix using the following divide-and-conquer approach. Given a lower triangular matrix L, we compute its inverse X, which is also lower triangular, by splitting both matrices in the middle into 4 blocks (in a 2x2 fashion) and multiplying them together to get the identity matrix:
\begin{equation}
L \text{ \%*\% } X = \left(\begin{matrix} L_1 & 0 \\ L_2 & L_3 \end{matrix}\right)
 \text{ \%*\% } \left(\begin{matrix} X_1 & 0 \\ X_2 & X_3 \end{matrix}\right)
 = \left(\begin{matrix} L_1 X_1 & 0 \\ L_2 X_1 + L_3 X_2 & L_3 X_3 \end{matrix}\right)
 = \left(\begin{matrix} I & 0 \\ 0 & I \end{matrix}\right)
\nonumber
\end{equation}
If we multiply blockwise, we get three equations:
\begin{align}
L_1 \text{ \%*\% } X_1 &= I \\
L_3 \text{ \%*\% } X_3 &= I \\
L_2 \text{ \%*\% } X_1 + L_3 \text{ \%*\% } X_2 &= 0 \\
\end{align}
Solving these equations gives the following formulas for X:
\begin{align}
X_1 &= inv(L_1) \\
X_3 &= inv(L_3) \\
X_2 &= - X_3 \text{ \%*\% } L_2 \text{ \%*\% } X_1 \\
\end{align}
If we have already recursively inverted L1 and L3, we can compute X2. This suggests an algorithm that starts at the diagonal and iterates away from the diagonal, involving bigger and bigger blocks (of size 1, 2, 4, 8, etc.). There is a logarithmic number of steps, and inside each step, the inversions can be performed in parallel using a parfor-loop.
Function "invert_lower_triangular" occurs within more general inverse operations and matrix decompositions. The divide-and-conquer idea allows to derive more efficient algorithms for other matrix decompositions.
```python
invert_lower_triangular_def = """
invert_lower_triangular = function (Matrix[double] LI)
return (Matrix[double] LO)
{
n = nrow (LI);
LO = matrix (0, rows = n, cols = n);
LO = LO + diag (1 / diag (LI));
k = 1;
while (k < n)
{
LPF = matrix (0, rows = n, cols = n);
parfor (p in 0:((n - 1) / (2 * k)), check = 0)
{
i = 2 * k * p;
j = i + k;
q = min (n, j + k);
if (j + 1 <= q) {
L1 = LO [i + 1:j, i + 1:j];
L2 = LI [j + 1:q, i + 1:j];
L3 = LO [j + 1:q, j + 1:q];
LPF [j + 1:q, i + 1:j] = -L3 %*% L2 %*% L1;
}
}
LO = LO + LPF;
k = 2 * k;
}
}
"""
```
```python
prog = invert_lower_triangular_def + """
n = 1000;
A = rand (rows = n, cols = n, min = -1, max = 1, pdf = "uniform", sparsity = 1.0);
Mask = cumsum (diag (matrix (1, rows = n, cols = 1)));
L = (A %*% t(A)) * Mask; # Generate L for stability of the inverse
X = invert_lower_triangular (L);
print ("Maximum difference between X %*% L and Identity = " + max (abs (X %*% L - diag (matrix (1, rows = n, cols = 1)))));
"""
ml.execute(dml(prog))
```
Maximum difference between X %*% L and Identity = 2.220446049250313E-16
SystemML Statistics:
Total execution time: 0.309 sec.
Number of executed Spark inst: 0.
MLResults
This is a naive implementation of inverting a lower triangular matrix.
```python
invert_lower_triangular_naive_def = """
invert_lower_triangular_naive = function (Matrix[double] LI)
return (Matrix[double] LO)
{
n = nrow (LI);
LO = diag (matrix (1, rows = n, cols = 1));
for (i in 1:n - 1)
{
LO [i,] = LO [i,] / LI [i, i];
LO [i + 1:n,] = LO [i + 1:n,] - LI [i + 1:n, i] %*% LO [i,];
}
LO [n,] = LO [n,] / LI [n, n];
}
"""
```
The naive implementation is significantly slower than the divide-and-conquer implementation.
```python
prog = invert_lower_triangular_naive_def + """
n = 1000;
A = rand (rows = n, cols = n, min = -1, max = 1, pdf = "uniform", sparsity = 1.0);
Mask = cumsum (diag (matrix (1, rows = n, cols = 1)));
L = (A %*% t(A)) * Mask; # Generate L for stability of the inverse
X = invert_lower_triangular_naive (L);
print ("Maximum difference between X %*% L and Identity = " + max (abs (X %*% L - diag (matrix (1, rows = n, cols = 1)))));
"""
ml.execute(dml(prog))
```
Maximum difference between X %*% L and Identity = 4.718447854656915E-16
SystemML Statistics:
Total execution time: 6.890 sec.
Number of executed Spark inst: 0.
MLResults
```python
```
|
6dd096c06ebd1d42c16bd34b5c77487b6b541038
| 44,101 |
ipynb
|
Jupyter Notebook
|
samples/jupyter-notebooks/DML Tips and Tricks (aka Fun With DML).ipynb
|
bertholdreinwald/systemml
|
86b3090badbe7481cb8a834218b6780678acd960
|
[
"Apache-2.0"
] | 15 |
2016-03-03T09:23:25.000Z
|
2017-02-21T22:09:57.000Z
|
samples/jupyter-notebooks/DML Tips and Tricks (aka Fun With DML).ipynb
|
bertholdreinwald/systemml
|
86b3090badbe7481cb8a834218b6780678acd960
|
[
"Apache-2.0"
] | null | null | null |
samples/jupyter-notebooks/DML Tips and Tricks (aka Fun With DML).ipynb
|
bertholdreinwald/systemml
|
86b3090badbe7481cb8a834218b6780678acd960
|
[
"Apache-2.0"
] | 10 |
2016-01-18T01:50:25.000Z
|
2020-03-03T20:25:44.000Z
| 27.701633 | 493 | 0.458266 | true | 10,008 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.805632 | 0.752013 | 0.605846 |
__label__eng_Latn
| 0.595545 | 0.245912 |
# Computing the Z-Normalized Euclidean Distance from Dot Products
In the [Matrix Profile I](https://www.cs.ucr.edu/~eamonn/STOMP_GPU_final_submission_camera_ready.pdf) and [Matrix Profile II](https://www.cs.ucr.edu/~eamonn/PID4481997_extend_Matrix%20Profile_I.pdf) papers, the Z-normalized Euclidean distance between a query subsequence, $Q_{i,m}=(q_i, q_{i+1}, q_{i+2}\ldots, q_{i+m-1})$, and the $i^{th}$ subsequence, $T_{i,m}=(t_i, t_{i+1}, t_{i+2}, \ldots, t_{i+m-1})$, with window size, $m$, in the time series, $T$, can be computed following:
\begin{align}
D(Q_{i,m}, T_{i,m}) ={}&
\sqrt{
\sum \limits _{0 \leq {j} \lt m}
\left(
\frac{t_{i+j}-M_{T_{i,m}}}{\Sigma_{T_{i,m}}}
-
\frac{q_{i+j}-\mu_{Q_{i,m}}}{\sigma_{Q_{i,m}}}
\right)^2
}
\\
={}&
\sqrt{
\sum \limits _{0 \leq {j} \lt m}
\left[
\left(
\frac{t_{i+j}-M_{T_{i,m}}}{\Sigma_{T_{i,m}}}
\right)
-
2
\left(
\frac{t_{i+j}-M_{T_{i,m}}}{\Sigma_{T_{i,m}}}
\right)
\left(
\frac{q_{i+j}-\mu_{Q_{i,m}}}{\sigma_{Q_{i,m}}}
\right)
+
\left(
\frac{q_{i+j}-\mu_{Q_{i,m}}}{\sigma_{Q_{i,m}}}
\right)^2
\right]
}
\\
={}&
\sqrt{
\sum \limits _{0 \leq {j} \lt m}
\left(
\frac{t_{i+j}-M_{T_{i,m}}}{\Sigma_{T_{i,m}}}
\right)^2
-
\sum \limits _{0 \leq {j} \lt m}
2
\left(
\frac{t_{i+j}-M_{T_{i,m}}}{\Sigma_{T_{i,m}}}
\right)
\left(
 \frac{q_{i+j}-\mu_{Q_{i,m}}}{\sigma_{Q_{i,m}}}
\right)
+
\sum \limits _{0 \leq {j} \lt m}
\left(
\frac{q_{i+j}-\mu_{Q_{i,m}}}{\sigma_{Q_{i,m}}}
\right)^2
}
\\
={}&
\sqrt{
m
-
\sum \limits _{0 \leq {j} \lt m}
2
\left(
\frac{t_{i+j}-M_{T_{i,m}}}{\Sigma_{T_{i,m}}}
\right)
\left(
\frac{q_{i+j}-\mu_{Q_{i,m}}}{\sigma_{Q_{i,m}}}
\right)
+
m
}
\\
={}&
\sqrt{
2m
-
2
\sum \limits _{0 \leq {j} \lt m}
\left(
\frac{t_{i+j}-M_{T_{i,m}}}{\Sigma_{T_{i,m}}}
\right)
\left(
 \frac{q_{i+j}-\mu_{Q_{i,m}}}{\sigma_{Q_{i,m}}}
\right)
}
\\
={}&
\sqrt{
2m
\left[
1
-
\frac{1}{m}
\sum \limits _{0 \leq {j} \lt m}
\left(
\frac{t_{i+j}-M_{T_{i,m}}}{\Sigma_{T_{i,m}}}
\right)
\left(
\frac{q_{i+j}-\mu_{Q_{i,m}}}{\sigma_{Q_{i,m}}}
\right)
\right]
}
\\
={}&
\sqrt{
2m
\left[
1
-
\sum \limits _{0 \leq {j} \lt m}
\frac{
\left(
t_{i+j}-M_{T_{i,m}}
\right)
\left(
q_{i+j}-\mu_{Q_{i,m}}
\right)
}{
m \sigma_{Q_{i,m}} \Sigma_{T_{i,m}}
}
\right]
}
\\
={}&
\sqrt{
2m
\left[
1
-
\sum \limits _{0 \leq {j} \lt m}
\frac{
 q_{i+j}t_{i+j}
 -t_{i+j}\mu_{Q_{i,m}}
 -M_{T_{i,m}}q_{i+j}
 +M_{T_{i,m}}\mu_{Q_{i,m}}
 }{
 m \sigma_{Q_{i,m}} \Sigma_{T_{i,m}}
}
\right]
}
\\
={}&
\sqrt{
2m
\left[
1
-
\frac{
\sum \limits _{0 \leq {j} \lt m}
q_{i+j}t_{i+j}
-
\sum \limits _{0 \leq {j} \lt m}
t_{i+j}\mu_{Q_{i,m}}
-
\sum \limits _{0 \leq {j} \lt m}
\left(
M_{T_{i,m}}q_{i+j}
-M_{T_{i,m}}{\mu_{Q_{i,m}}}
\right)
}{
m \sigma_{Q_{i,m}} \Sigma_{T_{i,m}}
}
\right]
}
\\
={}&
\sqrt{
2m
\left[
1
-
\frac{
Q_{i,m}\cdot{T_{i,m}}
-
\sum \limits _{0 \leq {j} \lt m}
t_{i+j}\mu_{Q_{i,m}}
-
M_{T_{i,m}}
\sum \limits _{0 \leq {j} \lt m}
\left(
q_{i+j}-{\mu_{Q_{i,m}}}
\right)
}{
m \sigma_{Q_{i,m}} \Sigma_{T_{i,m}}
}
\right]
}
\\
={}&
\sqrt{
2m
\left[
1
-
\frac{
Q_{i,m}\cdot{T_{i,m}}
-
\mu_{Q_{i,m}}
\sum \limits _{0 \leq {j} \lt m}
t_{i+j}
-
0
}{
m \sigma_{Q_{i,m}} \Sigma_{T_{i,m}}
}
\right]
}
\\
={}&
\sqrt{
2m
\left[
1
-
\frac{
Q_{i,m}\cdot{T_{i,m}}
-
\mu_{Q_{i,m}}
\sum \limits _{0 \leq {j} \lt m}
t_{i+j}
}{
m \sigma_{Q_{i,m}} \Sigma_{T_{i,m}}
}
\right]
}
\\
={}&
\sqrt{
2m
\left[
1
-
\frac{
Q_{i,m}\cdot{T_{i,m}}
-
 \mu_{Q_{i,m}}m
\sum \limits _{0 \leq {j} \lt m}
\frac{t_{i+j}}{m}
}{
m \sigma_{Q_{i,m}} \Sigma_{T_{i,m}}
}
\right]
}
\\
={}&
\sqrt{
2m
\left[
1
-
\frac{
Q_{i,m}\cdot{T_{i,m}}
-
m\mu_{Q_{i,m}}M_{T_{i,m}}
}{
m \sigma_{Q_{i,m}} \Sigma_{T_{i,m}}
}
\right]
}
\\
\end{align}
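As a quick sanity check of the final identity, here is a small NumPy sketch (an illustration only, not STUMPY's implementation) that computes the z-normalized distance from the dot product $Q_{i,m} \cdot T_{i,m}$ and compares it against the direct definition:
```python
import numpy as np

def znorm_dist_from_dot(Q, T_sub):
    # D = sqrt(2m * (1 - (Q.T - m*mu_Q*M_T) / (m*sigma_Q*Sigma_T)))
    m = len(Q)
    QT = np.dot(Q, T_sub)
    mu_Q, sigma_Q = Q.mean(), Q.std()
    M_T, Sigma_T = T_sub.mean(), T_sub.std()
    rho = (QT - m * mu_Q * M_T) / (m * sigma_Q * Sigma_T)
    return np.sqrt(2 * m * (1 - rho))

# cross-check against the direct definition on random data
rng = np.random.default_rng(0)
Q, T_sub = rng.random(8), rng.random(8)
direct = np.linalg.norm((T_sub - T_sub.mean()) / T_sub.std() - (Q - Q.mean()) / Q.std())
print(np.isclose(direct, znorm_dist_from_dot(Q, T_sub)))  # True
```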
# Computing the Z-Normalized Euclidean Distance from the Pearson Correlation
Based on the fact that the Pearson Correlation, $\rho$, can be written as (see Equation 4 in [this paper](https://www.cs.unm.edu/~mueen/Projects/JOCOR/joinICDM.pdf) or Equation 3 in [this paper](https://arxiv.org/pdf/1601.02213.pdf)):
\begin{align}
\rho(Q_{i,m}, T_{i,m}) ={}& \frac{E
\left[
\left(
Q_{i,m}-\mu_{Q_{i,m}}
\right)
\left(
T_{i,m}-M_{T_{i,m}}
\right)
\right]
}{\sigma_{Q_{i,m}}\Sigma_{T_{i,m}}}
\\
={}&
\frac{
\langle
\left(
Q_{i,m}-\mu_{Q_{i,m}}
\right)
,
\left(
T_{i,m}-M_{T_{i,m}}
\right)
\rangle
}{\sigma_{Q_{i,m}}\Sigma_{T_{i,m}}}
\\
={}&
\frac{1}{m}
\sum \limits _{0 \leq j \lt m}
\frac{
\left(
q_{i+j}-\mu_{Q_{i,m}}
\right)
\left(
t_{i+j}-M_{T_{i,m}}
\right)
}{\sigma_{Q_{i,m}}\Sigma_{T_{i,m}}}
\\
={}&
\frac{1}{m}
\sum \limits _{0 \leq j \lt m}
\left(
\frac{
q_{i+j}-\mu_{Q_{i,m}}
}{\sigma_{Q_{i,m}}}
\right)
\left(
\frac{
t_{i+j}-M_{T_{i,m}}
}{\Sigma_{T_{i,m}}}
\right)
\\
\end{align}
Similar to above, the Z-normalized Euclidean distance can be computed from $\rho$ following:
\begin{align}
D(Q_{i,m}, T_{i,m}) ={}&
\sqrt{
\sum \limits _{0 \leq {j} \lt m}
\left(
\frac{t_{i+j}-M_{T_{i,m}}}{\Sigma_{T_{i,m}}}
-
\frac{q_{i+j}-\mu_{Q_{i,m}}}{\sigma_{Q_{i,m}}}
\right)^2
}
\\
\vdots
\\
={}&
\sqrt{
2m
\left[
1
-
\frac{1}{m}
\sum \limits _{0 \leq {j} \lt m}
\left(
\frac{t_{i+j}-M_{T_{i,m}}}{\Sigma_{T_{i,m}}}
\right)
\left(
\frac{q_{i+j}-\mu_{Q_{i,m}}}{\sigma_{Q_{i,m}}}
\right)
\right]
}
\\
={}&
\sqrt{
2m
\left[
1
-
\rho(Q_{i,m},T_{i,m})
\right]
}
\\
\end{align}
Thus, by employing the most efficient way to compute $\rho(Q_{i,m},T_{i,m})$, then we'd also have an efficient way to directly compute $D(Q_{i,m},T_{i,m})$. Recall that:
\begin{align}
\rho(Q_{i,m},T_{i,m}) = \frac{cov(Q_{i,m},T_{i,m})}{\sigma_{Q_{i,m}}\Sigma_{T_{i,m}}}
\end{align}
Thus, it follows that finding the most efficient way to compute the covariance matrix, $cov(Q_{i,m},T_{i,m})$ would result in the most efficient way to compute the distance. Also, remember that we would like to traverse our distance matrix along each diagonal rather than along each row/column.
# Covariance
Recall that the covariance, $cov(Q_{i,m},T_{i,m})$, can be written as:
\begin{align}
cov(Q_{i,m},T_{i,m}) ={}& E
\left[
\left(
 Q_{i,m}-\mu_{Q_{i,m}}
\right)
\left(
T_{i,m}-M_{T_{i,m}}
\right)
\right]
\\
={}&
\langle
\left(
 Q_{i,m}-\mu_{Q_{i,m}}
\right)
,
\left(
T_{i,m}-M_{T_{i,m}}
\right)
\rangle
\\
={}&
\frac{1}{m}
\sum \limits _{0 \leq j \lt m}
\left(
q_{i+j}-\mu_{Q_{i,m}}
\right)
\left(
t_{i+j}-M_{T_{i,m}}
\right)
\\
\end{align}
Note that we've explicitly called out the fact that the means, $\mu_{Q_{i,m}}$ and $M_{T_{i,m}}$, are computed over the subsequences of length $m$. Additionally, following Welford's method, we can express the covariance in terms of the same subsequences with their last elements removed (i.e., using $\mu_{Q_{i,m-1}}$ and $M_{T_{i,m-1}}$). In what follows, $S(Q_{i,m-1}, T_{i,m-1})$ denotes the un-normalized sum of products of deviations, so that $cov(Q_{i,m-1}, T_{i,m-1}) = S(Q_{i,m-1}, T_{i,m-1}) / (m-1)$.
\begin{align}
cov(Q_{i,m},T_{i,m})
={}&
\frac{1}{m}
\sum \limits _{0 \leq j \lt m}
\left(
q_{i+j}-\mu_{Q_{i,m}}
\right)
\left(
t_{i+j}-M_{T_{i,m}}
\right)
\\
={}&
\frac{
S(Q_{i,m-1}, T_{i,m-1})
+
\left(
\frac{m-1}{m}
\right)
\left(
q_{i+m-1} - \mu_{Q_{i,m-1}}
\right)
\left(
t_{i+m-1} - M_{T_{i,m-1}}
\right)
}{m}
\\
={}&
\frac{
\frac{m-1}{m-1}S(Q_{i,m-1}, T_{i,m-1})
+
\left(
\frac{m-1}{m}
\right)
\left(
q_{i+m-1} - \mu_{Q_{i,m-1}}
\right)
\left(
t_{i+m-1} - M_{T_{i,m-1}}
\right)
}{m}
\\
={}&
\frac{
cov(Q_{i,m-1},T_{i,m-1}) (m-1)
+
\left(
\frac{m-1}{m}
\right)
\left(
q_{i+m-1} - \mu_{Q_{i,m-1}}
\right)
\left(
t_{i+m-1} - M_{T_{i,m-1}}
\right)
}{m}
\\
={}&
\frac{m-1}{m}
\left[
cov(Q_{i,m-1},T_{i,m-1})
+
\frac{
\left(
q_{i+m-1} - \mu_{Q_{i,m-1}}
\right)
\left(
t_{i+m-1} - M_{T_{i,m-1}}
\right)
}{m}
\right]
\\
\end{align}
Similarly, $cov(Q_{i-1,m},T_{i-1,m})$ can also be expressed with respect to $cov(Q_{i,m-1},T_{i,m-1})$:
\begin{align}
cov(Q_{i-1,m},T_{i-1,m})
={}&
\frac{1}{m}
\sum \limits _{0 \leq j \lt m}
\left(
q_{i+j-1}-\mu_{Q_{i-1,m}}
\right)
\left(
t_{i+j-1}-M_{T_{i-1,m}}
\right)
\\
={}&
\frac{
S(Q_{i,m-1},T_{i,m-1})
+
\frac{m-1}{m}
\left(
q_{i-1}
-
\mu_{Q_{i,m-1}}
\right)
\left(
t_{i-1}
-
M_{T_{i,m-1}}
\right)
}{m}
\\
={}&
\frac{
\frac{m-1}{m-1}S(Q_{i,m-1},T_{i,m-1})
+
\frac{m-1}{m}
\left(
q_{i-1}
-
\mu_{Q_{i,m-1}}
\right)
\left(
t_{i-1}
-
M_{T_{i,m-1}}
\right)
}{m}
\\
={}&
\frac{
cov(Q_{i,m-1},T_{i,m-1}) (m-1)
+
\left(
\frac{m-1}{m}
\right)
\left(
q_{i-1}
-
\mu_{Q_{i,m-1}}
\right)
\left(
t_{i-1}
-
M_{T_{i,m-1}}
\right)
}{m}
\\
={}&
\frac{m-1}{m}
\left[
cov(Q_{i,m-1},T_{i,m-1})
+
\frac{
\left(
q_{i-1}
-
\mu_{Q_{i,m-1}}
\right)
\left(
t_{i-1}
-
M_{T_{i,m-1}}
\right)
}{m}
\right]
\\
\end{align}
Now, we can rearrange this and represent $cov(Q_{i,m-1},T_{i,m-1})$ as a function of $cov(Q_{i-1,m},T_{i-1,m})$:
\begin{align}
cov(Q_{i-1,m},T_{i-1,m})
={}&
\frac{m-1}{m}
\left[
cov(Q_{i,m-1},T_{i,m-1})
+
\frac{
\left(
q_{i-1}
-
\mu_{Q_{i,m-1}}
\right)
\left(
t_{i-1}
-
M_{T_{i,m-1}}
\right)
}{m}
\right]
\\
\frac{m}{m-1}
cov(Q_{i-1,m},T_{i-1,m})
-
\frac{
\left(
q_{i-1}
-
\mu_{Q_{i,m-1}}
\right)
\left(
t_{i-1}
-
M_{T_{i,m-1}}
\right)
}{m}
={}&
cov(Q_{i,m-1},T_{i,m-1})
\\
\end{align}
And we can then substitute this representation of $cov(Q_{i,m-1},T_{i,m-1})$ into our $cov(Q_{i,m},T_{i,m})$ equation from above and get:
\begin{align}
cov(Q_{i,m},T_{i,m})
={}&
\frac{m-1}{m}
\left[
cov(Q_{i,m-1},T_{i,m-1})
+
\frac{
\left(
q_{i+m-1} - \mu_{Q_{i,m-1}}
\right)
\left(
t_{i+m-1} - M_{T_{i,m-1}}
\right)
}{m}
\right]
\\
={}&
\frac{m-1}{m}
\left[
\frac{m}{m-1}
cov(Q_{i-1,m},T_{i-1,m})
-
\frac{
\left(
q_{i-1}
-
\mu_{Q_{i,m-1}}
\right)
\left(
t_{i-1}
-
M_{T_{i,m-1}}
\right)
}{m}
+
\frac{
\left(
q_{i+m-1} - \mu_{Q_{i,m-1}}
\right)
\left(
t_{i+m-1} - M_{T_{i,m-1}}
\right)
}{m}
\right]
\\
={}&
cov(Q_{i-1,m},T_{i-1,m})
+
\frac{m-1}{m^2}
\left[
\left(
q_{i+m-1} - \mu_{Q_{i,m-1}}
\right)
\left(
t_{i+m-1} - M_{T_{i,m-1}}
\right)
-
\left(
q_{i-1}
-
\mu_{Q_{i,m-1}}
\right)
\left(
t_{i-1}
-
M_{T_{i,m-1}}
\right)
\right]
\\
\end{align}
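As a quick numerical sanity check of this rolling update (our own illustrative sketch, not part of the original derivation), the formula on the last line can be compared against a direct covariance computation on random data; all variable names below are ours.
```python
import numpy as np

def cov_direct(q, t):
    # Population covariance (1/m normalization, matching the derivation above)
    return np.mean((q - q.mean()) * (t - t.mean()))

rng = np.random.default_rng(0)
Q, T = rng.normal(size=64), rng.normal(size=64)
m, i = 8, 5

# cov(Q_{i-1,m}, T_{i-1,m}) computed directly
cov_prev = cov_direct(Q[i-1:i-1+m], T[i-1:i-1+m])

# Means of the length-(m-1) subsequences Q_{i,m-1} and T_{i,m-1}
mu_q = Q[i:i+m-1].mean()
M_t = T[i:i+m-1].mean()

# Rolling update from the last line of the derivation
cov_next = cov_prev + (m - 1) / m**2 * (
    (Q[i+m-1] - mu_q) * (T[i+m-1] - M_t)
    - (Q[i-1] - mu_q) * (T[i-1] - M_t)
)

print(np.isclose(cov_next, cov_direct(Q[i:i+m], T[i:i+m])))  # expect: True
```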
# Pearson Correlation
\begin{align}
\rho(Q_{i,m},T_{i,m})
&{}=
\frac{cov(Q_{i,m},T_{i,m})}{\sigma_{Q_{i,m}}\Sigma_{T_{i,m}}}
\\
&{}=
\frac{
cov(Q_{i-1,m},T_{i-1,m})
+
\frac{m-1}{m^2}
\left[
\left(
q_{i+m-1} - \mu_{Q_{i,m-1}}
\right)
\left(
t_{i+m-1} - M_{T_{i,m-1}}
\right)
-
\left(
q_{i-1}
-
\mu_{Q_{i,m-1}}
\right)
\left(
t_{i-1}
-
M_{T_{i,m-1}}
\right)
\right]
}{\sigma_{Q_{i,m}}\Sigma_{T_{i,m}}}
\\
\end{align}
# Z-Normalized Distance
```python
```
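A minimal, brute-force sketch of the resulting distance computation is shown below for one query against every window of a series (i.e., one row of the distance matrix). This is our own illustration: the function name and the per-window loop are for clarity only, whereas an efficient implementation would traverse the matrix along each diagonal and carry the rolling covariance update derived above instead of recomputing each window from scratch.
```python
import numpy as np

def znorm_distance_profile(Q, T, m):
    """Z-normalized Euclidean distance between Q[:m] and every length-m window of T."""
    q = np.asarray(Q[:m], dtype=float)
    T = np.asarray(T, dtype=float)
    mu_q, sigma_q = q.mean(), q.std()
    n_windows = len(T) - m + 1
    D = np.empty(n_windows)
    for i in range(n_windows):
        t = T[i:i + m]
        M_t, Sigma_t = t.mean(), t.std()
        rho = np.mean((q - mu_q) * (t - M_t)) / (sigma_q * Sigma_t)
        D[i] = np.sqrt(2 * m * (1 - rho))
    return D

rng = np.random.default_rng(42)
T = rng.normal(size=200)
Q = T[10:30] + 0.01 * rng.normal(size=20)  # a slightly noisy copy of one window
print(znorm_distance_profile(Q, T, m=20).argmin())  # expect: (near) 10
```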
|
00ac95e310771e337d0487ccc9aa7c3242cddf73
| 27,972 |
ipynb
|
Jupyter Notebook
|
docs/Matrix_Profile_Derivation.ipynb
|
profintegra/stumpy
|
66b3402d91820005b466e1da6fe353b61e6246c5
|
[
"BSD-3-Clause"
] | 2,296 |
2019-05-03T19:26:39.000Z
|
2022-03-31T20:42:08.000Z
|
docs/Matrix_Profile_Derivation.ipynb
|
vishalbelsare/stumpy
|
5f192a0a41fbb44f144cc4b676d525f19aaeaa98
|
[
"BSD-3-Clause"
] | 436 |
2019-05-06T14:14:01.000Z
|
2022-03-31T20:39:31.000Z
|
docs/Matrix_Profile_Derivation.ipynb
|
vishalbelsare/stumpy
|
5f192a0a41fbb44f144cc4b676d525f19aaeaa98
|
[
"BSD-3-Clause"
] | 318 |
2019-05-04T01:36:05.000Z
|
2022-03-31T20:31:11.000Z
| 32.225806 | 490 | 0.257937 | true | 5,546 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.97024 | 0.874077 | 0.848065 |
__label__eng_Latn
| 0.16563 | 0.808671 |
# Matrix Factorization via Singular Value Decomposition
Matrix factorization is the breaking down of one matrix into a product of multiple matrices. It's extremely well studied in mathematics, and it's highly useful. There are many different ways to factor matrices, but singular value decomposition is particularly useful for making recommendations.
So what is singular value decomposition (SVD)? At a high level, SVD is an algorithm that decomposes a matrix $R$ into the best lower-rank (i.e. smaller/simpler) approximation of the original matrix $R$. Mathematically, it decomposes $R$ into two unitary matrices and a diagonal matrix:
$$\begin{equation}
R = U\Sigma V^{T}
\end{equation}$$
where $R$ is the users' ratings matrix, $U$ is the user "features" matrix, $\Sigma$ is the diagonal matrix of singular values (essentially weights), and $V^{T}$ is the movie "features" matrix. $U$ and $V^{T}$ are orthogonal, and represent different things. $U$ represents how much users "like" each feature and $V^{T}$ represents how relevant each feature is to each movie.
To get the lower rank approximation, we take these matrices and keep only the top $k$ features, which we think of as the underlying tastes and preferences vectors.
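As a small, self-contained illustration of what "keeping the top $k$ features" means (this is our own toy example, separate from the MovieLens pipeline below), a random matrix can be decomposed and rebuilt from only its leading singular values:
```python
import numpy as np

rng = np.random.default_rng(0)
R_demo = rng.normal(size=(6, 5))

# Thin SVD; singular values come back sorted in decreasing order
U_demo, s_demo, Vt_demo = np.linalg.svd(R_demo, full_matrices=False)

k = 2  # keep only the top-k latent features
R_rank_k = U_demo[:, :k] @ np.diag(s_demo[:k]) @ Vt_demo[:k, :]

# Reconstruction error shrinks toward zero as k approaches min(R_demo.shape)
print(np.linalg.norm(R_demo - R_rank_k))
```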
```python
import pandas as pd
import numpy as np
```
```python
movies_df = pd.read_csv('movies.csv')
movies_df['movie_id'] = movies_df['movie_id'].apply(pd.to_numeric)
movies_df.head(3)
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>movie_id</th>
<th>title</th>
<th>genres</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>Toy Story (1995)</td>
<td>Adventure|Animation|Children|Comedy|Fantasy</td>
</tr>
<tr>
<th>1</th>
<td>2</td>
<td>Jumanji (1995)</td>
<td>Adventure|Children|Fantasy</td>
</tr>
<tr>
<th>2</th>
<td>3</td>
<td>Grumpier Old Men (1995)</td>
<td>Comedy|Romance</td>
</tr>
</tbody>
</table>
</div>
```python
ratings_df=pd.read_csv('ratings.csv')
ratings_df.head(3)
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>user_id</th>
<th>movie_id</th>
<th>rating</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>31</td>
<td>2.5</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>1029</td>
<td>3.0</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>1061</td>
<td>3.0</td>
</tr>
</tbody>
</table>
</div>
These look good, but I want the format of my ratings matrix to be one row per user and one column per movie. I'll `pivot` `ratings_df` to get that and call the new variable `R`.
```python
R_df = ratings_df.pivot(index = 'user_id', columns ='movie_id', values = 'rating').fillna(0)
R_df.head()
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th>movie_id</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
<th>10</th>
<th>...</th>
<th>161084</th>
<th>161155</th>
<th>161594</th>
<th>161830</th>
<th>161918</th>
<th>161944</th>
<th>162376</th>
<th>162542</th>
<th>162672</th>
<th>163949</th>
</tr>
<tr>
<th>user_id</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>2</th>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>4.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>3</th>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>4</th>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>4.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>5</th>
<td>0.0</td>
<td>0.0</td>
<td>4.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
</tbody>
</table>
<p>5 rows × 9066 columns</p>
</div>
The last thing I need to do is de-mean the data (normalize by each user's mean) and convert it from a dataframe to a numpy array.
```python
R = R_df.values  # .as_matrix() was removed in newer pandas; .values works across versions
user_ratings_mean = np.mean(R, axis = 1)
R_demeaned = R - user_ratings_mean.reshape(-1, 1)
```
# Singular Value Decomposition
Scipy and Numpy both have functions to do the singular value decomposition. I'm going to use the Scipy function `svds` because it lets me choose how many latent factors I want to use to approximate the original ratings matrix (instead of having to truncate it after).
```python
from scipy.sparse.linalg import svds
U, sigma, Vt = svds(R_demeaned, k = 50)
```
Done. The function returns exactly what I detailed earlier in this post, except that the $\Sigma$ returned is just the values instead of a diagonal matrix. This is useful, but since I'm going to leverage matrix multiplication to get predictions I'll convert it to the diagonal matrix form.
```python
sigma = np.diag(sigma)
```
# Making Predictions from the Decomposed Matrices
I now have everything I need to make movie ratings predictions for every user. I can do it all at once by following the math and matrix multiply $U$, $\Sigma$, and $V^{T}$ back to get the rank $k=50$ approximation of $R$.
I also need to add the user means back to get the actual star ratings prediction.
```python
all_user_predicted_ratings = np.dot(np.dot(U, sigma), Vt) + user_ratings_mean.reshape(-1, 1)
```
```python
preds_df = pd.DataFrame(all_user_predicted_ratings, columns = R_df.columns)
preds_df.head()
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th>movie_id</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
<th>10</th>
<th>...</th>
<th>161084</th>
<th>161155</th>
<th>161594</th>
<th>161830</th>
<th>161918</th>
<th>161944</th>
<th>162376</th>
<th>162542</th>
<th>162672</th>
<th>163949</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.054239</td>
<td>0.045130</td>
<td>-0.004835</td>
<td>-0.019817</td>
<td>-0.011284</td>
<td>0.041373</td>
<td>-0.007822</td>
<td>-0.017188</td>
<td>0.012246</td>
<td>0.037670</td>
<td>...</td>
<td>-0.005258</td>
<td>-0.005453</td>
<td>0.012369</td>
<td>-0.004991</td>
<td>-0.004639</td>
<td>-0.019055</td>
<td>0.021402</td>
<td>-0.006365</td>
<td>-0.006098</td>
<td>-0.004819</td>
</tr>
<tr>
<th>1</th>
<td>0.419835</td>
<td>1.406440</td>
<td>-0.188807</td>
<td>0.156658</td>
<td>0.268032</td>
<td>0.414698</td>
<td>0.052172</td>
<td>0.044728</td>
<td>-0.020198</td>
<td>2.220256</td>
<td>...</td>
<td>-0.005909</td>
<td>-0.003974</td>
<td>-0.012555</td>
<td>-0.003555</td>
<td>-0.002711</td>
<td>-0.071621</td>
<td>-0.016212</td>
<td>0.001047</td>
<td>-0.001468</td>
<td>-0.006577</td>
</tr>
<tr>
<th>2</th>
<td>1.345619</td>
<td>0.266505</td>
<td>-0.011962</td>
<td>0.012278</td>
<td>0.079508</td>
<td>0.090960</td>
<td>-0.122094</td>
<td>0.031327</td>
<td>-0.018023</td>
<td>0.141176</td>
<td>...</td>
<td>-0.002647</td>
<td>-0.002364</td>
<td>-0.010153</td>
<td>0.000277</td>
<td>-0.000116</td>
<td>-0.018063</td>
<td>-0.015761</td>
<td>0.010611</td>
<td>0.006792</td>
<td>-0.006357</td>
</tr>
<tr>
<th>3</th>
<td>1.133455</td>
<td>1.046982</td>
<td>0.141275</td>
<td>0.081841</td>
<td>-0.339675</td>
<td>-1.484659</td>
<td>-0.263096</td>
<td>-0.169750</td>
<td>-0.021862</td>
<td>1.611664</td>
<td>...</td>
<td>0.020805</td>
<td>0.000410</td>
<td>0.056040</td>
<td>-0.002817</td>
<td>-0.000767</td>
<td>0.159159</td>
<td>0.087519</td>
<td>-0.030854</td>
<td>-0.021279</td>
<td>0.048529</td>
</tr>
<tr>
<th>4</th>
<td>1.389578</td>
<td>1.466495</td>
<td>0.605557</td>
<td>-0.029647</td>
<td>0.729380</td>
<td>-0.118539</td>
<td>-0.026017</td>
<td>0.065577</td>
<td>-0.156655</td>
<td>0.307926</td>
<td>...</td>
<td>-0.007422</td>
<td>-0.011810</td>
<td>0.006644</td>
<td>-0.005159</td>
<td>-0.001249</td>
<td>-0.034658</td>
<td>0.016456</td>
<td>0.001710</td>
<td>-0.004166</td>
<td>-0.001864</td>
</tr>
</tbody>
</table>
<p>5 rows × 9066 columns</p>
</div>
```python
def recommend_movies(predictions_df, userID, movies_df, original_ratings_df, num_recommendations):
# Get and sort the user's predictions
user_row_number = userID - 1 # UserID starts at 1, not 0
    sorted_user_predictions = predictions_df.iloc[user_row_number].sort_values(ascending=False)  # use the passed-in predictions, not the global
# Get the user's data and merge in the movie information.
user_data = original_ratings_df[original_ratings_df.user_id == (userID)]
user_full = (user_data.merge(movies_df, how = 'left', left_on = 'movie_id', right_on = 'movie_id').
sort_values(['rating'], ascending=False)
)
print('User {0} has already rated {1} movies.'.format(userID, user_full.shape[0]))
print('Recommending highest {0} predicted ratings movies not already rated.'.format(num_recommendations))
# Recommend the highest predicted rating movies that the user hasn't seen yet.
recommendations = (movies_df[~movies_df['movie_id'].isin(user_full['movie_id'])].
merge(pd.DataFrame(sorted_user_predictions).reset_index(), how = 'left',
left_on = 'movie_id',
right_on = 'movie_id').
rename(columns = {user_row_number: 'Predictions'}).
sort_values('Predictions', ascending = False).
iloc[:num_recommendations, :-1]
)
return user_full, recommendations
```
```python
already_rated, predictions = recommend_movies(preds_df,11, movies_df, ratings_df, 10)
```
User 11 has already rated 38 movies.
Recommending highest 10 predicted ratings movies not already rated.
```python
predictions
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>movie_id</th>
<th>title</th>
<th>genres</th>
</tr>
</thead>
<tbody>
<tr>
<th>279</th>
<td>318</td>
<td>Shawshank Redemption, The (1994)</td>
<td>Crime|Drama</td>
</tr>
<tr>
<th>6894</th>
<td>58559</td>
<td>Dark Knight, The (2008)</td>
<td>Action|Crime|Drama|IMAX</td>
</tr>
<tr>
<th>2359</th>
<td>2959</td>
<td>Fight Club (1999)</td>
<td>Action|Crime|Drama|Thriller</td>
</tr>
<tr>
<th>530</th>
<td>608</td>
<td>Fargo (1996)</td>
<td>Comedy|Crime|Drama|Thriller</td>
</tr>
<tr>
<th>1356</th>
<td>1732</td>
<td>Big Lebowski, The (1998)</td>
<td>Comedy|Crime</td>
</tr>
<tr>
<th>959</th>
<td>1213</td>
<td>Goodfellas (1990)</td>
<td>Crime|Drama</td>
</tr>
<tr>
<th>7264</th>
<td>70286</td>
<td>District 9 (2009)</td>
<td>Mystery|Sci-Fi|Thriller</td>
</tr>
<tr>
<th>2025</th>
<td>2542</td>
<td>Lock, Stock & Two Smoking Barrels (1998)</td>
<td>Comedy|Crime|Thriller</td>
</tr>
<tr>
<th>520</th>
<td>593</td>
<td>Silence of the Lambs, The (1991)</td>
<td>Crime|Horror|Thriller</td>
</tr>
<tr>
<th>871</th>
<td>1089</td>
<td>Reservoir Dogs (1992)</td>
<td>Crime|Mystery|Thriller</td>
</tr>
</tbody>
</table>
</div>
```python
already_rated.head(10)
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>user_id</th>
<th>movie_id</th>
<th>rating</th>
<th>title</th>
<th>genres</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>11</td>
<td>50</td>
<td>5.0</td>
<td>Usual Suspects, The (1995)</td>
<td>Crime|Mystery|Thriller</td>
</tr>
<tr>
<th>7</th>
<td>11</td>
<td>923</td>
<td>5.0</td>
<td>Citizen Kane (1941)</td>
<td>Drama|Mystery</td>
</tr>
<tr>
<th>36</th>
<td>11</td>
<td>104841</td>
<td>5.0</td>
<td>Gravity (2013)</td>
<td>Action|Sci-Fi|IMAX</td>
</tr>
<tr>
<th>18</th>
<td>11</td>
<td>26614</td>
<td>5.0</td>
<td>Bourne Identity, The (1988)</td>
<td>Action|Adventure|Drama|Mystery|Thriller</td>
</tr>
<tr>
<th>17</th>
<td>11</td>
<td>6598</td>
<td>5.0</td>
<td>Step Into Liquid (2002)</td>
<td>Documentary</td>
</tr>
<tr>
<th>10</th>
<td>11</td>
<td>1408</td>
<td>5.0</td>
<td>Last of the Mohicans, The (1992)</td>
<td>Action|Romance|War|Western</td>
</tr>
<tr>
<th>9</th>
<td>11</td>
<td>1201</td>
<td>5.0</td>
<td>Good, the Bad and the Ugly, The (Buono, il bru...</td>
<td>Action|Adventure|Western</td>
</tr>
<tr>
<th>19</th>
<td>11</td>
<td>48516</td>
<td>5.0</td>
<td>Departed, The (2006)</td>
<td>Crime|Drama|Thriller</td>
</tr>
<tr>
<th>37</th>
<td>11</td>
<td>106487</td>
<td>5.0</td>
<td>The Hunger Games: Catching Fire (2013)</td>
<td>Action|Adventure|Sci-Fi|IMAX</td>
</tr>
<tr>
<th>4</th>
<td>11</td>
<td>296</td>
<td>5.0</td>
<td>Pulp Fiction (1994)</td>
<td>Comedy|Crime|Drama|Thriller</td>
</tr>
</tbody>
</table>
</div>
# Conclusion
We've seen that we can make good recommendations with raw data based collaborative filtering methods (neighborhood models) and latent features from low-rank matrix factorization methods (factorization models).
Low-dimensional matrix recommenders try to capture the underlying features driving the raw data (which we understand as tastes and preferences). From a theoretical perspective, if we want to make recommendations based on people's tastes, this seems like the better approach. This technique also scales **significantly** better to larger datasets.
However, we still likely lose some meaningful signals by using a lower-rank matrix. And though these factorization-based techniques work extremely well, there's research being done on new methods. These efforts have resulted in various types of probabilistic matrix factorization (which works and scales even better) and many other approaches.
|
0e10669f34e5476b4c81bae13ce1309f272e9456
| 38,205 |
ipynb
|
Jupyter Notebook
|
recommender-system/ml-latest-small/model.ipynb
|
ankitpandey2708/ml
|
2d32e91cb5a73dd47306cc32a8eedd379aaee032
|
[
"MIT"
] | null | null | null |
recommender-system/ml-latest-small/model.ipynb
|
ankitpandey2708/ml
|
2d32e91cb5a73dd47306cc32a8eedd379aaee032
|
[
"MIT"
] | null | null | null |
recommender-system/ml-latest-small/model.ipynb
|
ankitpandey2708/ml
|
2d32e91cb5a73dd47306cc32a8eedd379aaee032
|
[
"MIT"
] | null | null | null | 34.020481 | 384 | 0.383013 | true | 6,528 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.944995 | 0.822189 | 0.776964 |
__label__eng_Latn
| 0.500382 | 0.643481 |
```python
# 그래프, 수학 기능 추가
# Add graph and math features
import pylab as py
```
# 수치 적분<br>Numerical Integration
여기서는 정적분의 근사값을 수열의 합으로 계산할 것이다.<br>
Here, we would approximate the definite integral by a sum of a sequence.
면적이 2인 원의 반지름을 구해 보자.<br>Let's find the radius of a circle with area of 2.
$$
\begin{align}
\pi r^2 &= 2 \\
r^2 &= \frac{2}{\pi} \\
r &= \sqrt{\frac{2}{\pi}}
\end{align}
$$
```python
r = py.sqrt(2.0 / py.pi)
```
```python
r
```
이러한 원의 중심이 원점에 위치하고 있다고 생각해 보자.<br>Let's assume that a circle of such radius has its center at the origin.
$$
x^2 + y^2 = r^2 \\
y^2 = r^2 - x^2 \\
y = \pm \sqrt{r^2 - x^2} \\
y_{plus} = + \sqrt{r^2 - x^2} \\
y_{minus} = - \sqrt{r^2 - x^2}
$$
```python
x_array = py.linspace(-r, r, 64)
y_plus = py.sqrt(r**2 - x_array ** 2)
py.plot(x_array, y_plus, '.-')
x_array2 = r * py.cos(py.deg2rad(py.linspace(180, 0, 16)))
y_minus = -py.sqrt(r**2 - x_array2 ** 2)
py.plot(x_array2, y_minus, '.-')
py.axis('equal')
py.grid(True)
```
$+$ 부분만 생각하기로 하자.<br>Let's just think about the $+$ side only.
$$
y = \sqrt{r^2 - x^2}
$$
```python
x_array = py.sort(r * py.cos(py.deg2rad(range(0, 180))))
y_plus = py.sqrt(r**2 - x_array ** 2)
py.fill_between(x_array, y_plus)
py.axis('equal')
py.grid(True)
```
이 반원의 면적을 수치적으로 구해보기로 하자. (반원의 정확한 값은 얼마이겠는가?)<br>
Let's try to numerically find the area of this half-circle. (What would be the exact value?)
## 0차 적분<br>0th Order Integration
우선 $x$를 일정 간격으로 나누어 보자.<br>Let's divide the $x$ coordinates in a constant interval.
```python
d = r * 2.0
n = 10
x_array_bar = py.linspace(-r, r, n+1)
y_array_bar = py.sqrt(abs(r**2 - x_array_bar ** 2))
delta_x = x_array_bar[1]-x_array_bar[0]
py.fill_between(x_array, y_plus)
# TODO : 막대그래프 직사각형 안을 칠하지 않으려면 어떻게 하면 좋겠는가?
# TODO : How can we remove the color inside the rectangle?
py.bar(x_array_bar, y_array_bar, width=delta_x, alpha=0.5, align='edge', edgecolor='k')
py.axis('equal')
py.grid(True)
```
아래 셀은 `x_array` 간격을 확인한다.<br>
Following cell verifies increment of `x_array`.
```python
assert 1e-3 > abs(delta_x - (d/n)), (delta_x, d/n)
```
직사각형의 모양과 반원의 모양이 정확히 일치하지는 않는다는 점을 기억하자.<br>
Let's remember that the areas of the rectangles and the half circle are not exactly the same.
각 직사각형의 면적을 구해서 더해 보자<br>Let's find the area of each rectangle and sum up.
$$
Area = \sum_{k=0}^{n-1} F_k
$$
$$
F_k = f(x_k)\cdot \Delta x
$$
$$
Area = \sum_{k=0}^{n-1} f(x_k)\cdot \Delta x
$$
```python
summation = 0
for k in range(n):
F_k = y_array_bar[k] * delta_x
print('k = %2d, F_k = %g' % (k, F_k))
summation += F_k
print('summation =', summation)
```
예상한 값 1에 더 비슷한 값을 얻기 위해 더 잘게 나누어 보자<br>To obtain the result closer to the expected value of 1, let's divide with a narrower interval.
```python
n = 100
x_array_bar = py.linspace(-r, r, n+1)
y_array_bar = py.sqrt(r**2 - x_array_bar ** 2)
delta_x = x_array_bar[1]-x_array_bar[0]
py.fill_between(x_array, y_plus)
py.bar(x_array_bar, y_array_bar, width=delta_x, alpha=0.5, align='edge', edgecolor='k')
py.axis('equal')
py.grid(True)
```
```python
summation = 0
for k in range(n):
summation += delta_x * y_array_bar[k]
print('summation =', summation)
```
더 잘게 나눈 결과에 대한 의견은 어떠한가?<br>
What is your opinion about using the narrower partitions?
### 함수로 구현<br>Implement in a Function
다양한 경우에 더 편리하게 적용하기 위해 함수 형태로 만들어 보자.<br>To make it more convenient to apply to various cases, let's implement in a function
```python
def half_circle(x):
return py.sqrt(r**2 - x ** 2)
```
```python
def get_delta_x(xi, xe, n):
return (xe - xi) / n
```
```python
def num_int_0(f, xi, xe, n, b_verbose=False):
x_array = py.linspace(xi, xe, n+1)
delta_x = x_array[1] - x_array[0]
assert 1e-3 > (abs(delta_x - get_delta_x(xi, xe, n)) / get_delta_x(xi, xe, n)), f"delta_x = {delta_x}"
integration_result = 0.0
for k in range(n):
x_k = x_array[k]
F_k = f(x_k) * delta_x
if b_verbose:
print('k = %2d, F_k = %g' % (k, F_k))
integration_result += F_k
return integration_result
```
```python
n = 100
result = num_int_0(half_circle, -r, r, n)
print('result =', result)
```
```python
%timeit -n 100 result = num_int_0(half_circle, -r, r, n)
```
### $cos \theta$의 반 주기<br>Half period of $cos \theta$
```python
n = 10
result_cos = num_int_0(py.cos, 0, py.pi, n, b_verbose=True)
print('result =', result_cos)
```
```python
n = 100
result_cos = num_int_0(py.cos, 0, py.pi, n)
print('result =', result_cos)
```
### 1/4 원<br>A quarter circle
```python
n = 10
result_quarter = num_int_0(half_circle, -r, 0, n, b_verbose=True)
print('result =', result_quarter)
```
```python
n = 10
result_quarter = num_int_0(half_circle, 0, r, n, b_verbose=True)
print('result =', result_quarter)
```
```python
n = 100
result_quarter = num_int_0(half_circle, -r, 0, n)
print('result =', result_quarter)
```
## 연습문제<br>Exercises
도전 과제 1: $e^ x$ 를 $0 \le x \le 1$ 구간에서 0차 적분으로 적분하시오. 이론적 엄밀해와 비교하시오.<br>Try this 1: Integrate $e^ x$ over $0 \le x \le 1$ interval using the 0th order integration. Compare with the exact solution.<br>
```python
```
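One possible way to check your answer for this exercise (a solution sketch of ours, assuming the `num_int_0` function defined above has already been run):
```python
import numpy as np

# 0th-order (left Riemann) integral of e^x over [0, 1], compared with the exact value e - 1
approx = num_int_0(np.exp, 0.0, 1.0, 1000)
exact = np.e - 1.0
print(approx, exact, abs(approx - exact))
```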
도전 과제 2: $sin \left( cos x \right)$ 를 $0 \le x \le 1$ 구간에서 적분하시오.<br>
Try this 2: Integrate $sin \left( cos x \right)$ over $0 \le x \le 1$ interval. <br>
(ref : [Examples for
Numerical Integration](https://www.wolframalpha.com/examples/mathematics/applied-mathematics/numerical-analysis/numerical-integration/), Wolfram Alpha, Accessed Aug 28 2018)
```python
```
도전 과제 3: 이미 다루었던 이분법 함수와 0차 적분을 이용하여 면적이 2인 원의 반지름을 구하는 프로그램을 작성하고 실행해 보시오.<br>
Try this 3: Using the bisection method function and 0th roder intergration, write a program finding radius of a circle with area of two, and run it.
```python
```
## 리만 합<br>Riemann Sum
이렇게 어떤 함수의 정적분을 유한한 합으로 바꾸어 계산하는 것을 리만 합이라고 부른다.<br>
Riemann Sum is a type of approximation of an integral by a finite sum. [[wikipedia](https://en.wikipedia.org/wiki/Riemann_sum)]
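For comparison (our own sketch, not part of the original notebook), the same half-circle area can be evaluated with a left-endpoint and a midpoint Riemann sum; the midpoint rule typically converges faster for smooth integrands.
```python
import numpy as np

def riemann_sum(f, xi, xe, n, rule="left"):
    """Left-endpoint or midpoint Riemann sum of f over [xi, xe] with n sub-intervals."""
    dx = (xe - xi) / n
    x_left = xi + dx * np.arange(n)
    x = x_left if rule == "left" else x_left + 0.5 * dx
    return float(np.sum(f(x)) * dx)

r_demo = np.sqrt(2.0 / np.pi)  # radius of the circle with area 2, as above
half_circle_demo = lambda x: np.sqrt(np.maximum(r_demo**2 - x**2, 0.0))

for n in (10, 100, 1000):
    print(n,
          riemann_sum(half_circle_demo, -r_demo, r_demo, n, "left"),
          riemann_sum(half_circle_demo, -r_demo, r_demo, n, "mid"))
# Both columns should approach 1.0, the exact half-circle area.
```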
## `pylab.linspace()`
일정 간격의 배열을 생성한다.<br>
This would generate an array of a constant interval.
```python
py.linspace(0, 10, 5)
```
아래 셀의 결과를 비교해 보시오.<br>
Compare the results of the following cells.
```python
py.arange(0, 10+1, 1)
```
```python
py.linspace(0, 10, 10+1)
```
## 함수형 프로그래밍<br>Functional programming
간격이 일정하다면 면적의 근사값을 다음과 같이 바꾸어 쓸 수 있다.<br>
If the interval $\Delta x$ is constant, we may rewrite the approximation of the area as follows.
$$
Area = \sum_{k=0}^{n-1} f(x_k)\cdot \Delta x= \Delta x \sum_{k=0}^{n-1} f(x_k)
$$
할당문 없이 `sum()` 과 `map()` 함수로 구현해 보자.<br>
Instead of assignments, let's implement using `sum()` and `map()` functions.
```python
def num_int_0_functional(f, xi, xe, n):
return (
get_delta_x(xi, xe, n) * sum(
map(
f,
py.linspace(xi, xe, n+1)[:-1],
)
)
)
```
```python
n = 100
result_func = num_int_0_functional(half_circle, -r, r, n)
print('result_func =', result_func)
```
```python
assert 1e-3 > abs(result - result_func), f"result = {result}, result_func = {result_func}"
```
```python
%timeit -n 100 result_func = num_int_0_functional(half_circle, -r, r, n)
```
이와 같이 할당문과 부가적 효과 없이 함수로만 구현하는 형태를 *함수형 프로그래밍* 이라고 한다.<br>
*Functional programming* implements computation using only functions, without assignments or side effects. [[Sahota](https://dev.to/navi/why-functional-programming-matters-2o95)]
## NumPy 벡터화<br>Vectorization of NumPy
```python
import pylab as py
```
```python
def num_int_0_vector(f, xi, xe, n):
return f(
py.linspace(xi, xe-get_delta_x(xi, xe, n), n)
).sum() * get_delta_x(xi, xe, n)
```
```python
n = 100
result_vect = num_int_0_vector(half_circle, -r, r, n)
print('result_vect =', result_vect)
```
```python
assert 1e-3 > abs(result - result_vect), f"result = {result}, result_vect = {result_vect}"
```
```python
%timeit -n 100 result_vect = num_int_0_vector(half_circle, -r, r, n)
```
## 시험<br>Test
아래는 함수가 맞게 작동하는지 확인함<br>
Following cells verify whether the functions work correctly.
```python
import pylab as py
r = py.sqrt(1.0 / py.pi)
n = 10
def half_circle(x):
return py.sqrt(r**2 - x ** 2)
assert 0.25 > num_int_0(half_circle, -r, 0, n), num_int_0(half_circle, -r, 0, n)
assert 0.25 < num_int_0(half_circle, 0, r, n), num_int_0(half_circle, 0, r, n)
assert 0.25 > num_int_0_functional(half_circle, -r, 0, n), num_int_0_functional(half_circle, -r, 0, n)
assert 0.25 < num_int_0_functional(half_circle, 0, r, n), num_int_0_functional(half_circle, 0, r, n)
assert 0.25 > num_int_0_vector(half_circle, -r, 0, n), num_int_0_vector(half_circle, -r, 0, n)
assert 0.25 < num_int_0_vector(half_circle, 0, r, n), num_int_0_vector(half_circle, 0, r, n)
```
```python
assert 0.1 > (abs(num_int_0(half_circle, -r, 0, n) - 0.25) * 4)
assert 0.1 > (abs(num_int_0(half_circle, 0, r, n) - 0.25) * 4)
assert 0.1 > (abs(num_int_0_functional(half_circle, -r, 0, n) - 0.25) * 4)
assert 0.1 > (abs(num_int_0_functional(half_circle, 0, r, n) - 0.25) * 4)
assert 0.1 > (abs(num_int_0_vector(half_circle, -r, 0, n) - 0.25) * 4)
assert 0.1 > (abs(num_int_0_vector(half_circle, 0, r, n) - 0.25) * 4)
```
## Final Bell<br>마지막 종
```python
# stackoverfow.com/a/24634221
import os
os.system("printf '\a'");
```
```python
```
|
ca15699712679e04ac3bd150dce2d0cea5ea8528
| 19,665 |
ipynb
|
Jupyter Notebook
|
00_zeroth_order.ipynb
|
kangwonlee/19ECA-30-num-int
|
20f45cb6cde958ebb901341a0253abfb95efe92d
|
[
"BSD-3-Clause"
] | null | null | null |
00_zeroth_order.ipynb
|
kangwonlee/19ECA-30-num-int
|
20f45cb6cde958ebb901341a0253abfb95efe92d
|
[
"BSD-3-Clause"
] | null | null | null |
00_zeroth_order.ipynb
|
kangwonlee/19ECA-30-num-int
|
20f45cb6cde958ebb901341a0253abfb95efe92d
|
[
"BSD-3-Clause"
] | null | null | null | 22.021277 | 217 | 0.493567 | true | 3,645 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.833325 | 0.824462 | 0.687044 |
__label__kor_Hang
| 0.77012 | 0.434566 |
```python
# Boilerplate imports
from numpy import array as ary; from numpy import log as ln
from numpy import cos, sin, pi, sqrt, exp, arccos, arcsin
tau = 2*pi
import numpy as np;
from matplotlib import pyplot as plt
# Linear algebra functions
from numpy.linalg import inv, pinv, det, eig, eigh, eigvals
from matplotlib.patches import Ellipse # for plotting ellipse
from collections import namedtuple
```
```python
# program to generate a covariance matrix where the variance values are fixed at [2,2].
def generate_cov(off_diag):
cov = ary([[2.0, off_diag], [off_diag, 2.0]])
return cov
Dots = namedtuple('Dots', ['points', 'area'])
def get_encircled_dots(covariance_matrix, bounds=[-10, 10], resolution=300):
pt_list = np.linspace(*bounds, resolution)
unit_square_size = ((bounds[1]-bounds[0])/resolution)**2
points = ary(np.meshgrid(pt_list, pt_list)).T.flatten().reshape([-1,2])
chi2_level = ((points @ inv(covariance_matrix)) * points).sum(axis=1)
mask = chi2_level <= 1 # choose only points within the error ellipse
area = sum(mask)*unit_square_size
return Dots(points[mask], area)
PLOT_CIRCLE = True
# if PLOT_CIRCLE: plot the error ellipse using 11 different covariance values;
# else: plot the variation of the error ellipse area wrt. the covariance value.
if not PLOT_CIRCLE:
determinant_cov, determinant_inv_cov, size = [], [], []
```
Setup:
- We have a point cloud with a center at (0,0).
- Roughly 39% of the points of a 2-D Gaussian (those with $\chi^2 \le 1$) lie within the 1-sigma error ellipse. (We won't be plotting the points outside it.)
- The top left element of the covariance matrix describes the width of this ellipse, and the bottom right describes the height of the ellipse.
- Therefore, varying the covariance value (i.e. the symmetric off-diagonal terms) should only make the ellipse into a thinner ellipse that is leaning left/right.
```python
# plot the error ellipse
cov_range = np.linspace(-1.98, 1.98, 11)
for i in cov_range:
cov = generate_cov(i)
(minor, major) , eig_mat = eig(inv(cov) * det(cov))
mat = inv(eig_mat)
orientation = np.mean([arcsin(mat[0,1]), -arcsin(mat[1,0])])*np.sign(mat[0,0])
fig, ax = plt.subplots()
ellipse = Ellipse([0,0], # centered at the origin
2*sqrt(major),
2*sqrt(minor),
np.rad2deg(orientation)
)
# DUUUUDE I got the width=2*sqrt(major)/sqrt(det(inv(cov))) equation by trial and error LMAO
ax.add_patch(ellipse)
ax.scatter(*(get_encircled_dots(cov).points.T), marker='+', alpha=0.4, color='C1', zorder=10) # scatter plot approach
plt.show()
```
Using the code block above we can verify that we have plotted the error ellipse correctly: all points with $\chi^2 \le 1$ are plotted, and they coincide exactly with the error ellipse.
The parameters of the ellipse are closely related to the following matrix:
\begin{equation}
M = S^{-1} \cdot \det(S)
\end{equation}
where S is the covariance matrix
The major radius equals the square root of the larger eigenvalue of $M$, and the minor radius equals the square root of the smaller eigenvalue of $M$.
To draw the ellipse on our graph, we first draw an ellipse with those specified radii (major axis in the horizontal direction), then apply the rotation matrix as described by the eigenvector matrix of $M$.
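As an aside, a more direct way to obtain the same radii and orientation (our suggestion, not what the notebook code above uses) is to eigendecompose the covariance matrix itself and read the angle of the major-axis eigenvector with `arctan2`:
```python
import numpy as np

def ellipse_params(cov):
    """Semi-axes and orientation (radians) of the 1-sigma ellipse x^T cov^-1 x = 1."""
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues, orthonormal vectors
    minor, major = np.sqrt(eigvals[0]), np.sqrt(eigvals[1])
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # direction of the major-axis eigenvector
    return major, minor, angle

print(ellipse_params(np.array([[2.0, 1.5], [1.5, 2.0]])))
```
With matplotlib's `Ellipse`, these map to `width=2*major`, `height=2*minor`, and `angle=np.degrees(angle)`.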
We can also show that the covariance ellipse area is equal to $\pi\sqrt{\det(S)}$, where $\det(S)$ is the determinant of the covariance matrix.
The ellipse area was empirically calculated by counting the number of points that the ellipse covers when spread over an evenly spaced grid.
```python
determinant_cov, size = [], []
for i in cov_range:  # reuse the same covariance values as the plot's x-axis so the array lengths match
cov = generate_cov(i)
determinant_cov.append(det(cov))
size.append(get_encircled_dots(cov).area)
plt.plot(cov_range, sqrt(determinant_cov)*pi, label='cov determinant')
plt.xlabel('covariance (off-diagonal elements) value')
plt.ylabel('area/area prediction/other quantities')
plt.plot(cov_range, size, label='ellipse size')
plt.legend()
plt.show()
print("Notice that these two lines overlap very well.")
print("In fact, they would be exactly the same if the number of samples we take approaches infinity.")
```
The important conclusion from this project is that, if we fix the variance values, increasing the absolute value of the covariance will make the error ellipse thinner. And the specific algorithm required to plot the error ellipse is also found.
|
17005e240b8b2511a21eaf6b5e067c9ba8f2f153
| 6,365 |
ipynb
|
Jupyter Notebook
|
2d.ipynb
|
OceanNuclear/Covariance
|
9b3c8ed4cf01fe36824de26b414ce7386d1f87f5
|
[
"MIT"
] | null | null | null |
2d.ipynb
|
OceanNuclear/Covariance
|
9b3c8ed4cf01fe36824de26b414ce7386d1f87f5
|
[
"MIT"
] | null | null | null |
2d.ipynb
|
OceanNuclear/Covariance
|
9b3c8ed4cf01fe36824de26b414ce7386d1f87f5
|
[
"MIT"
] | null | null | null | 37.886905 | 250 | 0.604556 | true | 1,129 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.912436 | 0.859664 | 0.784388 |
__label__eng_Latn
| 0.984567 | 0.660729 |
# Load Drivers
The code below imports the components this notebook needs. Whether you're attempting to play with the network or train the network, you'll need to run this cell.
```python
#necessary imports
import pandas
import numpy as np
import os
import tensorflow.keras as keras
from keras.models import Model
from keras.layers import Dense, Input
from IPython.display import display
import sympy as sp
sp.init_printing(use_latex = True)
import math
import matplotlib.pyplot as plt
%matplotlib inline
%run exp_Drivers.ipynb
EMPTY = 1;
COLOR = 0;
BLACK = -1;
WHITE = 1;
WIDTH = 9;
```
# Data Categorization and Assignment
If you're training the network, you need to run this code, as it converts the sgf games in the database into positions the network can read.
```python
Boards = []
Moves = []
def Main():
path = "./go9"
counter = 0
for entry in os.scandir(path): #I changed my mind i love python
Go = True
Board = createEmptyBoard() # 0 - 80 = [color, empty], 81 = [turn, turn]
with open(entry) as f:
if Go:
for line in f:
if line[0] == ';': # this is the line with all the moves.
Go = False
copy = ""
for c in line:
if c != "[" and c != "]" and c != ")":
copy += c
arr = copy[1:].split(';')
for a in arr:
int_move = Decode_Move(a[1:])
move = index_to_coordinate(int_move)
color = 1
if(a[0] == 'B'):
color = -1
Boards.append(Board)
Moves.append(int_move)
if int_move < 81:
Board = Move(Board, move[1], move[0])[1]
Main()
Boards = np.array(Boards)
print(Moves[0])
Moves = np.array(Moves)
```
40
```python
print(Boards.shape)
```
(414124, 9, 9, 2)
```python
# Example Position:
printBoard(Boards[22], -1)
printBoard(Boards[23], 1)
```
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . O . @ . @ . .
4 . . . O @ @ O @ .
5 . . @ O @ O O @ .
6 . . O . @ O . . .
7 . . . . @ O . . .
8 . . . @ O O . . .
9 . . . . . . . . .
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . O . @ . @ . .
4 . . . O @ @ O @ .
5 . . @ O @ O O @ .
6 . . O @ @ O . . .
7 . . . . @ O . . .
8 . . . @ O O . . .
9 . . . . . . . . .
Create the training and testing data.
```python
X = Boards
Y = keras.utils.to_categorical(Moves)
training_samples = int(0.9 * X.shape[0])
X_train, X_test = X[:training_samples], X[training_samples:] # Inputs
Y_train, Y_test = Y[:training_samples], Y[training_samples:] # Outputs
print(X.shape)
print(Y.shape)
print(Moves)
```
(414124, 9, 9, 2)
(414124, 82)
[40 49 41 ... 75 18 36]
# Building the Model
Here is where the model, a convolutional neural network, is created. The model must be created whether you want to train it, or play against it.
```python
model = keras.models.Sequential()
model.add(keras.layers.Dense(2, activation = 'relu', input_shape = (9, 9, 2)))
model.add(keras.layers.Conv2D(81, (3, 3), activation = 'relu'))
model.add(keras.layers.Conv2D(81, (3, 3), activation = 'relu'))
model.add(keras.layers.Conv2D(81, (3, 3), activation = 'relu'))
model.add(keras.layers.Conv2D(81, (3, 3), activation = 'relu'))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(82, activation = 'relu'))
model.add(keras.layers.Dropout(.25))
model.add(keras.layers.Dense(82, activation = 'softmax'))
model.compile(loss=keras.losses.CategoricalCrossentropy(), optimizer = keras.optimizers.Adam(), metrics = [keras.metrics.CategoricalAccuracy()])
print(X_train.shape)
model.summary()
```
(372711, 9, 9, 2)
Model: "sequential_24"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_46 (Dense) (None, 9, 9, 2) 6
_________________________________________________________________
conv2d_82 (Conv2D) (None, 7, 7, 81) 1539
_________________________________________________________________
conv2d_83 (Conv2D) (None, 5, 5, 81) 59130
_________________________________________________________________
conv2d_84 (Conv2D) (None, 3, 3, 81) 59130
_________________________________________________________________
conv2d_85 (Conv2D) (None, 1, 1, 81) 59130
_________________________________________________________________
flatten_13 (Flatten) (None, 81) 0
_________________________________________________________________
dense_47 (Dense) (None, 82) 6724
_________________________________________________________________
dropout_1 (Dropout) (None, 82) 0
_________________________________________________________________
dense_48 (Dense) (None, 82) 6806
=================================================================
Total params: 192,465
Trainable params: 192,465
Non-trainable params: 0
_________________________________________________________________
If you already have a .h5 weights file, then you can run this cell to load those weights.
```python
# Load Weights
model.load_weights('mini_weights.h5')
```
# Training
Here is where the training is conducted. If you simply want to play against the neural net, skip to the last cell.
```python
#Train the model
history = model.fit(X_train, Y_train, batch_size = 32, epochs = 6, workers = 10, verbose = 1, validation_data = (X_test, Y_test))
```
Epoch 1/6
182/182 [==============================] - 40s 220ms/step - loss: 2.3604 - categorical_accuracy: 0.3811 - val_loss: 3.2259 - val_categorical_accuracy: 0.2869
Epoch 2/6
182/182 [==============================] - 40s 219ms/step - loss: 2.3443 - categorical_accuracy: 0.3834 - val_loss: 3.2210 - val_categorical_accuracy: 0.2867
Epoch 3/6
182/182 [==============================] - 40s 218ms/step - loss: 2.3321 - categorical_accuracy: 0.3864 - val_loss: 3.2465 - val_categorical_accuracy: 0.2870
Epoch 4/6
182/182 [==============================] - 40s 220ms/step - loss: 2.3212 - categorical_accuracy: 0.3879 - val_loss: 3.2587 - val_categorical_accuracy: 0.2878
Epoch 5/6
182/182 [==============================] - 40s 220ms/step - loss: 2.3167 - categorical_accuracy: 0.3897 - val_loss: 3.2685 - val_categorical_accuracy: 0.2867
Epoch 6/6
182/182 [==============================] - 40s 221ms/step - loss: 2.3102 - categorical_accuracy: 0.3899 - val_loss: 3.2739 - val_categorical_accuracy: 0.2868
```python
# Save Weights
model.save_weights('mini_weights.h5')
```
# Results
```python
plt.figure(1)
plt.subplot(211)
plt.plot(history.history['categorical_accuracy'])
plt.plot(history.history['val_categorical_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.subplot(212)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.tight_layout()
plt.show()
```
# Play
```python
Play()
```
(9, 9, 2)
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . . . . . . . .
4 . . . . . . . . .
5 . . . . . . . . .
6 . . . . . . . . .
7 . . . . . . . . .
8 . . . . . . . . .
9 . . . . . . . . .
Black's Move: : 40
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . . . . . . . .
4 . . . . . . . . .
5 . . . . @ . . . .
6 . . . . . . . . .
7 . . . . . . . . .
8 . . . . . . . . .
9 . . . . . . . . .
White's Move: : 38
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . . . . . . . .
4 . . . . . . . . .
5 . . O . @ . . . .
6 . . . . . . . . .
7 . . . . . . . . .
8 . . . . . . . . .
9 . . . . . . . . .
Black's Move: : 21
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . . @ . . . . .
4 . . . . . . . . .
5 . . O . @ . . . .
6 . . . . . . . . .
7 . . . . . . . . .
8 . . . . . . . . .
9 . . . . . . . . .
White's Move: : 58
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . . @ . . . . .
4 . . . . . . . . .
5 . . O . @ . . . .
6 . . . . . . . . .
7 . . . . O . . . .
8 . . . . . . . . .
9 . . . . . . . . .
Black's Move: : 32
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . . @ . . . . .
4 . . . . . @ . . .
5 . . O . @ . . . .
6 . . . . . . . . .
7 . . . . O . . . .
8 . . . . . . . . .
9 . . . . . . . . .
White's Move: : 20
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . O @ . . . . .
4 . . . . . @ . . .
5 . . O . @ . . . .
6 . . . . . . . . .
7 . . . . O . . . .
8 . . . . . . . . .
9 . . . . . . . . .
Black's Move: : 39
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . O @ . . . . .
4 . . . . . @ . . .
5 . . O @ @ . . . .
6 . . . . . . . . .
7 . . . . O . . . .
8 . . . . . . . . .
9 . . . . . . . . .
White's Move: : 23
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . O @ . O . . .
4 . . . . . @ . . .
5 . . O @ @ . . . .
6 . . . . . . . . .
7 . . . . O . . . .
8 . . . . . . . . .
9 . . . . . . . . .
Black's Move: : 33
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . O @ . O . . .
4 . . . . . @ @ . .
5 . . O @ @ . . . .
6 . . . . . . . . .
7 . . . . O . . . .
8 . . . . . . . . .
9 . . . . . . . . .
White's Move: : 28
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . O @ . O . . .
4 . O . . . @ @ . .
5 . . O @ @ . . . .
6 . . . . . . . . .
7 . . . . O . . . .
8 . . . . . . . . .
9 . . . . . . . . .
Black's Move: : 24
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . . O @ . O @ . .
4 . O . . . @ @ . .
5 . . O @ @ . . . .
6 . . . . . . . . .
7 . . . . O . . . .
8 . . . . . . . . .
9 . . . . . . . . .
White's Move: : 19
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . O O @ . O @ . .
4 . O . . . @ @ . .
5 . . O @ @ . . . .
6 . . . . . . . . .
7 . . . . O . . . .
8 . . . . . . . . .
9 . . . . . . . . .
Black's Move: : 51
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . O O @ . O @ . .
4 . O . . . @ @ . .
5 . . O @ @ . . . .
6 . . . . . . @ . .
7 . . . . O . . . .
8 . . . . . . . . .
9 . . . . . . . . .
White's Move: : 60
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . O O @ . O @ . .
4 . O . . . @ @ . .
5 . . O @ @ . . . .
6 . . . . . . @ . .
7 . . . . O . O . .
8 . . . . . . . . .
9 . . . . . . . . .
Black's Move: : 69
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . O O @ . O @ . .
4 . O . . . @ @ . .
5 . . O @ @ . . . .
6 . . . . . . @ . .
7 . . . . O . O . .
8 . . . . . . @ . .
9 . . . . . . . . .
White's Move: : 52
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . O O @ . O @ . .
4 . O . . . @ @ . .
5 . . O @ @ . . . .
6 . . . . . . @ O .
7 . . . . O . O . .
8 . . . . . . @ . .
9 . . . . . . . . .
Black's Move: : 50
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . O O @ . O @ . .
4 . O . . . @ @ . .
5 . . O @ @ . . . .
6 . . . . . @ @ O .
7 . . . . O . O . .
8 . . . . . . @ . .
9 . . . . . . . . .
White's Move: : 70
# A B C D E F G H I
1 . . . . . . . . .
2 . . . . . . . . .
3 . O O @ . O @ . .
4 . O . . . @ @ . .
5 . . O @ @ . . . .
6 . . . . . @ @ O .
7 . . . . O . O . .
8 . . . . . . @ O .
9 . . . . . . . . .
Black's Move: : 10
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . . . .
3 . O O @ . O @ . .
4 . O . . . @ @ . .
5 . . O @ @ . . . .
6 . . . . . @ @ O .
7 . . . . O . O . .
8 . . . . . . @ O .
9 . . . . . . . . .
White's Move: : 67
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . . . .
3 . O O @ . O @ . .
4 . O . . . @ @ . .
5 . . O @ @ . . . .
6 . . . . . @ @ O .
7 . . . . O . O . .
8 . . . . O . @ O .
9 . . . . . . . . .
Black's Move: : 42
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . . . .
3 . O O @ . O @ . .
4 . O . . . @ @ . .
5 . . O @ @ . @ . .
6 . . . . . @ @ O .
7 . . . . O . O . .
8 . . . . O . @ O .
9 . . . . . . . . .
White's Move: : 43
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . . . .
3 . O O @ . O @ . .
4 . O . . . @ @ . .
5 . . O @ @ . @ O .
6 . . . . . @ @ O .
7 . . . . O . O . .
8 . . . . O . @ O .
9 . . . . . . . . .
Black's Move: : 25
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . . . .
3 . O O @ . O @ @ .
4 . O . . . @ @ . .
5 . . O @ @ . @ O .
6 . . . . . @ @ O .
7 . . . . O . O . .
8 . . . . O . @ O .
9 . . . . . . . . .
White's Move: : 34
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . . . .
3 . O O @ . O @ @ .
4 . O . . . @ @ O .
5 . . O @ @ . @ O .
6 . . . . . @ @ O .
7 . . . . O . O . .
8 . . . . O . @ O .
9 . . . . . . . . .
Black's Move: : 30
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . . . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . . O @ @ . @ O .
6 . . . . . @ @ O .
7 . . . . O . O . .
8 . . . . O . @ O .
9 . . . . . . . . .
White's Move: : 76
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . . . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . . O @ @ . @ O .
6 . . . . . @ @ O .
7 . . . . O . O . .
8 . . . . O . @ O .
9 . . . . O . . . .
Black's Move: : 53
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . . . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . . O @ @ . @ O .
6 . . . . . @ @ O @
7 . . . . O . O . .
8 . . . . O . @ O .
9 . . . . O . . . .
White's Move: : 75
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . . . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . . O @ @ . @ O .
6 . . . . . @ @ O @
7 . . . . O . O . .
8 . . . . O . @ O .
9 . . . O O . . . .
Black's Move: : 64
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . . . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . . O @ @ . @ O .
6 . . . . . @ @ O @
7 . . . . O . O . .
8 . @ . . O . @ O .
9 . . . O O . . . .
White's Move: : 15
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . . O @ @ . @ O .
6 . . . . . @ @ O @
7 . . . . O . O . .
8 . @ . . O . @ O .
9 . . . O O . . . .
Black's Move: : 46
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . . O @ @ . @ O .
6 . @ . . . @ @ O @
7 . . . . O . O . .
8 . @ . . O . @ O .
9 . . . O O . . . .
White's Move: : 37
# A B C D E F G H I
1 . . . . . . . . .
2 . @ . . . . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . O O @ @ . @ O .
6 . @ . . . @ @ O @
7 . . . . O . O . .
8 . @ . . O . @ O .
9 . . . O O . . . .
Black's Move: : 3
# A B C D E F G H I
1 . . . @ . . . . .
2 . @ . . . . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . O O @ @ . @ O .
6 . @ . . . @ @ O @
7 . . . . O . O . .
8 . @ . . O . @ O .
9 . . . O O . . . .
White's Move: : 48
# A B C D E F G H I
1 . . . @ . . . . .
2 . @ . . . . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . O O @ @ . @ O .
6 . @ . O . @ @ O @
7 . . . . O . O . .
8 . @ . . O . @ O .
9 . . . O O . . . .
Black's Move: : 41
# A B C D E F G H I
1 . . . @ . . . . .
2 . @ . . . . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . O O @ @ @ @ O .
6 . @ . O . @ @ O @
7 . . . . O . O . .
8 . @ . . O . @ O .
9 . . . O O . . . .
White's Move: : 13
# A B C D E F G H I
1 . . . @ . . . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . O O @ @ @ @ O .
6 . @ . O . @ @ O @
7 . . . . O . O . .
8 . @ . . O . @ O .
9 . . . O O . . . .
Black's Move: : 4
# A B C D E F G H I
1 . . . @ @ . . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . O O @ @ @ @ O .
6 . @ . O . @ @ O @
7 . . . . O . O . .
8 . @ . . O . @ O .
9 . . . O O . . . .
White's Move: : 74
# A B C D E F G H I
1 . . . @ @ . . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . O O @ @ @ @ O .
6 . @ . O . @ @ O @
7 . . . . O . O . .
8 . @ . . O . @ O .
9 . . O O O . . . .
Black's Move: : 62
# A B C D E F G H I
1 . . . @ @ . . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . O O @ @ @ @ O .
6 . @ . O . @ @ O @
7 . . . . O . O . @
8 . @ . . O . @ O .
9 . . O O O . . . .
White's Move: : 73
# A B C D E F G H I
1 . . . @ @ . . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . O O @ @ @ @ O .
6 . @ . O . @ @ O @
7 . . . . O . O . @
8 . @ . . O . @ O .
9 . O O O O . . . .
Black's Move: : 2
# A B C D E F G H I
1 . . @ @ @ . . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . O O @ @ @ @ O .
6 . @ . O . @ @ O @
7 . . . . O . O . @
8 . @ . . O . @ O .
9 . O O O O . . . .
White's Move: : 45
# A B C D E F G H I
1 . . @ @ @ . . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O .
5 . O O @ @ @ @ O .
6 O @ . O . @ @ O @
7 . . . . O . O . @
8 . @ . . O . @ O .
9 . O O O O . . . .
Black's Move: : 35
# A B C D E F G H I
1 . . @ @ @ . . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O @
5 . O O @ @ @ @ O .
6 O @ . O . @ @ O @
7 . . . . O . O . @
8 . @ . . O . @ O .
9 . O O O O . . . .
White's Move: : 68
# A B C D E F G H I
1 . . @ @ @ . . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O @
5 . O O @ @ @ @ O .
6 O @ . O . @ @ O @
7 . . . . O . O . @
8 . @ . . O O @ O .
9 . O O O O . . . .
Black's Move: : 44
# A B C D E F G H I
1 . . @ @ @ . . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 . . . . O . O . @
8 . @ . . O O @ O .
9 . O O O O . . . .
White's Move: : 71
# A B C D E F G H I
1 . . @ @ @ . . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ . @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 . . . . O . O . @
8 . @ . . O O @ O O
9 . O O O O . . . .
Black's Move: : 31
# A B C D E F G H I
1 . . @ @ @ . . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 . . . . O . O . @
8 . @ . . O O @ O O
9 . O O O O . . . .
White's Move: : 63
# A B C D E F G H I
1 . . @ @ @ . . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 . . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O . . . .
Black's Move: : 5
# A B C D E F G H I
1 . . @ @ @ @ . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 . . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O . . . .
White's Move: : 54
# A B C D E F G H I
1 . . @ @ @ @ . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O . . . .
Black's Move: : 1
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O . . . .
White's Move: : 81
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 . @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O . . . .
Black's Move: : 9
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O . . . .
White's Move: : 81
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ . . O . O . .
3 . O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O . . . .
Black's Move: : 18
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ . . O . O . .
3 @ O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O . . . .
White's Move: : 81
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ . . O . O . .
3 @ O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O . . . .
Black's Move: : 78
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ . . O . O . .
3 @ O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O . @ . .
White's Move: : 81
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ . . O . O . .
3 @ O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O . @ . .
Black's Move: : 77
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ . . O . O . .
3 @ O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ . .
White's Move: : 81
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ . . O . O . .
3 @ O O @ . O @ @ .
4 . O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ . .
Black's Move: : 27
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ . . O . O . .
3 @ O O @ . O @ @ .
4 @ O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ . .
White's Move: : 81
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ . . O . O . .
3 @ O O @ . O @ @ .
4 @ O . @ @ @ @ O @
5 . O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ . .
Black's Move: : 36
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ . . O . O . .
3 @ O O @ . O @ @ .
4 @ O . @ @ @ @ O @
5 @ O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ . .
White's Move: : 81
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ . . O . O . .
3 @ O O @ . O @ @ .
4 @ O . @ @ @ @ O @
5 @ O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ . .
Black's Move: : 11
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ @ . O . O . .
3 @ O O @ . O @ @ .
4 @ O . @ @ @ @ O @
5 @ O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ . .
White's Move: : 81
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ @ . O . O . .
3 @ O O @ . O @ @ .
4 @ O . @ @ @ @ O @
5 @ O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ . .
Black's Move: : 79
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ @ . O . O . .
3 @ O O @ . O @ @ .
4 @ O . @ @ @ @ O @
5 @ O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ @ .
White's Move: : 81
# A B C D E F G H I
1 . @ @ @ @ @ . . .
2 @ @ @ . O . O . .
3 @ O O @ . O @ @ .
4 @ O . @ @ @ @ O @
5 @ O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ @ .
Black's Move: : 0
# A B C D E F G H I
1 @ @ @ @ @ @ . . .
2 @ @ @ . O . O . .
3 @ O O @ . O @ @ .
4 @ O . @ @ @ @ O @
5 @ O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ @ .
White's Move: : 81
# A B C D E F G H I
1 @ @ @ @ @ @ . . .
2 @ @ @ . O . O . .
3 @ O O @ . O @ @ .
4 @ O . @ @ @ @ O @
5 @ O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ @ .
Black's Move: : 26
# A B C D E F G H I
1 @ @ @ @ @ @ . . .
2 @ @ @ . O . O . .
3 @ O O @ . O @ @ @
4 @ O . @ @ @ @ O @
5 @ O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ @ .
White's Move: : 81
# A B C D E F G H I
1 @ @ @ @ @ @ . . .
2 @ @ @ . O . O . .
3 @ O O @ . O @ @ @
4 @ O . @ @ @ @ O @
5 @ O O @ @ @ @ O @
6 O @ . O . @ @ O @
7 O . . . O . O . @
8 O @ . . O O @ O O
9 . O O O O @ @ @ .
Black's Move: : 81
Players agreed to end the game.
```python
```
|
af515e026237eec4ec348d70f7d7c9921630ee30
| 55,486 |
ipynb
|
Jupyter Notebook
|
exp_GOnet.ipynb
|
CSCI4850/s21-team6-project
|
2d3f6e759a303e819aae73a098975360480a1355
|
[
"MIT"
] | null | null | null |
exp_GOnet.ipynb
|
CSCI4850/s21-team6-project
|
2d3f6e759a303e819aae73a098975360480a1355
|
[
"MIT"
] | null | null | null |
exp_GOnet.ipynb
|
CSCI4850/s21-team6-project
|
2d3f6e759a303e819aae73a098975360480a1355
|
[
"MIT"
] | null | null | null | 41.100741 | 15,544 | 0.466334 | true | 12,547 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.828939 | 0.654895 | 0.542868 |
__label__por_Latn
| 0.402697 | 0.099593 |
```python
__author__ = 'Aravindh'
```
```python
```
# Transformation by a General Quasioptical System (from the Goldsmith book)
```python
from sympy import symbols, Matrix, init_printing, pi, sqrt, solveset, Eq, S, plot
from sympy.physics.optics import gaussopt
from sympy.physics.optics import RayTransferMatrix, ThinLens, BeamParameter
from sympy import I as i
from sympy import im, re
from sympy import E as e
init_printing()
```
```python
d_in, d_out, A, B, C, D = symbols('d_in, d_out, A, B, C, D')
Z_c, wn = symbols('Z_c, wn')
z, R, w, w0, lam = symbols('z, R, w, w0, lam')
```
```python
#Zc = pi*w0**2/lam
R = z*(1 + (Z_c / z) ** 2)
w = w0 * sqrt(1 + (z / Z_c) ** 2)
Z_c, R, w
```
```python
m1 = RayTransferMatrix(1, d_out, 0, 1)
m2 = RayTransferMatrix(A, B, C, D)
m3 = RayTransferMatrix(1, d_in, 0, 1)
m1, m2, m3
```
$$\left ( \left[\begin{matrix}1 & d_{out}\\0 & 1\end{matrix}\right], \quad \left[\begin{matrix}A & B\\C & D\end{matrix}\right], \quad \left[\begin{matrix}1 & d_{in}\\0 & 1\end{matrix}\right]\right )$$
```python
M = m1*m2*m3
M
```
$$\left[\begin{matrix}A + C d_{out} & B + D d_{out} + d_{in} \left(A + C d_{out}\right)\\C & C d_{in} + D\end{matrix}\right]$$
## From ABCD law
$$q_{out} = \frac{A\,q_{in}+B}{C\,q_{in}+D}$$
```python
q_in = i*Z_c
q_out = (M.A*q_in + M.B) / (M.C*q_in + M.D)
q_out
```
## Solving for the real part of q_out, we obtain the distance from the system output plane to the output beam waist and the output waist radius:
```python
d_out = ((A*d_in + B)*(C*d_in + D) + A*C*Z_c**2) / ((C*d_in + D)**2 + C**2*Z_c**2)
d_out
```
```python
w0_out, w0_in = symbols('w0_out, w0_in')
w0_out = w0_in/sqrt((C*d_in + D)**2 + C**2*Z_c**2)
w0_out
```
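As a quick sanity check (a sketch that is not in the Goldsmith text; the symbol `f` and the substitution below are illustrative assumptions), we can specialize the general result to a thin lens with ABCD matrix $A=1$, $B=0$, $C=-1/f$, $D=1$, which should reproduce the familiar Gaussian-beam lens formulas up to sign conventions:
```python
# Sketch: substitute the thin-lens ABCD matrix (A=1, B=0, C=-1/f, D=1)
# into the general expressions for the output waist distance and radius.
f = symbols('f', positive=True)
thin_lens = {A: 1, B: 0, C: -1/f, D: 1}
d_out_lens = d_out.subs(thin_lens).simplify()
w0_out_lens = w0_out.subs(thin_lens).simplify()
d_out_lens, w0_out_lens
```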
|
7a8f0311964338ccf1beeacfef274a5f0f0f50d3
| 21,902 |
ipynb
|
Jupyter Notebook
|
Simulations/THz/testing/Untitled.ipynb
|
aravindhnivas/FELion-Spectrum-Analyser
|
430f16884482089b2f717ea7dd50625078971e48
|
[
"MIT"
] | null | null | null |
Simulations/THz/testing/Untitled.ipynb
|
aravindhnivas/FELion-Spectrum-Analyser
|
430f16884482089b2f717ea7dd50625078971e48
|
[
"MIT"
] | null | null | null |
Simulations/THz/testing/Untitled.ipynb
|
aravindhnivas/FELion-Spectrum-Analyser
|
430f16884482089b2f717ea7dd50625078971e48
|
[
"MIT"
] | 1 |
2019-01-25T20:37:57.000Z
|
2019-01-25T20:37:57.000Z
| 64.991098 | 5,008 | 0.772669 | true | 727 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.891811 | 0.763484 | 0.680883 |
__label__eng_Latn
| 0.317321 | 0.420251 |
# Scenario D - Peakshape Variation (results evaluation)
This file is used to evaluate the inference (numerical) results.
The model used in the inference of the parameters is formulated as follows:
\begin{equation}
\large y = f(x) = \sum\limits_{m=1}^M \big[A_m \cdot f_{pseudo-Voigt}(x)\big] + \epsilon
\end{equation}
where:
\begin{equation}
\large f_{pseudo-Voigt}(x) = \eta \cdot \frac{\sigma_m^2}{(x-\mu_m)^2 + \sigma_m^2} + (1 - \eta) \cdot e^{-\frac{(x-\mu_m)^2}{2\cdot\sigma_m^2}}
\end{equation}
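For concreteness, here is a minimal numerical sketch of the single-peak pseudo-Voigt term in the model above (the function and parameter names are illustrative, not taken from the scenario code):
```python
import numpy as np

def pseudo_voigt(x, mu, sigma, eta):
    # linear mix of a Lorentzian and a Gaussian with common location mu and width sigma
    lorentz = sigma**2 / ((x - mu)**2 + sigma**2)
    gauss = np.exp(-(x - mu)**2 / (2 * sigma**2))
    return eta * lorentz + (1 - eta) * gauss

x = np.linspace(0, 10, 500)
y = 1.5 * pseudo_voigt(x, mu=5.0, sigma=0.8, eta=0.5)  # one peak with amplitude A_m = 1.5
```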
```python
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pymc3 as pm
import arviz as az
import seaborn as sns
#az.style.use('arviz-darkgrid')
print('Running on PyMC3 v{}'.format(pm.__version__))
```
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
Running on PyMC3 v3.8
## Import local modules
```python
import sys
sys.path.append('../../modules')
import results as res
import figures as fig
```
## Load results and extract convergence information
```python
# list of result files to load
filelst = ['./scenario_peakshape_mruns.csv']
ldf = res.load_results(filelst)
```
reading file: ./scenario_peakshape_mruns.csv
```python
# extract the convergence results per model
labellist = [0.0, 0.25, 0.5, 0.75, 1.0]
dres = res.get_model_summary(ldf, labellist)
```
```python
# figure size and color mapping
figs=(8,8)
col = "Blues"
col_r = col + "_r"
```
## Heatmaps of n-peak model vs. n-peak number in dataset
### WAIC
```python
fig.plot_heatmap(dres['waic'], labellist, title="", color=col, fsize=figs, fname="hmap_waic", precision=".0f")
```
### Rhat
```python
fig.plot_heatmap(dres['rhat'], labellist, title="", color=col, fsize=figs, fname="hmap_rhat", precision=".2f")
```
### R2
```python
fig.plot_heatmap(dres['r2'], labellist, title="", color=col_r, fsize=figs, fname="hmap_r2", precision=".2f")
```
### BFMI
```python
fig.plot_heatmap(dres['bfmi'], labellist, title="", color=col_r, fsize=figs,
fname="hmap_bfmi", precision=".2f")
```
### MCSE
```python
fig.plot_heatmap(dres['mcse'], labellist, title="", color=col, fsize=figs, fname="hmap_mcse", precision=".2f")
```
### Noise
```python
fig.plot_heatmap(dres['noise'], labellist, title="", color=col, fsize=figs,
fname="hmap_noise", precision=".2f")
```
### ESS
```python
fig.plot_heatmap(dres['ess'], labellist, title="", color=col_r, fsize=figs, fname="hmap_ess", precision=".0f")
```
```python
```
|
ee1858cc89e6300fcac428ac76530839f7b75a7b
| 386,385 |
ipynb
|
Jupyter Notebook
|
code/scenarios/scenario_d/scenario_peakshape_evaluation.ipynb
|
jnispen/PPSDA
|
910261551dd08768a72ab0a3e81bd73c706a143a
|
[
"MIT"
] | 1 |
2021-01-07T02:22:25.000Z
|
2021-01-07T02:22:25.000Z
|
code/scenarios/scenario_d/scenario_peakshape_evaluation.ipynb
|
jnispen/PPSDA
|
910261551dd08768a72ab0a3e81bd73c706a143a
|
[
"MIT"
] | null | null | null |
code/scenarios/scenario_d/scenario_peakshape_evaluation.ipynb
|
jnispen/PPSDA
|
910261551dd08768a72ab0a3e81bd73c706a143a
|
[
"MIT"
] | null | null | null | 1,061.497253 | 61,096 | 0.954576 | true | 798 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.774583 | 0.661923 | 0.512714 |
__label__eng_Latn
| 0.595749 | 0.029537 |
```python
import random
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import sympy
from sklearn.linear_model import LinearRegression
from scipy.spatial import Voronoi, voronoi_plot_2d
```
```python
N = 70
M = 10
Matrix = [(random.random()*100,random.random()*100) for x in range(M)]
points = np.array(Matrix)
vor = Voronoi(points)
voronoi_plot_2d(vor)
plt.show()
```
```python
vor.regions
```
[[-1, 2],
[3, 1, -1, 2],
[8, 6, 5, 7],
[7, -1, 0, 5],
[8, 1, 3, 4, 6],
[8, 1, -1, 7],
[10, 9, -1],
[9, 4, 3, 2, -1],
[],
[10, 0, 5, 6, 4, 9],
[10, 0, -1]]
```python
MAX_NUMBER = 10000
primes = list(sympy.primerange(0, MAX_NUMBER))
X = []
y = []
for n in range(MAX_NUMBER):
valid_primes = [p for p in primes if p <= n ** 0.5]
remainder = sum([n % p for p in valid_primes])
y.append(remainder)
X.append([n])
model = LinearRegression(fit_intercept = False).fit(X, y)
1 / model.coef_[0]
```
17.841987719271046
|
1084d85db3c9a7fcad3137bf866645d3b82d1644
| 25,367 |
ipynb
|
Jupyter Notebook
|
playground.ipynb
|
YeasterEgg/jupyter-stuff
|
50bd10a583b06fc5da9dbbc1e4ba67d21f45cfc7
|
[
"MIT"
] | 2 |
2019-12-12T17:53:26.000Z
|
2019-12-13T14:34:43.000Z
|
playground.ipynb
|
lucamattiazzi/jupyter-stuff
|
50bd10a583b06fc5da9dbbc1e4ba67d21f45cfc7
|
[
"MIT"
] | null | null | null |
playground.ipynb
|
lucamattiazzi/jupyter-stuff
|
50bd10a583b06fc5da9dbbc1e4ba67d21f45cfc7
|
[
"MIT"
] | null | null | null | 121.956731 | 21,448 | 0.888004 | true | 431 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.859664 | 0.70253 | 0.60394 |
__label__eng_Latn
| 0.226978 | 0.241484 |
# Partial Derivatives in sympy
```python
import sympy
```
```python
x, u = sympy.symbols('x u', real=True)
```
```python
U = sympy.Function('U')(x,u)
```
```python
U
```
U(x, u)
### The case of a(n arbitrary) point transformation
cf. Introduction to Differential Invariants, Chapter 2 Lie Transformations pp. 16
```python
x = sympy.Symbol('x',real=True)
```
```python
y = sympy.Function('y')(x)
```
```python
U = sympy.Function('U')(x,y)
X = sympy.Function('X')(x,y)
Y = sympy.Function('Y')(X)
```
```python
sympy.pprint(sympy.diff(U,x))
```
d ⎛ ∂ ⎞│ ⎛ ∂ ⎞│
──(y(x))⋅⎜───(U(x, ξ₂))⎟│ + ⎜───(U(ξ₁, y(x)))⎟│
dx ⎝∂ξ₂ ⎠│ξ₂=y(x) ⎝∂ξ₁ ⎠│ξ₁=x
```python
sympy.pprint( sympy.diff(Y,x))
```
⎛d ⎛ ∂ ⎞│ ⎛ ∂ ⎞│ ⎞ ⎛ d ⎞│
⎜──(y(x))⋅⎜───(X(x, ξ₂))⎟│ + ⎜───(X(ξ₁, y(x)))⎟│ ⎟⋅⎜───(Y(ξ₁))⎟│
⎝dx ⎝∂ξ₂ ⎠│ξ₂=y(x) ⎝∂ξ₁ ⎠│ξ₁=x⎠ ⎝dξ₁ ⎠│ξ₁=X
(x, y(x))
```python
sympy.pprint( sympy.diff(Y,x).args[0] )
```
d ⎛ ∂ ⎞│ ⎛ ∂ ⎞│
──(y(x))⋅⎜───(X(x, ξ₂))⎟│ + ⎜───(X(ξ₁, y(x)))⎟│
dx ⎝∂ξ₂ ⎠│ξ₂=y(x) ⎝∂ξ₁ ⎠│ξ₁=x
```python
sympy.pprint( sympy.diff(U,x)/ sympy.diff(Y,x).args[0])
```
d ⎛ ∂ ⎞│ ⎛ ∂ ⎞│
──(y(x))⋅⎜───(U(x, ξ₂))⎟│ + ⎜───(U(ξ₁, y(x)))⎟│
dx ⎝∂ξ₂ ⎠│ξ₂=y(x) ⎝∂ξ₁ ⎠│ξ₁=x
──────────────────────────────────────────────────────────
d ⎛ ∂ ⎞│ ⎛ ∂ ⎞│
──(y(x))⋅⎜───(X(x, ξ₂))⎟│ + ⎜───(X(ξ₁, y(x)))⎟│
dx ⎝∂ξ₂ ⎠│ξ₂=y(x) ⎝∂ξ₁ ⎠│ξ₁=x
For $Y''(X)$, we differentiate $Y'(X)$ with respect to $x$ (one further division by $dX/dx$ completes the chain rule for $Y''(X)$):
```python
YprimeX = sympy.diff(U,x)/sympy.diff(Y,x).args[0]
sympy.pprint( sympy.diff(YprimeX,x).simplify() )
```
*(sympy's pretty-printed output here is wrapped beyond readability in the source; it is the quotient rule applied to the total derivatives:)*
$$\frac{d}{dx}\,Y'(X) \;=\; \frac{\dfrac{d^2U}{dx^2}\,\dfrac{dX}{dx} \;-\; \dfrac{dU}{dx}\,\dfrac{d^2X}{dx^2}}{\left(\dfrac{dX}{dx}\right)^{2}},
\qquad \frac{d^2U}{dx^2} = U_{xx} + 2\,y'\,U_{xy} + (y')^2\,U_{yy} + y''\,U_y \quad\text{(and similarly for } X\text{)}.$$
```python
sympy.factor_list( sympy.diff(Y,x)) # EY 20160522 I don't know how to simply obtain the factors of an expression
# EY 20160522 update resolved: look at above and look at this page; it explains all:
# http://docs.sympy.org/dev/tutorial/manipulation.html
```
(1,
[(Subs(Derivative(Y(_xi_1), _xi_1), (_xi_1,), (X(x, y(x)),)), 1),
(Derivative(y(x), x)*Subs(Derivative(X(x, _xi_2), _xi_2), (_xi_2,), (y(x),)) + Subs(Derivative(X(_xi_1, y(x)), _xi_1), (_xi_1,), (x,)),
1)])
```python
t, x, u, u_1, x_t, u_t, u_1t = sympy.symbols('t x u u_1 x_t u_t u_1t', real=True)
X = -u_1
U = u - x*u_1
U_1 = x
```
cf. How to do total derivatives: http://robotfantastic.org/total-derivatives-in-sympy.html
```python
from sympy import Derivative, diff
def difftotal(expr, diffby, diffmap):
"""Take the total derivative with respect to a variable.
Example:
theta, t, theta_dot = symbols("theta t theta_dot")
difftotal(cos(theta), t, {theta: theta_dot})
returns
-theta_dot*sin(theta)
"""
# Replace all symbols in the diffmap by a functional form
fnexpr = expr.subs({s:s(diffby) for s in diffmap})
# Do the differentiation
diffexpr = diff(fnexpr, diffby)
# Replace the Derivatives with the variables in diffmap
derivmap = {Derivative(v(diffby), diffby):dv
for v,dv in diffmap.items()}
finaldiff = diffexpr.subs(derivmap)
# Replace the functional forms with their original form
return finaldiff.subs({s(diffby):s for s in diffmap})
```
```python
difftotal( U,t,{x:x_t, u:u_t, u_1:u_1t}) + (-U_1)* (-u_1t)
```
-u_1*x_t + u_t
This transformation is the Legendre transformation
cf. 4. Exercises Chapter 2 Lie Transformations Introduction to Differential Invariants.
Consider transformation $(x,u)=(x,u(x)) \to (X,U)=(X(x,u),U(x,u))=(u,x)$. Let $Y=Y(X)$. $Y(X) \in \Gamma(\mathbb{R}^1 \times \mathbb{R}^1)$, i.e. $Y(X)$ is a section. So $Y(X) = Y(X(x,u)) = U(x,u)$. And so in this case,
$Y(X(x,u))=Y(u)=U(x,u) = x$
```python
x = sympy.Symbol('x',real=True)
u = sympy.Function('u')(x)
U = x
X = u
Y = sympy.Function('Y')(X)
```
```python
sympy.pprint( sympy.diff(Y,x))
```
d ⎛ d ⎞│
──(u(x))⋅⎜───(Y(ξ₁))⎟│
dx ⎝dξ₁ ⎠│ξ₁=u(x)
```python
sympy.pprint(sympy.diff(U,x))
```
1
And so $Y'(X)$ is
```python
sympy.pprint( 1/ sympy.diff(Y,x).args[0])
```
1
────────
d
──(u(x))
dx
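As a quick check (not in the original notebook; the concrete choice $u(x)=x^2$ on $x>0$ is an assumption for illustration), the symbolic result agrees with differentiating the explicit inverse $Y(X)=\sqrt{X}$ directly:
```python
# Sketch: verify Y'(X) = 1/u'(x) for u(x) = x**2, whose inverse section is Y(X) = sqrt(X).
xs, Xs = sympy.symbols('x_s X_s', positive=True)   # fresh symbols, so we don't clobber x above
u_ex = xs**2
Y_ex = sympy.sqrt(Xs)                               # inverse of u on x > 0
Yprime_ex = sympy.diff(Y_ex, Xs).subs(Xs, u_ex)     # dY/dX evaluated at X = u(x)
sympy.simplify(Yprime_ex - 1/sympy.diff(u_ex, xs))  # 0, i.e. matches Y'(X) = 1/u'(x)
```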
And so, differentiating $Y'(X)$ with respect to $x$ gives (one further division by $u'(x)$ yields $Y''(X) = -u''(x)/u'(x)^3$):
```python
sympy.pprint( sympy.diff( 1/ sympy.diff(Y,x).args[0], x))
```
2
d
-───(u(x))
2
dx
───────────
2
⎛d ⎞
⎜──(u(x))⎟
⎝dx ⎠
cf. (2) from 4. Exercises, Chapter 2 Lie Transformations pp. 20
Recall an arbitrary point transformation:
```python
x = sympy.Symbol('x',real=True)
```
```python
y = sympy.Function('y')(x)
```
```python
U = sympy.Function('U')(x,y)
X = sympy.Function('X')(x,y)
Y = sympy.Function('Y')(X)
```
```python
sympy.pprint(sympy.diff(U,x))
```
d ⎛ ∂ ⎞│ ⎛ ∂ ⎞│
──(y(x))⋅⎜───(U(x, ξ₂))⎟│ + ⎜───(U(ξ₁, y(x)))⎟│
dx ⎝∂ξ₂ ⎠│ξ₂=y(x) ⎝∂ξ₁ ⎠│ξ₁=x
```python
sympy.pprint( sympy.diff(Y,x))
```
⎛d ⎛ ∂ ⎞│ ⎛ ∂ ⎞│ ⎞ ⎛ d ⎞│
⎜──(y(x))⋅⎜───(X(x, ξ₂))⎟│ + ⎜───(X(ξ₁, y(x)))⎟│ ⎟⋅⎜───(Y(ξ₁))⎟│
⎝dx ⎝∂ξ₂ ⎠│ξ₂=y(x) ⎝∂ξ₁ ⎠│ξ₁=x⎠ ⎝dξ₁ ⎠│ξ₁=X
(x, y(x))
```python
sympy.pprint( sympy.diff(Y,x).args[0] )
```
d ⎛ ∂ ⎞│ ⎛ ∂ ⎞│
──(y(x))⋅⎜───(X(x, ξ₂))⎟│ + ⎜───(X(ξ₁, y(x)))⎟│
dx ⎝∂ξ₂ ⎠│ξ₂=y(x) ⎝∂ξ₁ ⎠│ξ₁=x
```python
sympy.pprint( sympy.diff(U,x)/ sympy.diff(Y,x).args[0])
```
d ⎛ ∂ ⎞│ ⎛ ∂ ⎞│
──(y(x))⋅⎜───(U(x, ξ₂))⎟│ + ⎜───(U(ξ₁, y(x)))⎟│
dx ⎝∂ξ₂ ⎠│ξ₂=y(x) ⎝∂ξ₁ ⎠│ξ₁=x
──────────────────────────────────────────────────────────
d ⎛ ∂ ⎞│ ⎛ ∂ ⎞│
──(y(x))⋅⎜───(X(x, ξ₂))⎟│ + ⎜───(X(ξ₁, y(x)))⎟│
dx ⎝∂ξ₂ ⎠│ξ₂=y(x) ⎝∂ξ₁ ⎠│ξ₁=x
For $Y''(X)$, we differentiate $Y'(X)$ with respect to $x$ (one further division by $dX/dx$ completes the chain rule for $Y''(X)$):
```python
YprimeX = sympy.diff(U,x)/sympy.diff(Y,x).args[0]
Yprime2X = sympy.diff(YprimeX,x)
sympy.pprint( Yprime2X.simplify() )
```
*(the output is the same pretty-printed quotient-rule expansion shown above for the arbitrary point transformation; omitted here for readability)*
```python
```
|
0298b12d29b6eb7273f677d653f82b6c1e7d5ad7
| 24,787 |
ipynb
|
Jupyter Notebook
|
partiald_sympy.ipynb
|
ernestyalumni/CompPhys
|
1f5d7559146a14a21182653b77fd35e6d6829855
|
[
"Apache-2.0"
] | 70 |
2017-07-24T04:09:27.000Z
|
2021-12-24T16:00:41.000Z
|
partiald_sympy.ipynb
|
ernestyalumni/CompPhys
|
1f5d7559146a14a21182653b77fd35e6d6829855
|
[
"Apache-2.0"
] | 3 |
2018-01-16T22:34:47.000Z
|
2019-01-29T22:37:10.000Z
|
partiald_sympy.ipynb
|
ernestyalumni/CompPhys
|
1f5d7559146a14a21182653b77fd35e6d6829855
|
[
"Apache-2.0"
] | 40 |
2017-01-24T19:18:42.000Z
|
2021-03-01T07:13:35.000Z
| 30.906484 | 237 | 0.260742 | true | 5,272 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.951863 | 0.841826 | 0.801303 |
__label__yue_Hant
| 0.162756 | 0.700028 |
# Derivatives
Illustrates computing the derivative of the survival function, quantile and TVaR functions.
References:
* Tasche
* Venter et al., ASTIN
* Major, Forum
```python
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from IPython.core.display import HTML, display
from importlib import reload
import re
import pypandoc
import sys
# pandas options
pd.set_option('max_rows', 50)
pd.set_option('max_columns', 30)
pd.set_option('display.max_colwidth', 150)
# matplotlib and plotting options
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
# seaborn options
sns.set(context='paper', style='darkgrid', font='serif')
# sns.set(context='paper', style='ticks', font='serif')
# warnings
import warnings
# warnings.simplefilter('error')
# warnings.simplefilter('ignore')
import logging
logging.getLogger("matplotlib").setLevel(logging.CRITICAL)
```
```python
# this file is in examples
sys.path.insert(0,'..')
import aggregate as agg
uw = agg.Underwriter(debug=False)
```
```python
u = np.array([777, 223, 123])
nu = np.linalg.norm(u)
# for computing derivatives
delta = 1
base_pf = '''port LNExample{i}
agg A 1 claims sev {ex1} * lognorm 1 cv 0.425 fixed
agg B 1 claims sev {ex2} * lognorm 1 cv 0.250 fixed
agg C 1 claims sev {ex3} * lognorm 1 cv 0.350 fixed '''
xxp = uw(base_pf.format(i=1, ex1=u[0]+delta/2, ex2=u[1], ex3=u[2]))
xxm = uw(base_pf.format(i=1, ex1=u[0]-delta/2, ex2=u[1], ex3=u[2]))
xx0 = uw(base_pf.format(i=1, ex1=u[0], ex2=u[1], ex3=u[2]))
xx0.recommend_bucket()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>bs10</th>
<th>bs11</th>
<th>bs12</th>
<th>bs13</th>
<th>bs14</th>
<th>bs15</th>
<th>bs16</th>
<th>bs17</th>
<th>bs18</th>
<th>bs19</th>
<th>bs20</th>
</tr>
<tr>
<th>line</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>A</th>
<td>2.460020</td>
<td>1.230010</td>
<td>0.615005</td>
<td>0.307503</td>
<td>0.153751</td>
<td>0.076876</td>
<td>0.038438</td>
<td>0.019219</td>
<td>0.009609</td>
<td>0.004777</td>
<td>0.002402</td>
</tr>
<tr>
<th>B</th>
<td>0.452154</td>
<td>0.226077</td>
<td>0.113038</td>
<td>0.056519</td>
<td>0.028260</td>
<td>0.014130</td>
<td>0.007065</td>
<td>0.003532</td>
<td>0.001766</td>
<td>0.000878</td>
<td>0.000442</td>
</tr>
<tr>
<th>C</th>
<td>0.324141</td>
<td>0.162070</td>
<td>0.081035</td>
<td>0.040518</td>
<td>0.020259</td>
<td>0.010129</td>
<td>0.005065</td>
<td>0.002532</td>
<td>0.001266</td>
<td>0.000629</td>
<td>0.000317</td>
</tr>
<tr>
<th>total</th>
<td>3.236315</td>
<td>1.618157</td>
<td>0.809079</td>
<td>0.404539</td>
<td>0.202270</td>
<td>0.101135</td>
<td>0.050567</td>
<td>0.025284</td>
<td>0.012642</td>
<td>0.006284</td>
<td>0.003160</td>
</tr>
</tbody>
</table>
</div>
```python
bs = 0.05
N = 17
xxp.update(bs=bs, log2=N, add_exa=True, remove_fuzz=True, padding=2)
xxm.update(bs=bs, log2=N, add_exa=True, remove_fuzz=True, padding=2)
xx0.update(bs=bs, log2=N, add_exa=True, remove_fuzz=True, padding=2)
```
C:\Users\steve\Anaconda3\envs\Working_Duplicate\lib\site-packages\numpy\core\fromnumeric.py:56: FutureWarning: Series.nonzero() is deprecated and will be removed in a future version.Use Series.to_numpy().nonzero() instead
return getattr(obj, method)(*args, **kwds)
C:\Users\steve\Anaconda3\envs\Working_Duplicate\lib\site-packages\numpy\core\fromnumeric.py:56: FutureWarning: Series.nonzero() is deprecated and will be removed in a future version.Use Series.to_numpy().nonzero() instead
return getattr(obj, method)(*args, **kwds)
C:\Users\steve\Anaconda3\envs\Working_Duplicate\lib\site-packages\numpy\core\fromnumeric.py:56: FutureWarning: Series.nonzero() is deprecated and will be removed in a future version.Use Series.to_numpy().nonzero() instead
return getattr(obj, method)(*args, **kwds)
## 1. Derivative of probability function wrt $t$
$S_u(t) = \int_{\{\sum u_ix_i > t\}} p(x)dx$
Apply the IOS formula with parameter $t$. Since $p$ does not depend on $t$, the first term, which integrates
$\nabla_t p\equiv 0$, vanishes. The area of integration is defined by $f(t,x)=t-\sum u_ix_i <0$, with $u_i$ fixed,
$\nabla_t f = 1$ and $\nabla_x f = -(u_1,\dots, u_n)$.
The density $p$ is the density of $(X_1, \dots,X_n)$. Aggregate computes the density of $(u_1X_1,
\dots, u_n X_n)$ and so $p(x)\approx \| u \| p_{total} / bs$ where the additional bucket size factor $bs$ is needed to convert the density over the bucket ```p_total``` into a uniform density. This is integrated over a bucket of size ```bs```.
Therefore the IOS formula gives
\begin{align}
\nabla_t S(t) &= -\int_{\{\sum u_ix_i=t\}} \frac{\nabla_t f}{\| \nabla_x f \|} p(x) dx \\
&= -\int_{\{\sum u_ix_i=t\}} \frac{1}{\| u \|} p(x) dx \\
&= -\int_{\{\sum u_ix_i=t\}} p_{uX}( x ) dx \\
&= -\frac{p_{total}}{bs} \\
\end{align}
which is kinda obvious...
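As a quick numerical check of this result (a sketch, not part of the original workbook): since ```p_total``` is the probability mass in a bucket of width ```bs```, the density-scale statement is $\nabla_t S(t) \approx -p_{total}/bs$, which we can compare with a finite difference of ```S```.
```python
# Sketch: compare a finite-difference derivative of S with -p_total / bs
num_dSdt = np.gradient(xx0.density_df.S.values, bs)      # numerical dS/dt
formula = -(xx0.density_df.p_total.values / bs)          # bucket mass converted to a density
with np.errstate(divide='ignore', invalid='ignore'):
    ratio = pd.Series(formula / num_dSdt, index=xx0.density_df.index)
ratio.loc[250:5000].plot(ylim=[0.5, 1.5])                # should hover around 1
```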
## 2. Derivative of probability function wrt portfolio weights
$S_u(t) = \int_{\{\sum u_ix_i > t\}} p(x)dx$ where $p$ is the probability density of $(X_1, \dots, X_n)$.
Apply IOS formula, parameters are $u_i$. $p$ is independent of $u$ and so the first term integrating
$\nabla_u p\equiv 0$. The area of integration is defined by $f(u,x)=t-\sum u_ix_i <0$, with $t$ fixed,
$\nabla_u f = -(x_1,\dots, x_n)$ and $\nabla_x f = -(u_1,\dots, u_n)$.
**As above**: The density $p$ is the density of $(X_1, \dots,X_n)$. Aggregate computes the density of $(u_1X_1,
\dots, u_n X_n)$ and so $p(x)\approx \| u \| p_{total} / bs$ where the additional bucket size factor $bs$ is needed to convert the density over the bucket ```p_total``` into a uniform density.
Therefore the IOS formula gives
\begin{align}
\nabla_u S(t) &= -\int_{\{\sum u_ix_i=t\}} \frac{\nabla_u f}{\| \nabla_x f \|} p(x) dx \\
&= \int_{\{\sum u_ix_i=t\}} \frac{x_i}{\| \nabla_x f \|} p(x) dx \\
&= \frac{1}{u_i} \int_{\{\sum u_ix_i=t\}} u_i x_i p_{uX}( x ) dx \\
&= \frac{1}{u_i} exeqa(t) \frac{p_{total}}{bs} \\
\end{align}
```python
df = pd.DataFrame(dict(xxp=xxp.density_df.S, xxm=xxm.density_df.S,
formula_partial=xx0.density_df.exeqa_A / u[0] * xx0.density_df.p_total / bs))
df.loc[:, 'num_partial'] = (df.xxp - df.xxm)/delta
df.loc[:, 'err'] = df.formula_partial / df.num_partial
f, axs = plt.subplots(1,2,figsize=(8,4))
df.filter(regex='num|form').plot(ax=axs[0])
df.loc[250:5000, 'err'].plot(ax=axs[1], ylim=[0.5,1.5])
```
<matplotlib.axes._subplots.AxesSubplot at 0x20908115160>
## 3. Derivative of Quantile Function (i.e. Value at Risk)
If $Q$ is the $p$ quantile function then $S(u, Q(u,p))=1-p$. The implicit function theorem gives
$$
S_1 + S_2 \nabla_u Q = 0
$$
where $S_i$ denotes the partial derivative of $S$ wrt the $i$th argument. Hence
$$
\nabla_u Q = -S_1 / S_2.
$$
These two terms were both computed above giving
$$
\nabla_u Q(p) = \text{E}\left[u_iX_i \mid \sum u_iX_i=Q(p) \right] / u_i.
$$
which is computed numerically as
$$
\nabla_u Q(p) = \frac{1}{u_i} exeqa(t).
$$
There is roughness in the graphic since VaR is approximated to the nearest multiple of ```bs```.
```python
num_diff = []
formula_diff = []
ps = np.arange(0.1, 1, 0.005)
for p in ps:
num_diff.append( (xxp.q(p) - xxm.q(p)) / delta )
formula_diff.append(float(xx0.density_df.loc[xx0.q(p), 'exeqa_A'] / u[0]))
diff_test = pd.DataFrame(dict(num_diff=num_diff, formula_diff=formula_diff), index=ps)
diff_test['r1'] = diff_test.formula_diff/diff_test.num_diff
diff_test.r1.plot(ylim=[0.5, 1.5])
```
<matplotlib.axes._subplots.AxesSubplot at 0x20901d538d0>
## 4. Derivative of Tail Value at Risk
Let $T$ be the $p$ TVaR, defined as
$$
T(u, p) = \frac{1}{1-p}\int_{\{\sum u_ix_i>Q(u,p)\}} (\sum u_ix_i) p(x)dx.
$$
Let $f=Q-\sum u_ix_i$, so that $\nabla_u f=\nabla_u Q - x$ and $\nabla_x f=-u$. Hence
$$
\begin{align}
\nabla_u T(p) &= \frac{1}{1-p}\int_{\{\sum u_ix_i>Q(u,p)\}} \nabla_u\left[\sum u_ix_i\right]\, p(x)\,dx \\
&\quad -\frac{1}{\| u\|(1-p)}\int_{\{\sum u_ix_i=Q(u,p)\}} \left(E(X_i\mid \sum u_ix_i=Q)- x_i\right) p(x)dx \\
&= E(X_i\mid \sum u_ix_i>Q(u,p)) \\
\end{align}
$$
since the second term is zero---both expressions evaluating to $E(X_i\mid \sum u_ix_i=Q)$.
```python
num_diff = []
formula_diff = []
ps = np.arange(0.1, 1, 0.025)
for p in ps:
num_diff.append( (xxp.tvar(p) - xxm.tvar(p)) / delta )
formula_diff.append(float(xx0.density_df.loc[xx0.q(p), 'exgta_A'] / u[0]))
diff_test = pd.DataFrame(dict(num_diff=num_diff, formula_diff=formula_diff), index=ps)
diff_test['r1'] = diff_test.formula_diff/diff_test.num_diff
diff_test.r1.plot(ylim=[0.5, 1.5])
display(diff_test.loc[.8:1, :])
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>num_diff</th>
<th>formula_diff</th>
<th>r1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0.825</th>
<td>1.696920</td>
<td>1.696934</td>
<td>1.000008</td>
</tr>
<tr>
<th>0.850</th>
<td>1.752386</td>
<td>1.752403</td>
<td>1.000010</td>
</tr>
<tr>
<th>0.875</th>
<td>1.817812</td>
<td>1.817866</td>
<td>1.000030</td>
</tr>
<tr>
<th>0.900</th>
<td>1.897759</td>
<td>1.897834</td>
<td>1.000039</td>
</tr>
<tr>
<th>0.925</th>
<td>2.000824</td>
<td>2.000885</td>
<td>1.000030</td>
</tr>
<tr>
<th>0.950</th>
<td>2.146483</td>
<td>2.146601</td>
<td>1.000055</td>
</tr>
<tr>
<th>0.975</th>
<td>2.397802</td>
<td>2.398036</td>
<td>1.000098</td>
</tr>
</tbody>
</table>
</div>
```python
```
|
b82ce50831caca154ce2bd2590cfea5648eec6f3
| 134,080 |
ipynb
|
Jupyter Notebook
|
examples/Derivatives.ipynb
|
mynl/aggregate
|
48ab306fb9d19f08d6d42112490fc305c376ca8d
|
[
"BSD-3-Clause"
] | 6 |
2020-01-07T13:42:57.000Z
|
2021-11-23T19:46:55.000Z
|
examples/Derivatives.ipynb
|
mynl/aggregate
|
48ab306fb9d19f08d6d42112490fc305c376ca8d
|
[
"BSD-3-Clause"
] | null | null | null |
examples/Derivatives.ipynb
|
mynl/aggregate
|
48ab306fb9d19f08d6d42112490fc305c376ca8d
|
[
"BSD-3-Clause"
] | null | null | null | 43.574911 | 256 | 0.480661 | true | 3,872 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.7773 | 0.727975 | 0.565855 |
__label__eng_Latn
| 0.555448 | 0.153001 |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial3.ipynb" target="_parent"></a>
# Neuromatch Academy: Week 3, Day 4, Tutorial 3 (Bonus)
# Deep Learning: Building and Evaluating Normative Encoding Models
**Content creators**: Jorge A. Menendez, Yalda Mohsenzadeh, Carsen Stringer
**Content reviewers**: Roozbeh Farhoodi, Madineh Sarvestani, Kshitij Dwivedi, Spiros Chavlis, Ella Batty, Michael Waskom
---
#Tutorial Objectives
In this tutorial, we'll be using deep learning to build an encoding model of the visual system, and then compare its internal representations to those observed in neural data.
Importantly, the encoding model we'll use here is different from the encoding models used in Tutorial 2. Its parameters won't be directly optimized to fit the neural data. Instead, we will optimize its parameters to solve a particular visual task that we know the brain can solve. We therefore refer to it as a "normative" encoding model, since it is optimized for a specific behavioral task.
To then evaluate whether this normative encoding model is actually a good model of the brain, we'll analyze its internal representations and compare them to the representations observed in mouse primary visual cortex. Since we understand exactly what the encoding model's representations are optimized to do, any similarities will hopefully shed light on why the representations in the brain look the way they do.
More concretely, our goal will be to learn how to:
* Visualize and analyze the internal representations of a deep network
* Quantify the similarity between distributed representations in a model and neural representations observed in recordings, using Representational Similarity Analysis (RSA)
```python
#@title Video 1: Deep convolutional network for orientation discrimination
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="KlXtKJCpV4I", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=KlXtKJCpV4I
---
# Setup
**Don't forget to execute the hidden cells below!**
```python
import numpy as np
from scipy.stats import zscore
import matplotlib as mpl
from matplotlib import pyplot as plt
import torch
from torch import nn, optim
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
```
```python
#@title Data retrieval and loading
import os
import hashlib
import requests
fname = "W3D4_stringer_oribinned1.npz"
url = "https://osf.io/683xc/download"
expected_md5 = "436599dfd8ebe6019f066c38aed20580"
if not os.path.isfile(fname):
try:
r = requests.get(url)
except requests.ConnectionError:
print("!!! Failed to download data !!!")
else:
if r.status_code != requests.codes.ok:
print("!!! Failed to download data !!!")
elif hashlib.md5(r.content).hexdigest() != expected_md5:
print("!!! Data download appears corrupted !!!")
else:
with open(fname, "wb") as fid:
fid.write(r.content)
```
```python
#@title Figure Settings
%matplotlib inline
%config InlineBackend.figure_format='retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
```
```python
#@title Helper Functions
def load_data(data_name=fname, bin_width=1):
"""Load mouse V1 data from Stringer et al. (2019)
Data from study reported in this preprint:
https://www.biorxiv.org/content/10.1101/679324v2.abstract
These data comprise time-averaged responses of ~20,000 neurons
to ~4,000 stimulus gratings of different orientations, recorded
through Calcium imaginge. The responses have been normalized by
spontanous levels of activity and then z-scored over stimuli, so
expect negative numbers. They have also been binned and averaged
to each degree of orientation.
This function returns the relevant data (neural responses and
stimulus orientations) in a torch.Tensor of data type torch.float32
in order to match the default data type for nn.Parameters in
Google Colab.
This function will actually average responses to stimuli with orientations
falling within bins specified by the bin_width argument. This helps
produce individual neural "responses" with smoother and more
interpretable tuning curves.
Args:
bin_width (float): size of stimulus bins over which to average neural
responses
Returns:
resp (torch.Tensor): n_stimuli x n_neurons matrix of neural responses,
each row contains the responses of each neuron to a given stimulus.
As mentioned above, neural "response" is actually an average over
responses to stimuli with similar angles falling within specified bins.
stimuli: (torch.Tensor): n_stimuli x 1 column vector with orientation
of each stimulus, in degrees. This is actually the mean orientation
of all stimuli in each bin.
"""
with np.load(data_name) as dobj:
data = dict(**dobj)
resp = data['resp']
stimuli = data['stimuli']
if bin_width > 1:
# Bin neural responses and stimuli
bins = np.digitize(stimuli, np.arange(0, 360 + bin_width, bin_width))
stimuli_binned = np.array([stimuli[bins == i].mean() for i in np.unique(bins)])
resp_binned = np.array([resp[bins == i, :].mean(0) for i in np.unique(bins)])
else:
resp_binned = resp
stimuli_binned = stimuli
# only use stimuli <= 180
resp_binned = resp_binned[stimuli_binned <= 180]
stimuli_binned = stimuli_binned[stimuli_binned <= 180]
stimuli_binned -= 90 # 0 means vertical, -ve means tilted left, +ve means tilted right
# Return as torch.Tensor
resp_tensor = torch.tensor(resp_binned, dtype=torch.float32)
stimuli_tensor = torch.tensor(stimuli_binned, dtype=torch.float32).unsqueeze(1) # add singleton dimension to make a column vector
return resp_tensor, stimuli_tensor
def grating(angle, sf=1 / 28, res=0.1, patch=False):
"""Generate oriented grating stimulus
Args:
angle (float): orientation of grating (angle from vertical), in degrees
sf (float): controls spatial frequency of the grating
res (float): resolution of image. Smaller values will make the image
smaller in terms of pixels. res=1.0 corresponds to 640 x 480 pixels.
patch (boolean): set to True to make the grating a localized
patch on the left side of the image. If False, then the
grating occupies the full image.
Returns:
torch.Tensor: (res * 480) x (res * 640) pixel oriented grating image
"""
angle = np.deg2rad(angle) # transform to radians
wpix, hpix = 640, 480 # width and height of image in pixels for res=1.0
xx, yy = np.meshgrid(sf * np.arange(0, wpix * res) / res, sf * np.arange(0, hpix * res) / res)
if patch:
gratings = np.cos(xx * np.cos(angle + .1) + yy * np.sin(angle + .1)) # phase shift to make it better fit within patch
gratings[gratings < 0] = 0
gratings[gratings > 0] = 1
xcent = gratings.shape[1] * .75
ycent = gratings.shape[0] / 2
xxc, yyc = np.meshgrid(np.arange(0, gratings.shape[1]), np.arange(0, gratings.shape[0]))
icirc = ((xxc - xcent) ** 2 + (yyc - ycent) ** 2) ** 0.5 < wpix / 3 / 2 * res
gratings[~icirc] = 0.5
else:
gratings = np.cos(xx * np.cos(angle) + yy * np.sin(angle))
gratings[gratings < 0] = 0
gratings[gratings > 0] = 1
# Return torch tensor
return torch.tensor(gratings, dtype=torch.float32)
def show_stimulus(img, ax=None):
"""Visualize a stimulus"""
if ax is None:
ax = plt.gca()
ax.imshow(img, cmap=mpl.cm.binary)
ax.set_aspect('auto')
ax.set_xticks([])
ax.set_yticks([])
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
class CNN(nn.Module):
"""Deep convolutional network with one convolutional + pooling layer followed
by one fully connected layer
Args:
h_in (int): height of input image, in pixels (i.e. number of rows)
w_in (int): width of input image, in pixels (i.e. number of columns)
Attributes:
conv (nn.Conv2d): filter weights of convolutional layer
pool (nn.MaxPool2d): max pooling layer
dims (tuple of ints): dimensions of output from pool layer
fc (nn.Linear): weights and biases of fully connected layer
out (nn.Linear): weights and biases of output layer
"""
def __init__(self, h_in, w_in):
super().__init__()
C_in = 1 # input stimuli have only 1 input channel
C_out = 8 # number of output channels (i.e. of convolutional kernels to convolve the input with)
K = 5 # size of each convolutional kernel
Kpool = 2 # size of patches over which to pool
self.conv = nn.Conv2d(C_in, C_out, kernel_size=K, padding=K//2) # add padding to ensure that each channel has same dimensionality as input
self.pool = nn.MaxPool2d(Kpool)
self.dims = (C_out, h_in // Kpool, w_in // Kpool) # dimensions of pool layer output
self.fc = nn.Linear(np.prod(self.dims), 10) # flattened pool output --> 10D representation
self.out = nn.Linear(10, 1) # 10D representation --> scalar
def forward(self, x):
"""Classify grating stimulus as tilted right or left
Args:
x (torch.Tensor): p x 48 x 64 tensor with pixel grayscale values for
each of p stimulus images.
Returns:
torch.Tensor: p x 1 tensor with network outputs for each input provided
in x. Each output should be interpreted as the probability of the
corresponding stimulus being tilted right.
"""
x = x.unsqueeze(1) # p x 1 x 48 x 64, add a singleton dimension for the single stimulus channel
x = torch.relu(self.conv(x)) # output of convolutional layer
x = self.pool(x) # output of pooling layer
x = x.view(-1, np.prod(self.dims)) # flatten pooling layer outputs into a vector
x = torch.relu(self.fc(x)) # output of fully connected layer
x = torch.sigmoid(self.out(x)) # network output
return x
def train(net, train_data, train_labels, n_epochs=20, batch_size=100, learning_rate=1e-3, momentum=.99):
"""Run stochastic gradient descent on binary cross-entropy loss for a given
deep network (cf. appendix for details)
Args:
net (nn.Module): deep network whose parameters to optimize with SGD
train_data (torch.Tensor): n_train x h x w tensor with stimulus gratings
train_labels (torch.Tensor): n_train x 1 tensor with true tilt of each
stimulus grating in train_data, i.e. 1. for right, 0. for left
n_epochs (int): number of times to run SGD through whole training data set
batch_size (int): number of training data samples in each mini-batch
learning_rate (float): learning rate to use for SGD updates
momentum (float): momentum parameter for SGD updates
"""
# Initialize binary cross-entropy loss function
loss_fn = nn.BCELoss()
# Initialize SGD optimizer with momentum
optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=momentum)
# Placeholder to save loss at each iteration
track_loss = []
# Loop over epochs
for i in range(n_epochs):
# Split up training data into random non-overlapping mini-batches
ishuffle = torch.randperm(train_data.shape[0]) # random ordering of training data
minibatch_data = torch.split(train_data[ishuffle], batch_size) # split train_data into minibatches
minibatch_labels = torch.split(train_labels[ishuffle], batch_size) # split train_labels into minibatches
# Loop over mini-batches
for stimuli, tilt in zip(minibatch_data, minibatch_labels):
# Evaluate loss and update network weights
out = net(stimuli) # predicted probability of tilt right
loss = loss_fn(out, tilt) # evaluate loss
optimizer.zero_grad() # clear gradients
loss.backward() # compute gradients
optimizer.step() # update weights
# Keep track of loss at each iteration
track_loss.append(loss.item())
# Track progress
if (i + 1) % (n_epochs // 5) == 0:
print(f'epoch {i + 1} | loss on last mini-batch: {loss.item(): .2e}')
print('training done!')
def get_hidden_activity(net, stimuli, layer_labels):
"""Retrieve internal representations of network
Args:
net (nn.Module): deep network
stimuli (torch.Tensor): p x 48 x 64 tensor with stimuli for which to
compute and retrieve internal representations
layer_labels (list): list of strings with labels of each layer for which
to return its internal representations
Returns:
dict: internal representations at each layer of the network, in
numpy arrays. The keys of this dict are the strings in layer_labels.
"""
# Placeholder
hidden_activity = {}
# Attach 'hooks' to each layer of the network to store hidden
# representations in hidden_activity
def hook(module, input, output):
module_label = list(net._modules.keys())[np.argwhere([module == m for m in net._modules.values()])[0, 0]]
if module_label in layer_labels: # ignore output layer
hidden_activity[module_label] = output.view(stimuli.shape[0], -1).detach().numpy()
hooks = [layer.register_forward_hook(hook) for layer in net.children()]
# Run stimuli through the network
pred = net(stimuli)
# Remove the hooks
[h.remove() for h in hooks]
return hidden_activity
def plot_corr_matrix(rdm, ax=None):
"""Plot dissimilarity matrix
Args:
rdm (numpy array): n_stimuli x n_stimuli representational dissimilarity
matrix
ax (matplotlib axes): axes onto which to plot
Returns:
nothing
"""
if ax is None:
ax = plt.gca()
image = ax.imshow(rdm, vmin=0.0, vmax=2.0)
ax.set_xticks([])
ax.set_yticks([])
cbar = plt.colorbar(image, ax=ax, label='dissimilarity')
def plot_multiple_rdm(rdm_dict):
"""Draw multiple subplots for each RDM in rdm_dict."""
fig, axs = plt.subplots(1, len(rdm_dict),
figsize=(4 * len(resp_dict), 3.5))
# Compute RDM's for each set of responses and plot
for i, (label, rdm) in enumerate(rdm_dict.items()):
# Uncomment to test your function
image = plot_corr_matrix(rdm, axs[i])
axs[i].set_title(label)
def plot_rdm_rdm_correlations(rdm_sim):
"""Draw a bar plot showing between-RDM correlations."""
f, ax = plt.subplots()
ax.bar(rdm_sim.keys(), rdm_sim.values())
ax.set_xlabel('Deep network model layer')
ax.set_ylabel('Correlation of model layer RDM\nwith mouse V1 RDM')
```
---
# Section 1: Orientation discrimination task
We will build our normative encoding model by optimizing its parameters to solve an orientation discrimination task.
The task is to tell whether a given grating stimulus is tilted to the "right" or "left"; that is, whether its angle relative to the vertical is positive or negative, respectively. We show example stimuli below, which were constructed using the helper function `grating()`.
Note that this is a task that we know many mammalian visual systems are capable of solving. It is therefore conceivable that the representations in a deep network model optimized for this task might resemble those in the brain. To test this hypothesis, we will compare the representations of our optimized encoding model to neural activity recorded in response to these very same stimuli, courtesy of [Stringer et al 2019](https://www.biorxiv.org/content/10.1101/679324v2.abstract).
```python
#@title
#@markdown Execute this cell to plot example stimuli
orientations = np.linspace(-90, 90, 5)
h = 3
n_col = len(orientations)
fig, axs = plt.subplots(1, n_col, figsize=(h * n_col, h))
h, w = grating(0).shape # height and width of stimulus
print('stimulus size: %i x %i' % (h, w))
for i, ori in enumerate(orientations):
stimulus = grating(ori)
axs[i].set_title(f'{ori: .0f}$^o$')
show_stimulus(stimulus, axs[i])
```
---
# Section 2: A deep network model of orientation discrimination
Our goal is to build a model that solves the orientation discrimination task outlined above. The model should take as input a stimulus image and output the probability of that stimulus being tilted right.
To do this, we will use a **convolutional neural network (CNN)**, which is the type of network we saw in Tutorial 2. Here, we will use a CNN that performs *two-dimensional* convolutions on the raw stimulus image (which is a 2D matrix of pixels), rather than *one-dimensional* convolutions on a categorical 1D vector representation of the stimulus. CNNs are commonly used for image processing.
The particular CNN we will use here has two layers:
1. a *convolutional layer*, which convolves the images with a set of filters
2. a *fully connected layer*, which transforms the output of this convolution into a 10-dimensional representation
Finally, a set of output weights transforms this 10-dimensional representation into a single scalar $p$, denoting the predicted probability of the input stimulus being tilted right.
<p align="center">
</p>
See the appendix for in-depth instructions for how to code up such a network in PyTorch. For now, however, we'll leave these details aside and focus on training this network and analyzing its internal representations.
Run the next cell to train such a network to solve this task. After initializing our CNN model, it builds a dataset of oriented grating stimuli to use for training it. These are then passed into a function called `train()` that uses SGD to optimize the model's parameters, taking similar arguments as the `train()` function we wrote in Tutorial 1.
Note that it may take ~30 seconds for the training to complete.
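Before training, here is an optional sanity check (not part of the original tutorial): pass a couple of gratings through a freshly initialized CNN and confirm that it returns one probability per stimulus.
```python
# Optional sanity check: an untrained CNN maps each stimulus to a single number in (0, 1)
demo_net = CNN(h, w)  # h, w were set above when plotting the example stimuli
demo_out = demo_net(torch.stack([grating(30.0), grating(-30.0)]))
print(demo_out.shape)                # torch.Size([2, 1])
print(demo_out.detach().squeeze())   # two values in (0, 1); untrained, so uninformative
```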
```python
help(train)
```
Help on function train in module __main__:
train(net, train_data, train_labels, n_epochs=20, batch_size=100, learning_rate=0.001, momentum=0.99)
Run stochastic gradient descent on binary cross-entropy loss for a given
deep network (cf. appendix for details)
Args:
net (nn.Module): deep network whose parameters to optimize with SGD
train_data (torch.Tensor): n_train x h x w tensor with stimulus gratings
train_labels (torch.Tensor): n_train x 1 tensor with true tilt of each
stimulus grating in train_data, i.e. 1. for right, 0. for left
n_epochs (int): number of times to run SGD through whole training data set
batch_size (int): number of training data samples in each mini-batch
learning_rate (float): learning rate to use for SGD updates
momentum (float): momentum parameter for SGD updates
```python
# Set random seeds for reproducibility
np.random.seed(12)
torch.manual_seed(12)
# Initialize CNN model
net = CNN(h, w)
# Build training set to train it on
n_train = 1000 # size of training set
# sample n_train random orientations between -90 and +90 degrees
ori = (np.random.rand(n_train) - 0.5) * 180
# build orientated grating stimuli
stimuli = torch.stack([grating(i) for i in ori])
# stimulus tilt: 1. if tilted right, 0. if tilted left, as a column vector
tilt = torch.tensor(ori > 0).type(torch.float).unsqueeze(-1)
# Train model
train(net, stimuli, tilt)
```
epoch 4 | loss on last mini-batch: 2.35e-01
epoch 8 | loss on last mini-batch: 2.00e-03
epoch 12 | loss on last mini-batch: 6.20e-08
epoch 16 | loss on last mini-batch: 4.94e-04
epoch 20 | loss on last mini-batch: 3.96e-07
training done!
---
# Section 3: Comparing CNNs to neural activity
Let's now analyze the internal representations of our deep CNN model of orientation discrimination and qualitatively compare them to population responses in mouse primary visual cortex.
In Section 3.2, we'll try to quantitatively compare CNN and primary visual cortex representations. In Section 3.3, we will visualize their representations and get some intuition for their structure.
## Section 3.1: Load data
In the next cell, we provide code for loading in some data from [this paper](https://www.biorxiv.org/content/10.1101/679324v2.abstract), which contains the responses of about ~20,000 neurons in mouse primary visual cortex to grating stimuli like those used to train our network (this is the same data used in Tutorial 1). These data are stored in two variables:
* `resp_v1` is a matrix where each row contains the responses of all neurons to a single stimulus.
* `ori` is a vector with the orientations of each stimulus, in degrees. As in the above convention, negative angles denote stimuli tilted to the left and positive angles denote stimuli tilted to the right.
We will then extract our deep CNN model's representations of these same stimuli (i.e. oriented gratings with the orientations in `ori`). We will run the same stimuli through our CNN model and use the helper function `get_hidden_activity()` to store the model's internal representations. The output of this function is a Python `dict`, which contains a matrix of population responses (just like `resp_v1`) for each layer of the network specified by the `layer_labels` argument. We'll focus on looking at the representations in
* the output of the first convolutional layer, stored in the model as `'pool'` (see the appendix for the details of the CNN architecture to understand why it's called this way)
* the 10-dimensional output of the fully connected layer, stored in the model as `'fc'`
```python
# Load mouse V1 data
resp_v1, ori = load_data()
# Extract model internal representations of each stimulus in the V1 data
# construct grating stimuli for each orientation presented in the V1 data
stimuli = torch.stack([grating(a.item()) for a in ori])
layer_labels = ['pool', 'fc']
resp_model = get_hidden_activity(net, stimuli, layer_labels)
# Aggregate all responses into one dict
resp_dict = {}
resp_dict['V1 data'] = resp_v1
for k, v in resp_model.items():
label = f"model\n'{k}' layer"
resp_dict[label] = v
```
## Section 3.2: Quantitative comparisons of CNNs and neural activity
```python
#@title Video 2: Quantitative comparisons of CNNs and neural activity
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="2Jbk7jFBvbU", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=2Jbk7jFBvbU
### Section 3.2.1 Representational Similarity Analysis (RSA)
We noticed above some similarities and differences between the population responses in mouse primary visual cortex and in different layers in our model. Let's now try to quantify this.
To do this, we'll use a technique called [**Representational Similarity Analysis**](https://www.frontiersin.org/articles/10.3389/neuro.06.004.2008/full?utm_source=FWEB&utm_medium=NBLOG&utm_campaign=ECO_10YA_top-research). The idea is to look at the similarity structure between representations of different stimuli. We can say that a brain area and a model use a similar representational scheme if stimuli that are represented (dis)similarly in the brain are represented (dis)similarly in the model as well.
To quantify this, we begin by computing the **representational dissimilarity matrix (RDM)** for the mouse V1 data and each model layer. This matrix, which we'll call $\mathbf{M}$, is computed as one minus the correlation coefficients between population responses to each stimulus. We can efficiently compute this by using the $z$-scored responses (see Appendix for explanation). In particular, the full matrix can be computed as:
\begin{gather}
\mathbf{M} = 1 - \frac{1}{N} \mathbf{ZZ}^T \\
\end{gather}
where $\mathbf{Z}$ is the z-scored responses and N is the number of neurons (or units).
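As a quick illustration (synthetic data, not part of the exercise), we can verify on random "responses" that this $z$-score formula reproduces one minus the Pearson correlation between population response vectors:
```python
# Toy check of the identity 1 - Z Z^T / N == 1 - corrcoef (rows = stimuli, columns = neurons)
rng = np.random.RandomState(0)
toy = rng.randn(6, 50)                      # 6 "stimuli" x 50 "neurons"
Z = zscore(toy, axis=1)                     # z-score each stimulus's population response
M_from_z = 1 - Z @ Z.T / toy.shape[1]
M_from_corr = 1 - np.corrcoef(toy)
print(np.allclose(M_from_z, M_from_corr))   # True
```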
#### Exercise 1: Compute RDMs
Complete the function `RDM()` for computing the RDM for a given set of population responses to each stimulus. Use the above formula in terms of $z$-scored population responses. You can use the helper function `zscore()` to compute the matrix of $z$-scored responses.
The subsequent cell uses this function to plot the RDM of the population responses in the V1 data and in each layer of our model CNN.
```python
def RDM(resp):
"""Compute the representational dissimilarity matrix (RDM)
Args:
resp (ndarray): S x N matrix with population responses to
each stimulus in each row
Returns:
ndarray: S x S representational dissimilarity matrix
"""
#########################################################
## TO DO for students: compute representational dissimilarity matrix
# Fill out function and remove
raise NotImplementedError("Student exercise: complete function RDM")
#########################################################
# z-score responses to each stimulus
zresp = ...
# Compute RDM
RDM = ...
return RDM
# Uncomment to test your function
# rdm_dict = {label: RDM(resp) for label, resp in resp_dict.items()}
# plot_multiple_rdm(rdm_dict)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D4_DeepLearning1/solutions/W3D4_Tutorial3_Solution_805a9425.py)
*Example output:*
```python
#@title Video 3: Exercise 1 solution discussion
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="otzR-KXDjus", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=otzR-KXDjus
#### (Bonus) Exercise: Correlate RDMs
To quantify how similar the representations are, we can simply correlate their dissimilarity matrices. For this, we'll again use the correlation coefficient. Note that dissimilarity matrices are symmetric ($M_{ss'} = M_{s's}$), so we should only use the off-diagonal terms on one side of the diagonal when computing this correlation to avoid overcounting. Moreover, we should leave out the diagonal terms, which are always equal to 0, so will always be perfectly correlated across any pair of RDM's.
Complete the function `correlate_rdms()` below that computes this correlation. The code for extracting the off-diagonal terms is provided.
We will then use function to compute the correlation between the RDM's for each layer of our model CNN and that of the V1 data.
```python
def correlate_rdms(rdm1, rdm2):
"""Correlate off-diagonal elements of two RDM's
Args:
rdm1 (np.ndarray): S x S representational dissimilarity matrix
rdm2 (np.ndarray): S x S representational dissimilarity matrix to
correlate with rdm1
Returns:
float: correlation coefficient between the off-diagonal elements
of rdm1 and rdm2
"""
# Extract off-diagonal elements of each RDM
ioffdiag = np.triu_indices(rdm1.shape[0], k=1) # indices of off-diagonal elements
rdm1_offdiag = rdm1[ioffdiag]
rdm2_offdiag = rdm2[ioffdiag]
#########################################################
## TO DO for students: compute correlation coefficient
# Fill out function and remove
raise NotImplementedError("Student exercise: complete correlate rdms")
#########################################################
corr_coef = np.corrcoef(..., ...)[0,1]
return corr_coef
# Split RDMs into V1 responses and model responses
rdm_model = rdm_dict.copy()
rdm_v1 = rdm_model.pop('V1 data')
# Correlate off-diagonal terms of dissimilarity matrices
# Uncomment below to test your function
# rdm_sim = {label: correlate_rdms(rdm_v1, rdm) for label, rdm in rdm_model.items()}
# plot_rdm_rdm_correlations(rdm_sim)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D4_DeepLearning1/solutions/W3D4_Tutorial3_Solution_ec39c647.py)
*Example output:*
According to this metric, which layer's representations most resemble those in the data? Does this agree with your intuitions from the qualitative comparisons in Section 3.3?
#### (Bonus) Exercise: Plot rows of RDM
To better understand how these correlations in RDM's arise, we can try plotting individual rows of the RDM matrix. The resulting curves show the similarity of the responses to each stimulus with that to one specific stimulus.
Complete the `plot_rdm_rows()` function below for plotting the rows of the model and data RDM's. We will then plot a few specified rows. Do these curves explain the correlation (or lack thereof) in RDM's you saw in the previous exercise?
```python
def plot_rdm_rows(ori_list, rdm_dict, rdm_oris):
"""Plot the dissimilarity of response to each stimulus with response to one
specific stimulus
Args:
ori_list (list of float): plot dissimilarity with response to stimulus with
orientations closest to each value in this list
rdm_dict (dict): RDM's from which to extract dissimilarities
rdm_oris (np.ndarray): orientations corresponding to each row/column of RDMs
in rdm_dict
"""
n_col = len(ori_list)
f, axs = plt.subplots(1, n_col, figsize=(4 * n_col, 4), sharey=True)
# Get index of orientation closest to ori_plot
for ax, ori_plot in zip(axs, ori_list):
iori = np.argmin(np.abs(ori - ori_plot))
######################################################################
# TODO: plot dissimilarity curves in each RDM and remove the error
raise NotImplementedError("Student exercise: complete plot_rdm_rows")
######################################################################
# Plot dissimilarity curves in each RDM
for label, rdm in rdm_dict.items():
ax.plot(..., ..., label=label)
# Draw vertical line at stimulus we are plotting dissimilarity w.r.t.
ax.axvline(rdm_oris[iori], color=".7", zorder=-1)
# Label axes
ax.set_title(f'Dissimilarity with response\nto {ori_plot: .0f}$^o$ stimulus')
ax.set_xlabel('Stimulus orientation ($^o$)')
axs[0].set_ylabel('Dissimilarity')
axs[-1].legend(loc="upper left", bbox_to_anchor=(1, 1))
ori_list = [-75, -25, 25, 75]
# Uncomment to test your function
# plot_rdm_rows(ori_list, rdm_dict, ori)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D4_DeepLearning1/solutions/W3D4_Tutorial3_Solution_276b8031.py)
*Example output:*
## Section 3.3: Qualitative comparisons of CNNs and neural activity
To visualize the representations in the data and in each of these model layers, we'll use two classic techniques from systems neuroscience:
1. **tuning curves**: plotting the response of single neurons (or units, in the case of the deep network) as a function of the stimulus orientation
2. **dimensionality reduction**: plotting full population responses to each stimulus in two dimensions via dimensionality reduction. We'll use the non-linear dimensionality reduction technique t-SNE for this.
### Section 3.3.1: Tuning curves
Below, we show some example tuning curves for different neurons and units in the CNN we trained above. How are the single neuron responses similar/different between the model and the data? Try running this cell multiple times to get an idea of shared properties in the tuning curves of the neurons within each population.
```python
#@title
#@markdown Execute this cell to visualize tuning curves
fig, axs = plt.subplots(1, len(resp_dict), figsize=(len(resp_dict) * 6, 6))
for i, (label, resp) in enumerate(resp_dict.items()):
ax = axs[i]
ax.set_title('%s responses' % label)
# Pick three random neurons whose tuning curves to plot
ineurons = np.random.choice(resp.shape[1], 3, replace=False)
# Plot tuning curves of ineurons
ax.plot(ori, resp[:, ineurons])
ax.set_xticks(np.linspace(-90, 90, 5))
ax.set_xlabel('stimulus orientation')
ax.set_ylabel('neural response')
plt.tight_layout()
plt.show()
```
### Section 3.3.2: Dimensionality reduction of representations
We can visualize a dimensionality-reduced version of the internal representations of the mouse primary visual cortex or CNN internal representations in order to potentially uncover informative structure. Here, we use PCA to reduce the dimensionality to 20 dimensions, and then use tSNE to further reduce dimensionality to 2 dimensions. We use the first step of PCA so that tSNE runs faster.
#### (Bonus) Exercise: Visualize reduced dimensionality representations
Complete the code below for plotting dimensionality-reduced population responses.
```python
def plot_resp_lowd(resp_dict):
"""Plot a low-dimensional representation of each dataset in resp_dict."""
n_col = len(resp_dict)
fig, axs = plt.subplots(1, n_col, figsize=(4.5 * len(resp_dict), 4.5))
for i, (label, resp) in enumerate(resp_dict.items()):
ax = axs[i]
ax.set_title('%s responses' % label)
# First do PCA to reduce dimensionality to 20 dimensions so that tSNE is faster
resp_lowd = PCA(n_components=min(20, resp.shape[1])).fit_transform(resp)
# Then do tSNE to reduce dimensionality to 2 dimensions
resp_lowd = TSNE(n_components=2).fit_transform(resp_lowd)
#########################################################################
# TODO: plot dimensionality-reduced responses and remove the error
raise NotImplementedError("Student exercise: complete plot_resp_lowd")
#########################################################################
# Plot dimensionality-reduced population responses
# on 2D axes, with each point colored by stimulus orientation
x, y = ..., ...
pts = ax.scatter(x, y, c=ori, cmap='twilight', vmin=-90, vmax=90)
fig.colorbar(pts, ax=ax, ticks=np.linspace(-90, 90, 5), label='Stimulus orientation')
ax.set_xlabel('Dimension 1')
ax.set_ylabel('Dimension 2')
ax.set_xticks([])
ax.set_yticks([])
# Uncomment to test your function
# plot_resp_lowd(resp_dict)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D4_DeepLearning1/solutions/W3D4_Tutorial3_Solution_f4bd7002.py)
*Example output:*
Interpret the figure above. Why do these representations look the way they do? Here are a few specific questions to think about:
* How are the population responses similar/different between the model and the data? Can you explain these population-level responses from the single neuron responses seen in the previous exercise, or vice-versa?
* How do the representations in the different layers of the model differ, and how does this relate to the orientation discrimination task the model was optimized for?
* Which layer of our deep network encoding model most closely resembles the V1 data?
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D4_DeepLearning1/solutions/W3D4_Tutorial3_Solution_ff17ff9e.py)
---
# Summary
In this notebook, we learned
* how to use deep learning to build a normative encoding model of the visual system
* how to use RSA to evaluate how the model's representations match to those in the brain
Our approach was to optimize a deep convolutional network to solve an orientation discrimination task. But note that many other approaches could have been taken.
Firstly, there are many other "normative" ways to solve this orientation discrimination task. We could have used different neural network architectures, or even used a completely different algorithm that didn't involve a neural network at all, but instead used other kinds of image transformations (e.g. Fourier transforms). Neural network approaches, however, are special in that they explicitly use abstract distributed representations to compute, which feels a lot closer to the kinds of algorithms the brain uses. See the appendix for a deeper discussion of why *convolutional* neural networks in particular are well-suited for building normative models of the visual system.
Secondly, our choice of visual task was mostly arbitrary. For example, we could have trained our network to directly estimate the orientation of the stimulus, rather than just discriminating between two classes of tilt. Or, we could have trained the network to perform a more naturalistic task, such as recognizing the rotation of an arbitrary image. Or we could try a task like object recognition. Is this something that mice compute in their visual cortex?
Training on different tasks could lead to different representations of the oriented grating stimuli, which might match the observed V1 representations better or worse.
---
# Appendix
## Convolutional Neural Networks (CNN's)
Convolutional layers are different from their fully connected counterparts in two ways (see figure below):
* In a fully connected layer, each unit computes a weighted sum over all the input units. In a convolutional layer, on the other hand, each unit computes a weighted sum over only a small patch of the input, referred to as the unit's **receptive field**. When the input is an image, the receptive field can be thought of as a local patch of pixels.
* In a fully connected layer, each unit uses its own independent set of weights to compute the weighted sum. In a convolutional layer, all the units (within the same channel) **share the same weights**. This set of shared weights is called the **convolutional filter or kernel**. The result of this computation is a convolution, where each unit has computed the same weighted sum over a different part of the input.
*(Figure: a fully connected layer vs. a convolutional layer; image not included here.)*
## Building CNN's with PyTorch
Here we walk through building the different types of layers in a CNN using PyTorch, culminating in the CNN model used above.
#### **Fully connected layers**
In a fully connected layer, each unit computes a weighted sum over all the input units and applies a non-linear function to this weighted sum. You have used such layers many times already in parts 1 and 2. As you have already seen, these are implemented in PyTorch using the `nn.Linear` class.
See the next cell for code for constructing a deep network with one fully connected layer that will classify an input image as being tilted left or right. Specifically, its output is the predicted probability of the input image being tilted right. To ensure that its output is a probability (i.e. a number between 0 and 1), we use a sigmoid activation function to squash the output into this range (implemented with `torch.sigmoid()`).
```python
class FC(nn.Module):
"""Deep network with one fully connected layer
Args:
h_in (int): height of input image, in pixels (i.e. number of rows)
w_in (int): width of input image, in pixels (i.e. number of columns)
Attributes:
fc (nn.Linear): weights and biases of fully connected layer
out (nn.Linear): weights and biases of output layer
"""
def __init__(self, h_in, w_in):
super().__init__()
self.dims = h_in * w_in # dimensions of flattened input
self.fc = nn.Linear(self.dims, 10) # flattened input image --> 10D representation
self.out = nn.Linear(10, 1) # 10D representation --> scalar
def forward(self, x):
"""Classify grating stimulus as tilted right or left
Args:
x (torch.Tensor): p x 48 x 64 tensor with pixel grayscale values for
each of p stimulus images.
Returns:
torch.Tensor: p x 1 tensor with network outputs for each input provided
in x. Each output should be interpreted as the probability of the
corresponding stimulus being tilted right.
"""
x = x.view(-1, self.dims) # flatten each input image into a vector
x = torch.relu(self.fc(x)) # output of fully connected layer
x = torch.sigmoid(self.out(x)) # network output
return x
```
#### **Convolutional layers**
In a convolutional layer, each unit computes a weighted sum over a two-dimensional $K \times K$ patch of inputs (see appendix for a more detailed description). As we saw in part 2, the units are arranged in **channels** (see figure below), whereby units in the same channel compute the same weighted sum over different parts of the input, using the weights of that channel's **convolutional filter (or kernel)**. The output of a convolutional layer is thus a three-dimensional tensor of shape $C^{out} \times H \times W$, where $C^{out}$ is the number of channels (i.e. the number of convolutional filters/kernels), and $H$ and $W$ are the height and width of the input.
*(Figure: channels in a convolutional layer; image not included here.)*
Such layers can be implemented in Python using the PyTorch class `nn.Conv2d`, which takes the same arguments as `nn.Conv1d` (documentation [here](https://pytorch.org/docs/master/generated/torch.nn.Conv2d.html)).
See the next cell for code incorporating a convolutional layer with 8 convolutional filters of size 5 $\times$ 5 into our above fully connected network. Note that we have to flatten the multi-channel output in order to pass it on to the fully connected layer.
**Note:** as is also the case for the `nn.Conv1d` class, the inputs to `nn.Conv2d` layers must have a channel dimension in their first dimension. Thus, the input to a `nn.Conv2d` layer must be a 3D tensor of shape $C^{in} \times H \times W$ where $C^{in}$ is the number of input channels and $H, W$ their height and width, respectively. This means we'll have to make sure the stimulus images we feed into our network are 3D as well, like RGB images are. We'll do this by simply appending a singleton dimension, to reflect the fact that our grayscale images have a single color channel.
```python
class ConvFC(nn.Module):
"""Deep network with one convolutional layer and one fully connected layer
Args:
h_in (int): height of input image, in pixels (i.e. number of rows)
w_in (int): width of input image, in pixels (i.e. number of columns)
Attributes:
conv (nn.Conv2d): filter weights of convolutional layer
dims (tuple of ints): dimensions of output from conv layer
fc (nn.Linear): weights and biases of fully connected layer
out (nn.Linear): weights and biases of output layer
"""
def __init__(self, h_in, w_in):
super().__init__()
C_in = 1 # input stimuli have only 1 input channel
C_out = 8 # number of output channels (i.e. of convolutional kernels to convolve the input with)
K = 5 # size of each convolutional kernel (should be odd number for the padding to work as expected)
self.conv = nn.Conv2d(C_in, C_out, kernel_size=K, padding=K//2) # add padding to ensure that each channel has same dimensionality as input
self.dims = (C_out, h_in, w_in) # dimensions of conv layer output (padding preserves the input height and width)
self.fc = nn.Linear(np.prod(self.dims), 10) # flattened conv output --> 10D representation
self.out = nn.Linear(10, 1) # 10D representation --> scalar
def forward(self, x):
"""Classify grating stimulus as tilted right or left
Args:
x (torch.Tensor): p x 48 x 64 tensor with pixel grayscale values for
each of p stimulus images.
Returns:
torch.Tensor: p x 1 tensor with network outputs for each input provided
in x. Each output should be interpreted as the probability of the
corresponding stimulus being tilted right.
"""
x = x.unsqueeze(1) # p x 1 x 48 x 64, add a singleton dimension for the single stimulus channel
x = torch.relu(self.conv(x)) # output of convolutional layer
x = x.view(-1, np.prod(self.dims)) # flatten convolutional layer outputs into a vector
x = torch.relu(self.fc(x)) # output of fully connected layer
x = torch.sigmoid(self.out(x)) # network output
return x
```
#### **Max pooling layers**
In a max pooling layer, each unit computes the maximum over a small two-dimensional $K^{pool} \times K^{pool}$ patch of inputs. Given a multi-channel input of dimensions $C \times H \times W$, the output of a max pooling layer has dimensions $C \times H^{out} \times W^{out}$, where:
\begin{align}
H^{out} &= \left\lfloor \frac{H}{K^{pool}} \right\rfloor\\
W^{out} &= \left\lfloor \frac{W}{K^{pool}} \right\rfloor
\end{align}
where $\lfloor\cdot\rfloor$ denotes rounding down to the nearest integer below (i.e. floor division `//` in Python).
Max pooling layers can be implemented with the PyTorch `nn.MaxPool2d` class, which takes as a single argument the size $K^{pool}$ of the pooling patch. See the next cell for an example, which builds upon the previous example by adding in a max pooling layer just after the convolutional layer. Note again that we need to calculate the dimensions of its output in order to set the dimensions of the subsequent fully connected layer.
```python
class PoolConvFC(nn.Module):
"""Deep network with one convolutional layer followed by a max pooling layer
and one fully connected layer
Args:
h_in (int): height of input image, in pixels (i.e. number of rows)
w_in (int): width of input image, in pixels (i.e. number of columns)
Attributes:
conv (nn.Conv2d): filter weights of convolutional layer
pool (nn.MaxPool2d): max pooling layer
dims (tuple of ints): dimensions of output from pool layer
fc (nn.Linear): weights and biases of fully connected layer
out (nn.Linear): weights and biases of output layer
"""
def __init__(self, h_in, w_in):
super().__init__()
C_in = 1 # input stimuli have only 1 input channel
C_out = 8 # number of output channels (i.e. of convolutional kernels to convolve the input with)
K = 5 # size of each convolutional kernel
Kpool = 2 # size of patches over which to pool
self.conv = nn.Conv2d(C_in, C_out, kernel_size=K, padding=K//2) # add padding to ensure that each channel has same dimensionality as input
self.pool = nn.MaxPool2d(Kpool)
self.dims = (C_out, h_in // Kpool, w_in // Kpool) # dimensions of pool layer output
self.fc = nn.Linear(np.prod(self.dims), 10) # flattened pool output --> 10D representation
self.out = nn.Linear(10, 1) # 10D representation --> scalar
def forward(self, x):
"""Classify grating stimulus as tilted right or left
Args:
x (torch.Tensor): p x 48 x 64 tensor with pixel grayscale values for
each of p stimulus images.
Returns:
torch.Tensor: p x 1 tensor with network outputs for each input provided
in x. Each output should be interpreted as the probability of the
corresponding stimulus being tilted right.
"""
x = x.unsqueeze(1) # p x 1 x 48 x 64, add a singleton dimension for the single stimulus channel
x = torch.relu(self.conv(x)) # output of convolutional layer
x = self.pool(x) # output of pooling layer
x = x.view(-1, np.prod(self.dims)) # flatten pooling layer outputs into a vector
x = torch.relu(self.fc(x)) # output of fully connected layer
x = torch.sigmoid(self.out(x)) # network output
return x
```
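As a quick, added sanity check (not part of the original tutorial), the sketch below assumes the `PoolConvFC` class above has been defined and pushes a dummy batch of 48 $\times$ 64 grayscale stimuli through it, to confirm that the output is one probability per stimulus:
```python
# Hypothetical shape check for PoolConvFC (assumes torch and numpy are imported as above)
dummy_stimuli = torch.rand(7, 48, 64)  # placeholder batch of p = 7 grayscale images
net = PoolConvFC(h_in=48, w_in=64)
with torch.no_grad():
    out = net(dummy_stimuli)
print(out.shape)             # expected: torch.Size([7, 1]), one output per stimulus
print(out.min(), out.max())  # all outputs lie in (0, 1) thanks to the sigmoid
```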
This pooling layer completes the CNN model trained above to perform orientation discrimination. We can think of this architecture as having two primary layers:
1. a convolutional + pooling layer
2. a fully connected layer
We group together the convolution and pooling layers into one, as they really form one full unit of convolutional processing, where each patch of the image is passed through a convolutional filter and pooled with neighboring patches. It is standard practice to follow up any convolutional layer with a pooling layer, so they are generally treated as a single block of processing.
## Orientation discrimination as a binary classification problem
What loss function should we minimize to optimize orientation discrimination performance? We first note that the orientation discrimination task is a **binary classification problem**, where the goal is to classify a given stimulus into one of two classes: being tilted left or being tilted right.
Our goal is thus to output a high probability of the stimulus being tilted right (i.e. large $p$) whenever the stimulus is tilted right, and a high probability of the stimulus being tilted left (i.e. large $1-p \Leftrightarrow$ small $p$) whenever the stimulus is tilted left.
Let $\tilde{y}^{(n)}$ be the label of the $n$th stimulus in the mini-batch, indicating its true tilt:
\begin{equation}
\tilde{y}^{(n)} =
\begin{cases}
1 &\text{if stimulus }n\text{ is tilted right} \\
0 &\text{if stimulus }n\text{ is tilted left}
\end{cases}
\end{equation}
Let $p^{(n)}$ be the predicted probability of that stimulus being tilted right assigned by our network. Note that $1-p^{(n)}$ is the predicted probability of that stimulus being tilted left. We'd now like to modify the parameters so as to maximize the predicted probability of the true class $\tilde{y}^{(n)}$. One way to formalize this is as maximizing the *log* probability
\begin{align}
\log \left( \text{predicted probability of stimulus } n \text{ being of class } \tilde{y}^{(n)}\right) &=
\begin{cases}
\log p^{(n)} &\text{if }\tilde{y}^{(n)} = 1 \\
\log (1 - p^{(n)}) &\text{if }\tilde{y}^{(n)} = 0
\end{cases}
\\
&= \tilde{y}^{(n)} \log p^{(n)} + (1 - \tilde{y}^{(n)})\log(1 - p^{(n)})
\end{align}
You should recognize this expression as the log likelihood of the Bernoulli distribution under the predicted probability $p^{(n)}$. This is the same quantity that is maximized in logistic regression, where the predicted probability $p^{(n)}$ is just a simple linear sum of its inputs (rather than a complicated non-linear operation, like in the deep networks used here).
To turn this into a loss function, we simply multiply it by -1, resulting in the so-called **binary cross-entropy**, or **negative log likelihood**. Summing over $P$ samples in a batch, the binary cross entropy loss is given by
\begin{equation}
L = -\sum_{n=1}^P \left[ \tilde{y}^{(n)} \log p^{(n)} + (1 - \tilde{y}^{(n)})\log(1 - p^{(n)}) \right]
\end{equation}
The binary cross-entropy loss can be implemented in PyTorch using the `nn.BCELoss()` loss function (cf. [documentation](https://pytorch.org/docs/master/generated/torch.nn.BCELoss.html)).
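As a small added illustration (not from the original notebook), here is a self-contained sketch of how `nn.BCELoss` compares predicted probabilities against binary tilt labels; the probability and label values are placeholders:
```python
import torch
from torch import nn

loss_fn = nn.BCELoss()  # expects probabilities in (0, 1) and targets in {0, 1}

p = torch.tensor([0.9, 0.2, 0.7])  # placeholder predicted probabilities of "tilted right"
y = torch.tensor([1.0, 0.0, 1.0])  # placeholder true labels (1 = right, 0 = left)

# Note: by default nn.BCELoss averages over the batch rather than summing as in the
# equation above; pass reduction='sum' to match the summed form exactly.
loss = loss_fn(p, y)
print(loss)  # equals -(y * torch.log(p) + (1 - y) * torch.log(1 - p)).mean()
```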
Feel free to check out the code used to optimize the CNN in the `train()` function defined in the hidden cell of helper functions at the top of the notebook. Because the CNN's used here have lots of parameters, we have to use two tricks that we didn't use in the previous parts of this tutorial:
1. We have to use *stochastic* gradient descent (SGD), rather than just gradient descent (GD).
2. We have to use [momentum](https://distill.pub/2017/momentum/) in our SGD updates. This is easily incorporated into our PyTorch implementation by just setting the `momentum` argument of the built-in `optim.SGD` optimizer, as sketched below.
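Below is a minimal, hypothetical sketch of a single SGD-with-momentum update on one of the models defined above; the stimuli and labels are random placeholders, not the tutorial's actual training data or `train()` function:
```python
import torch
from torch import nn, optim

net = FC(h_in=48, w_in=64)  # any of the models defined above would work here
optimizer = optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)  # SGD with momentum
loss_fn = nn.BCELoss()

stimuli = torch.rand(20, 48, 64)               # placeholder mini-batch of stimuli
labels = torch.randint(0, 2, (20, 1)).float()  # placeholder tilt labels (0 or 1)

optimizer.zero_grad()                 # reset accumulated gradients
loss = loss_fn(net(stimuli), labels)  # forward pass + binary cross-entropy
loss.backward()                       # backpropagate through the network
optimizer.step()                      # parameter update with momentum
```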
## RDM Z-Score Explanation
If $r^{(s)}_i$ is the response of the $i$th neuron to the $s$th stimulus, then
\begin{gather}
M_{ss'} = 1 - \frac{\text{Cov}\left[ r_i^{(s)}, r_i^{(s')} \right]}{\sqrt{\text{Var}\left[ r_i^{(s)} \right] \text{Var}\left[ r_i^{(s')} \right]}} = 1 - \frac{\sum_{i=1}^N (r_i^{(s)} - \bar{r}^{(s)})(r_i^{(s')} - \bar{r}^{(s')}) }{\sqrt{\sum_{i=1}^N \left( r_i^{(s)} - \bar{r}^{(s)} \right)^2 \sum_{i=1}^N \left( r_i^{(s')} - \bar{r}^{(s')} \right)^2 }} \\
\bar{r}^{(s)} = \frac{1}{N} \sum_{i=1}^N r_i^{(s)}
\end{gather}
This can be computed efficiently by using the $z$-scored responses
\begin{equation}
z_i^{(s)} = \frac{r_i^{(s)} - \bar{r}^{(s)}}{\sqrt{\frac{1}{N}\sum_{i=1}^N \left( r_i^{(s)} - \bar{r}^{(s)} \right)^2}} \Rightarrow M_{ss'} = 1 - \frac{1}{N}\sum_{i=1}^N z_i^{(s)}z_i^{(s')}
\end{equation}
such that the full matrix can be computed through the matrix multiplication
\begin{gather}
\mathbf{M} = 1 - \frac{1}{N} \mathbf{ZZ}^T \\
\mathbf{Z} =
\begin{bmatrix}
z_1^{(1)} & z_2^{(1)} & \ldots & z_N^{(1)} \\
z_1^{(2)} & z_2^{(2)} & \ldots & z_N^{(2)} \\
\vdots & \vdots & \ddots & \vdots \\
z_1^{(S)} & z_2^{(S)} & \ldots & z_N^{(S)}
\end{bmatrix}
\end{gather}
where $S$ is the total number of stimuli. Note that $\mathbf{Z}$ is an $S \times N$ matrix, and $\mathbf{M}$ is an $S \times S$ matrix.
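As an added numerical illustration (not part of the original notebook), the following sketch builds the RDM directly from the z-scored responses; `resp` here is a random placeholder for an $S \times N$ response matrix:
```python
import numpy as np

S, N = 360, 500              # placeholder numbers of stimuli and neurons
resp = np.random.rand(S, N)  # placeholder S x N response matrix

# z-score each stimulus's population response across neurons (1/N normalization)
zresp = (resp - resp.mean(axis=1, keepdims=True)) / resp.std(axis=1, keepdims=True)

# RDM: one minus the correlation between population responses to each pair of stimuli
rdm = 1 - (zresp @ zresp.T) / N
print(rdm.shape)                       # (S, S)
print(np.allclose(np.diag(rdm), 0.0))  # a response is perfectly similar to itself
```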
---
*Source notebook: tutorials/W3D4_DeepLearning1/student/W3D4_Tutorial3.ipynb (NeuromatchAcademy course-content forks ddinesan/NeuroMatchAcademy and Andy-Dufrein/course-content; license CC-BY-4.0).*
---
## Matrices ----- Notation and Operations
## Matrix notation
<a href="https://en.wikipedia.org/wiki/Matrix_(mathematics)">Matrix Notation</a> is a notation system that allows succinct representation of complex operations, such as a change of basis.
* **Matlab** is based on Matrix Notation.
* **Python**: similar functionality by using **numpy**
Recall that a **vector** can be represented as a one dimensional array of numbers. A **matrix** is a two dimensional rectangle of numbers. A matrix consists of rows, indexed from the top to the bottom and of columns, indexed from the left to the right. As is described in the figure.
A matrix with $m$ rows and $n$ columns is said to be an "$m$ by $n$" matrix.
In numpy we will say that the **shape** of the matrix is $(m,n)$. We will also use the LaTeX notation $M_{m \times n}$ to indicate that $M$ is an $m \times n$ matrix.
### Transposing a Matrix
At times it is useful to switch the rows and column dimensions of matrices. Consider the matrix
$$
\begin{equation}
A=\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22} \\
a_{31} & a_{32}
\end{bmatrix}
\end{equation}
$$
The transpose of A is
$$
\begin{equation}
A^{\mathsf{T}}=\begin{bmatrix}
a_{11} & a_{21} & a_{31} \\
a_{12} & a_{22} & a_{32} \\
\end{bmatrix}
\end{equation}
$$
```
import numpy as np
# The .reshape command reorganized the elements of a matrix into a new shape
A = np.array(range(6))
print('A=',A)
B=A.reshape(2,3)
print("B is a 2X3 matrix:\n",B)
print("the shape of B is:",B.shape)
print("The transpose of B is\n",B.T)
print("the shape of B.T is:",B.T.shape)
```
### Vectors as matrices.
When using matrix notation, vectors can be represented as either [row or column vectors](https://en.wikipedia.org/wiki/Row_and_column_vectors). In a matrix context, a vector $\vec{v}$ is denoted by a bold-face letter. ${\bf v}$ for a column vector and ${\bf v}^\top$ for row vector:
* By default a vector is represented as a **column vector** which is a matrix consisting of a single column:
$$
\begin{equation}
{\bf v}=
\begin{bmatrix}
v_1 \\
v_2 \\
\vdots \\
v_d
\end{bmatrix}
\end{equation}
$$
* If $\vec{v}$ is a column vector then the **transpose** of $\vec{v}$, denoted by $\vec{v}^\top$ is a **row vector** which is a matrix consists of a single row:
$$
\begin{equation}
{\bf v}^{\top}=
\begin{bmatrix}
v_1 & v_2 & \cdots & v_d
\end{bmatrix}
\end{equation}
$$
#### A vector as a matrix
Row and Column vectors can be thought of as matrices.
* The column vector ${\bf v}$ is a $d \times 1$ matrix.
* The row vector ${\bf v}^{\top}$ is a $1 \times d$ matrix.
#### A matrix as a collection of vectors
Matrices can be represented as a collection of vectors. For example, consider the $2\times 3$ matrix ${\bf A}=\begin{bmatrix}
a_{11} & a_{12} & a_{13}\\
a_{21} & a_{22} & a_{23}
\end{bmatrix}$
We can represent ${\bf A}=\begin{bmatrix}
a_{11} & a_{12} & a_{13}\\
a_{21} & a_{22} & a_{23}
\end{bmatrix}$ as vectors in one of two ways:
* As a row of column vectors:
$$ {\bf A} = \begin{bmatrix} {\bf c}_1 , {\bf c}_2 , {\bf c}_3 \end{bmatrix}$$
where
$$
{\bf c}_1=\begin{bmatrix} a_{11}\\ a_{21} \end{bmatrix},
{\bf c}_2=\begin{bmatrix} a_{12}\\ a_{22} \end{bmatrix},
{\bf c}_3=\begin{bmatrix} a_{13}\\ a_{23} \end{bmatrix}$$
* Or as a column of row vectors: $
{\bf A} = \begin{bmatrix} {\bf r}_1 \\ {\bf r}_2 \end{bmatrix}$
where $
{\bf r}_1=\begin{bmatrix} a_{11}, a_{12}, a_{13} \end{bmatrix},
{\bf r}_2=\begin{bmatrix} a_{21}, a_{22}, a_{23} \end{bmatrix},
$
```
A=np.array(range(6)).reshape(2,3)
print('A=\n',A)
```
```
print("Splitting A into columns:")
Columns=np.split(A,3,axis=1)
for i in range(len(Columns)):
print('column %d'%i)
print(Columns[i])
```
```
A_recon=np.concatenate(Columns,axis=1)
print('reconstructing the matrix from the columns:')
print(A_recon)
print('Checking that the reconstruction is equal to the original')
print(A_recon==A)
```
```
print("Splitting A into rows:")
Rows=np.split(A,2,axis=0)
for i in range(len(Rows)):
print('row %d'%i)
print(Rows[i])
```
```
A_recon=np.concatenate(Rows,axis=0)
print('reconstructing the matrix from the rows:')
print(A_recon)
print('Checking that the reconstruction is equal to the original')
print(A_recon==A)
```
#### Numpy functions
Beyond the commands `reshape`, `split` and `concatanate` numpy has a rich set of functions to manipulate arrays, for a complete list see [Numpy Array Manipulation routines](https://docs.scipy.org/doc/numpy/reference/routines.array-manipulation.html)
### Matrix - scalar operations
You can add/subtract multiply/divide a scalar from a matrix
#### Adding a scalar value to a matrix
Let $A$=$\bigl[ \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix} \bigr]$. Here is how we would add the scalar $3$ to $A$:
$$
\begin{equation}
A+3=\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix}+3
=\begin{bmatrix}
a_{11}+3 & a_{12}+3 \\
a_{21}+3 & a_{22}+3
\end{bmatrix}
\end{equation}
$$
#### Subtracting a scalar value to a matrix
Substraction is similar
$$
\begin{equation}
A-3=\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix}-3
=\begin{bmatrix}
a_{11}-3 & a_{12}-3 \\
a_{21}-3 & a_{22}-3
\end{bmatrix}
\end{equation}
$$
#### Product of a scalar and a matrix
Multiplication is also similar
$$
\begin{equation}
3 \times A = 3 \times \begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix}
=
\begin{bmatrix}
3a_{11} & 3a_{12} \\
3a_{21} & 3a_{22}
\end{bmatrix}
\end{equation}
$$
#### Dividing a matrix by a scalar
Division by $a$ is the same as multiplying by $1/a$. Note that you can divide a matrix by a scalar, but dividing a scalar by a matrix is not defined.
$$
\begin{equation}
A/5= A \times \frac{1}{5}= \begin{bmatrix}
a_{11}/5 & a_{12}/5 \\
a_{21}/5 & a_{22}/5
\end{bmatrix}
\end{equation}
$$
```
# Some examples of matrix-scalar operations using numpy
print('A=\n',A)
print('A+3=3+A=\n',A+3) # addition
print('A*3=\n',A*3) # product
print('A/2=\n',A/2) # division (true division in Python 3; use A//2 for integer division)
print('A/2.=\n',A/2.) # floating point division
```
### Adding and subtracting two matrices
Let $A$=$\bigl[ \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix} \bigr]$ and $B$=$\bigl[ \begin{smallmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{smallmatrix} \bigr]$. To compute $A-B$, subtract each element of B from the corresponding element of A:
$
A -B =
\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix} -
\begin{bmatrix} b_{11} & b_{12} \\
b_{21} & b_{22}
\end{bmatrix} $
$ =
\begin{bmatrix}
a_{11}-b_{11} & a_{12}-b_{12} \\
a_{21}-b_{21} & a_{22}-b_{22}
\end{bmatrix}
$
Addition works exactly the same way:
$ A + B =
\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix} +
\begin{bmatrix} b_{11} & b_{12} \\
b_{21} & b_{22}
\end{bmatrix} $
$ =
\begin{bmatrix}
a_{11}+b_{11} & a_{12}+b_{12} \\
a_{21}+b_{21} & a_{22}+b_{22}
\end{bmatrix}
$
An important point to know about matrix addition and subtraction is that it is only defined when $A$ and $B$ are of the same size. Here, both are $2 \times 2$. Since operations are performed element by element, these two matrices must be conformable, and for addition and subtraction that means they must have the same numbers of rows and columns. I like to be explicit about the dimensions of matrices for checking conformability as I write the equations, so I write
$$
A_{2 \times 2} + B_{2 \times 2}= \begin{bmatrix}
a_{11}+b_{11} & a_{12}+b_{12} \\
a_{21}+b_{21} & a_{22}+b_{22}
\end{bmatrix}_{2 \times 2}
$$
Notice that the result of a matrix addition or subtraction operation is always of the same dimension as the two operands.
Let's define a $2 \times 2$ matrix B and try to add it to A. Since A is currently $2 \times 3$, the two matrices do not conform, so the addition raises an error:
```
B = np.random.randn(2,2)
print(B)
```
```
try:
result = A + B
except Exception as e:
print(e)
```
### Matrix-Matrix products
#### The dot product of two vectors
* Recall that a vector is just a skinny matrix.
* Consider the dot product $(1,2,3) \cdot (1,1,0) = 1 \times 1 + 2 \times 1 +3 \times 0= 3$.
Conventions of dot product in matrix notation:
* The first vector is a row vector and the second vector is a column vector.
* There is no operator ($\cdot$) between the two vectors
$$
\begin{bmatrix} 1,2,3 \end{bmatrix}
\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = 1 \times 1 + 2 \times 1 +3 \times 0= 3
$$
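A quick numpy check of this dot product (a small added example; `np` is the numpy import from above):
```
v1 = np.array([1, 2, 3])
v2 = np.array([1, 1, 0])
print(np.dot(v1, v2)) # 1*1 + 2*1 + 3*0 = 3
```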
#### The dot product of a matrix and a vector
To multiply the matrix ${\bf A}=\begin{bmatrix}
a_{11} & a_{12} & a_{13}\\
a_{21} & a_{22} & a_{23}
\end{bmatrix}$
by the column vector ${\bf c}=\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}$
We think of ${\bf A}$ as consisting or two row vectors:
${\bf A} = \begin{bmatrix} {\bf r}_1 \\ {\bf r}_2 \end{bmatrix}$
where $
{\bf r}_1=\begin{bmatrix} a_{11}, a_{12}, a_{13} \end{bmatrix},
{\bf r}_2=\begin{bmatrix} a_{21}, a_{22}, a_{23} \end{bmatrix},
$
and take the dot products of ${\bf r}_1,{\bf r}_2$ with ${\bf c}$ to create a column vector of dimension 2:
${\bf A} {\bf c} = \begin{bmatrix} {\bf r}_1 {\bf c} \\ {\bf r}_2 {\bf c} \end{bmatrix}
= \begin{bmatrix}
a_{11}c_1 + a_{12}c_2 + a_{13} c_3 \\
a_{21}c_1 + a_{22}c_2 + a_{23} c_3
\end{bmatrix}$
#### Dot product of two matrices
Multiplying a matrix and a column vector can be generalized to multiplying two matrices.
To do so we think of
Alternatively, consider a matrix ${\bf C}$ of size $2 \times 3$ and a matrix ${\bf A}$ of size $3 \times 2$
$$
\begin{equation}
{\bf A}=\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22} \\
a_{31} & a_{32}
\end{bmatrix}
,
{\bf C} =
\begin{bmatrix}
c_{11} & c_{12} & c_{13} \\
c_{21} & c_{22} & c_{23}
\end{bmatrix}
\end{equation}
$$
To compute ${\bf AC}$ we think of ${\bf A}$ as a column of row vectors:
${\bf A} =\begin{bmatrix}
{\bf a}_1 \\
{\bf a}_2 \\
{\bf a}_3
\end{bmatrix}
$
and of ${\bf C}$ as a row of column vectors: ${\bf C} =\begin{bmatrix}
{\bf c}_1,
{\bf c}_2,
{\bf c}_3
\end{bmatrix}
$
${\bf AC}$ is the matrix generated from taking the dot product of each row vector in ${\bf A}$ with each column vector in ${\bf C}$
${\bf AC}=
\begin{bmatrix}
{\bf a}_1 \\
{\bf a}_2 \\
{\bf a}_3
\end{bmatrix}
\begin{bmatrix}
{\bf c}_1,
{\bf c}_2,
{\bf c}_3
\end{bmatrix}
= \begin{bmatrix}
{\bf a}_1 \cdot {\bf c}_1 & {\bf a}_1 \cdot {\bf c}_2 & {\bf a}_1 \cdot {\bf c}_3 \\
{\bf a}_2 \cdot {\bf c}_1 & {\bf a}_2 \cdot {\bf c}_2 & {\bf a}_2 \cdot {\bf c}_3 \\
{\bf a}_3 \cdot {\bf c}_1 & {\bf a}_3 \cdot {\bf c}_2 & {\bf a}_3 \cdot {\bf c}_3
\end{bmatrix} =
$
$= \begin{bmatrix}
a_{11} c_{11}+a_{12} c_{21} & a_{11} c_{12}+a_{12} c_{22} & a_{11} c_{13}+a_{12} c_{23} \\
a_{21} c_{11}+a_{22} c_{21} & a_{21} c_{12}+a_{22} c_{22} & a_{21} c_{13}+a_{22} c_{23} \\
a_{31} c_{11}+a_{32} c_{21} & a_{31} c_{12}+a_{32} c_{22} & a_{31} c_{13}+a_{32} c_{23}
\end{bmatrix}
$
For more information on the topic of matrix multiplication, see
http://en.wikipedia.org/wiki/Matrix_multiplication.
```
# Matrix - Vector product
A = np.arange(6).reshape((3,2))
C = np.array([-1,1])
print(A.shape)
print(C.shape)
print(np.dot(A,C.T))
```
```
# Matrix - Matrix product
# Define the matrices A and C
A = np.arange(6).reshape((3,2))
C = np.random.randn(2,2)
print('A=\n',A)
print('C=\n',C)
```
We will use the numpy dot operator to perform the these multiplications. You can use it two ways to yield the same result:
```
print('A.dot(C)=\n',A.dot(C))
print('np.dot(A,C)=\n',np.dot(A,C))
```
#### Conformity
Note that the number of columns in the first matrix has to be equal to the number of rows in the second matrix. Otherwise, the matrix product is not defined. When this condition holds we say that the two matrices **conform**.
Taking the product of two matrices that don't conform results in an exception:
```
np.dot(C,A)
```
## Orthonormal matrices and change of Basis
**As was explained in the notebook: "Linear Algebra Review"**
We say that the vectors $\vec{u}_1,\vec{u}_2,\ldots,\vec{u}_d \in R^d$ form an **orthonormal basis** of $R^d$. If:
* **Normality:** $\vec{u}_1,\vec{u}_2,\ldots,\vec{u}_d$ are unit vectors: $\forall 1 \leq i \leq d: \vec{u}_i \cdot \vec{u}_i =1 $
* **Orthogonality:** Every pair of vectors are orthogonal:
$\forall 1 \leq i\neq j \leq d: \vec{u}_i \cdot \vec{u}_j =0 $
**An orthonormal basis can be used to rotate the vector space:**
* $\vec{v}$ is **represented** as a list of $d$ dot products: $$[\vec{v}\cdot\vec{u_1},\vec{v}\cdot\vec{u_2},\ldots,\vec{v}\cdot\vec{u_d}]$$
* $\vec{v}$ is **reconstructed** by summing its projections on the basis vectors:
$$\vec{v} = (\vec{v}\cdot\vec{u_1})\vec{u_1} + (\vec{v}\cdot\vec{u_2})\vec{u_2} + \cdots + (\vec{v}\cdot\vec{u_d})\vec{u_d}$$
### Change of Basis using matrix notation
To use matrix notation, we think of $\vec{u}_i$ as a row vector:
$$
{\bf u}_i=\begin{bmatrix} u_{i1}, u_{i2},\ldots, u_{id} \end{bmatrix},
$$
We can combine the orthonormal vectors to create an *orthonormal matrix*
$$ {\bf U} = \begin{bmatrix} {\bf u}_1 \\ {\bf u}_2 \\ \vdots \\ {\bf u}_d \end{bmatrix}
= \begin{bmatrix}
u_{11} & u_{12} & \ldots & u_{1d} \\
u_{21} & u_{22} & \ldots & u_{2d} \\
\vdots & \vdots & \ddots & \vdots \\
u_{d1} & u_{d2} & \ldots & u_{dd}
\end{bmatrix}
$$
Orthonormality: ${\bf UU^{\top} = I}$
Using this notation, the representation of a column vector $\bf v$ in the orthonormal basis corresponding to the rows of ${\bf U}$ is equal to
$${\bf Uv} = \begin{bmatrix} {\bf u}_1 {\bf v} \\ {\bf u}_2 {\bf v} \\ \vdots \\ {\bf u}_d {\bf v} \end{bmatrix}$$
And the reconstruction of $\bf v$ is equal to ${\bf U^{\mathsf{T}} U v}$, which by orthonormality recovers $\bf v$.
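As a small added illustration (not part of the original notes), we can check these identities numerically with a $2 \times 2$ rotation matrix, whose rows form an orthonormal basis:
```
theta = 0.3
U = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]]) # rows are orthonormal
v = np.array([[2.0], [1.0]]) # a column vector
print(U.dot(U.T)) # numerically the identity matrix
rep = U.dot(v) # representation of v in the basis given by the rows of U
print(U.T.dot(rep)) # reconstruction U^T (U v): recovers v
```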
## The Identity Matrix
The identity matrix behaves like the number $1$:
The dot product of any matrix ${\bf A}$ by the identity matrix ${\bf I}$ yields ${\bf A}$.
$$ {\bf A I = I A = A} $$
The identity matrix is zero everywhere other than the diagonal, where it is $1$.
$$
{\bf I} = \begin{bmatrix}
1 & 0 & \ldots & 0 \\
0 & 1 & \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \ldots & 1
\end{bmatrix}
$$
**Exercise:** Check that ${\bf A I = I A = A}$.
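One quick numerical check (added example), using the $3 \times 2$ matrix A defined above; note that the identity matrix must have the matching size on each side:
```
print(np.allclose(A.dot(np.eye(2)), A)) # A I = A (2x2 identity on the right)
print(np.allclose(np.eye(3).dot(A), A)) # I A = A (3x3 identity on the left)
```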
## Inverting a Matrix
Recall that the multiplicative inverse of the number $a$ is $a^{-1}=1/a$
The property of $a^{-1}$ is that $a a^{-1}=1$.
Recall also that $0$ does not have a multiplicative inverse.
**Some** square matrices ${\bf A}$ have a multiplicative inverse ${\bf A^{-1}}$
such that ${\bf A A^{-1} = A^{-1} A =I}$
Finding the inverse of a matrix is called *inverting* the matrix.
An $n\times n$ matrix $\bf A$ represents a linear transformation from $R^n$ to $R^n$. If the matrix is [**invertible**](https://en.wikipedia.org/wiki/Invertible_matrix) then there is another transformation ${\bf A}^{-1}$ that represents the inverse transformation, such that for any column vector ${\bf v} \in R^n$:
$${\bf A}^{-1}{\bf A}{\bf v} = {\bf A}{\bf A}^{-1}{\bf v} = {\bf v} $$
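A small added numerical illustration of this property, using a random matrix (invertible with probability one) and a random vector:
```
A_rand = np.random.randn(3, 3)
A_rand_inv = np.linalg.inv(A_rand)
v = np.random.randn(3, 1) # a random column vector
print(np.allclose(A_rand_inv.dot(A_rand.dot(v)), v)) # A^{-1} A v = v
print(np.allclose(A_rand.dot(A_rand_inv.dot(v)), v)) # A A^{-1} v = v
```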
### Inverting a 2X2 matrix
Consider the square $2 \times 2$ matrix ${\bf A} = \bigl( \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{smallmatrix} \bigr)$. The inverse of matrix ${\bf A}$ is
$$
\begin{equation}
{\bf A}^{-1}=\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix}^{-1}=\frac{1}{a_{11}a_{22}-a_{12}a_{21}} \begin{bmatrix}
a_{22} & -a_{12} \\
-a_{21} & a_{11}
\end{bmatrix}
\end{equation}
$$
**Exercise:** Check that $ {\bf A A^{-1}=A^{-1} A=I }$
For more information on inverting matrices, see this page:
http://en.wikipedia.org/wiki/Matrix_inversion.
```
# An example of computing the inverse using numpy.linalg.inv
# note, we need a square matrix (# rows = # cols), use C:
C = np.random.randn(2,2)
print("C=\n",C)
C_inverse = np.linalg.inv(C)
print("C_inverse=\n",C_inverse)
```
Checking that $C\times C^{-1} = I$:
```
I = np.eye(2)
print("identity matrix=\n",I)
print("C.dot(C_inverse)-I=\n",C.dot(C_inverse)-I)
print("C_inverse.dot(C)-I=\n",C_inverse.dot(C)-I)
```
### Singular matrices
Not all matrices have an inverse. Those that do not are called **singular**.
```
C=np.array([[1,0],[1,0]])
print("C=\n",C)
try:
C_inverse = np.linalg.inv(C)
except:
print('C cannot be inverted: it is a singular matrix')
```
## Next video: solving a set of linear equations
---
*Source notebook: Week 12 _ Regression and PCA/more_lectures/2.Matrix_notation_and_operations.ipynb (ebishwaraj/ProbabilityStatisticsPython_DSE210x_UCSD_DataScienceMicroMasters; license MIT).*
---
## Benchmark Lorenz 63 linear and nonlinear filters
In this notebook, we are interested in the sequential inference
References:
[1] Evensen, G., 1994. Sequential data assimilation with a nonlinear quasi‐geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research: Oceans, 99(C5), pp.10143-10162.
[2] Asch, M., Bocquet, M. and Nodet, M., 2016. Data assimilation: methods, algorithms, and applications. Society for Industrial and Applied Mathematics.
[3] Bishop, C.H., Etherton, B.J. and Majumdar, S.J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly weather review, 129(3), pp.420-436.
[4] Lorenz, E.N., 1963. Deterministic nonperiodic flow. Journal of atmospheric sciences, 20(2), pp.130-141.
[5] Spantini, A., Baptista, R. and Marzouk, Y., 2019. Coupling techniques for nonlinear ensemble filtering. arXiv preprint arXiv:1907.00389.
### The basic steps
To carry out sequential inference in `AdaptiveTransportMap`, we need to carry out a few basic steps:
* **Specify the problem**: Define the state-space model: initial condition, dynamical and observation models (including process and observation noise)
* **Specify the inflation parameters**: Determine the levels of covariance inflation to properly balance the dynamical system and the observations from the truth system
* **Specify the filter**: Choose the ensemble filter to assimilate the observations in the state estimate
* **Perform the sequential inference**: Run the filter forward in time, assimilating each observation as it arrives, and compute performance metrics such as the RMSE
We will go through all of these here.
```julia
using Revise
using LinearAlgebra
using AdaptiveTransportMap
using Statistics
using Distributions
using OrdinaryDiffEq
using JLD
```
┌ Info: Precompiling AdaptiveTransportMap [bdf749b0-1400-4207-80d3-e689c0e3f03d]
└ @ Base loading.jl:1278
┌ Warning: Type annotations on keyword arguments not currently supported in recipes. Type information has been discarded
└ @ RecipesBase ~/.julia/packages/RecipesBase/92zOw/src/RecipesBase.jl:116
┌ Warning: Type annotations on keyword arguments not currently supported in recipes. Type information has been discarded
└ @ RecipesBase ~/.julia/packages/RecipesBase/92zOw/src/RecipesBase.jl:116
┌ Warning: Type annotations on keyword arguments not currently supported in recipes. Type information has been discarded
└ @ RecipesBase ~/.julia/packages/RecipesBase/92zOw/src/RecipesBase.jl:116
┌ Warning: Type annotations on keyword arguments not currently supported in recipes. Type information has been discarded
└ @ RecipesBase ~/.julia/packages/RecipesBase/92zOw/src/RecipesBase.jl:116
┌ Warning: Type annotations on keyword arguments not currently supported in recipes. Type information has been discarded
└ @ RecipesBase ~/.julia/packages/RecipesBase/92zOw/src/RecipesBase.jl:116
┌ Warning: Type annotations on keyword arguments not currently supported in recipes. Type information has been discarded
└ @ RecipesBase ~/.julia/packages/RecipesBase/92zOw/src/RecipesBase.jl:116
┌ Warning: Type annotations on keyword arguments not currently supported in recipes. Type information has been discarded
└ @ RecipesBase ~/.julia/packages/RecipesBase/92zOw/src/RecipesBase.jl:116
```julia
using DelimitedFiles
```
Load some packages to make nice figures
```julia
using Plots
default(tickfont = font("CMU Serif", 9),
titlefont = font("CMU Serif", 14),
guidefont = font("CMU Serif", 12),
legendfont = font("CMU Serif", 10),
grid = false)
# Plots.font("sans-serif")
# clibrary(:colorbrewer)
# gr()
pyplot()
using LaTeXStrings
# PyPlot.rc("text", usetex = "true")
# rcParams = PyPlot.PyDict(PyPlot.matplotlib."rcParams")
# rcParams["text.usetex"] = true;
PyPlot.rc("font", family = "CMU Serif")
PyPlot.matplotlib[:rc]("mathtext",fontset="cm") #computer modern font
PyPlot.matplotlib[:rc]("font",family="serif",size=12)
using ColorSchemes
```
The Lorenz-63 model is a three dimensional system that models the atmospheric convection [4]. This system is a classical benchmark problem in data assimilation. The state $\boldsymbol{x} = (x_1, x_2, x_3)$ is governed by the following set of ordinary differential equations:
\begin{equation}
\begin{aligned}
&\frac{\mathrm{d} x_1}{\mathrm{d} t}=\sigma(x_2-x_1)\\
&\frac{\mathrm{d} x_2}{\mathrm{d} t}=x_1(\rho-x_3)-x_2\\
&\frac{\mathrm{d} x_3}{\mathrm{d} t}=x_1 x_2-\beta x_3,
\end{aligned}
\end{equation}
where $\sigma = 10, \beta = 8/3, \rho = 28$. For these values, the system is chaotic and behaves like a strange attractor. We integrate this system of ODEs with time step $\Delta t_{dyn} = 0.05$. The state is fully observed $h(t,\boldsymbol{x}) = \boldsymbol{x}$ with $\Delta t_{obs}=0.1$. The initial distribution $\pi_{\mathsf{X}_0}$ is the standard Gaussian. The process noise is Gaussian with zero mean and covariance $10^{-4}\boldsymbol{I}_3$. The measurement noise has a Gaussian distribution with zero mean and covariance $\theta^2\boldsymbol{I}_3$ where $\theta^2 = 4.0$.
```julia
path = "/media/mat/HDD/AdaptiveTransportMap/notebooks/lorenz63/data/"
β_array = collect(0.95:0.01:1.05)
Ne_array = [10, 20, 40, 60, 100, 200, 400, 600]
model, data = setup_lorenz63(path, Ne_array);
```
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
Ne 10 RMSE: 0.5928069316894501
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
Ne 20 RMSE: 0.5444225927797981
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
Ne 40 RMSE: 0.4896839278465799
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:01[39m
Ne 60 RMSE: 0.4794815325657096
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:02[39m
Ne 100 RMSE: 0.4791869562532382
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:03[39m
Ne 200 RMSE: 0.48105668064964185
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
Ne 400 RMSE: 0.48282592384478173
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:20[39m
Ne 600 RMSE: 0.4846248170532807
```julia
# save(path*"lorenz63_data.jld", "data", data)
```
```julia
data = load(path*"lorenz63_data.jld", "data")
```
SyntheticData([0.1, 0.2, 0.30000000000000004, 0.4, 0.5, 0.6000000000000001, 0.7000000000000001, 0.8, 0.9, 1.0 … 599.1, 599.2, 599.3000000000001, 599.4, 599.5, 599.6, 599.7, 599.8000000000001, 599.9, 600.0], [-0.22950363471447657, 0.9216676448369137, -0.2969466755253305], [0.5493777232168378 1.9325334655339608 … 5.846018274793296 3.9203793494802213; 1.3470852152857304 4.233634108048278 … 2.4991778332625634 3.3036939755885326; -0.20782547004567642 0.12823736565833915 … 28.37631893621292 22.834861253310663], [1.884515687964999 0.15701426740781987 … 4.525541006447913 3.4631303339151946; -0.05946240332624342 2.8307715835113862 … 4.382540618269552 3.697992563381411; -1.8223964161767956 -0.25316028804554913 … 29.750657308982625 26.336548121393665])
```julia
Nx = 3
Ny = 3
Δtdyn = 0.05
Δtobs = 0.1
σx = 1e-6
σy = 2.0
ϵx = AdditiveInflation(Nx, zeros(Nx), σx)
ϵy = AdditiveInflation(Ny, zeros(Ny), σy)
tspinup = 200.0
Tspinup = 2000
tmetric = 400.0
Tmetric = 4000
t0 = 0.0
tf = 600.0
Tf = 6000
Tburn = 2000
Tstep = 4000
Tspinup = 2000
f = lorenz63!
h(x, t) = x
# Create a local version of the observation operator
h(x, t, idx) = x[idx]
F = StateSpace(lorenz63!, h)
model = Model(Nx, Ny, Δtdyn, Δtobs, ϵx, ϵy, MvNormal(zeros(Nx), Matrix(1.0*I, Nx, Nx)), Tburn, Tstep, Tspinup, F);
```
Benchmark for the Lorenz-63 problem with the sequential stochastic EnKF
```julia
metric_enkf = benchmark_lorenz63(model, data, path, Ne_array, β_array);
```
(Ne, β) = (10, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.482059 seconds (3.65 M allocations: 331.523 MiB, 8.14% gc time)
Ne = 10
Ne 10& β 0.95 RMSE: 0.6663970586950805
(Ne, β) = (10, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.434502 seconds (3.43 M allocations: 320.851 MiB, 8.87% gc time)
Ne = 10
Ne 10& β 0.96 RMSE: 4.2005659601223435
(Ne, β) = (10, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.419540 seconds (3.43 M allocations: 320.851 MiB, 6.43% gc time)
Ne = 10
Ne 10& β 0.97 RMSE: 3.871272834443296
(Ne, β) = (10, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.458612 seconds (3.43 M allocations: 320.850 MiB, 6.02% gc time)
Ne = 10
Ne 10& β 0.98 RMSE: 3.218226821354243
(Ne, β) = (10, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.450783 seconds (3.43 M allocations: 320.851 MiB, 6.34% gc time)
Ne = 10
Ne 10& β 0.99 RMSE: 3.3612166268872126
(Ne, β) = (10, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.424816 seconds (3.43 M allocations: 320.838 MiB, 6.68% gc time)
Ne = 10
Ne 10& β 1.0 RMSE: 1.3316393302410514
(Ne, β) = (10, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.421161 seconds (3.43 M allocations: 320.838 MiB, 7.02% gc time)
Ne = 10
Ne 10& β 1.01 RMSE: 4.164092184440774
(Ne, β) = (10, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.424460 seconds (3.43 M allocations: 320.837 MiB, 6.99% gc time)
Ne = 10
Ne 10& β 1.02 RMSE: 4.6334761696906
(Ne, β) = (10, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.458607 seconds (3.43 M allocations: 320.850 MiB, 6.58% gc time)
Ne = 10
Ne 10& β 1.03 RMSE: 0.6381460091987642
(Ne, β) = (10, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.426068 seconds (3.43 M allocations: 320.851 MiB, 7.21% gc time)
Ne = 10
Ne 10& β 1.04 RMSE: 4.502038163768013
(Ne, β) = (10, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.432222 seconds (3.43 M allocations: 320.851 MiB, 7.54% gc time)
Ne = 10
Ne 10& β 1.05 RMSE: 0.5624558359768645
(Ne, β) = (20, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.774102 seconds (6.47 M allocations: 612.027 MiB, 7.60% gc time)
Ne = 20
Ne 20& β 0.95 RMSE: 0.5127924498520988
(Ne, β) = (20, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.776889 seconds (6.47 M allocations: 612.027 MiB, 6.15% gc time)
Ne = 20
Ne 20& β 0.96 RMSE: 2.434083402115809
(Ne, β) = (20, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.762005 seconds (6.47 M allocations: 612.027 MiB, 7.77% gc time)
Ne = 20
Ne 20& β 0.97 RMSE: 0.5486324660874901
(Ne, β) = (20, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.781939 seconds (6.47 M allocations: 612.027 MiB, 6.50% gc time)
Ne = 20
Ne 20& β 0.98 RMSE: 0.5317346888167963
(Ne, β) = (20, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.803012 seconds (6.47 M allocations: 612.027 MiB, 7.67% gc time)
Ne = 20
Ne 20& β 0.99 RMSE: 0.5186154669950279
(Ne, β) = (20, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.761549 seconds (6.47 M allocations: 612.027 MiB, 6.95% gc time)
Ne = 20
Ne 20& β 1.0 RMSE: 0.5361536087969322
(Ne, β) = (20, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.777026 seconds (6.47 M allocations: 612.027 MiB, 7.07% gc time)
Ne = 20
Ne 20& β 1.01 RMSE: 1.3266751074235172
(Ne, β) = (20, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.785912 seconds (6.47 M allocations: 612.027 MiB, 8.60% gc time)
Ne = 20
Ne 20& β 1.02 RMSE: 0.5780972039771318
(Ne, β) = (20, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.769464 seconds (6.47 M allocations: 612.027 MiB, 7.47% gc time)
Ne = 20
Ne 20& β 1.03 RMSE: 1.0296133656749271
(Ne, β) = (20, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
0.793371 seconds (6.47 M allocations: 612.027 MiB, 7.45% gc time)
Ne = 20
Ne 20& β 1.04 RMSE: 0.5236340150605355
(Ne, β) = (20, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
[32mProgress: 25%|██████████▎ | ETA: 0:00:41[39m
0.782180 seconds (6.47 M allocations: 612.027 MiB, 7.67% gc time)
Ne = 20
Ne 20& β 1.05 RMSE: 0.5910832124859545
(Ne, β) = (40, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:01[39m
1.479851 seconds (12.55 M allocations: 1.168 GiB, 8.33% gc time)
Ne = 40
Ne 40& β 0.95 RMSE: 0.49698026493481157
(Ne, β) = (40, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:01[39m
1.495618 seconds (12.55 M allocations: 1.168 GiB, 7.67% gc time)
Ne = 40
Ne 40& β 0.96 RMSE: 0.49772130566919187
(Ne, β) = (40, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:01[39m
1.514675 seconds (12.55 M allocations: 1.168 GiB, 7.85% gc time)
Ne = 40
Ne 40& β 0.97 RMSE: 0.5055991778752534
(Ne, β) = (40, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:01[39m
1.508457 seconds (12.55 M allocations: 1.168 GiB, 8.35% gc time)
Ne = 40
Ne 40& β 0.98 RMSE: 0.4992036634217543
(Ne, β) = (40, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:01[39m
1.608344 seconds (12.55 M allocations: 1.168 GiB, 8.11% gc time)
Ne = 40
Ne 40& β 0.99 RMSE: 0.4772130423431148
(Ne, β) = (40, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:01[39m
1.693943 seconds (12.55 M allocations: 1.168 GiB, 8.21% gc time)
Ne = 40
Ne 40& β 1.0 RMSE: 0.4703716017742587
(Ne, β) = (40, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:01[39m
1.625110 seconds (12.55 M allocations: 1.168 GiB, 8.52% gc time)
Ne = 40
Ne 40& β 1.01 RMSE: 0.48882085957988547
(Ne, β) = (40, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:01[39m
1.638226 seconds (12.55 M allocations: 1.168 GiB, 7.95% gc time)
Ne = 40
Ne 40& β 1.02 RMSE: 0.4919853442356405
(Ne, β) = (40, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:02[39m
2.959197 seconds (12.55 M allocations: 1.168 GiB, 7.03% gc time)
Ne = 40
Ne 40& β 1.03 RMSE: 0.4830465741371725
(Ne, β) = (40, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:03[39m
3.172996 seconds (12.55 M allocations: 1.168 GiB, 6.12% gc time)
Ne = 40
Ne 40& β 1.04 RMSE: 0.49728649791764035
(Ne, β) = (40, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:03[39m
[32mProgress: 38%|███████████████▍ | ETA: 0:01:00[39m
3.205869 seconds (12.55 M allocations: 1.168 GiB, 6.16% gc time)
Ne = 40
Ne 40& β 1.05 RMSE: 0.5762111684382283
(Ne, β) = (60, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:04[39m
4.629657 seconds (18.64 M allocations: 1.741 GiB, 5.91% gc time)
Ne = 60
Ne 60& β 0.95 RMSE: 0.47665217349006933
(Ne, β) = (60, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:04[39m
4.625465 seconds (18.64 M allocations: 1.741 GiB, 6.09% gc time)
Ne = 60
Ne 60& β 0.96 RMSE: 0.47109291514856727
(Ne, β) = (60, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:04[39m
4.670345 seconds (18.64 M allocations: 1.741 GiB, 6.81% gc time)
Ne = 60
Ne 60& β 0.97 RMSE: 0.47143468984605696
(Ne, β) = (60, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:04[39m
4.654974 seconds (18.64 M allocations: 1.741 GiB, 6.46% gc time)
Ne = 60
Ne 60& β 0.98 RMSE: 0.48436861402840664
(Ne, β) = (60, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:04[39m
4.670536 seconds (18.64 M allocations: 1.741 GiB, 6.67% gc time)
Ne = 60
Ne 60& β 0.99 RMSE: 0.49569213239255006
(Ne, β) = (60, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:04[39m
4.677872 seconds (18.64 M allocations: 1.741 GiB, 6.88% gc time)
Ne = 60
Ne 60& β 1.0 RMSE: 0.4827686848106934
(Ne, β) = (60, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:04[39m
4.676812 seconds (18.64 M allocations: 1.741 GiB, 6.91% gc time)
Ne = 60
Ne 60& β 1.01 RMSE: 0.5198131428295012
(Ne, β) = (60, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:04[39m
4.664488 seconds (18.64 M allocations: 1.741 GiB, 6.78% gc time)
Ne = 60
Ne 60& β 1.02 RMSE: 0.47788346149009864
(Ne, β) = (60, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:04[39m
4.673150 seconds (18.64 M allocations: 1.741 GiB, 6.35% gc time)
Ne = 60
Ne 60& β 1.03 RMSE: 0.5203856427891894
(Ne, β) = (60, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:04[39m
4.710835 seconds (18.64 M allocations: 1.741 GiB, 6.52% gc time)
Ne = 60
Ne 60& β 1.04 RMSE: 0.4784450959970038
(Ne, β) = (60, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:04[39m
[32mProgress: 50%|████████████████████▌ | ETA: 0:01:28[39m
4.726039 seconds (18.64 M allocations: 1.741 GiB, 6.70% gc time)
Ne = 60
Ne 60& β 1.05 RMSE: 0.4762555944437609
The remainder of the sweep, summarized as RMSE (rounded to three decimals) by ensemble size Ne (rows) and inflation factor β (columns):

| Ne  | 0.95 | 0.96 | 0.97 | 0.98 | 0.99 | 1.00 | 1.01 | 1.02 | 1.03 | 1.04 | 1.05 |
|----:|------|------|------|------|------|------|------|------|------|------|------|
| 100 | 0.478 | 0.483 | 0.477 | 0.495 | 0.471 | 0.483 | 0.476 | 0.468 | 0.460 | 0.489 | 0.474 |
| 200 | 0.482 | 0.491 | 0.481 | 0.480 | 0.477 | 0.481 | 0.483 | 0.481 | 0.495 | 0.487 | 0.484 |
| 400 | 0.493 | 0.485 | 0.485 | 0.488 | 0.484 | 0.488 | 0.476 | 0.485 | 0.486 | 0.489 | 0.480 |
| 600 | 0.492 | 0.486 | 0.486 | 0.485 | 0.490 | 0.489 | 0.483 | 0.491 | 0.484 | 0.489 | 0.487 |

A single run took about 7.7–8.1 s at Ne = 100, 14.7–16.2 s at Ne = 200, 14–32 s at Ne = 400 and 21–23 s at Ne = 600; the full grid completed in 13 min 31 s.
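Throughout these sweeps, the reported RMSE is presumably the usual twin-experiment diagnostic: the root-mean-square error between the analysis ensemble mean and the true state, averaged over the assimilation window,

$$
\mathrm{RMSE} \;=\; \frac{1}{T}\sum_{t=1}^{T}\sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(\bar{x}_{i,t} - x^{\mathrm{true}}_{i,t}\bigr)^{2}},
$$

with $n = 3$ for the Lorenz-63 state and $\bar{x}_{t}$ the analysis ensemble mean at assimilation time $t$.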
```julia
# Store the EnKF sweep metrics to disk in JLD format
save(path*"metric_enkf.jld", "metric", metric_enkf)
```
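If the metrics are needed again in a later session, they can be read back with JLD's `load`; a minimal sketch, assuming the same `path` and the "metric" key used above:

```julia
using JLD

# Reload the EnKF sweep metrics written by the cell above
metric_enkf = load(path*"metric_enkf.jld", "metric")
```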
Benchmark for the Lorenz-63 problem with the sequential stochastic radial map filter
p = 0
```julia
p = 0
```
0
```julia
metric_srmf0 = benchmark_srmf_lorenz63(model, data, path, Ne_array, β_array, p);
```
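For each pair in the grid, the benchmark integrates the model, assimilates the observations with the stochastic radial map filter, and records the resulting RMSE, which is what the log below reports. Conceptually the sweep amounts to the following sketch (the `run_srmf` helper is hypothetical and stands in for the filtering loop inside `benchmark_srmf_lorenz63`):

```julia
# Schematic of the (Ne, β) grid sweep; run_srmf is a hypothetical helper, not the package API
function sweep_srmf(model, data, Ne_array, β_array, p)
    metric = Dict{Tuple{Int,Float64},Float64}()
    for Ne in Ne_array, β in β_array
        @show (Ne, β)
        rmse = run_srmf(model, data, Ne, β, p)  # hypothetical: run the filter, return time-averaged RMSE
        println("Ne ", Ne, " & β ", β, " RMSE: ", rmse)
        metric[(Ne, β)] = rmse
    end
    return metric
end
```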
Stochastic radial map filter with p = 0, RMSE (rounded to three decimals) by ensemble size Ne (rows) and inflation factor β (columns):

| Ne  | 0.95 | 0.96 | 0.97 | 0.98 | 0.99 | 1.00 | 1.01 | 1.02 | 1.03 | 1.04 | 1.05 |
|----:|------|------|------|------|------|------|------|------|------|------|------|
| 10  | 1.148 | 0.884 | 0.899 | 1.179 | 1.179 | 1.301 | 0.787 | 0.963 | 1.434 | 0.854 | 0.993 |
| 20  | 0.629 | 0.804 | 1.009 | 0.596 | 0.625 | 0.639 | 0.678 | 0.737 | 0.624 | 0.637 | 0.632 |
| 40  | 0.526 | 0.586 | 0.534 | 0.587 | 0.586 | 0.535 | 0.605 | 0.568 | 0.537 | 0.570 | 0.598 |
| 60  | 0.552 | 0.530 | 0.549 | 0.524 | 0.526 | 0.523 | 0.546 | 0.513 | 0.527 | 0.602 | 0.536 |
| 100 | 0.519 | 0.508 | 0.500 | 0.500 | 0.565 | 0.505 | 0.531 | 0.514 | 0.506 | 0.502 | 0.505 |
| 200 | 0.480 | 0.487 | 0.510 | 0.487 | 0.500 | 0.497 | 0.490 | 0.495 | 0.498 | 0.503 | 0.493 |
| 400 | 0.497 | 0.493 | 0.487 | 0.490 | 0.491 | 0.494 | 0.488 | 0.486 | 0.488 | 0.497 | 0.499 |
| 600 | 0.490 | 0.494 | 0.491 | 0.504 | 0.491 | 0.486 | 0.489 | 0.490 | 0.489 | 0.488 | 0.486 |

Run times grew from about 2 s per configuration at Ne = 10 to 61–68 s at Ne = 600; the full sweep completed in 28 min 56 s.
```julia
save(path*"metric_srmf"*string(p)*".jld", "metric", metric_srmf0)
```
p = 1
```julia
p = 1
```
1
```julia
metric_srmf1 = benchmark_srmf_lorenz63(model, data, path, [40, 60, 100, 200, 400, 600], β_array, p);
```
Stochastic radial map filter with p = 1, RMSE (rounded to three decimals) by Ne (rows) and β (columns):

| Ne  | 0.95 | 0.96 | 0.97 | 0.98 | 0.99 | 1.00 | 1.01 | 1.02 | 1.03 | 1.04 | 1.05 |
|----:|------|------|------|------|------|------|------|------|------|------|------|
| 40  | 0.772 | 0.563 | 0.531 | 0.534 | 0.581 | 0.543 | 1.083 | 0.545 | 0.525 | 0.585 | 0.564 |
| 60  | 0.476 | 0.488 | 0.476 | 0.477 | 0.507 | 0.449 | 0.502 | 0.492 | 0.487 | 0.476 | 0.486 |
| 100 | 0.430 | 0.445 | 0.459 | 0.496 | 0.435 | 0.427 | 0.457 | 0.470 | 0.462 | 0.441 | 0.447 |
| 200 | 0.404 | 0.390 | 0.423 | 0.418 | 0.404 | 0.415 | 0.407 | 0.398 | 0.406 | 0.410 | 0.395 |
| 400 | 0.384 | 0.418 | 0.390 | 0.381 | 0.399 | 0.405 | 0.403 | 0.399 | 0.400 | 0.384 | 0.401 |
| 600 | 0.397 | 0.389 | 0.394 | 0.389 | 0.381 | 0.386 | 0.388 | 0.396 | 0.383 | 0.394 | 0.399 |

Run times grew from about 7 s per configuration at Ne = 40 to 87–96 s at Ne = 600; the full sweep completed in 39 min 41 s.
```julia
save(path*"metric_srmf"*string(p)*".jld", "metric", metric_srmf1)
```
p = 2
```julia
p = 2
```
2
```julia
metric_srmf2 = benchmark_srmf_lorenz63(model, data, path, [60, 100, 200, 400, 600], collect(0.97:0.01:1.04), p);
```
Stochastic radial map filter with p = 2, RMSE (rounded to three decimals) by Ne (rows) and β (columns). For Ne = 60 the map optimization reached its maximum number of iterations at β = 0.97, 1.00, 1.01 and 1.02:

| Ne  | 0.97 | 0.98 | 0.99 | 1.00 | 1.01 | 1.02 | 1.03 | 1.04 |
|----:|------|------|------|------|------|------|------|------|
| 60  | 0.763 | 0.559 | 0.596 | 0.627 | 1.749 | 0.528 | 0.549 | 0.573 |
| 100 | 0.446 | 0.489 | 0.458 | 0.440 | 0.433 | 0.454 | 0.444 | 0.470 |
| 200 | 0.433 | 0.418 | 0.387 | 0.434 | 0.407 | 0.409 | 0.394 | 0.409 |
| 400 | 0.374 | 0.394 | 0.378 | 0.374 | 0.369 | 0.383 | 0.383 | 0.395 |
| 600 | 0.384 | 0.387 | 0.369 | 0.365 | 0.373 | 0.369 | 0.373 | 0.384 |

Run times grew from about 11 s per configuration at Ne = 60 to 90–98 s at Ne = 600; the full sweep completed in 29 min.
```julia
save(path*"metric_srmf"*string(p)*".jld", "metric", metric_srmf2)
```
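With the three p = 0, 1, 2 sweeps on disk, the map orders can be compared side by side in a later session; a minimal sketch, assuming the files written by the `save` calls above:

```julia
using JLD

# Reload the stochastic radial map filter sweeps for p = 0, 1, 2
metric_srmf = [load(path*"metric_srmf"*string(p)*".jld", "metric") for p in 0:2]
```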
Benchmark for the Lorenz-63 problem with the sequential stochastic adaptive radial map filter
p = 0
```julia
p = 0
```
0
```julia
metric_sadaptivermf0 = benchmark_sadaptivermf_lorenz63(model, data, path, Ne_array, β_array, p);
```
Sequential stochastic adaptive radial map filter with p = 0, RMSE (rounded to three decimals) by Ne (rows) and β (columns) for the first part of the sweep:

| Ne  | 0.95 | 0.96 | 0.97 | 0.98 | 0.99 | 1.00 | 1.01 | 1.02 | 1.03 | 1.04 | 1.05 |
|----:|------|------|------|------|------|------|------|------|------|------|------|
| 10  | 0.957 | 1.182 | 1.085 | 0.925 | 1.047 | 0.955 | 0.836 | 1.128 | 0.926 | 0.928 | 0.925 |
| 20  | 0.814 | 0.699 | 0.765 | 0.719 | 0.687 | 0.746 | 0.713 | 0.705 | 0.709 | 0.818 | 0.701 |
| 40  | 0.565 | 0.577 | 0.560 | 0.548 | 0.565 | 0.580 | 0.577 | 0.603 | 0.564 | 0.582 | 0.579 |

Each configuration took roughly 1.7–2 s at Ne = 10, 2.8 s at Ne = 20 and 5.1 s at Ne = 40. The sweep continues with Ne = 60:
(Ne, β) = (60, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.489058 seconds (141.29 M allocations: 9.918 GiB, 13.07% gc time)
Ne = 60
Ne 60& β 0.95 RMSE: 0.5647694813680117
(Ne, β) = (60, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.521565 seconds (141.29 M allocations: 9.918 GiB, 13.01% gc time)
Ne = 60
Ne 60& β 0.96 RMSE: 0.5666987571265703
(Ne, β) = (60, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.512716 seconds (141.29 M allocations: 9.918 GiB, 13.26% gc time)
Ne = 60
Ne 60& β 0.97 RMSE: 0.5757490198181406
(Ne, β) = (60, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.534791 seconds (141.29 M allocations: 9.918 GiB, 13.18% gc time)
Ne = 60
Ne 60& β 0.98 RMSE: 0.5628986205103143
(Ne, β) = (60, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.579279 seconds (141.29 M allocations: 9.918 GiB, 13.13% gc time)
Ne = 60
Ne 60& β 0.99 RMSE: 0.5690403774278968
(Ne, β) = (60, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.587824 seconds (141.29 M allocations: 9.918 GiB, 13.45% gc time)
Ne = 60
Ne 60& β 1.0 RMSE: 0.576516968985408
(Ne, β) = (60, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.640177 seconds (141.29 M allocations: 9.918 GiB, 13.18% gc time)
Ne = 60
Ne 60& β 1.01 RMSE: 0.524708535557424
(Ne, β) = (60, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.655775 seconds (141.29 M allocations: 9.918 GiB, 13.34% gc time)
Ne = 60
Ne 60& β 1.02 RMSE: 0.559624329805463
(Ne, β) = (60, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.873624 seconds (141.29 M allocations: 9.918 GiB, 16.49% gc time)
Ne = 60
Ne 60& β 1.03 RMSE: 0.5394654794866246
(Ne, β) = (60, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:06[39m
6.640755 seconds (141.29 M allocations: 9.918 GiB, 12.26% gc time)
Ne = 60
Ne 60& β 1.04 RMSE: 0.5380258986753249
(Ne, β) = (60, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:06[39m
[32mProgress: 50%|████████████████████▌ | ETA: 0:03:11[39m
6.687258 seconds (141.29 M allocations: 9.918 GiB, 12.50% gc time)
Ne = 60
Ne 60& β 1.05 RMSE: 0.5448587589407118
(Ne, β) = (100, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.303162 seconds (232.18 M allocations: 16.277 GiB, 13.20% gc time)
Ne = 100
Ne 100& β 0.95 RMSE: 0.5432067072321636
(Ne, β) = (100, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.379165 seconds (232.18 M allocations: 16.277 GiB, 13.08% gc time)
Ne = 100
Ne 100& β 0.96 RMSE: 0.5198162025036264
(Ne, β) = (100, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.497675 seconds (232.18 M allocations: 16.277 GiB, 13.13% gc time)
Ne = 100
Ne 100& β 0.97 RMSE: 0.5415376907139499
(Ne, β) = (100, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.574689 seconds (232.18 M allocations: 16.277 GiB, 13.35% gc time)
Ne = 100
Ne 100& β 0.98 RMSE: 0.5097204270059973
(Ne, β) = (100, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.652492 seconds (232.18 M allocations: 16.277 GiB, 13.27% gc time)
Ne = 100
Ne 100& β 0.99 RMSE: 0.5091069460923634
(Ne, β) = (100, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.744165 seconds (232.18 M allocations: 16.277 GiB, 13.45% gc time)
Ne = 100
Ne 100& β 1.0 RMSE: 0.5296499729136441
(Ne, β) = (100, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.763833 seconds (232.18 M allocations: 16.277 GiB, 13.31% gc time)
Ne = 100
Ne 100& β 1.01 RMSE: 0.502653671566322
(Ne, β) = (100, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.833289 seconds (232.18 M allocations: 16.277 GiB, 13.42% gc time)
Ne = 100
Ne 100& β 1.02 RMSE: 0.5647632001026323
(Ne, β) = (100, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.901618 seconds (232.18 M allocations: 16.277 GiB, 13.43% gc time)
Ne = 100
Ne 100& β 1.03 RMSE: 0.5290138270346935
(Ne, β) = (100, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.970388 seconds (232.18 M allocations: 16.277 GiB, 13.36% gc time)
Ne = 100
Ne 100& β 1.04 RMSE: 0.5395243942919546
(Ne, β) = (100, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
[32mProgress: 62%|█████████████████████████▋ | ETA: 0:03:13[39m
11.995166 seconds (232.18 M allocations: 16.277 GiB, 13.23% gc time)
Ne = 100
Ne 100& β 1.05 RMSE: 0.5693612195170448
(Ne, β) = (200, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:23[39m
23.613392 seconds (459.40 M allocations: 32.239 GiB, 13.31% gc time)
Ne = 200
Ne 200& β 0.95 RMSE: 0.4971459987177327
(Ne, β) = (200, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:23[39m
23.749764 seconds (459.40 M allocations: 32.239 GiB, 13.26% gc time)
Ne = 200
Ne 200& β 0.96 RMSE: 0.5265554657002808
(Ne, β) = (200, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:23[39m
23.951713 seconds (459.39 M allocations: 32.239 GiB, 13.20% gc time)
Ne = 200
Ne 200& β 0.97 RMSE: 0.5171541872235875
(Ne, β) = (200, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:24[39m
24.115553 seconds (459.39 M allocations: 32.239 GiB, 13.14% gc time)
Ne = 200
Ne 200& β 0.98 RMSE: 0.4955855880652565
(Ne, β) = (200, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:24[39m
24.265459 seconds (459.40 M allocations: 32.239 GiB, 13.12% gc time)
Ne = 200
Ne 200& β 0.99 RMSE: 0.49139089621796633
(Ne, β) = (200, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:24[39m
24.076204 seconds (459.39 M allocations: 32.238 GiB, 13.61% gc time)
Ne = 200
Ne 200& β 1.0 RMSE: 0.5036378313604629
(Ne, β) = (200, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:21[39m
21.165349 seconds (459.39 M allocations: 32.238 GiB, 12.68% gc time)
Ne = 200
Ne 200& β 1.01 RMSE: 0.486173478289185
(Ne, β) = (200, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:21[39m
21.602909 seconds (459.39 M allocations: 32.238 GiB, 12.77% gc time)
Ne = 200
Ne 200& β 1.02 RMSE: 0.5156295459976489
(Ne, β) = (200, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:21[39m
21.971110 seconds (459.39 M allocations: 32.238 GiB, 13.01% gc time)
Ne = 200
Ne 200& β 1.03 RMSE: 0.520214454578662
(Ne, β) = (200, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:22[39m
22.271472 seconds (459.39 M allocations: 32.238 GiB, 12.95% gc time)
Ne = 200
Ne 200& β 1.04 RMSE: 0.5248278235812874
(Ne, β) = (200, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:22[39m
22.377376 seconds (459.39 M allocations: 32.238 GiB, 13.12% gc time)
Ne = 200
[32mProgress: 75%|██████████████████████████████▊ | ETA: 0:03:12[39m
Ne 200& β 1.05 RMSE: 0.5025195186456124
(Ne, β) = (400, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:45[39m
45.309819 seconds (913.84 M allocations: 63.949 GiB, 13.09% gc time)
Ne = 400
Ne 400& β 0.95 RMSE: 0.4927408597900772
(Ne, β) = (400, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:46[39m
46.411396 seconds (913.84 M allocations: 63.949 GiB, 13.02% gc time)
Ne = 400
Ne 400& β 0.96 RMSE: 0.4902087858467313
(Ne, β) = (400, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:46[39m
46.988441 seconds (913.84 M allocations: 63.949 GiB, 12.91% gc time)
Ne = 400
Ne 400& β 0.97 RMSE: 0.5002589918075921
(Ne, β) = (400, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:47[39m
47.692606 seconds (913.84 M allocations: 63.949 GiB, 13.12% gc time)
Ne = 400
Ne 400& β 0.98 RMSE: 0.5061647710185455
(Ne, β) = (400, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:42[39m
42.076641 seconds (913.84 M allocations: 63.949 GiB, 12.71% gc time)
Ne = 400
Ne 400& β 0.99 RMSE: 0.4907094732183371
(Ne, β) = (400, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:43[39m
43.172238 seconds (913.84 M allocations: 63.949 GiB, 12.90% gc time)
Ne = 400
Ne 400& β 1.0 RMSE: 0.493472129171848
(Ne, β) = (400, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:44[39m
44.283404 seconds (913.84 M allocations: 63.949 GiB, 12.89% gc time)
Ne = 400
Ne 400& β 1.01 RMSE: 0.495792416869993
(Ne, β) = (400, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:45[39m
45.324448 seconds (913.84 M allocations: 63.949 GiB, 12.81% gc time)
Ne = 400
Ne 400& β 1.02 RMSE: 0.4935521805854145
(Ne, β) = (400, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:46[39m
46.180474 seconds (913.84 M allocations: 63.949 GiB, 12.74% gc time)
Ne = 400
Ne 400& β 1.03 RMSE: 0.49503668759595526
(Ne, β) = (400, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:47[39m
47.022929 seconds (913.84 M allocations: 63.949 GiB, 12.64% gc time)
Ne = 400
Ne 400& β 1.04 RMSE: 0.4909709442450609
(Ne, β) = (400, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:47[39m
47.638604 seconds (913.84 M allocations: 63.949 GiB, 12.58% gc time)
Ne = 400
[32mProgress: 88%|███████████████████████████████████▉ | ETA: 0:02:34[39m
Ne 400& β 1.05 RMSE: 0.5046100141977978
(Ne, β) = (600, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:03[39m
63.435821 seconds (1.37 G allocations: 95.738 GiB, 12.98% gc time)
Ne = 600
Ne 600& β 0.95 RMSE: 0.4905676760960779
(Ne, β) = (600, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:04[39m
64.023099 seconds (1.37 G allocations: 95.738 GiB, 12.93% gc time)
Ne = 600
Ne 600& β 0.96 RMSE: 0.5003489425243199
(Ne, β) = (600, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:06[39m
66.639464 seconds (1.37 G allocations: 95.738 GiB, 12.91% gc time)
Ne = 600
Ne 600& β 0.97 RMSE: 0.4942655969849272
(Ne, β) = (600, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:08[39m
68.852267 seconds (1.37 G allocations: 95.739 GiB, 12.83% gc time)
Ne = 600
Ne 600& β 0.98 RMSE: 0.4961657523030417
(Ne, β) = (600, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:05[39m
65.700171 seconds (1.37 G allocations: 95.738 GiB, 12.94% gc time)
Ne = 600
Ne 600& β 0.99 RMSE: 0.4891149658543204
(Ne, β) = (600, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:03[39m
63.947033 seconds (1.37 G allocations: 95.738 GiB, 12.96% gc time)
Ne = 600
Ne 600& β 1.0 RMSE: 0.4846666783164921
(Ne, β) = (600, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:06[39m
66.449782 seconds (1.37 G allocations: 95.738 GiB, 12.94% gc time)
Ne = 600
Ne 600& β 1.01 RMSE: 0.4794950697973096
(Ne, β) = (600, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:08[39m
68.847259 seconds (1.37 G allocations: 95.739 GiB, 12.80% gc time)
Ne = 600
Ne 600& β 1.02 RMSE: 0.48679494546885616
(Ne, β) = (600, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:09[39m
69.867051 seconds (1.37 G allocations: 95.739 GiB, 12.70% gc time)
Ne = 600
Ne 600& β 1.03 RMSE: 0.4892431914634444
(Ne, β) = (600, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:02[39m
62.983135 seconds (1.37 G allocations: 95.738 GiB, 13.00% gc time)
Ne = 600
Ne 600& β 1.04 RMSE: 0.48933039817719837
(Ne, β) = (600, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:04[39m
64.027485 seconds (1.37 G allocations: 95.738 GiB, 12.94% gc time)
Ne = 600
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:30:06[39m
Ne 600& β 1.05 RMSE: 0.4913794081735623
```julia
save(path*"metric_sadaptivermf0"*string(p)*".jld", "metric", metric_sadaptivermf0)
```
p = 1
```julia
p = 1
```
1
```julia
metric_sadaptivermf1 = benchmark_sadaptivermf_lorenz63(model, data, path, [40, 60, 100, 200, 400, 600], β_array, p);
```
(Ne, β) = (40, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.295259 seconds (111.13 M allocations: 8.349 GiB, 10.44% gc time)
Ne = 40
Ne 40& β 0.95 RMSE: 0.5816500372451683
(Ne, β) = (40, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.343593 seconds (111.14 M allocations: 8.351 GiB, 10.55% gc time)
Ne = 40
Ne 40& β 0.96 RMSE: 0.60686438289528
(Ne, β) = (40, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.320188 seconds (111.18 M allocations: 8.354 GiB, 10.57% gc time)
Ne = 40
Ne 40& β 0.97 RMSE: 0.6874824083532917
(Ne, β) = (40, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.336291 seconds (111.20 M allocations: 8.356 GiB, 10.69% gc time)
Ne = 40
Ne 40& β 0.98 RMSE: 0.6372021508760763
(Ne, β) = (40, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.382063 seconds (111.21 M allocations: 8.357 GiB, 10.89% gc time)
Ne = 40
Ne 40& β 0.99 RMSE: 0.5999735491495773
(Ne, β) = (40, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.367610 seconds (111.14 M allocations: 8.350 GiB, 10.79% gc time)
Ne = 40
Ne 40& β 1.0 RMSE: 0.6226987121806933
(Ne, β) = (40, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.373300 seconds (111.12 M allocations: 8.347 GiB, 10.61% gc time)
Ne = 40
Ne 40& β 1.01 RMSE: 0.6680312143302773
(Ne, β) = (40, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.359668 seconds (111.10 M allocations: 8.346 GiB, 10.57% gc time)
Ne = 40
Ne 40& β 1.02 RMSE: 0.6676791748232688
(Ne, β) = (40, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.383228 seconds (111.14 M allocations: 8.350 GiB, 10.68% gc time)
Ne = 40
Ne 40& β 1.03 RMSE: 0.6379738921203495
(Ne, β) = (40, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.401098 seconds (111.11 M allocations: 8.347 GiB, 10.76% gc time)
Ne = 40
Ne 40& β 1.04 RMSE: 0.6085645708629982
(Ne, β) = (40, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:07[39m
7.422559 seconds (111.12 M allocations: 8.348 GiB, 10.95% gc time)
Ne = 40
Ne 40& β 1.05 RMSE: 0.6329055205258576
(Ne, β) = (60, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:10[39m
10.589940 seconds (162.17 M allocations: 12.200 GiB, 11.03% gc time)
Ne = 60
Ne 60& β 0.95 RMSE: 0.5189499127377465
(Ne, β) = (60, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:10[39m
10.697569 seconds (162.15 M allocations: 12.198 GiB, 10.98% gc time)
Ne = 60
Ne 60& β 0.96 RMSE: 0.5480085297307752
(Ne, β) = (60, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:10[39m
10.709696 seconds (162.12 M allocations: 12.194 GiB, 11.04% gc time)
Ne = 60
Ne 60& β 0.97 RMSE: 0.5529176150391631
(Ne, β) = (60, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:10[39m
10.692793 seconds (162.12 M allocations: 12.194 GiB, 11.02% gc time)
Ne = 60
Ne 60& β 0.98 RMSE: 0.5064095478191013
(Ne, β) = (60, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:10[39m
10.723771 seconds (162.05 M allocations: 12.187 GiB, 11.04% gc time)
Ne = 60
Ne 60& β 0.99 RMSE: 0.6034483944541053
(Ne, β) = (60, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:10[39m
10.781578 seconds (162.04 M allocations: 12.184 GiB, 11.23% gc time)
Ne = 60
Ne 60& β 1.0 RMSE: 0.5308893794217211
(Ne, β) = (60, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:10[39m
10.748617 seconds (162.01 M allocations: 12.181 GiB, 11.23% gc time)
Ne = 60
Ne 60& β 1.01 RMSE: 0.5634817637084917
(Ne, β) = (60, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:10[39m
10.734524 seconds (161.99 M allocations: 12.179 GiB, 11.19% gc time)
Ne = 60
Ne 60& β 1.02 RMSE: 0.5206857409802078
(Ne, β) = (60, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:10[39m
10.719530 seconds (161.91 M allocations: 12.170 GiB, 11.17% gc time)
Ne = 60
Ne 60& β 1.03 RMSE: 0.5490613726414819
(Ne, β) = (60, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:10[39m
10.748890 seconds (161.96 M allocations: 12.176 GiB, 11.04% gc time)
Ne = 60
Ne 60& β 1.04 RMSE: 0.5861284710553968
(Ne, β) = (60, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:10[39m
[32mProgress: 33%|█████████████▋ | ETA: 0:06:43[39m
10.804596 seconds (161.92 M allocations: 12.172 GiB, 11.22% gc time)
Ne = 60
Ne 60& β 1.05 RMSE: 0.5592338149095932
(Ne, β) = (100, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:15[39m
15.691630 seconds (264.86 M allocations: 19.911 GiB, 12.28% gc time)
Ne = 100
Ne 100& β 0.95 RMSE: 0.4703341774832472
(Ne, β) = (100, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:15[39m
15.399038 seconds (265.04 M allocations: 19.934 GiB, 10.46% gc time)
Ne = 100
Ne 100& β 0.96 RMSE: 0.4850842222980064
(Ne, β) = (100, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:15[39m
15.496194 seconds (265.00 M allocations: 19.929 GiB, 10.47% gc time)
Ne = 100
Ne 100& β 0.97 RMSE: 0.49728986871822706
(Ne, β) = (100, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:15[39m
15.512663 seconds (264.77 M allocations: 19.901 GiB, 10.63% gc time)
Ne = 100
Ne 100& β 0.98 RMSE: 0.5049186420220526
(Ne, β) = (100, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:15[39m
15.568694 seconds (264.68 M allocations: 19.891 GiB, 10.58% gc time)
Ne = 100
Ne 100& β 0.99 RMSE: 0.4938017303186739
(Ne, β) = (100, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:15[39m
15.694176 seconds (264.69 M allocations: 19.891 GiB, 10.72% gc time)
Ne = 100
Ne 100& β 1.0 RMSE: 0.4832406370077498
(Ne, β) = (100, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:15[39m
15.826669 seconds (264.34 M allocations: 19.849 GiB, 10.76% gc time)
Ne = 100
Ne 100& β 1.01 RMSE: 0.4874166854582006
(Ne, β) = (100, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:16[39m
16.066140 seconds (264.31 M allocations: 19.845 GiB, 10.72% gc time)
Ne = 100
Ne 100& β 1.02 RMSE: 0.4773748336470545
(Ne, β) = (100, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:16[39m
16.127543 seconds (264.40 M allocations: 19.856 GiB, 10.77% gc time)
Ne = 100
Ne 100& β 1.03 RMSE: 0.47801359261932386
(Ne, β) = (100, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:16[39m
16.100153 seconds (264.09 M allocations: 19.818 GiB, 10.79% gc time)
Ne = 100
Ne 100& β 1.04 RMSE: 0.4873842038880048
(Ne, β) = (100, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:16[39m
16.179602 seconds (264.18 M allocations: 19.829 GiB, 10.87% gc time)
Ne = 100
[32mProgress: 50%|████████████████████▌ | ETA: 0:06:16[39m
Ne 100& β 1.05 RMSE: 0.4639826690372533
(Ne, β) = (200, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:32[39m
32.570532 seconds (525.77 M allocations: 39.822 GiB, 11.46% gc time)
Ne = 200
Ne 200& β 0.95 RMSE: 0.43389219625230563
(Ne, β) = (200, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:32[39m
32.773504 seconds (525.36 M allocations: 39.770 GiB, 11.48% gc time)
Ne = 200
Ne 200& β 0.96 RMSE: 0.42754639000758865
(Ne, β) = (200, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:32[39m
32.895089 seconds (525.25 M allocations: 39.756 GiB, 11.40% gc time)
Ne = 200
Ne 200& β 0.97 RMSE: 0.4347875583023235
(Ne, β) = (200, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:33[39m
33.056212 seconds (525.00 M allocations: 39.724 GiB, 11.43% gc time)
Ne = 200
Ne 200& β 0.98 RMSE: 0.41550552817802655
(Ne, β) = (200, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:33[39m
33.179877 seconds (524.74 M allocations: 39.690 GiB, 11.41% gc time)
Ne = 200
Ne 200& β 0.99 RMSE: 0.4331878815045497
(Ne, β) = (200, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:33[39m
33.367944 seconds (524.50 M allocations: 39.660 GiB, 11.39% gc time)
Ne = 200
Ne 200& β 1.0 RMSE: 0.42672448308129707
(Ne, β) = (200, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:32[39m
32.173684 seconds (523.23 M allocations: 39.496 GiB, 11.95% gc time)
Ne = 200
Ne 200& β 1.01 RMSE: 0.4218883315624745
(Ne, β) = (200, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:30[39m
30.053704 seconds (523.26 M allocations: 39.500 GiB, 10.76% gc time)
Ne = 200
Ne 200& β 1.02 RMSE: 0.43069175544341926
(Ne, β) = (200, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:30[39m
30.524481 seconds (523.39 M allocations: 39.516 GiB, 10.78% gc time)
Ne = 200
Ne 200& β 1.03 RMSE: 0.4280924636183132
(Ne, β) = (200, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:30[39m
30.749075 seconds (523.01 M allocations: 39.468 GiB, 10.94% gc time)
Ne = 200
Ne 200& β 1.04 RMSE: 0.4144043865589155
(Ne, β) = (200, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:30[39m
[32mProgress: 67%|███████████████████████████▍ | ETA: 0:06:06[39m
30.920963 seconds (522.19 M allocations: 39.362 GiB, 11.06% gc time)
Ne = 200
Ne 200& β 1.05 RMSE: 0.4290368804274407
(Ne, β) = (400, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:03[39m
63.831784 seconds (1.05 G allocations: 80.304 GiB, 11.11% gc time)
Ne = 400
Ne 400& β 0.95 RMSE: 0.39947399595590327
(Ne, β) = (400, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:04[39m
64.822237 seconds (1.05 G allocations: 80.164 GiB, 11.19% gc time)
Ne = 400
Ne 400& β 0.96 RMSE: 0.4226653412247428
(Ne, β) = (400, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:05[39m
65.304990 seconds (1.05 G allocations: 80.037 GiB, 11.23% gc time)
Ne = 400
Ne 400& β 0.97 RMSE: 0.39942011351237205
(Ne, β) = (400, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:05[39m
65.833996 seconds (1.05 G allocations: 79.847 GiB, 11.18% gc time)
Ne = 400
Ne 400& β 0.98 RMSE: 0.4012058137232895
(Ne, β) = (400, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:01[39m
61.530367 seconds (1.05 G allocations: 79.577 GiB, 10.93% gc time)
Ne = 400
Ne 400& β 0.99 RMSE: 0.4019370273165972
(Ne, β) = (400, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:00[39m
60.260713 seconds (1.05 G allocations: 79.733 GiB, 10.56% gc time)
Ne = 400
Ne 400& β 1.0 RMSE: 0.38560306867033545
(Ne, β) = (400, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:01[39m
61.224313 seconds (1.05 G allocations: 79.501 GiB, 10.78% gc time)
Ne = 400
Ne 400& β 1.01 RMSE: 0.3968062358520171
(Ne, β) = (400, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:02[39m
62.319005 seconds (1.05 G allocations: 79.305 GiB, 10.92% gc time)
Ne = 400
Ne 400& β 1.02 RMSE: 0.4031604009106461
(Ne, β) = (400, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:03[39m
63.234627 seconds (1.05 G allocations: 79.334 GiB, 11.02% gc time)
Ne = 400
Ne 400& β 1.03 RMSE: 0.39445885930673724
(Ne, β) = (400, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:03[39m
63.845710 seconds (1.05 G allocations: 79.156 GiB, 11.06% gc time)
Ne = 400
Ne 400& β 1.04 RMSE: 0.4090051731418221
(Ne, β) = (400, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:04[39m
64.811989 seconds (1.05 G allocations: 79.040 GiB, 11.21% gc time)
Ne = 400
[32mProgress: 83%|██████████████████████████████████▏ | ETA: 0:04:46[39m
Ne 400& β 1.05 RMSE: 0.40666533631688506
(Ne, β) = (600, 0.95)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:32[39m
92.143531 seconds (1.59 G allocations: 121.306 GiB, 10.57% gc time)
Ne = 600
Ne 600& β 0.95 RMSE: 0.3881702861474375
(Ne, β) = (600, 0.96)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:34[39m
94.463537 seconds (1.59 G allocations: 120.987 GiB, 10.82% gc time)
Ne = 600
Ne 600& β 0.96 RMSE: 0.39631220909035253
(Ne, β) = (600, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:36[39m
96.211607 seconds (1.58 G allocations: 120.572 GiB, 10.95% gc time)
Ne = 600
Ne 600& β 0.97 RMSE: 0.40056770822373033
(Ne, β) = (600, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:37[39m
97.725809 seconds (1.58 G allocations: 120.234 GiB, 10.97% gc time)
Ne = 600
Ne 600& β 0.98 RMSE: 0.3954487210191602
(Ne, β) = (600, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:38[39m
98.649268 seconds (1.58 G allocations: 120.241 GiB, 11.04% gc time)
Ne = 600
Ne 600& β 0.99 RMSE: 0.39188484362563464
(Ne, β) = (600, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:31[39m
91.390643 seconds (1.58 G allocations: 119.892 GiB, 10.72% gc time)
Ne = 600
Ne 600& β 1.0 RMSE: 0.40133153856534226
(Ne, β) = (600, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:33[39m
93.960383 seconds (1.58 G allocations: 119.968 GiB, 10.93% gc time)
Ne = 600
Ne 600& β 1.01 RMSE: 0.396469674139247
(Ne, β) = (600, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:35[39m
95.784640 seconds (1.58 G allocations: 119.650 GiB, 10.98% gc time)
Ne = 600
Ne 600& β 1.02 RMSE: 0.3867934711197573
(Ne, β) = (600, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:37[39m
97.160167 seconds (1.58 G allocations: 119.516 GiB, 11.01% gc time)
Ne = 600
Ne 600& β 1.03 RMSE: 0.4009939808313468
(Ne, β) = (600, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:38[39m
98.496963 seconds (1.57 G allocations: 119.231 GiB, 10.95% gc time)
Ne = 600
Ne 600& β 1.04 RMSE: 0.3926593806082884
(Ne, β) = (600, 1.05)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:28[39m
88.658584 seconds (1.57 G allocations: 119.161 GiB, 10.93% gc time)
Ne = 600
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:41:19[39m
Ne 600& β 1.05 RMSE: 0.4046611674824419
```julia
save(path*"metric_sadaptivermf1"*string(p)*".jld", "metric", metric_sadaptivermf1)
```
p = 2
```julia
p = 2
```
2
```julia
metric_sadaptivermf2 = benchmark_sadaptivermf_lorenz63(model, data, path, [60, 100, 200, 400, 600], collect(0.97:0.01:1.04), p);
```
(Ne, β) = (60, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.194188 seconds (165.64 M allocations: 12.854 GiB, 10.06% gc time)
Ne = 60
Ne 60& β 0.97 RMSE: 0.7590563412868957
(Ne, β) = (60, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.173764 seconds (165.46 M allocations: 12.832 GiB, 10.13% gc time)
Ne = 60
Ne 60& β 0.98 RMSE: 0.8679825855626035
(Ne, β) = (60, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.214412 seconds (165.64 M allocations: 12.853 GiB, 10.12% gc time)
Ne = 60
Ne 60& β 0.99 RMSE: 0.6529078349545235
(Ne, β) = (60, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.219773 seconds (165.57 M allocations: 12.845 GiB, 10.25% gc time)
Ne = 60
Ne 60& β 1.0 RMSE: 0.6508484003916076
(Ne, β) = (60, 1.01)
[32mProgress: 4%|█▉ | ETA: 0:00:11[39m
Max number of iterations is reached during the optimization
[32mProgress: 64%|██████████████████████████▍ | ETA: 0:00:04[39m
Max number of iterations is reached during the optimization
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.253598 seconds (165.51 M allocations: 12.837 GiB, 10.21% gc time)
Ne = 60
Ne 60& β 1.01 RMSE: 0.6583068050502413
(Ne, β) = (60, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.258021 seconds (165.55 M allocations: 12.843 GiB, 10.19% gc time)
Ne = 60
Ne 60& β 1.02 RMSE: 0.6663030970256395
(Ne, β) = (60, 1.03)
[32mProgress: 31%|████████████▊ | ETA: 0:00:08[39m
Max number of iterations is reached during the optimization
[32mProgress: 77%|███████████████████████████████▍ | ETA: 0:00:03[39m
Max number of iterations is reached during the optimization
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.258181 seconds (165.54 M allocations: 12.841 GiB, 10.13% gc time)
Ne = 60
Ne 60& β 1.03 RMSE: 0.6668140161760241
(Ne, β) = (60, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:11[39m
11.327077 seconds (165.57 M allocations: 12.846 GiB, 10.19% gc time)
Ne = 60
Ne 60& β 1.04 RMSE: 0.6779401243652106
(Ne, β) = (100, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:17[39m
17.630186 seconds (268.20 M allocations: 20.717 GiB, 10.47% gc time)
Ne = 100
Ne 100& β 0.97 RMSE: 0.5565894498405414
(Ne, β) = (100, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:17[39m
17.728110 seconds (268.24 M allocations: 20.720 GiB, 10.47% gc time)
Ne = 100
Ne 100& β 0.98 RMSE: 0.5475053775518939
(Ne, β) = (100, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:17[39m
17.705667 seconds (268.24 M allocations: 20.722 GiB, 10.51% gc time)
Ne = 100
Ne 100& β 0.99 RMSE: 0.5299543101495159
(Ne, β) = (100, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:17[39m
17.733962 seconds (268.13 M allocations: 20.711 GiB, 10.62% gc time)
Ne = 100
Ne 100& β 1.0 RMSE: 0.6972184368673797
(Ne, β) = (100, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:17[39m
17.853502 seconds (268.03 M allocations: 20.696 GiB, 10.59% gc time)
Ne = 100
Ne 100& β 1.01 RMSE: 0.530822647039223
(Ne, β) = (100, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:17[39m
17.891391 seconds (268.05 M allocations: 20.697 GiB, 10.66% gc time)
Ne = 100
Ne 100& β 1.02 RMSE: 0.5820304518003213
(Ne, β) = (100, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:17[39m
17.918606 seconds (268.18 M allocations: 20.715 GiB, 10.71% gc time)
Ne = 100
Ne 100& β 1.03 RMSE: 0.6142850586852093
(Ne, β) = (100, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:17[39m
17.962147 seconds (268.09 M allocations: 20.705 GiB, 10.66% gc time)
Ne = 100
[32mProgress: 40%|████████████████▍ | ETA: 0:05:51[39m
Ne 100& β 1.04 RMSE: 0.5511160436329612
(Ne, β) = (200, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:34[39m
34.433561 seconds (525.70 M allocations: 40.580 GiB, 10.86% gc time)
Ne = 200
Ne 200& β 0.97 RMSE: 0.43316952409423526
(Ne, β) = (200, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:34[39m
34.580355 seconds (525.71 M allocations: 40.583 GiB, 10.98% gc time)
Ne = 200
Ne 200& β 0.98 RMSE: 0.43299272275610734
(Ne, β) = (200, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:33[39m
33.301230 seconds (525.89 M allocations: 40.602 GiB, 11.11% gc time)
Ne = 200
Ne 200& β 0.99 RMSE: 0.458654362085586
(Ne, β) = (200, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:31[39m
31.562168 seconds (525.08 M allocations: 40.501 GiB, 10.30% gc time)
Ne = 200
Ne 200& β 1.0 RMSE: 0.43578484743174445
(Ne, β) = (200, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:31[39m
31.921267 seconds (525.80 M allocations: 40.598 GiB, 10.38% gc time)
Ne = 200
Ne 200& β 1.01 RMSE: 0.45526941535832083
(Ne, β) = (200, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:32[39m
32.299866 seconds (525.61 M allocations: 40.574 GiB, 10.51% gc time)
Ne = 200
Ne 200& β 1.02 RMSE: 0.440965552553993
(Ne, β) = (200, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:32[39m
32.653636 seconds (524.72 M allocations: 40.454 GiB, 10.56% gc time)
Ne = 200
Ne 200& β 1.03 RMSE: 0.44676234560602235
(Ne, β) = (200, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:00:32[39m
32.945237 seconds (525.12 M allocations: 40.504 GiB, 10.62% gc time)
Ne = 200
[32mProgress: 60%|████████████████████████▋ | ETA: 0:05:33[39m
Ne 200& β 1.04 RMSE: 0.44626185722453343
(Ne, β) = (400, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:06[39m
66.327544 seconds (1.05 G allocations: 81.008 GiB, 10.87% gc time)
Ne = 400
Ne 400& β 0.97 RMSE: 0.4085977207026707
(Ne, β) = (400, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:06[39m
66.808436 seconds (1.05 G allocations: 80.687 GiB, 10.92% gc time)
Ne = 400
Ne 400& β 0.98 RMSE: 0.39607788822285156
(Ne, β) = (400, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:07[39m
67.761607 seconds (1.05 G allocations: 80.788 GiB, 10.84% gc time)
Ne = 400
Ne 400& β 0.99 RMSE: 0.39343591377600945
(Ne, β) = (400, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:06[39m
66.939916 seconds (1.05 G allocations: 80.781 GiB, 11.03% gc time)
Ne = 400
Ne 400& β 1.0 RMSE: 0.40318565392301375
(Ne, β) = (400, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:02[39m
62.321313 seconds (1.04 G allocations: 80.438 GiB, 10.20% gc time)
Ne = 400
Ne 400& β 1.01 RMSE: 0.4100854726693693
(Ne, β) = (400, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:03[39m
63.730846 seconds (1.05 G allocations: 80.572 GiB, 10.40% gc time)
Ne = 400
Ne 400& β 1.02 RMSE: 0.38176763430547456
(Ne, β) = (400, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:04[39m
64.822346 seconds (1.04 G allocations: 80.225 GiB, 10.62% gc time)
Ne = 400
Ne 400& β 1.03 RMSE: 0.4085645704656088
(Ne, β) = (400, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:05[39m
65.626958 seconds (1.04 G allocations: 80.065 GiB, 10.65% gc time)
Ne = 400
[32mProgress: 80%|████████████████████████████████▊ | ETA: 0:04:16[39m
Ne 400& β 1.04 RMSE: 0.4079567728595374
(Ne, β) = (600, 0.97)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:40[39m
100.680386 seconds (1.58 G allocations: 121.898 GiB, 10.66% gc time)
Ne = 600
Ne 600& β 0.97 RMSE: 0.3974459895687504
(Ne, β) = (600, 0.98)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:41[39m
101.216513 seconds (1.58 G allocations: 121.846 GiB, 10.82% gc time)
Ne = 600
Ne 600& β 0.98 RMSE: 0.38141685599321956
(Ne, β) = (600, 0.99)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:35[39m
95.466758 seconds (1.58 G allocations: 121.596 GiB, 10.47% gc time)
Ne = 600
Ne 600& β 0.99 RMSE: 0.38328505870695373
(Ne, β) = (600, 1.0)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:37[39m
97.764218 seconds (1.57 G allocations: 121.162 GiB, 10.63% gc time)
Ne = 600
Ne 600& β 1.0 RMSE: 0.40937635938388467
(Ne, β) = (600, 1.01)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:39[39m
99.705125 seconds (1.57 G allocations: 121.203 GiB, 10.65% gc time)
Ne = 600
Ne 600& β 1.01 RMSE: 0.38736704936083116
(Ne, β) = (600, 1.02)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:41[39m
101.027703 seconds (1.57 G allocations: 121.069 GiB, 10.65% gc time)
Ne = 600
Ne 600& β 1.02 RMSE: 0.39010126309094506
(Ne, β) = (600, 1.03)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:42[39m
102.742683 seconds (1.57 G allocations: 120.919 GiB, 10.84% gc time)
Ne = 600
Ne 600& β 1.03 RMSE: 0.3844961954810035
(Ne, β) = (600, 1.04)
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:01:35[39m
95.489899 seconds (1.57 G allocations: 120.768 GiB, 10.57% gc time)
Ne = 600
[32mProgress: 100%|█████████████████████████████████████████| Time: 0:30:21[39m
Ne 600& β 1.04 RMSE: 0.38634497310726157
```julia
save(path*"metric_sadaptivermf2"*string(p)*".jld", "metric", metric_sadaptivermf2)
```
| d3e19a5cae4205eaf77302d2185420119d247f25 | 248,106 | ipynb | Jupyter Notebook | notebooks/Benchmark Lorenz 63 linear and nonlinear filters.ipynb | mleprovost/TransportBasedInference.jl | bdcedf72e9ea23c24678fe6af7a00202c5f9d5d7 | ["MIT"] | 1 | 2022-03-23T03:16:56.000Z | 2022-03-23T03:16:56.000Z | notebooks/Benchmark Lorenz 63 linear and nonlinear filters.ipynb | mleprovost/TransportBasedInference.jl | bdcedf72e9ea23c24678fe6af7a00202c5f9d5d7 | ["MIT"] | null | null | null | notebooks/Benchmark Lorenz 63 linear and nonlinear filters.ipynb | mleprovost/TransportBasedInference.jl | bdcedf72e9ea23c24678fe6af7a00202c5f9d5d7 | ["MIT"] | null | null | null | 26.810676 | 762 | 0.428619 | true | 56,371 | Qwen/Qwen-72B | 1. YES 2. YES | 0.76908 | 0.695958 | 0.535248 | __label__yue_Hant | 0.101778 | 0.081889 |
### Preamble
```python
import sys
sys.path.append('../src/')
from preamble import *
```
'Loaded typical imports v 0.0.1'
```python
import IFTA as bidirect
```
```python
from mpl_toolkits.axes_grid1.anchored_artists import AnchoredSizeBar
import matplotlib.font_manager as fm
fontprops = fm.FontProperties(size=12)
from matplotlib.patches import Wedge
from matplotlib import patches
```
```python
# Required for plotting the figure from the paper; if not installed, skip the plot_does_paper function
# install the package [matplotlib-scalebar] with conda or pip
from matplotlib_scalebar.scalebar import ScaleBar
from matplotlib_scalebar.scalebar import SI_LENGTH_RECIPROCAL
```
# Designing a ring in the reciprocal k-space
We are going to establish different rules to generate a QR. We have to find a suitable rule to generate the different structures. We will use the effective refractive index to understand the rules.
```python
# For the design we have not taken into account the dispersion of the materials. We use fixed values.
n_front = 1.0
n_back = 1.4 # We assume a fixed SiO2 1.4, lower limit.
n_active = 3.6
# The PC is made between SiO2
```
## Selection of the wavelength to use
We limit the ring to two fundamental wavelengths:
the band-gap + Urbach Tail of the semiconductor, and the full absorption of the structure (100 nm GaAs layer).
$l_1 = 900\,\mathrm{nm}$, $l_2 = 440\,\mathrm{nm}$. With these values we define $k_1, k_2$ as the $R$ and $r$ of the ring. Therefore:
\begin{equation}
k_1 = 1/l_1,\\
k_2 = 1/l_2.
\end{equation}
The spatial frequencies will be scaled using the effective refractive index of the layer.
Therefore:
\begin{equation}
k_{1,m} = n_{m} k_{1,0},\\
k_{2,m} = n_{m} k_{2,0}.
\end{equation}
```python
l1 = 0.9 # Bandgap + Urbach Tail
l2 = 0.440 # Single pass with absorption > 95 %
k1 = 1/l1
k2 = 1/l2
k_i = np.array([k1,k2]) * n_front# k in the inc. medium
k_t = np.array([k1,k2]) * n_back# k in the tran. medium
k_c = np.array([k1,k2]) * n_active# using the active
```
```python
# As we are going to work on transmission we use the k_t with the SiO2
k_q = k_t
```
## Lattice Parameter Rules
We have to select the lattice parameter. We can define different rules to establish it.
First of all, we want to see the progression from a conventional PC to a QR-PC structure. Therefore, somehow we have to increase the number of diffracted orders between $k_1$ and $k_2$. Each diffracted order is spaced by $k_a$, with $k_a = 1/a$. By increasing the lattice parameter we increase the number of orders inside the ring, as we are joining more orders together.
**Rule $\Delta m = 0$:**
We want the first diffracted order to fall between $k_1$ and $k_2$, at the center of the ring. Therefore
\begin{equation}
\label{eq:krule0}
k_a= (k_2 + k_1) / 2.
\end{equation}
**Rule $\Delta m \geq 1$:**
We want to increase the number of orders $\Delta m$ inside the ring, so we establish $k_a$ by dividing the ring width ($k_2 - k_1$) linearly into $\Delta m$ orders.
\begin{equation}
\label{eq:krule1}
k_a= (k_2 - k_1) / \Delta m.
\end{equation}
(We use $k_a = 1/a$ for numerical simplicity, but it is equivalent to $k_a = 2\pi / a$ as long as the constant $2\pi$ is added or removed consistently in all the steps.)
#### Functions
```python
def qrule(k_int, k_ext, delta_m=1):
""" Function that generates the rule for creating the Q space
    k_int (float): interior radius in microns^-1
    k_ext (float): exterior radius in microns^-1
    delta_m (int): number of diffracted orders to fit inside the ring (0 selects the first rule)
"""
if delta_m == 0:
ka = (k_int + k_ext) / 2
else:
ka = (k_ext - k_int) / delta_m
return ka
```
```python
def apply_LUT(I, LUT1, LUT2):
""" Dummy function to apply two LUTs"""
I[LUT1] = 0.
I[LUT2] = 0.
    # modifies I in place; no return needed
def create_DOEs(Q, M=2, steps=200, seed=23):
""" Function to create the structure with objective Q using the function from IFTA
Q(np.array NxN): Matrix containing the target Q
    M (int): multiplier for the zero-padding frame. The final target matrix rank is [2*N+1 + M*N].
    steps (int): number of iterations of the IFTA loop.
seed (int): Seed used in the numpy random to generate the phase.
"""
F_WA, Q_WA, error_WA = bidirect.WA_A(Q, M=M, steps=steps, seed=seed)
Q_WA = abs(Q_WA) / abs(Q_WA).max()
DOE = dict(F=F_WA,Q=Q_WA,error=error_WA)
return DOE
```
```python
#from pyphotonics.utils import images as simages
def generate_qr(a, k_int, k_ext, M=10, steps=200, seed=23): #23 is just my cake day
""" Function to generate the QR for a lattice parameter a, a k_int and k_ext.
a (float): lattice in microns.
    k_int (float): interior radius in microns^-1
    k_ext (float): exterior radius in microns^-1
M (int): Zero-Padding frame used for the structure.
steps (int): steps to do the iteration
seed (int): Initial seed for the random generator of numpy.
"""
r1 = k_int #* n_doe #n_doe / l1
r2 = k_ext #* n_doe #n_doe / l2
    N = int(np.round(r2 * a, 0)) + 1  # longest radius in samples (r2 * a), plus one
sN = 2 * N + 1
R1 = r1 * a
R2 = r2 * a
conditions = dict(a=a, R1=R1,R2=R2,M=M)
# We create the Q(qx,qy)
x, y = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
Q = np.ones((2 * N + 1, 2 * N + 1))
LUT1 = x ** 2 + y ** 2 <= R1 ** 2
Q[LUT1] = 0.
LUT2 = x ** 2 + y ** 2 > R2 ** 2
Q[LUT2] = 0.
Q = Q/Q.max()
#print(i+1, Q.sum())
    # Let's create the DOE
L = create_DOEs(Q, M=M, steps=steps, seed=seed)
print("Sizes", Q.shape, L['F'].shape)
return L, Q, conditions
```
## Generating the L (real space) from the Q (reciprocal space)
```python
k_int = k_q[0] # The interior radius circle
k_ext = k_q[1]
```
```python
delta_m = np.array([0, 1, 2, 3, 4], dtype='float64')
kas = np.zeros_like(delta_m)
for i, m_i in enumerate(delta_m):
ka = qrule(k_int, k_ext, delta_m=m_i)
kas[i] = ka
a_s = 1. / kas ## We do not rescale twice the k
print("Lattice parameter (microns)")
print (a_s)
```
Lattice parameter (microns)
[0.4222 0.6149 1.2298 1.8447 2.4596]
```python
Ls = []
CONDs = []
Qs = []
seed = 2
for i, a in enumerate(a_s):
L, Q, conditions = generate_qr(a, k_int, k_ext, M=10)
Ls.append(L)
CONDs.append(conditions)
Qs.append(Q)
## PLOTTTING
# Plotting uncomment for seeing each iteration
F_WA = L['F']
Q_WA = np.fft.fft2(F_WA)
Q_WA[0,0] = 0 # We remove the 0th order for plotting
Q_WA = np.abs(np.fft.fftshift(Q_WA)) # Re-center
Q_WA = Q_WA/Q_WA.max() # We normalize the Q
N = Q.shape[0]/2
sN = Q.shape[0]
# We have to zoom into the Zero-padding for plotting Q
MN = (Q_WA.shape[0] - 1 )//2
sNM = (2*MN + 1) // a
# Window to check
win2 = [(Q_WA.shape[0] - Q.shape[0])//2,
(Q_WA.shape[0] - Q.shape[0])//2 + Q.shape[0]]
settings = {"cmap":"magma", "vmin":0, "vmax":1,
"origin":"lower",}
fg, ax = plt.subplots(1,4)
ax[0].set_title('Target Q')
ax[0].imshow(Q, **settings)
ax[1].set_title('Structure L')
ax[1].imshow(L['F'], **settings)
ax[2].set_title('PSD(L) \nwith padding')
ax[2].imshow(Q_WA, **settings)
ax[3].set_title('PSD(L) \ninterest region')
ax[3].imshow(Q_WA[win2[0]:win2[1], win2[0]:win2[1]], **settings)
```
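The `error` value returned by `create_DOEs` can be used to verify that the IFTA has converged for each lattice parameter. Below is a minimal sketch, under the assumption that `L['error']` holds the per-iteration error trace returned by `bidirect.WA_A` (if it is a single scalar the plot reduces to one point):
```python
# Hedged convergence check: compare the IFTA error traces for the different lattice parameters.
fg, ax = plt.subplots(1, 1, figsize=(4, 3))
for L, a in zip(Ls, a_s):
    ax.plot(L['error'], label=f"a = {a:.3f} $\mu$m")
ax.set_xlabel('IFTA step')
ax.set_ylabel('error')
ax.legend(frameon=False)
```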
```python
def plot_does_paper(Is, DOEs, a_s, title1, title2, scales_r, scales_k, cmap1='magma',
cmap2='binary', ncols=5,
ltop_as = "", ltop_bs= "", **kwargs,
):
"""It returns the figures for Fig.1 with Q and L for each Delta m"""
fontprops = fm.FontProperties(size=12)
plt.rcParams['font.size']=9
plt.rcParams['svg.fonttype']='none'
figsize = kwargs.get('figsize', (7.48, 2.7))
fg, ax = plt.subplots(nrows=2, ncols=ncols, figsize=figsize)
ii = range(ncols)
for i, I, DOE, a in zip(ii, Is, DOEs, a_s):
F_WA = DOE['F']
I_WA = DOE['Q']
I_WA = np.fft.fft2(F_WA)
I_WA[0,0] = 0
I_WA = np.abs(np.fft.fftshift(I_WA))
I_WA = I_WA/I_WA.max()
N = I.shape[0]/2
sN = I.shape[0]
# We have to zoom into the Zero-padding for plotting Q
MN = (I_WA.shape[0] - 1 )//2
sNM = (2*MN + 1) // a
win2 = [(I_WA.shape[0] - I.shape[0])//2,
(I_WA.shape[0] - I.shape[0])//2 + I.shape[0]]
xlim = [-a / 2., a / 2. ]
xftlim = np.array([-sN // 2 , sN // 2 +1] * 2) /a
xftlim2 = np.array([-sNM // 2, sNM // 2+1] * 2) /a
ax1 = ax[0,i]
ax2 = ax[1,i]
ax1.text(1., 1.03, title1[i], fontsize=10, horizontalalignment='right',
verticalalignment='bottom', transform=ax1.transAxes)
ax1.text(-.05, 1.03, ltop_as[i], fontsize=11, horizontalalignment='left',
verticalalignment='bottom', transform=ax1.transAxes)
ax2.text(1., 1.02, title1[i], fontsize=10, horizontalalignment='right',
verticalalignment='bottom', transform=ax2.transAxes)
ax2.text(-.05, 1.02, ltop_bs[i], fontsize=11, horizontalalignment='left',
verticalalignment='bottom', transform=ax2.transAxes)
im1 = ax1.imshow(
abs(I_WA[win2[0]:win2[1], win2[0]:win2[1]]), #interpolation='nearest',
extent=xftlim,
cmap=cmap1,
vmin=0,vmax=1)
im2 = ax2.imshow(F_WA.real / np.pi, #interpolation='nearest',
extent=xlim * 2,
origin='lower',
cmap=cmap2)
label_formatter = lambda x,y: ""
scalebar1 = ScaleBar(1,units='1/um',dimension=SI_LENGTH_RECIPROCAL,
height_fraction=0.05,
fixed_value=scales_k[i],
box_alpha=0.,
box_color='k',
#frameon=True,
color='w',
label_formatter=label_formatter)
ax1.add_artist(scalebar1)
[axi.set_xticks([]) for axi in[ax1,ax2]]
[axi.set_xticklabels([]) for axi in[ax1,ax2]]
[axi.set_yticks([]) for axi in[ax1,ax2]]
[axi.set_yticklabels([]) for axi in[ax1,ax2]]
ax2.annotate('', xy=(0, -0.08), xycoords='axes fraction', xytext=(1, -0.08),
arrowprops=dict(arrowstyle="<->", color='k'))
ax2.text(0.5, -0.19, title2[i], fontsize=10, horizontalalignment='center',
verticalalignment='center', transform=ax2.transAxes,)
#fg.tight_layout()
sbot = kwargs.get("sbot",0.05)
stop = kwargs.get("stop",0.9)
fg.subplots_adjust(left=0.01,bottom=sbot, top=stop,right=0.99)
cb = fg.colorbar(im1, ax=ax[0,:].ravel().tolist(), pad=0.01, aspect = 30, label="|FT|$^2$")
cb = fg.colorbar(im2, ax=ax[1,:].ravel().tolist(), pad=0.01, aspect = 30, label="Phi/$\pi$")
#algaas = patches.Patch(facecolor='black', edgecolor='black', label='AlGaAs')
#vacuum = patches.Patch(facecolor='white', edgecolor='black', label='SiO$_2$')
#ax[1,-1].legend(loc=2,bbox_to_anchor=(1,1.1),frameon=False,handles=[algaas, vacuum])
return fg
```
```python
scale_rs = [0.1, 0.1, 0.2, 0.2, 0.2, 0.2, 0.2, 4]
label_aleft = 'abcde'
label_bleft = 'fghij'
titles1 = []
titles2 = []
ltopsa = []
ltopsb = []
scales = [0.1, 0.25, 0.5, 0.75, 1.]
scales = [0.4, 0.4, 0.4, 0.4, .4]
for i, a in enumerate(a_s):
title1 = "$\Delta m={0}$".format(int(delta_m[i]),0)
title2 = f"a = {a:.3f} $\mu$m"
ltop_a = '({})'.format(label_aleft[i])
ltop_b = '({})'.format(label_bleft[i])
ltopsa.append(ltop_a)
ltopsb.append(ltop_b)
titles1.append(title1)
titles2.append(title2)
print(a)
fg2 = plot_does_paper(Qs, Ls, a_s, titles1,titles2,
ltop_as=ltopsa,
ltop_bs=ltopsb, scales_r=scales, scales_k=[2]*5)
fg2.savefig('../figures/Fig_K_Growth.pdf', dpi=600, bbox_inches='tight')
```
### Saving the Structures
```python
## Used for saving the structures as images
# If not installed use pip or conda
import skimage.io
def saver_image(saveas, DOE):
A = DOE['F']/DOE['F'].max()
A = np.array( A * 255,dtype=np.uint8)
im = skimage.io.imsave(saveas+".png", A)
```
```python
for ii, L in enumerate(Ls):
saver_image("../figures/ls/" + "qr_{0}_s{1}".format(ii,0), L)
```
# END
The structures are saved as .png bitmaps so they can be loaded into the EM solver.
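For reference, here is a small sketch (not part of the original workflow) of how one of the saved bitmaps could be read back and re-binarized before handing it to the EM solver; the file name follows the `saver_image` calls above:
```python
# Hedged example: reload a saved structure and recover the binary (0/1) material map.
import numpy as np
import skimage.io

A = skimage.io.imread("../figures/ls/qr_0_s0.png")   # uint8 grayscale image written by saver_image
binary_mask = (A > 127).astype(np.uint8)              # threshold back to a two-level structure
```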
# Appendix: Example design of Q with two delta m
```python
def plot_bragg_harmonics(ks, kal=0.4, kar=0.4):
fig, axes = plt.subplots(1,1,figsize=(3.3,3.3))
axes = [axes]
colors = ['C0', 'C0', 'C3']
linestyles=['-','-.', '--']
axes[0].plot([0],[0], '+', color='k')
#labels = ['k$_0$', 'k$_{\mathrm{SiO}_2}$', 'k$_\mathrm{GaAs}$']
i = 1
color_c =['0.9', 'w']
axes[0].plot([0,0],[0,0], '-', color='k',label='$k_1$')
axes[0].plot([0,0],[0,0], '--', color='k',label='$k_2$')
circle1 = plt.Circle((0,0), ks[1], fill=True, edgecolor='k', linestyle='--', facecolor='0.85', label='$k_2$')
axes[0].add_artist(circle1)
circle1 = plt.Circle((0,0), ks[0], fill=True, facecolor='white', edgecolor='k',linestyle='-', label='$k_1$')
axes[0].add_artist(circle1)
ring = Wedge((0,0),ks[1],360,360, width=ks[1]-ks[0])
axes[0].add_artist(ring)
xc = np.arange(-100,100,1) * kal
Xc, Yc = np.meshgrid(xc,xc)
axes[0].plot(Xc[Xc<=0],Yc[Xc<=0],'x', color='0.6', label='$k_{G,\Delta m=0}$')
lutA = Xc**2 + Yc**2 >= ks[0]**2
lutB = Xc**2 + Yc**2 < ks[1]**2
lut = (lutA * lutB) * (Xc <=0)
axes[0].plot(Xc[lut],Yc[lut],'x', color='C3')
xc = np.arange(-100,100,1) * kar
Xc, Yc = np.meshgrid(xc,xc)
axes[0].plot(Xc[Xc>=0],Yc[Xc>=0],'.', color='0.6', label='$k_{G,\Delta m=2}$')
lutA = Xc**2 + Yc**2 >= ks[0]**2
lutB = Xc**2 + Yc**2 < ks[1]**2
lut = (lutA * lutB) * (Xc >=0)
axes[0].plot(Xc[lut],Yc[lut],'.', color='C3')
axes[0].axis('scaled')
axes[0].set_xlabel(r'$k_x/\pi$($\mu$m$^{-1}$)')
axes[0].set_ylabel(r'$k_y/\pi$($\mu$m$^{-1}$)')
axes[0].legend(loc=2, bbox_to_anchor=(1,1), frameon=False, fontsize=12)
#
axes[0].set_ylim(-1,1)
axes[0].set_xticks([-10.0, -5, 0,5, 10])
axes[0].set_yticks([-10.0, -5, 0,5, 10])
axes[0].set_ylim(-10,10)
axes[0].set_xlim(-10,10)
return fig, axes
ks = k_t[:2]
fig, axes = plot_bragg_harmonics(k_t*2, kal=2.36969*2, kar=1.62)
#axes[0].axhline(0, color='k')
axes[0].axvline(0, color='k', linestyle='dotted', linewidth=1.2)
fig.savefig('../figures/target_design.svg', bbox_inches='tight')
#fig.savefig('../reports/Fig_rings_kb.pdf', dpi=600, bbox_inches='tight')
```
```python
```
```python
```
| be9f6ee546dfd4330eacb7b327c7459c30b5a1b6 | 448,157 | ipynb | Jupyter Notebook | notebooks/01-K-Space Desing 100 nm GaAs.ipynb | jbuencuerpo/qr_ifta | 48c0d9fdfcd5e092735480de32bf6ddf5e58ddf0 | ["MIT"] | null | null | null | notebooks/01-K-Space Desing 100 nm GaAs.ipynb | jbuencuerpo/qr_ifta | 48c0d9fdfcd5e092735480de32bf6ddf5e58ddf0 | ["MIT"] | null | null | null | notebooks/01-K-Space Desing 100 nm GaAs.ipynb | jbuencuerpo/qr_ifta | 48c0d9fdfcd5e092735480de32bf6ddf5e58ddf0 | ["MIT"] | null | null | null | 472.241307 | 92,928 | 0.935257 | true | 5,002 | Qwen/Qwen-72B | 1. YES 2. YES | 0.819893 | 0.774583 | 0.635076 | __label__eng_Latn | 0.625523 | 0.313824 |
```python
from sympy import *
x, y, z, t = symbols('x y z t')
```
## Mechanics
The module called [`sympy.physics.mechanics`](http://pyvideo.org/video/2653/dynamics-and-control-with-python)
contains elaborate tools for describing mechanical systems,
manipulating reference frames, forces, and torques.
These specialized functions are not necessary for a first-year mechanics course.
The basic `SymPy` functions like `solve`,
and the vector operations you learned in the previous sections are powerful enough for basic Newtonian mechanics.
### Dynamics
The net force acting on an object is the sum of all the external forces acting on it $\vec{F}_{\textrm{net}} = \sum \vec{F}$.
Since forces are vectors,
we need to use vector addition to compute the net force.
Compute
$\vec{F}_{\textrm{net}}=\vec{F}_1 + \vec{F}_2$,
where $\vec{F}_1=4\hat{\imath}[\mathrm{N}]$ and $\vec{F}_2 = 5\angle 30^\circ[\mathrm{N}]$:
```python
F_1 = Matrix( [4,0] )
F_2 = Matrix( [5*cos(30*pi/180), 5*sin(30*pi/180) ] )
F_net = F_1 + F_2
F_net # in Newtons
```
$\displaystyle \left[\begin{matrix}4 + \frac{5 \sqrt{3}}{2}\\\frac{5}{2}\end{matrix}\right]$
```python
F_net.evalf() # in Newtons
```
$\displaystyle \left[\begin{matrix}8.33012701892219\\2.5\end{matrix}\right]$
To express the answer in length-and-direction notation,
use `norm` to find the length of $\vec{F}_{\textrm{net}}$
and `atan2` (The function `atan2(y,x)` computes the correct direction
for all vectors $(x,y)$, unlike `atan(y/x)` which requires corrections for angles in the range $[\frac{\pi}{2}, \frac{3\pi}{2}]$.) to find its direction:
```python
F_net.norm().evalf() # |F_net| in [N]
```
$\displaystyle 8.69718438067042$
```python
(atan2( F_net[1],F_net[0] )*180/pi).n() # angle in degrees
```
$\displaystyle 16.70531380601$
The net force on the object is $\vec{F}_{\textrm{net}}= 8.697\angle 16.7^\circ$[N].
### Kinematics
Let $x(t)$ denote the position of an object,
$v(t)$ denote its velocity,
and $a(t)$ denote its acceleration.
Together $x(t)$, $v(t)$, and $a(t)$ are known as the *equations of motion* of the object.
The equations of motion are related by the derivative operation:
$$
a(t) \overset{\frac{d}{dt} }{\longleftarrow} v(t) \overset{\frac{d}{dt} }{\longleftarrow} x(t).
$$
Assume we know the initial position $x_i\equiv x(0)$ and the initial velocity $v_i\equiv v(0)$ of the object
and we want to find $x(t)$ for all later times.
We can do this starting from the dynamics of the problem—the forces acting on the object.
Newton's second law $\vec{F}_{\textrm{net}} = m\vec{a}$ states that a net force $\vec{F}_{\textrm{net}}$
applied on an object of mass $m$ produces acceleration $\vec{a}$.
Thus, we can obtain an object's acceleration if we know the net force acting on it.
Starting from the knowledge of $a(t)$, we can obtain $v(t)$ by integrating,
then find $x(t)$ by integrating $v(t)$:
$$
a(t) \ \ \ \overset{v_i+ \int\!dt }{\longrightarrow} \ \ \ v(t) \ \ \ \overset{x_i+ \int\!dt }{\longrightarrow} \ \ \ x(t).
$$
The reasoning follows from the fundamental theorem of calculus:
if $a(t)$ represents the change in $v(t)$,
then the total of $a(t)$ accumulated between $t=t_1$ and $t=t_2$
is equal to the total change in $v(t)$ between these times: $\Delta v = v(t_2) - v(t_1)$.
Similarly, the integral of $v(t)$ from $t=0$ until $t=\tau$ is equal to $x(\tau) - x(0)$.
### Uniform acceleration motion (UAM)
Let's analyze the case where the net force on the object is constant.
A constant force causes a constant acceleration $a = \frac{F}{m} = \textrm{constant}$.
The acceleration function is then constant over time, $a(t)=a$.
We find $v(t)$ and $x(t)$ as follows:
```python
t, a, v_i, x_i = symbols('t a v_i x_i')
v = v_i + integrate(a, (t, 0,t) )
v
```
$\displaystyle a t + v_{i}$
```python
x = x_i + integrate(v, (t, 0,t) )
x
```
$\displaystyle \frac{a t^{2}}{2} + t v_{i} + x_{i}$
You may remember these equations from your high school physics class.
They are the *uniform accelerated motion* (UAM) equations:
\begin{align*}
a(t) &= a, \\
v(t) &= v_i + at, \\[-2mm]
x(t) &= x_i + v_it + \frac{1}{2}at^2.
\end{align*}
In high school, you probably had to memorize these equations.
Now you know how to derive them yourself starting from first principles.
For the sake of completeness, we'll now derive the fourth UAM equation,
which relates the object's final velocity to the initial velocity,
the displacement, and the acceleration, without reference to time:
```python
(v*v).expand()
```
$\displaystyle a^{2} t^{2} + 2 a t v_{i} + v_{i}^{2}$
```python
((v*v).expand() - 2*a*x).simplify()
```
$\displaystyle - 2 a x_{i} + v_{i}^{2}$
The above calculation shows $v_f^2 - 2ax_f = -2ax_i + v_i^2$.
After moving the term $2ax_f$ to the other side of the equation, we obtain
\begin{align*}
(v(t))^2 \ = \ v_f^2 = v_i^2 + 2a\Delta x \ = \ v_i^2 + 2a(x_f-x_i).
\end{align*}
The fourth equation is important for practical purposes
because it allows us to solve physics problems in a time-less manner.
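To see the time-less equation in action, here is a minimal sketch (assuming the notebook's usual `from sympy import *` setup; the symbols `v_f` and `Delta_x` are introduced only for this check): we solve $v_f^2 = v_i^2 + 2a\Delta x$ for an object that starts from rest and accelerates at $5[\mathrm{m/s^2}]$ over $10[\mathrm{m}]$.
```python
v_f, Delta_x = symbols('v_f Delta_x')
fourth_eq = Eq(v_f**2, v_i**2 + 2*a*Delta_x)
# starting from rest, a = 5 m/s^2, over 10 m; solve() returns both roots, the physical one is positive
solve(fourth_eq.subs({v_i: 0, a: 5, Delta_x: 10}), v_f)
```
The positive root gives a final speed of $10[\mathrm{m/s}]$, obtained without ever knowing how long the motion took.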
#### Example
Find the position function of an object at time $t=3[\mathrm{s}]$,
if it starts from $x_i=20[\mathrm{m}]$ with $v_i=10[\mathrm{m/s}]$ and undergoes
a constant acceleration of $a=5[\mathrm{m/s^2}]$.
What is the object's velocity at $t=3[\mathrm{s}]$?
```python
x_i = 20 # initial position
v_i = 10 # initial velocity
a = 5 # acceleration (constant during motion)
x = x_i + integrate( v_i+integrate(a,(t,0,t)), (t,0,t) )
x
```
$\displaystyle \frac{5 t^{2}}{2} + 10 t + 20$
```python
x.subs({t:3}).n() # x(3) in [m]
```
$\displaystyle 72.5$
```python
diff(x,t).subs({t:3}).n() # v(3) in [m/s]
```
$\displaystyle 25.0$
If you think about it,
physics knowledge combined with computer skills is like a superpower!
### General equations of motion
The procedure
$a(t) \ \overset{v_i+ \int\!dt }{\longrightarrow} \ v(t) \ \overset{x_i+ \int\!dt }{\longrightarrow} \ x(t)$
can be used to obtain the position function $x(t)$ even when the acceleration is not constant.
Suppose the acceleration of an object is $a(t)=\sqrt{k t}$;
what is its $x(t)$?
```python
t, v_i, x_i, k = symbols('t v_i x_i k')
a = sqrt(k*t)
x = x_i + integrate( v_i+integrate(a,(t,0,t)), (t, 0,t) )
x
```
$\displaystyle t v_{i} + x_{i} + \frac{4 \left(k t\right)^{\frac{5}{2}}}{15 k^{2}}$
### Potential energy
Instead of working with the kinematic equations of motion $x(t)$, $v(t)$, and $a(t)$ which depend on time,
we can solve physics problems using *energy* calculations.
A key connection between the world of forces and the world of energy is the concept of *potential energy*.
If you move an object against a conservative force (think raising a ball in the air against the force of gravity),
you can think of the work you do against the force as being stored in the potential energy of the object.
For each force $\vec{F}(x)$ there is a corresponding potential energy $U_F(x)$.
The change in potential energy associated with the force $\vec{F}(x)$ and displacement $\vec{d}$
is defined as the negative of the work done by the force during the displacement: $U_F(x) = - W = - \int_{\vec{d}} \vec{F}(x)\cdot d\vec{x}$.
The potential energies associated with gravity $\vec{F}_g = -mg\hat{\jmath}$
and the force of a spring $\vec{F}_s = -k\vec{x}$ are calculated as follows:
```python
x, y = symbols('x y')
m, g, k, h = symbols('m g k h')
F_g = -m*g # Force of gravity on mass m
U_g = - integrate( F_g, (y,0,h) )
U_g # Grav. potential energy
```
$\displaystyle g h m$
```python
F_s = -k*x # Spring force for displacement x
U_s = - integrate( F_s, (x,0,x) )
U_s # Spring potential energy
```
$\displaystyle \frac{k x^{2}}{2}$
Note the negative sign in the formula defining the potential energy.
This negative is canceled by the negative sign of the dot product $\vec{F}\cdot d\vec{x}$:
when the force acts in the direction opposite to the displacement,
the work done by the force is negative.
### Simple harmonic motion
The force exerted by a spring is given by the formula $F=-kx$.
If the only force acting on a mass $m$ is the force of a spring,
we can use Newton's second law to obtain the following equation:
$$
F=ma
\quad \Rightarrow \quad
-kx = ma
\quad \Rightarrow \quad
-kx(t) = m\frac{d^2}{dt^2}\Big[x(t)\Big].
$$
The motion of a mass-spring system is described by the *differential equation* $\frac{d^2}{dt^2}x(t) + \omega^2 x(t)=0$,
where the constant $\omega = \sqrt{\frac{k}{m}}$ is called the angular frequency.
We can find the position function $x(t)$ using the `dsolve` method:
```python
t = Symbol('t') # time t
x = Function('x') # position function x(t)
w = Symbol('w', positive=True) # angular frequency w
sol = dsolve( diff(x(t),t,t) + w**2*x(t), x(t) )
sol
```
$\displaystyle x{\left(t \right)} = C_{1} \sin{\left(t w \right)} + C_{2} \cos{\left(t w \right)}$
```python
x = sol.rhs
x
```
$\displaystyle C_{1} \sin{\left(t w \right)} + C_{2} \cos{\left(t w \right)}$
Note the solution $x(t)=C_1\sin(\omega t)+C_2 \cos(\omega t)$ is equivalent to $x(t) = A\cos(\omega t + \phi)$,
which is more commonly used to describe simple harmonic motion (below we write the phase as $-\phi$, which only changes the sign convention).
We can use the `expand` function with the argument `trig=True` to convince ourselves of this equivalence:
```python
A, phi = symbols("A phi")
(A*cos(w*t - phi)).expand(trig=True)
```
$\displaystyle A \sin{\left(\phi \right)} \sin{\left(t w \right)} + A \cos{\left(\phi \right)} \cos{\left(t w \right)}$
If we define $C_1=A\sin(\phi)$ and $C_2=A\cos(\phi)$,
we obtain the form $x(t)=C_1\sin(\omega t)+C_2 \cos(\omega t)$ that `SymPy` found.
### Conservation of energy
We can verify that the total energy of the mass-spring system is conserved by showing
$E_T(t) = U_s(t) + K(t) = \textrm{constant}$:
```python
x = sol.rhs.subs({"C1":0,"C2":A})
x
```
$\displaystyle A \cos{\left(t w \right)}$
```python
v = diff(x, t)
v
```
$\displaystyle - A w \sin{\left(t w \right)}$
```python
E_T = (0.5*k*x**2 + 0.5*m*v**2).simplify()
E_T
```
$\displaystyle 0.5 A^{2} \left(k \cos^{2}{\left(t w \right)} + m w^{2} \sin^{2}{\left(t w \right)}\right)$
```python
E_T.subs({k:m*w**2}).simplify() # = K_max
```
$\displaystyle 0.5 A^{2} m w^{2}$
```python
E_T.subs({w:sqrt(k/m)}).simplify() # = U_max
```
$\displaystyle 0.5 A^{2} k$
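As one more check (a minimal sketch reusing the expressions above), the time derivative of the total energy vanishes once we substitute $k=m\omega^2$, confirming that $E_T$ is indeed constant:
```python
E_T.diff(t).subs({k: m*w**2}).simplify()   # evaluates to 0
```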
```python
from scipy import stats as ss
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.ion()
```
# 1 - Fair Coins with the Binomial
Let's explore the probability that a coin is fair using statistics and sampling (concepts that are not mutually exclusive).
As a reminder, we have a sample space:
\begin{align}
\mathcal{S} &= \{h, t\} \\
P(h) &= 0.5 \\
P(t) &= 0.5
\end{align}
```python
p = 0.5  # probability of heads/tails
n = 30   # we have 30 flips
```
```python
x = np.arange(0, 31)
x
```
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30])
```python
p = 0.5  # probability of heads/tails
n = 30   # we have 30 flips
x = np.arange(0, 31)
prob_binom = ss.distributions.binom.pmf(x, n, p)
plt.step(x, prob_binom, 'r-')
plt.xlabel('Number of heads - x')
plt.ylabel('P(getting x heads)')
```
```python
ss.distributions.binom.pmf(22, n, p) + \
ss.distributions.binom.pmf(23, n, p) + \
ss.distributions.binom.pmf(24, n, p) + \
ss.distributions.binom.pmf(25, n, p) + \
ss.distributions.binom.pmf(26, n, p) + \
ss.distributions.binom.pmf(27, n, p) + \
ss.distributions.binom.pmf(28, n, p) + \
ss.distributions.binom.pmf(29, n, p) + \
ss.distributions.binom.pmf(30, n, p)
```
0.0080624008551239308
```python
x_extreme = np.arange(22, 31)
x_extreme
```
array([22, 23, 24, 25, 26, 27, 28, 29, 30])
```python
ss.distributions.binom.pmf(x_extreme, n, p).sum()
```
0.0080624008551239291
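Equivalently — a small aside — scipy's survival function gives the same tail probability $P(X \geq 22)$ in a single call:
```python
# P(X >= 22) = P(X > 21), i.e. the survival function evaluated at 21
ss.distributions.binom.sf(21, n, p)
```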
# 2 - Fair Coins by Simulation
Let's simulate without worrying about the binomial. Just flip a coin many times.
```python
# Flipping a single coin
np.random.randint(2)
```
0
```python
# Flipping 30 coins
np.random.randint(2, size=30)
```
array([0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1,
1, 0, 1, 1, 0, 1, 1])
```python
NUM_SIMULACOES = 100000
resultados = 0
n = 30
for i in range(NUM_SIMULACOES):
    jogadas = np.random.randint(2, size=n)  # flip 30 coins
    n_caras = (jogadas == 1).sum()  # count how many were == 1 (heads)
    if n_caras >= 22:
        resultados += 1  # how many times we saw >= 22 heads
print(resultados / NUM_SIMULACOES)
```
0.00824
## 3 The case where Batman is right
```python
p = 0.9  # probability of heads/tails
n = 30   # we have 30 flips
x = np.arange(0, 31)
prob_binom = ss.distributions.binom.pmf(x, n, p)
plt.step(x, prob_binom, 'r-')
plt.xlabel('Number of heads - x')
plt.ylabel('P(getting x heads)')
```
```python
NUM_SIMULACOES = 100000
resultados = 0
n = 30
for i in range(NUM_SIMULACOES):
jogadas = np.random.rand(30) < 0.9
n_caras = (jogadas == 1).sum()
if n_caras >= 22:
resultados += 1
print(resultados / NUM_SIMULACOES)
```
0.99784
## 4 Sometimes we also need to test for tails
```python
p = 0.2  # probability of heads/tails
n = 30   # we have 30 flips
x = np.arange(0, 31)
prob_binom = ss.distributions.binom.pmf(x, n, p)
plt.step(x, prob_binom, 'r-')
plt.xlabel('Number of heads - x')
plt.ylabel('P(getting x heads)')
```
```python
```
```python
%matplotlib inline
```
Introduction to PyTorch
============
PyTorch's tensor library
------------------------------------
Most PyTorch operations run on <b>tensors</b>.
A tensor is a multidimensional array.
Let's have a look at some basic tensor operations.
But first, let's import some important PyTorch libraries:
- <b>torch</b> - a Tensor library similar to NumPy, with strong GPU support
- <b>torch.autograd</b> - a "tape-based" automatic differentiation library (more about this later on)
- <b>torch.nn</b> - a neural networks library deeply integrated with autograd
- <b>torch.optim</b> - an optimization package to be used with torch.nn with standard optimization methods such as SGD, RMSProp, LBFGS, Adam etc.
We also set a seed to be able to reproduce the same results later.
```python
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.optim as optim
torch.manual_seed(123)
```
<torch._C.Generator at 0x7ff5711eb1c8>
```python
torch.__version__
```
'1.7.0'
Creating Tensors
----------------
Tensors can be created from Python lists with the <b>torch.Tensor()</b> function.
```python
# Create a torch.Tensor object from python list
v = [1, 2, 3]
print(type(v))
v_tensor = torch.Tensor(v)
print(v_tensor)
# Create a torch.Tensor object of size 2x3 from 2x3 matrix
m2x3 = [[1, 2, 3], [4, 5, 6]]
m2x3_tensor = torch.Tensor(m2x3)
print(m2x3_tensor)
# Create a 3D torch.Tensor object of size 3x3x3.
m3x3x3 = [[[1, 2, 3], [4, 5, 6], [7, 8, 9]],
[[10, 11, 12],[13, 14, 15], [16, 17, 18]],
[[19, 20, 21],[22, 23, 24], [25, 26, 27]]]
m3x3x3_tensor = torch.Tensor(m3x3x3)
print(m3x3x3_tensor)
# Create a 4D tensor from random data and given dimensions (in this case 4x3x3x3) with torch.randn()
m4x3x3x3_tensor = torch.randn((4, 3, 3, 3))
m4x3x3x3_tensor.shape
print(m4x3x3x3_tensor)
```
<class 'list'>
tensor([1., 2., 3.])
tensor([[1., 2., 3.],
[4., 5., 6.]])
tensor([[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]],
[[10., 11., 12.],
[13., 14., 15.],
[16., 17., 18.]],
[[19., 20., 21.],
[22., 23., 24.],
[25., 26., 27.]]])
tensor([[[[ 3.3737e-01, -1.7778e-01, -3.0353e-01],
[-5.8801e-01, 3.4861e-01, 6.6034e-01],
[-2.1964e-01, -3.7917e-01, 7.6711e-01]],
[[-1.1925e+00, 6.9835e-01, -1.4097e+00],
[ 1.7938e-01, 1.8951e+00, 4.9545e-01],
[ 2.6920e-01, -7.7020e-02, -1.0205e+00]],
[[-1.6896e-01, 9.1776e-01, 1.5810e+00],
[ 1.3010e+00, 1.2753e+00, -2.0095e-01],
[ 4.9647e-01, -1.5723e+00, 9.6657e-01]]],
[[[-1.1481e+00, -1.1589e+00, 3.2547e-01],
[-6.3151e-01, -2.8400e+00, -1.3250e+00],
[ 1.7843e-01, -2.1338e+00, 1.0524e+00]],
[[-3.8848e-01, -9.3435e-01, -4.9914e-01],
[-1.0867e+00, 8.8054e-01, 1.5542e+00],
[ 6.2662e-01, -1.7549e-01, 9.8284e-02]],
[[-9.3507e-02, 2.6621e-01, -5.8504e-01],
[ 8.7684e-01, 1.6221e+00, -1.4779e+00],
[ 1.1331e+00, -1.2203e+00, 1.3139e+00]]],
[[[ 1.0533e+00, 1.3881e-01, 2.2473e+00],
[-8.0364e-01, -2.8084e-01, 7.6968e-01],
[-6.5956e-01, -7.9793e-01, 1.8383e-01]],
[[ 2.2935e-01, 5.1463e-01, 9.9376e-01],
[-2.5873e-01, -1.0826e+00, -4.4382e-02],
[ 1.6236e+00, -2.3229e+00, 1.0878e+00]],
[[ 6.7155e-01, 6.9330e-01, -9.4872e-01],
[-7.6507e-02, -1.5264e-01, 1.1674e-01],
[ 4.4026e-01, -1.4465e+00, 2.5529e-01]]],
[[[-5.4963e-01, 1.0042e+00, 8.2723e-01],
[-3.9481e-01, 4.8923e-01, -2.1681e-01],
[-1.7472e+00, -1.6025e+00, -1.0764e+00]],
[[ 9.0315e-01, -7.2184e-01, 1.2311e+00],
[-1.0973e+00, -9.6690e-01, 6.7125e-01],
[-9.4053e-01, -4.6806e-01, 1.0322e+00]],
[[-2.8300e-01, 1.1124e+00, -4.1684e-01],
[-1.7106e+00, -3.2902e-01, 1.3966e+00],
[-9.9491e-01, -1.5822e-03, 1.2471e+00]]]])
What is a multidimensional tensor?
-------------------
Since we frequently deal with n > 3 dimensional tensors, understanding them is very important.
The best way to think of a higher (n) dimensional object (and a tensor in particular) is as a container which keeps a series of n-1 dimensional objects "inside" of it. We can "pull out" these "inner" objects by indexing into the higher dimensional tensor container.
Let's have a look on some examples:
- For a vector v (dim(v)=1), indexing into it ("pulling out of it") returns its "slice" - a scalar s (dim(s)=0).
- For a matrix, indexing into it returns its "slice" - a (row or column) vector.
- A 3D tensor can be seen as a cube or 3D rectangular consisting of horizontally "stacked" matrices. So if we index into such a tensor, it will give us its slice, which is a matrix!
- We can't easily visualize 5D (or n-D) tensors, but the idea is actually the same. If we index into them, we will pull out an object of dimension n-1.
- E.g. a 4D tensor can be seen as a list of cubes or 3D rectangulars. If we index into a 4D tensor, we will get 3D rectangulars.
```python
# Index into v_tensor and get a scalar
print(v_tensor[0])
# Index into m2x3_tensor and get a vector
print(m2x3_tensor[0])
# Index into m3x3x3_tensor and get a matrix
print(m3x3x3_tensor[0])
# Index into m4x3x3x3_tensor and get a 3D rectangular of size 3x3x3
print(m4x3x3x3_tensor[0])
```
tensor(1.)
tensor([1., 2., 3.])
tensor([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]])
tensor([[[ 0.3374, -0.1778, -0.3035],
[-0.5880, 0.3486, 0.6603],
[-0.2196, -0.3792, 0.7671]],
[[-1.1925, 0.6984, -1.4097],
[ 0.1794, 1.8951, 0.4954],
[ 0.2692, -0.0770, -1.0205]],
[[-0.1690, 0.9178, 1.5810],
[ 1.3010, 1.2753, -0.2010],
[ 0.4965, -1.5723, 0.9666]]])
Operations with Tensors
----------------------
You can operate on tensors in the ways you would expect.
See the documentation <http://pytorch.org/docs/torch.html> for a complete list of operations.
Simple mathematical operations, e.g. <b>addition</b> and <b>multiplication</b> (below we compute the dot product with `torch.matmul`):
```python
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([4, 5, 6])
print(x)
print(y)
w = torch.matmul(x, y)
print(w)
```
tensor([1., 2., 3.])
tensor([4., 5., 6.])
tensor(32.)
Helpful operation: <b>Concatenation</b>
```python
# By default, it concatenates along axis 0 (rows), i.e. it stacks the rows.
x_1 = torch.randn(2, 5)
print(x_1)
y_1 = torch.randn(3, 5)
print(y_1)
z_1 = torch.cat([x_1, y_1])
print(z_1)
# Concatenate columns:
x_2 = torch.randn(2, 3)
print(x_2)
y_2 = torch.randn(2, 5)
print(y_2)
# second arg specifies which axis to concat along. Here we select 1 (columns). It's attaching the columns.
z_2 = torch.cat([x_2, y_2], 1)
print(z_2)
# If your tensors are not compatible, torch will complain. Uncomment to see the error
# torch.cat([x_1, x_2])
```
Reshaping Tensors
----------------
We can use the <code>.view()</code> method to reshape a tensor. Often we will need to reshape our data before passing it
to a neural network.
Let's assume we have 64000 RGB images with the size of 28x28 pixels.
We can define an array of shape (64000, 3, 28, 28) to hold them, where 3 is the number of color channels:
```python
x = torch.randn(64000, 3, 28, 28)
# Now we want to add a batch dimension of size 32. We can then infer the second dimension by placing -1:
x_reshaped = x.view(32, -1, 3, 28, 28)
print(x_reshaped.shape)
```
torch.Size([32, 2000, 3, 28, 28])
Computation Graphs and Automatic Differentiation
---------------------------------------------
A computation graph is a specification of which parameters and operations are involved in computing the output.
The fundamental PyTorch class <code>autograd.Variable</code> keeps track of how it was created.
Computational graphs in PyTorch:
In Keras or TensorFlow, computational graphs (models) are fixed: once compiled, they cannot be changed at runtime.
In PyTorch, the computational graph can be changed at runtime. This flexibility is achieved by the component called `autograd.Variable`.
```python
# Variables wrap tensor objects
x = autograd.Variable(torch.Tensor([1, 2, 3]), requires_grad=True)
# You can access the data with the .data attribute
print(x.data)
y = autograd.Variable(torch.Tensor([4, 5, 6]), requires_grad=True)
# With autograd.Variable you can also perform all the same operations you did with tensors
z = x + y
print(z.data)
# z also knows that it is the result of adding elements (AddBackward)
operation = z.grad_fn
print(operation)
```
tensor([1., 2., 3.])
tensor([5., 7., 9.])
<AddBackward0 object at 0x7f15d3c487b8>
The autograd.Variable knows which operation has created it. But how does that help <b>compute a gradient</b>?
```python
# Lets sum up all the entries in z
s = z.sum()
print(s)
print(s.grad_fn)
```
tensor(21., grad_fn=<SumBackward0>)
<SumBackward0 object at 0x7f15d3c44dd8>
Gradient
-------
So now, what is the derivative of this sum with respect to the first component of x? Remember that x is a tensor of 3 elements: $x = (x_0, x_1, x_2)$
In math, we want a partial derivative of $s$ with respect to $x_0$: $\frac{\partial s}{\partial x_0}$
Well, $s$ knows that it was created as a $sum$ of the tensor $z$ elements $(z_0, z_1, z_2)$. $z$ knows
that it was the sum $x + y$. So
\begin{align}s = \overbrace{x_0 + y_0}^\text{$z_0$} + \overbrace{x_1 + y_1}^\text{$z_1$} + \overbrace{x_2 + y_2}^\text{$z_2$}\end{align}
And so $s$ contains enough information to determine that the derivative of $s$ with respect to $x_0$ is 1!
*Reminder:* If you compute the partial derivative with respect to one variable, you treat all other variables as constants. Therefore the derivatives of all the others $(x_1, x_2, y_0, y_1, y_2)$ are zero, and the derivative of $f(x_0) = x_0$ is 1.
First we need to run <b>backpropagation</b> and calculate gradients with respect to every variable.
*Note:* if you run <code>backward</code> multiple times, the gradient will increment.
That is because Pytorch *accumulates* the gradient into the <b>.grad
property</b>, since for many models this is very convenient.
Let's now have PyTorch compute the gradient and check our guess of 1 (since gradients accumulate, calling `backward` repeatedly yields multiples of 1 — which is why the output below shows 3 rather than 1):
```python
# calling .backward() on any variable will run backprop, starting from it.
s.backward(retain_graph=True)
```
```python
print(x)
print(x.grad)
print(y.grad)
```
tensor([1., 2., 3.], requires_grad=True)
tensor([3., 3., 3.])
tensor([3., 3., 3.])
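Since gradients accumulate, it is common to reset them before the next backward pass. A minimal sketch (the training loop later does the same thing via `optimizer.zero_grad()`):
```python
# zero the accumulated gradients in place
x.grad.zero_()
y.grad.zero_()
print(x.grad)   # tensor([0., 0., 0.])
```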
How NOT to break the computational graph
----------------------------------
Let's create two torch tensors and add them up:
```python
x = torch.randn((2, 2))
y = torch.randn((2, 2))
z = x + y # These are Tensor types, and backprop would not be possible
print(z)
```
tensor([[1.4264, 0.0791],
[0.4193, 1.2077]])
Now we wrap the torch tensors in <code>autograd.Variable</code>. The <code>var_z</code> contains the information for backpropagation:
```python
var_x = autograd.Variable(x, requires_grad=True)
var_y = autograd.Variable(y, requires_grad=True)
# var_z contains enough information to compute gradients, as we saw above
var_z = var_x + var_y
print(var_z.grad_fn)
```
<AddBackward0 object at 0x7f15d3c48630>
But what happens if we extract the wrapped tensor object out of <code>var_z</code> and re-wrap the tensor in a new <code>autograd.Variable</code>?
```python
var_z_data = var_z.data
new_var_z = autograd.Variable(var_z_data)
print(new_var_z.grad_fn)
```
None
The variable chain no longer exists, since we extracted only the data and the whole chain of operations was lost.
If we try now to compute <code>backward</code> on <code>new_var_z</code>, it will throw an error:
```python
new_var_z.backward(retain_graph=True)
```
CUDA
----
Check whether GPU acceleration with **CUDA** is available
```python
torch.cuda.is_available()
```
False
```python
# let us run this cell only if CUDA is available
if torch.cuda.is_available():
# creates a LongTensor and transfers it
# to GPU as torch.cuda.LongTensor
a = torch.LongTensor(10).fill_(3).cuda()
print(type(a))
b = a.cpu()
# transfers it to CPU, back to
# being a torch.LongTensor
```
Linear Model
=======
```python
from torch.autograd import Variable
import numpy as np
```
```python
x = [i for i in range(20)] # list comprehension
x_train = np.array(x, dtype=np.float32)
x_train = x_train.reshape(-1, 1)
print(x)
print(x_train.shape)
y = [(5*i + 2) for i in x] # list comprehension
y_train = np.array(y, dtype=np.float32)
y_train = y_train.reshape(-1, 1)
print(y)
print(y_train.shape)
```
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
(20, 1)
[2, 7, 12, 17, 22, 27, 32, 37, 42, 47, 52, 57, 62, 67, 72, 77, 82, 87, 92, 97]
(20, 1)
Create Model Class
-----------------
```python
# every model is created from a class, usually inheriting from the nn.Module class
class LinearRegressor(nn.Module):
def __init__(self, input_dim, output_dim):
super(LinearRegressor, self).__init__()
self.linear = nn.Linear(input_dim, output_dim)
def forward(self, x):
out = self.linear(x)
return out
input_dim = 1
output_dim = 1
model = LinearRegressor(input_dim, output_dim)
model
```
LinearRegressor(
(linear): Linear(in_features=1, out_features=1, bias=True)
)
Loss & Optimizer
---------------
```python
loss_function = nn.MSELoss()
print(loss_function)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
print(optimizer)
```
MSELoss()
SGD (
Parameter Group 0
dampening: 0
lr: 0.001
momentum: 0
nesterov: False
weight_decay: 0
)
```python
epochs = 500
for epoch in range(epochs):
epoch += 1
#Convert inputs and outputs to torch variable
inputs = Variable(torch.from_numpy(x_train))
real_outputs = Variable(torch.from_numpy(y_train))
# Reset Gradients
optimizer.zero_grad()
# Forward - compute the output
pred_outputs = model(inputs)
# Loss
loss = loss_function(pred_outputs, real_outputs)
# Backward - compute gradients
loss.backward()
# Update parameters
optimizer.step()
# print('epoch {}, loss {}'.format(epoch, loss.data[0]))
print('epoch {}, loss {}'.format(epoch, loss.data))
```
epoch 1, loss 1571.15771484375
epoch 2, loss 887.5265502929688
epoch 3, loss 501.407958984375
epoch 4, loss 283.32598876953125
epoch 5, loss 160.15206909179688
epoch 6, loss 90.5826416015625
epoch 7, loss 51.289329528808594
epoch 8, loss 29.096120834350586
epoch 9, loss 16.5611629486084
epoch 10, loss 9.481161117553711
epoch 11, loss 5.482235431671143
epoch 12, loss 3.2234604358673096
epoch 13, loss 1.9475624561309814
epoch 14, loss 1.226802110671997
epoch 15, loss 0.8195762634277344
epoch 16, loss 0.5894352197647095
epoch 17, loss 0.4593135714530945
epoch 18, loss 0.385688453912735
epoch 19, loss 0.34396791458129883
epoch 20, loss 0.32026800513267517
epoch 21, loss 0.3067483603954315
epoch 22, loss 0.29897719621658325
epoch 23, loss 0.2944541275501251
epoch 24, loss 0.2917642593383789
epoch 25, loss 0.2901107966899872
epoch 26, loss 0.2890428900718689
epoch 27, loss 0.28830578923225403
epoch 28, loss 0.28775641322135925
epoch 29, loss 0.28731128573417664
epoch 30, loss 0.28692615032196045
epoch 31, loss 0.2865757346153259
epoch 32, loss 0.2862444818019867
epoch 33, loss 0.2859245538711548
epoch 34, loss 0.2856106162071228
epoch 35, loss 0.28530043363571167
epoch 36, loss 0.28499269485473633
epoch 37, loss 0.2846853733062744
epoch 38, loss 0.28438055515289307
epoch 39, loss 0.28407567739486694
epoch 40, loss 0.28377068042755127
epoch 41, loss 0.28346866369247437
epoch 42, loss 0.283164918422699
epoch 43, loss 0.2828616499900818
epoch 44, loss 0.2825589179992676
epoch 45, loss 0.2822568416595459
epoch 46, loss 0.28195297718048096
epoch 47, loss 0.28165143728256226
epoch 48, loss 0.28135088086128235
epoch 49, loss 0.2810494303703308
epoch 50, loss 0.28074944019317627
epoch 51, loss 0.2804480195045471
epoch 52, loss 0.2801481783390045
epoch 53, loss 0.2798476219177246
epoch 54, loss 0.2795485556125641
epoch 55, loss 0.2792488932609558
epoch 56, loss 0.27895087003707886
epoch 57, loss 0.27865126729011536
epoch 58, loss 0.278354287147522
epoch 59, loss 0.2780556082725525
epoch 60, loss 0.2777576446533203
epoch 61, loss 0.27746155858039856
epoch 62, loss 0.2771640419960022
epoch 63, loss 0.2768673002719879
epoch 64, loss 0.27657103538513184
epoch 65, loss 0.276275098323822
epoch 66, loss 0.27597853541374207
epoch 67, loss 0.2756834030151367
epoch 68, loss 0.2753884196281433
epoch 69, loss 0.2750936448574066
epoch 70, loss 0.2747994065284729
epoch 71, loss 0.274505615234375
epoch 72, loss 0.2742115557193756
epoch 73, loss 0.27391839027404785
epoch 74, loss 0.2736256718635559
epoch 75, loss 0.2733322083950043
epoch 76, loss 0.27303972840309143
epoch 77, loss 0.27274805307388306
epoch 78, loss 0.2724554240703583
epoch 79, loss 0.27216392755508423
epoch 80, loss 0.2718721032142639
epoch 81, loss 0.2715816795825958
epoch 82, loss 0.2712906301021576
epoch 83, loss 0.27100104093551636
epoch 84, loss 0.27071064710617065
epoch 85, loss 0.27042028307914734
epoch 86, loss 0.27013158798217773
epoch 87, loss 0.26984262466430664
epoch 88, loss 0.269553005695343
epoch 89, loss 0.2692657709121704
epoch 90, loss 0.26897716522216797
epoch 91, loss 0.2686888575553894
epoch 92, loss 0.2684010863304138
epoch 93, loss 0.2681140601634979
epoch 94, loss 0.26782718300819397
epoch 95, loss 0.26754051446914673
epoch 96, loss 0.267254114151001
epoch 97, loss 0.26696810126304626
epoch 98, loss 0.26668277382850647
epoch 99, loss 0.2663962244987488
epoch 100, loss 0.26611125469207764
epoch 101, loss 0.265826940536499
epoch 102, loss 0.26554325222969055
epoch 103, loss 0.26525741815567017
epoch 104, loss 0.26497429609298706
epoch 105, loss 0.2646913528442383
epoch 106, loss 0.26440733671188354
epoch 107, loss 0.26412448287010193
epoch 108, loss 0.2638419568538666
epoch 109, loss 0.2635599970817566
epoch 110, loss 0.26327764987945557
epoch 111, loss 0.2629950940608978
epoch 112, loss 0.26271411776542664
epoch 113, loss 0.26243287324905396
epoch 114, loss 0.26215147972106934
epoch 115, loss 0.2618717551231384
epoch 116, loss 0.26159149408340454
epoch 117, loss 0.2613113820552826
epoch 118, loss 0.2610318064689636
epoch 119, loss 0.26075202226638794
epoch 120, loss 0.2604730427265167
epoch 121, loss 0.2601938247680664
epoch 122, loss 0.25991564989089966
epoch 123, loss 0.25963765382766724
epoch 124, loss 0.25935983657836914
epoch 125, loss 0.25908204913139343
epoch 126, loss 0.25880545377731323
epoch 127, loss 0.25852853059768677
epoch 128, loss 0.25825077295303345
epoch 129, loss 0.2579750120639801
epoch 130, loss 0.25769931077957153
epoch 131, loss 0.2574223577976227
epoch 132, loss 0.25714734196662903
epoch 133, loss 0.25687164068222046
epoch 134, loss 0.25659725069999695
epoch 135, loss 0.25632235407829285
epoch 136, loss 0.2560481131076813
epoch 137, loss 0.2557739019393921
epoch 138, loss 0.2555006146430969
epoch 139, loss 0.255227267742157
epoch 140, loss 0.2549532651901245
epoch 141, loss 0.25468137860298157
epoch 142, loss 0.2544081509113312
epoch 143, loss 0.25413593649864197
epoch 144, loss 0.25386378169059753
epoch 145, loss 0.25359177589416504
epoch 146, loss 0.2533215880393982
epoch 147, loss 0.25305062532424927
epoch 148, loss 0.2527799606323242
epoch 149, loss 0.25250911712646484
epoch 150, loss 0.25223881006240845
epoch 151, loss 0.25196751952171326
epoch 152, loss 0.25169822573661804
epoch 153, loss 0.25142911076545715
epoch 154, loss 0.2511600852012634
epoch 155, loss 0.25089162588119507
epoch 156, loss 0.2506222128868103
epoch 157, loss 0.2503542900085449
epoch 158, loss 0.25008654594421387
epoch 159, loss 0.24981799721717834
epoch 160, loss 0.2495514452457428
epoch 161, loss 0.2492850124835968
epoch 162, loss 0.24901774525642395
epoch 163, loss 0.2487517148256302
epoch 164, loss 0.24848470091819763
epoch 165, loss 0.24821801483631134
epoch 166, loss 0.2479534149169922
epoch 167, loss 0.24768801033496857
epoch 168, loss 0.24742209911346436
epoch 169, loss 0.24715843796730042
epoch 170, loss 0.24689345061779022
epoch 171, loss 0.2466294765472412
epoch 172, loss 0.2463652342557907
epoch 173, loss 0.24610117077827454
epoch 174, loss 0.24583733081817627
epoch 175, loss 0.24557442963123322
epoch 176, loss 0.24531146883964539
epoch 177, loss 0.24504907429218292
epoch 178, loss 0.24478688836097717
epoch 179, loss 0.24452488124370575
epoch 180, loss 0.24426360428333282
epoch 181, loss 0.24400198459625244
epoch 182, loss 0.24374108016490936
epoch 183, loss 0.24348023533821106
epoch 184, loss 0.24321945011615753
epoch 185, loss 0.24295909702777863
epoch 186, loss 0.2426999807357788
epoch 187, loss 0.24243924021720886
epoch 188, loss 0.2421802580356598
epoch 189, loss 0.24192039668560028
epoch 190, loss 0.24166186153888702
epoch 191, loss 0.24140286445617676
epoch 192, loss 0.2411452829837799
epoch 193, loss 0.24088692665100098
epoch 194, loss 0.24062839150428772
epoch 195, loss 0.240371435880661
epoch 196, loss 0.24011436104774475
epoch 197, loss 0.2398567944765091
epoch 198, loss 0.23960097134113312
epoch 199, loss 0.23934414982795715
epoch 200, loss 0.2390875518321991
epoch 201, loss 0.23883208632469177
epoch 202, loss 0.23857632279396057
epoch 203, loss 0.2383209764957428
epoch 204, loss 0.23806555569171906
epoch 205, loss 0.23781077563762665
epoch 206, loss 0.23755650222301483
epoch 207, loss 0.23730233311653137
epoch 208, loss 0.2370481789112091
epoch 209, loss 0.23679497838020325
epoch 210, loss 0.23654110729694366
epoch 211, loss 0.2362874448299408
epoch 212, loss 0.23603519797325134
epoch 213, loss 0.23578305542469025
epoch 214, loss 0.23552961647510529
epoch 215, loss 0.23527801036834717
epoch 216, loss 0.2350267916917801
epoch 217, loss 0.2347748577594757
epoch 218, loss 0.23452389240264893
epoch 219, loss 0.23427243530750275
epoch 220, loss 0.2340221405029297
epoch 221, loss 0.23377101123332977
epoch 222, loss 0.23352058231830597
epoch 223, loss 0.2332715541124344
epoch 224, loss 0.23302121460437775
epoch 225, loss 0.23277130722999573
epoch 226, loss 0.23252296447753906
epoch 227, loss 0.23227448761463165
epoch 228, loss 0.23202534019947052
epoch 229, loss 0.2317771464586258
epoch 230, loss 0.23152926564216614
epoch 231, loss 0.2312810868024826
epoch 232, loss 0.23103365302085876
epoch 233, loss 0.23078635334968567
epoch 234, loss 0.23053927719593048
epoch 235, loss 0.23029276728630066
epoch 236, loss 0.2300461232662201
epoch 237, loss 0.22980007529258728
epoch 238, loss 0.22955472767353058
epoch 239, loss 0.2293076515197754
epoch 240, loss 0.22906279563903809
epoch 241, loss 0.22881770133972168
epoch 242, loss 0.22857359051704407
epoch 243, loss 0.22832822799682617
epoch 244, loss 0.2280840426683426
epoch 245, loss 0.2278391420841217
epoch 246, loss 0.22759577631950378
epoch 247, loss 0.22735139727592468
epoch 248, loss 0.22710879147052765
epoch 249, loss 0.2268654853105545
epoch 250, loss 0.22662317752838135
epoch 251, loss 0.22638042271137238
epoch 252, loss 0.2261374294757843
epoch 253, loss 0.22589640319347382
epoch 254, loss 0.22565457224845886
epoch 255, loss 0.2254127562046051
epoch 256, loss 0.22517092525959015
epoch 257, loss 0.2249315083026886
epoch 258, loss 0.22469055652618408
epoch 259, loss 0.22444948554039001
epoch 260, loss 0.2242092341184616
epoch 261, loss 0.2239692509174347
epoch 262, loss 0.22372941672801971
epoch 263, loss 0.2234901636838913
epoch 264, loss 0.22325082123279572
epoch 265, loss 0.22301247715950012
epoch 266, loss 0.22277379035949707
epoch 267, loss 0.2225344479084015
epoch 268, loss 0.22229643166065216
epoch 269, loss 0.22205865383148193
epoch 270, loss 0.2218218743801117
epoch 271, loss 0.22158312797546387
epoch 272, loss 0.22134628891944885
epoch 273, loss 0.22111034393310547
epoch 274, loss 0.22087275981903076
epoch 275, loss 0.22063691914081573
epoch 276, loss 0.2204003632068634
epoch 277, loss 0.22016501426696777
epoch 278, loss 0.219928577542305
epoch 279, loss 0.21969421207904816
epoch 280, loss 0.2194582223892212
epoch 281, loss 0.21922290325164795
epoch 282, loss 0.21898964047431946
epoch 283, loss 0.21875450015068054
epoch 284, loss 0.21852059662342072
epoch 285, loss 0.21828603744506836
epoch 286, loss 0.21805214881896973
epoch 287, loss 0.21782031655311584
epoch 288, loss 0.21758683025836945
epoch 289, loss 0.2173534631729126
epoch 290, loss 0.21712107956409454
epoch 291, loss 0.21688847243785858
epoch 292, loss 0.21665653586387634
epoch 293, loss 0.21642501652240753
epoch 294, loss 0.216193288564682
epoch 295, loss 0.21596205234527588
epoch 296, loss 0.21572980284690857
epoch 297, loss 0.21549928188323975
epoch 298, loss 0.2152685821056366
epoch 299, loss 0.21503882110118866
epoch 300, loss 0.21480774879455566
epoch 301, loss 0.21457834541797638
epoch 302, loss 0.21434924006462097
epoch 303, loss 0.21411874890327454
epoch 304, loss 0.21389031410217285
epoch 305, loss 0.21366067230701447
epoch 306, loss 0.21343278884887695
epoch 307, loss 0.21320375800132751
epoch 308, loss 0.2129760980606079
epoch 309, loss 0.21274776756763458
epoch 310, loss 0.21252091228961945
epoch 311, loss 0.21229267120361328
epoch 312, loss 0.2120654284954071
epoch 313, loss 0.21183809638023376
epoch 314, loss 0.21161217987537384
epoch 315, loss 0.21138529479503632
epoch 316, loss 0.21115915477275848
epoch 317, loss 0.21093308925628662
epoch 318, loss 0.2107069194316864
epoch 319, loss 0.21048128604888916
epoch 320, loss 0.21025589108467102
epoch 321, loss 0.2100309133529663
epoch 322, loss 0.20980635285377502
epoch 323, loss 0.20958176255226135
epoch 324, loss 0.20935769379138947
epoch 325, loss 0.2091337889432907
epoch 326, loss 0.20890995860099792
epoch 327, loss 0.2086869776248932
epoch 328, loss 0.20846255123615265
epoch 329, loss 0.20823976397514343
epoch 330, loss 0.20801690220832825
epoch 331, loss 0.20779523253440857
epoch 332, loss 0.2075720578432083
epoch 333, loss 0.20734989643096924
epoch 334, loss 0.207127183675766
epoch 335, loss 0.20690631866455078
epoch 336, loss 0.20668430626392365
epoch 337, loss 0.20646338164806366
epoch 338, loss 0.20624224841594696
epoch 339, loss 0.20602257549762726
epoch 340, loss 0.2058015763759613
epoch 341, loss 0.2055804431438446
epoch 342, loss 0.20536132156848907
epoch 343, loss 0.20514142513275146
epoch 344, loss 0.20492127537727356
epoch 345, loss 0.20470301806926727
epoch 346, loss 0.20448391139507294
epoch 347, loss 0.20426468551158905
epoch 348, loss 0.20404569804668427
epoch 349, loss 0.2038276493549347
epoch 350, loss 0.20360930263996124
epoch 351, loss 0.2033914029598236
epoch 352, loss 0.20317375659942627
epoch 353, loss 0.20295631885528564
epoch 354, loss 0.20273897051811218
epoch 355, loss 0.20252224802970886
epoch 356, loss 0.2023058384656906
epoch 357, loss 0.20208808779716492
epoch 358, loss 0.20187243819236755
epoch 359, loss 0.20165666937828064
epoch 360, loss 0.20144124329090118
epoch 361, loss 0.20122432708740234
epoch 362, loss 0.20100970566272736
epoch 363, loss 0.20079490542411804
epoch 364, loss 0.20057909190654755
epoch 365, loss 0.20036515593528748
epoch 366, loss 0.20014996826648712
epoch 367, loss 0.19993630051612854
epoch 368, loss 0.19972196221351624
epoch 369, loss 0.19950857758522034
epoch 370, loss 0.199294775724411
epoch 371, loss 0.1990819275379181
epoch 372, loss 0.19886861741542816
epoch 373, loss 0.19865596294403076
epoch 374, loss 0.1984427273273468
epoch 375, loss 0.1982312798500061
epoch 376, loss 0.1980188488960266
epoch 377, loss 0.19780686497688293
epoch 378, loss 0.19759488105773926
epoch 379, loss 0.197383314371109
epoch 380, loss 0.19717171788215637
epoch 381, loss 0.19696074724197388
epoch 382, loss 0.19674964249134064
epoch 383, loss 0.1965392380952835
epoch 384, loss 0.19632910192012787
epoch 385, loss 0.19611942768096924
epoch 386, loss 0.19590966403484344
epoch 387, loss 0.19569972157478333
epoch 388, loss 0.195490762591362
epoch 389, loss 0.19528165459632874
epoch 390, loss 0.19507162272930145
epoch 391, loss 0.19486328959465027
epoch 392, loss 0.19465534389019012
epoch 393, loss 0.19444610178470612
epoch 394, loss 0.19423826038837433
epoch 395, loss 0.194031223654747
epoch 396, loss 0.1938226968050003
epoch 397, loss 0.19361570477485657
epoch 398, loss 0.19340796768665314
epoch 399, loss 0.19320163130760193
epoch 400, loss 0.19299452006816864
epoch 401, loss 0.19278714060783386
epoch 402, loss 0.19258150458335876
epoch 403, loss 0.192375048995018
epoch 404, loss 0.19216927886009216
epoch 405, loss 0.19196359813213348
epoch 406, loss 0.191758394241333
epoch 407, loss 0.19155295193195343
epoch 408, loss 0.19134730100631714
epoch 409, loss 0.1911439597606659
epoch 410, loss 0.19093912839889526
epoch 411, loss 0.19073493778705597
epoch 412, loss 0.19053038954734802
epoch 413, loss 0.1903264969587326
epoch 414, loss 0.19012261927127838
epoch 415, loss 0.18991973996162415
epoch 416, loss 0.1897164285182953
epoch 417, loss 0.18951338529586792
epoch 418, loss 0.18930935859680176
epoch 419, loss 0.1891074776649475
epoch 420, loss 0.18890510499477386
epoch 421, loss 0.18870294094085693
epoch 422, loss 0.18850168585777283
epoch 423, loss 0.1882992833852768
epoch 424, loss 0.18809756636619568
epoch 425, loss 0.18789707124233246
epoch 426, loss 0.18769516050815582
epoch 427, loss 0.18749502301216125
epoch 428, loss 0.18729332089424133
epoch 429, loss 0.18709351122379303
epoch 430, loss 0.18689265847206116
epoch 431, loss 0.18669365346431732
epoch 432, loss 0.1864931285381317
epoch 433, loss 0.18629436194896698
epoch 434, loss 0.18609416484832764
epoch 435, loss 0.1858951896429062
epoch 436, loss 0.18569675087928772
epoch 437, loss 0.18549761176109314
epoch 438, loss 0.18529897928237915
epoch 439, loss 0.18510064482688904
epoch 440, loss 0.18490183353424072
epoch 441, loss 0.18470527231693268
epoch 442, loss 0.18450728058815002
epoch 443, loss 0.18431003391742706
epoch 444, loss 0.18411226570606232
epoch 445, loss 0.18391521275043488
epoch 446, loss 0.1837185174226761
epoch 447, loss 0.18352225422859192
epoch 448, loss 0.1833258867263794
epoch 449, loss 0.18312963843345642
epoch 450, loss 0.18293263018131256
epoch 451, loss 0.18273743987083435
epoch 452, loss 0.18254177272319794
epoch 453, loss 0.18234661221504211
epoch 454, loss 0.18215200304985046
epoch 455, loss 0.18195633590221405
epoch 456, loss 0.1817614585161209
epoch 457, loss 0.1815676987171173
epoch 458, loss 0.18137270212173462
epoch 459, loss 0.18117934465408325
epoch 460, loss 0.1809845268726349
epoch 461, loss 0.18079116940498352
epoch 462, loss 0.18059727549552917
epoch 463, loss 0.1804044544696808
epoch 464, loss 0.18021124601364136
epoch 465, loss 0.1800190508365631
epoch 466, loss 0.1798257827758789
epoch 467, loss 0.17963269352912903
epoch 468, loss 0.17944113910198212
epoch 469, loss 0.1792488694190979
epoch 470, loss 0.17905692756175995
epoch 471, loss 0.17886461317539215
epoch 472, loss 0.17867445945739746
epoch 473, loss 0.17848312854766846
epoch 474, loss 0.17829182744026184
epoch 475, loss 0.1781007945537567
epoch 476, loss 0.17791017889976501
epoch 477, loss 0.17771980166435242
epoch 478, loss 0.17752958834171295
epoch 479, loss 0.17733964323997498
epoch 480, loss 0.17714965343475342
epoch 481, loss 0.17696043848991394
epoch 482, loss 0.17677125334739685
epoch 483, loss 0.17658214271068573
epoch 484, loss 0.17639218270778656
epoch 485, loss 0.17620375752449036
epoch 486, loss 0.1760152280330658
epoch 487, loss 0.17582742869853973
epoch 488, loss 0.17563794553279877
epoch 489, loss 0.1754508912563324
epoch 490, loss 0.1752634346485138
epoch 491, loss 0.17507480084896088
epoch 492, loss 0.17488788068294525
epoch 493, loss 0.17470011115074158
epoch 494, loss 0.17451369762420654
epoch 495, loss 0.17432650923728943
epoch 496, loss 0.1741403043270111
epoch 497, loss 0.17395375669002533
epoch 498, loss 0.17376847565174103
epoch 499, loss 0.17358197271823883
epoch 500, loss 0.17339560389518738
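To sanity-check the fit, here is a minimal sketch (output not shown): the data was generated from y = 5x + 2, so the learned weight and bias should be close to 5 and 2, and the first few predictions should be close to the targets.
```python
for name, param in model.named_parameters():
    print(name, param.data)

# compare a few predictions against the targets
predicted = model(Variable(torch.from_numpy(x_train))).data.numpy()
print(predicted[:5].ravel())
print(y_train[:5].ravel())
```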
# <center>Non Linear Regression Analysis</center>
If the data shows a curvy trend, then linear regression will not produce very accurate results when compared to a non-linear regression because, as the name implies, linear regression presumes that the data is linear.
Let's learn about non-linear regression and apply an example in Python. In this notebook, we fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014.
### Importing required libraries
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Though linear regression is very good for solving many problems, it cannot be used for all datasets. First, recall how linear regression models a dataset: it models a linear relation between a dependent variable y and an independent variable x, using a simple equation of degree 1, for example y = 2x + 3.
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 2*(x) + 3
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
#plt.figure(figsize=(8,6))
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
Non-linear regression models a relationship between independent variables $x$ and a dependent variable $y$ through a non-linear function. Essentially, any relationship that is not linear can be termed non-linear, and is usually represented by a polynomial of degree $k$ (the maximum power of $x$).
$$ \ y = a x^3 + b x^2 + c x + d \ $$
Non-linear functions can have elements like exponentials, logarithms, fractions, and others. For example: $$ y = \log(x)$$
Or even, more complicated such as :
$$ y = \log(a x^3 + b x^2 + c x + d)$$
Let's take a look at a cubic function's graph.
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 1*(x**3) + 1*(x**2) + 1*x + 3
y_noise = 20 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
As you can see, this function contains $x^3$ and $x^2$ terms. Also, the graph of this function is not a straight line over the 2D plane, so this is a non-linear function.
Some other types of non-linear functions are:
### Quadratic
$$ Y = X^2 $$
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = np.power(x,2)
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
### Exponential
An exponential function with base c is defined by $$ Y = a + b c^X$$ where b ≠ 0, c > 0, c ≠ 1, and X is any real number. The base, c, is constant and the exponent, X, is a variable.
```python
X = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
Y= np.exp(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
### Logarithmic
The response $y$ results from applying the logarithmic map from the input $x$ to the output variable $y$. It is one of the simplest forms of __log()__: i.e. $$ y = \log(x)$$
Note that instead of $x$, we can use $X$, which can be a polynomial representation of the $x$'s. In general form it would be written as
\begin{equation}
y = \log(X)
\end{equation}
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = np.log(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
### Sigmoidal/Logistic
$$ Y = a + \frac{b}{1+ c^{(X-d)}}$$
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = 1-4/(1+np.power(3, X-2))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
<a id="ref2"></a>
# Non-Linear Regression example
As an example, we're going to try to fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. We download a dataset with two columns: the first, a year between 1960 and 2014; the second, China's corresponding annual gross domestic product in US dollars for that year.
```python
import numpy as np
import pandas as pd
# downloading dataset
# !wget -nv -O china_gdp.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv
df = pd.read_csv("china_gdp.csv")
df.head(10)
```
__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
### Plotting the Dataset ###
This is what the datapoints look like. It kind of looks like either a logistic or an exponential function. The growth starts off slow, then from 2005 onward, the growth is very significant. And finally, it decelerates slightly in the 2010s.
```python
plt.figure(figsize=(8,5))
x_data, y_data = (df["Year"].values, df["Value"].values)
plt.plot(x_data, y_data, 'bo')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
```
### Choosing a model ###
From an initial look at the plot, we determine that the logistic function could be a good approximation,
since it has the property of starting with a slow growth, increasing growth in the middle, and then decreasing again at the end; as illustrated below:
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = 1.0 / (1.0 + np.exp(-X))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
```
The formula for the logistic function is the following:
$$ \hat{Y} = \frac{1}{1+e^{-\beta_1(X-\beta_2)}}$$
$\beta_1$: Controls the curve's steepness,
$\beta_2$: Slides the curve on the x-axis.
### Building The Model ###
Now, let's build our regression model and initialize its parameters.
```python
def sigmoid(x, Beta_1, Beta_2):
y = 1 / (1 + np.exp(-Beta_1*(x-Beta_2)))
return y
```
Let's look at a sample sigmoid line that might fit the data:
```python
beta_1 = 0.10
beta_2 = 1990.0
#logistic function
Y_pred = sigmoid(x_data, beta_1 , beta_2)
#plot initial prediction against datapoints
plt.plot(x_data, Y_pred*15000000000000.)
plt.plot(x_data, y_data, 'ro')
```
Our task here is to find the best parameters for our model. Let's first normalize our x and y:
```python
# Lets normalize our data
xdata =x_data/max(x_data)
ydata =y_data/max(y_data)
```
#### How do we find the best parameters for our fit line?
We can use __curve_fit__, which uses non-linear least squares to fit our sigmoid function to the data. It finds optimal values for the parameters so that the sum of the squared residuals of sigmoid(xdata, *popt) - ydata is minimized.
popt holds our optimized parameters.
```python
from scipy.optimize import curve_fit
popt, pcov = curve_fit(sigmoid, xdata, ydata)
#print the final parameters
print(" beta_1 = %f, beta_2 = %f" % (popt[0], popt[1]))
```
Now we plot our resulting regression model.
```python
x = np.linspace(1960, 2015, 55)
x = x/max(x)
plt.figure(figsize=(8,5))
y = sigmoid(x, *popt)
plt.plot(xdata, ydata, 'ro', label='data')
plt.plot(x,y, linewidth=3.0, label='fit')
plt.legend(loc='best')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
```
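As an extra sketch (not part of the original lab), the fitted model can estimate GDP for a given year by normalizing the year the same way as above and scaling the output back to dollars:
```python
year = 2015
gdp_norm = sigmoid(year / max(x_data), *popt)
print("Estimated GDP in %d: %.3e US dollars" % (year, gdp_norm * max(y_data)))
```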
## Practice
Can you calculate the accuracy of our model?
```python
# write your code here
msk = np.random.rand(len(df)) < 0.8
train_x = xdata[msk]
test_x = xdata[~msk]
train_y = ydata[msk]
test_y = ydata[~msk]
popt, pcov = curve_fit(sigmoid, train_x, train_y)
y_hat = sigmoid(test_x, *popt)
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.2f" % r2_score(y_hat , test_y) )
```
Double-click __here__ for the solution.
<!-- Your answer is below:
# split data into train/test
msk = np.random.rand(len(df)) < 0.8
train_x = xdata[msk]
test_x = xdata[~msk]
train_y = ydata[msk]
test_y = ydata[~msk]
# build the model using train set
popt, pcov = curve_fit(sigmoid, train_x, train_y)
# predict using test set
y_hat = sigmoid(test_x, *popt)
# evaluation
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.2f" % r2_score(y_hat , test_y) )
-->
## Want to learn more?
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: [SPSS Modeler](http://cocl.us/ML0101EN-SPSSModeler).
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at [Watson Studio](https://cocl.us/ML0101EN_DSX)
### Thanks for completing this lesson!
Notebook created by: <a href = "https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a>
<hr>
Copyright © 2018 [Cognitive Class](https://cocl.us/DX0108EN_CC). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
```python
import sympy as sm
```
```python
sm.init_printing() # improves symbolic math display
```
```python
mb, mc, Ib, Ic, r, l, d, g = sm.symbols('m_b, m_c, I_b, I_c, r, l, d, g')
```
```python
mb
```
```python
Ib
```
```python
r+Ib**2
```
```python
sm.sin(r) + sm.sqrt(mb)
```
```python
t = sm.symbols('t')
```
```python
theta = sm.Function('theta')(t)
theta
```
```python
omega = sm.Function('omega')(t)
omega
```
```python
sm.diff(theta, t)
```
```python
sm.diff(theta**2+sm.sin(theta**2/2), t)
```
```python
d = 2*r*sm.sin(theta/2)
d
```
```python
h = l - sm.sqrt(l**2 - d**2)
h
```
```python
v = sm.diff(h, t)
v
```
```python
v = h.diff(t)
v
```
```python
v = v.subs({theta.diff(t): omega})
v
```
```python
T = (mb + mc)*v**2/2
T
```
```python
sm.S(1)/2*(mb+mc)*v**2
```
```python
T = (mb + mc)*v**2/2 + (Ib + Ic)*omega**2/2 # no time derivatives!!
T
```
```python
U = (mb + mc)*g*h # no time derivatives
U
```
```python
L = T - U
```
```python
L.diff(omega)
```
```python
L.diff(theta)
```
```python
L.diff(omega).diff(t)
```
```python
L.diff(omega).diff(t).subs({theta.diff(t): omega})
```
```python
# Euler-Lagrange equation: d/dt(dL/d(omega)) - dL/d(theta)
f = L.diff(omega).diff(t).subs({theta.diff(t): omega}) - L.diff(theta)
f
```
```python
g = f.subs({omega.diff(t): 0})
g
```
```python
I_sys = f.coeff(omega.diff(t))
I_sys
```
```python
omegadot = -g/I_sys
omegadot
```
```python
f.diff(omega.diff(t))
```
```python
from resonance.nonlinear_systems import SingleDoFNonLinearSystem
import numpy as np
```
```python
sys = SingleDoFNonLinearSystem()
```
```python
sys.constants['m_b'] = 1 # kg
sys.constants['m_c'] = 2.6 # kg
sys.constants['r'] = 0.3 # m
sys.constants['l'] = 0.75 # m
sys.constants['g'] = 9.81 # m/s**2
sys.constants['I_b'] = 1.0*0.3**2 # kg m**2
sys.constants['I_c'] = 2.6*0.3**2 # kg m**2
```
```python
sys.constants
```
{'m_b': 1,
'm_c': 2.6,
'r': 0.3,
'l': 0.75,
'g': 9.81,
'I_b': 0.09,
'I_c': 0.23399999999999999}
```python
sys.coordinates['theta'] = np.deg2rad(10.0) # rad
sys.speeds['omega'] = 0.0
```
```python
str(omegadot).replace('(t)', '').replace('sin(', 'np.sin(').replace('cos(', 'np.cos(')
```
'(-2*g*r**2*(m_b + m_c)*np.sin(theta/2)*np.cos(theta/2)/sqrt(l**2 - 4*r**2*np.sin(theta/2)**2) - 8*r**6*(m_b + m_c)*omega**2*np.sin(theta/2)**3*np.cos(theta/2)**3/(l**2 - 4*r**2*np.sin(theta/2)**2)**2 + 2*r**4*(m_b + m_c)*omega**2*np.sin(theta/2)**3*np.cos(theta/2)/(l**2 - 4*r**2*np.sin(theta/2)**2) - 2*r**4*(m_b + m_c)*omega**2*np.sin(theta/2)*np.cos(theta/2)**3/(l**2 - 4*r**2*np.sin(theta/2)**2))/(I_b + I_c + 4*r**4*(m_b + m_c)*np.sin(theta/2)**2*np.cos(theta/2)**2/(l**2 - 4*r**2*np.sin(theta/2)**2))'
```python
omegadot.free_symbols
```
```python
sys.states
```
_StatesDict([('theta', 0.17453292519943295), ('omega', 0.0)])
```python
def calc_derivatives(theta, omega, l, g, r, m_b, m_c, I_b, I_c):
thetadot = omega
omegadot = ((-2*g*r**2*(m_b + m_c)*np.sin(theta/2)*np.cos(theta/2)/np.sqrt(l**2 - 4*r**2*np.sin(theta/2)**2) -
8*r**6*(m_b + m_c)*omega**2*np.sin(theta/2)**3*np.cos(theta/2)**3/(l**2 - 4*r**2*np.sin(theta/2)**2)**2 + 2*r**4*(m_b + m_c)*omega**2*np.sin(theta/2)**3*np.cos(theta/2)/(l**2 - 4*r**2*np.sin(theta/2)**2) - 2*r**4*(m_b + m_c)*omega**2*np.sin(theta/2)*np.cos(theta/2)**3/(l**2 - 4*r**2*np.sin(theta/2)**2))/(I_b + I_c +
4*r**4*(m_b + m_c)*np.sin(theta/2)**2*np.cos(theta/2)**2/(l**2 - 4*r**2*np.sin(theta/2)**2)))
return thetadot, omegadot # order matters here, match sys.states
```
```python
calc_derivatives(1.0, 2.0, 10.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0)
```
```python
sys.diff_eq_func = calc_derivatives
```
```python
traj = sys.free_response(10.0)
```
```python
%matplotlib widget
```
```python
traj.plot(subplots=True)
```
Canvas(toolbar=Toolbar(toolitems=[('Home', 'Reset original view', 'home', 'home'), ('Back', 'Back to previous …
array([<matplotlib.axes._subplots.AxesSubplot object at 0x7f928324bd30>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f92831c2c88>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f9283182128>],
dtype=object)
```python
def calc_height(theta, r, l):
return l - np.sqrt(l**2 - (2*r*np.sin(theta/2))**2)
```
```python
sys.add_measurement('h', calc_height)
```
```python
traj = sys.free_response(10.0)
```
```python
traj
```
               theta  theta_acc     omega         h
    time
    0.00    0.174533  -2.265874  0.000000  0.001825
    0.01    0.174420  -2.264439 -0.022654  0.001823
    0.02    0.174080  -2.260137 -0.045279  0.001816
    0.03    0.173514  -2.252970 -0.067847  0.001804
    0.04    0.172723  -2.242947 -0.090329  0.001788
    ...          ...        ...       ...       ...
    9.96   -0.032323   0.424660  0.619826  0.000063
    9.97   -0.026105   0.343017  0.623665  0.000041
    9.98   -0.019852   0.260890  0.626685  0.000024
    9.99   -0.013574   0.178396  0.628881  0.000011
    10.00  -0.007277   0.095650  0.630252  0.000003

    1001 rows × 4 columns
```python
traj.plot(subplots=True)
```
Canvas(toolbar=Toolbar(toolitems=[('Home', 'Reset original view', 'home', 'home'), ('Back', 'Back to previous …
array([<matplotlib.axes._subplots.AxesSubplot object at 0x7f92830ae5f8>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f92830dc860>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f928308ec88>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f9283042f60>],
dtype=object)
```python
f
```
```python
f_lin = f.subs({
sm.sin(theta/2): theta/2,
sm.cos(theta/2): 1,
omega**2: 0,
theta**2: 0,
})
f_lin
```
```python
w = sm.symbols('w', real=True, positive=True)
```
```python
sm.sqrt(w**2)
```
```python
m = f_lin.coeff(omega.diff(t))
m
```
```python
k = f_lin.coeff(theta)
k
```
```python
wn = sm.sqrt(k/m)
wn
```
```python
period = 2*sm.pi/wn
period
```
```python
T = sm.symbols('T')
```
```python
period_eq = sm.Eq(T, period)
period_eq
```
```python
sm.solve(period_eq, Ic)
```
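As a hedged follow-up sketch (not in the original notebook), the solved expression can be evaluated numerically to estimate $I_c$ from a measured oscillation period. All numbers below are illustrative assumptions only; the symbols come from the cells above (the gravity symbol is recreated because the name `g` was rebound earlier).

```python
# Sketch: estimate I_c from an assumed measured period T = 1.2 s
Ic_expr = sm.solve(period_eq, Ic)[0]
grav = sm.symbols('g')  # the Python name g was reassigned to an expression above
Ic_expr.subs({T: 1.2, Ib: 0.09, grav: 9.81, r: 0.3, l: 0.75, mb: 1.0, mc: 2.6}).evalf()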
```python
from resonance.linear_systems import SingleDoFLinearSystem
```
```python
linsys = SingleDoFLinearSystem()
```
```python
linsys.constants['m_b'] = 1 # kg
linsys.constants['m_c'] = 2.6 # kg
linsys.constants['r'] = 0.3 # m
linsys.constants['l'] = 0.75 # m
linsys.constants['g'] = 9.81 # m/s**2
linsys.constants['I_b'] = 1.0*0.3**2 # kg m**2
linsys.constants['I_c'] = 2.6*0.3**2 # kg m**2
```
```python
linsys.coordinates['theta'] = np.deg2rad(10.0)
linsys.speeds['omega'] = 0.0
```
```python
m, k
```
```python
def calc_coeffs(I_b, I_c, g, r, m_b, m_c, l):
m = I_b + I_c
b = 0.0
k = g*r**2*(m_b+m_c)/l
return m, b, k
```
```python
linsys.canonical_coeffs_func = calc_coeffs
```
```python
traj = linsys.free_response(10.0)
```
```python
traj.plot(subplots=True)
```
Canvas(toolbar=Toolbar(toolitems=[('Home', 'Reset original view', 'home', 'home'), ('Back', 'Back to previous …
array([<matplotlib.axes._subplots.AxesSubplot object at 0x7f92832b7c88>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f9282f0dda0>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f9282ebbd68>],
dtype=object)
```python
from resonance.linear_systems import SingleDoFLinearSystem as CarModel
```
```python
sys = CarModel()
```
|
91d880766b0a17e240e72849bfae6430de0bfd75
| 474,290 |
ipynb
|
Jupyter Notebook
|
content/materials/notebooks/2020/l09_trifilar_with_sympy.ipynb
|
moorepants/eng122
|
7bcd502eae4ab0ec9d463389acca2290b263ba64
|
[
"CC-BY-4.0"
] | 2 |
2020-05-11T05:56:54.000Z
|
2020-12-20T13:44:12.000Z
|
content/materials/notebooks/2020/l09_trifilar_with_sympy.ipynb
|
moorepants/eng122
|
7bcd502eae4ab0ec9d463389acca2290b263ba64
|
[
"CC-BY-4.0"
] | 12 |
2016-09-20T01:26:33.000Z
|
2020-01-22T02:10:33.000Z
|
content/materials/notebooks/2020/l09_trifilar_with_sympy.ipynb
|
moorepants/eng122
|
7bcd502eae4ab0ec9d463389acca2290b263ba64
|
[
"CC-BY-4.0"
] | 4 |
2017-06-09T01:28:45.000Z
|
2020-02-29T02:36:18.000Z
| 250.681818 | 46,760 | 0.877337 | true | 3,405 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.90053 | 0.7773 | 0.699982 |
__label__yue_Hant
| 0.247435 | 0.464623 |
**SVM Example**
**A hyperplane in space is defined as**
<font size="3">
$$\mathbf{w}^T\mathbf{x} + b = 0$$
</font>
**SVM Margin**
<font size="3">
$$margin = \underset{x}{min}\; \frac{y_n(\mathbf{w}^Tx_n+b)}{{\left \| \mathbf{w} \right \|}_2}$$
_SVM optimization is the problem of finding $\mathbf{w}$ and $b$ such that $margin$ is maximized._
\begin{equation}
\tag{1}
(\mathbf{w},b) = arg\;\underset{\mathbf{w},b}{max}\left \{ \underset{n}{min}\;\frac{y_n(\mathbf{w}^Tx_n+b)}{{\left \| \mathbf{w} \right \|}_2} \right \}= arg\;\underset{\mathbf{w},b}{max}\left \{\frac{1}{{{\left \| \mathbf{w} \right \|}_2}} \underset{n}{min}\;y_n(\mathbf{w}^Tx_n+b) \right \}
\end{equation}
Note that: $\forall n, \; y_n(\mathbf{w}^Tx_n + b) \geq 1$
</font>
<font size="3">
Eq. (1) can be turned into a constrained optimization problem as follows:
\begin{equation}
\tag{2}
(\mathbf{w},b) = arg\;\underset{\mathbf{w},b}{min} \frac{1}{2} {\left \| \mathbf{w} \right \|}_2^2
\end{equation}
subject to
$$1-y_n(\mathbf{w}^Tx_n+b) \leq 0,\; \forall n=1,2,...,N$$
</font>
### Lagrangian of SVM
<font size="3">
Lagrangian of (2) is:
\begin{equation}
\tag{3}
\mathbf{L}(\mathbf{w},b,\lambda) = \frac{1}{2} {\left \| \mathbf{w} \right \|}_2^2 \;+\; \sum_{n=1}^{N}\lambda_n(1-y_n(\mathbf{w}^Tx_n+b))
\end{equation}
with
$\lambda=[\lambda_1,\lambda_2,...,\lambda_N]^T$ and $\lambda_n \geq 0,\;\forall n=1,2,...,N$
</font>
### SVM Dual Lagrangian function
<font size="3">
\begin{equation}
\tag{4}
g(\lambda) = \underset{\mathbf{w},b}{min}\;\mathbf{L}(\mathbf{w},b,\lambda)
\end{equation}
with $$\lambda\succeq0$$
It's equivalent:
\begin{equation}
\nabla_{w,b,\lambda} \mathbf{L}(\mathbf{w},b,\lambda) = 0
\end{equation}
\begin{cases}
\frac{\partial\mathbf{L}(\mathbf{w}, b, \lambda)}{\partial\mathbf{w}} = \mathbf{w} - \sum_{n=1}^{N}\lambda_n y_n\mathbf{x}_n = 0\\
\frac{\partial\mathbf{L}(\mathbf{w}, b, \lambda)}{\partial b} = -\sum_{n=1}^{N}\lambda_n y_n = 0
\end{cases}
By solving (4), we get
\begin{equation}
\tag{5}
g(\lambda)=\sum_{n=1}^{N}\lambda_n - \frac{1}{2}\sum_{n=1}^{N}\sum_{m=1}^{N}\lambda_n\lambda_my_ny_m\mathbf{x}_n^T\mathbf{x}_m
\end{equation}
</font>
**Matrix Representation**
<font size="3">
Set $\mathbf{V}=[y_1x_1,y_2x_2,...,y_Nx_N]$
and vector $\mathbf{1}=[1,1,...,1]^T$, $g(\lambda)$ in (5) can then be represented as:
\begin{equation}
\tag{6}
g(\lambda)=-\frac{1}{2}\lambda^T\mathbf{V}^T\mathbf{V}\lambda + \mathbf{1}^T\lambda
\end{equation}
</font>
<font size="3">
Let $\mathbf{K}=\mathbf{V}^T\mathbf{V}$, we can prove that $\mathbf{K} \succeq 0$
(6) can be then represented as:
\begin{equation}
\tag{7}
g(\lambda) = -\frac{1}{2}\lambda^T\mathbf{K}\lambda + \mathbf{1}^T\lambda
\end{equation}
$g(\lambda)$ is a concave function
</font>
### SVM Dual Lagrangian problem
<font size="3">
We want to solve the dual problem of (7), which is:
\begin{equation}
\tag{8}
\lambda=arg\; \underset{\lambda}{max}\;g(\lambda)
\end{equation}
subject to: $$\lambda \succeq 0$$
$$\sum_{n=1}^{N}\lambda_ny_n=0$$
</font>
<font size="3">
We can prove that
\begin{equation}
\tag{9}
b = \frac{1}{N_{S}}\sum_{n\in S}(y_n-\mathbf{w}^Tx_n) = \frac{1}{N_{S}}\sum_{n\in S} \left ( y_n - \sum_{m \in S} \lambda_my_mx_m^Tx_n \right )
\end{equation}
</font>
<font size="3">
\begin{equation}
\tag{10}
\mathbf{w} = \sum_{m \in S} \lambda_my_mx_m
\end{equation}
</font>
### Self-programming
```python
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cdist
np.random.seed(22)
#Generate simulation data
means = [[2, 2], [4, 2]]
cov = [[.3, .2], [.2, .3]]
N = 10
X0 = np.random.multivariate_normal(means[0], cov, N) # class 1
X1 = np.random.multivariate_normal(means[1], cov, N) # class -1
X = np.concatenate((X0.T, X1.T), axis = 1) # all data
y = np.concatenate((np.ones((1, N)), -1*np.ones((1, N))), axis = 1) # labels
```
##### Solving equation (8) by using CVXOPT
```python
from cvxopt import matrix, solvers
# build K
V = np.concatenate((X0.T, -X1.T), axis = 1)
K = matrix(V.T.dot(V)) # see definition of V, K near eq (6)
p = matrix(-np.ones((2*N, 1))) # all-one vector
# build A, b, G, h
G = matrix(-np.eye(2*N)) # for all lambda_n >= 0
h = matrix(np.zeros((2*N, 1)))
A = matrix(y) # the equality constraint is actually y^T lambda = 0
print(A)
b = matrix(np.zeros((1, 1)))
print(b)
solvers.options['show_progress'] = False
sol = solvers.qp(K, p, G, h, A, b)
l = np.array(sol['x'])
print('lambda = ')
print(l.T)
```
[ 1.00e+00 1.00e+00 1.00e+00 1.00e+00 1.00e+00 1.00e+00 1.00e+00 ... ]
[ 0.00e+00]
lambda =
[[8.54018321e-01 2.89132533e-10 1.37095535e+00 6.36030818e-10
4.04317408e-10 8.82390106e-10 6.35001881e-10 5.49567576e-10
8.33359230e-10 1.20982928e-10 6.86678649e-10 1.25039745e-10
2.22497367e+00 4.05417905e-09 1.26763684e-10 1.99008949e-10
2.13742578e-10 1.51537487e-10 3.75329509e-10 3.56161975e-10]]
```python
epsilon = 1e-6 # just a small number, greater than 1e-9
print(np.where(l > epsilon))
S = np.where(l > epsilon)[0]
print(S)
VS = V[:, S]
print(V)
print(VS)
XS = X[:, S]
yS = y[:, S]
lS = l[S]
# calculate w and b
w = VS.dot(lS)
b = np.mean(yS.T - w.T.dot(XS))
print('w = ', w.T)
print('b = ', b)
```
(array([ 0, 2, 12]), array([0, 0, 0]))
[ 0 2 12]
[[ 2.37319011 1.51261889 2.4696794 1.78736889 1.81231157 2.03717355
1.53790057 2.29312867 1.38805594 1.57279694 -3.42746579 -4.24760864
-3.33595491 -3.69420104 -4.53897645 -3.3071994 -4.13924705 -4.47383468
-4.00512009 -4.28205624]
[ 1.71875981 1.40558943 2.02144973 1.29380961 1.56119497 1.93397133
1.87434722 2.76537389 1.86419379 0.90707347 -0.71254431 -2.39846497
-1.61731637 -1.94273986 -2.54957308 -0.19362396 -2.09561534 -2.41269466
-1.89290099 -1.79675607]]
[[ 2.37319011 2.4696794 -3.33595491]
[ 1.71875981 2.02144973 -1.61731637]]
w = [[-2.00984381 0.64068336]]
b = 4.668560633868063
### Using scikit-learn
```python
from sklearn.svm import SVC
y1 = y.reshape((2*N,))
X1 = X.T # each sample is one row
clf = SVC(kernel = 'linear', C = 1e5) # just a big number
clf.fit(X1, y1)
w = clf.coef_
b = clf.intercept_
print('w = ', w)
print('b = ', b)
```
w = [[-2.00971102 0.64194082]]
b = [4.66595309]
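As a hedged sketch (not part of the original tutorial), the fitted hyperplane can be used to classify new points with $\text{sign}(\mathbf{w}^T\mathbf{x} + b)$. The two test points below are illustrative assumptions chosen near the two class means; `w` and `b` come from the scikit-learn fit above.

```python
# Sketch: classify two illustrative points with the learned hyperplane
new_points = np.array([[2.0, 2.0], [4.0, 2.0]])  # assumed test points, one near each class mean
scores = new_points @ w.T + b                    # signed (unnormalized) distances to the hyperplane
print(scores.ravel())
print(np.sign(scores).ravel())                   # predicted labels: +1 (class 1) or -1 (class -1)
```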
```python
```
|
d37ab87cae1b9cb78334ca90c0617386fa0e31d6
| 10,678 |
ipynb
|
Jupyter Notebook
|
svm/tutorial/SVM-programming.ipynb
|
giangtranml/framgia-training
|
c7fb343bd43b1bceb241b447ff956febb99c94a8
|
[
"MIT"
] | 1 |
2020-08-06T09:39:42.000Z
|
2020-08-06T09:39:42.000Z
|
svm/tutorial/SVM-programming.ipynb
|
giangtranml/framgia-training
|
c7fb343bd43b1bceb241b447ff956febb99c94a8
|
[
"MIT"
] | null | null | null |
svm/tutorial/SVM-programming.ipynb
|
giangtranml/framgia-training
|
c7fb343bd43b1bceb241b447ff956febb99c94a8
|
[
"MIT"
] | null | null | null | 28.026247 | 334 | 0.486795 | true | 2,691 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.841826 | 0.73412 | 0.618001 |
__label__eng_Latn
| 0.259011 | 0.274153 |
# Exercise Sheet 3: sWeights
* [Task 1](#Task-1)
* [Task 2](#Task-2)
An experimental distribution in the variables $(x, m)$ has a signal component $s(x, m)$ = $s(x)s(m)$ and a background component $b(x,m)$ = $b(x)b(m)$. The allowed region is $0 < x < 1$ and $0 < m < 1$. Let $s(m)$ be a Gaussian with mean $\mu = 0.5$ and standard deviation $\sigma = 0.05$. The distributions of the other components are obtained from uniformly distributed random numbers $z$: for $s(x)$ use $x = -0.2\ln{z}$, for $b(m)$ use $m = \sqrt{z}$, and for $b(x)$ the transformation $x = 1 - \sqrt{z}$.
For the two assumed efficiency functions
* $\varepsilon(x, m) = 1$
* $\varepsilon(x, m) = (x + m) / 2$
generate data sets of pairs $(x, m)$ comprising 20000 accepted signal events and 100000 accepted background events.
Now consider the combined $m$-distribution and parametrize it by
\begin{equation}
f(m) = s(m) + b(m)
\end{equation}
with
\begin{equation}
s(m) = p_0 \exp\left(-\frac{(m - p_1)^2}{2p_2^2}\right)
\end{equation}
and
\begin{equation}
b(m) = p_3 + p_4m + p_5m^2 + p_6\sqrt{m} \,.
\end{equation}
For the case $\varepsilon(x, m) = (x + m)/2$, also use the above parametrization to describe the $m_c$ and $m_{cc}$ distributions, in which each $m$ value is weighted with $1/\varepsilon(x, m)$ and $1/\varepsilon^2(x, m)$ respectively; these distributions are needed for the correct treatment of non-constant efficiencies.
---
```python
```
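A hedged sketch (not part of the assignment text) of the inverse-transform recipes described above. It only draws the raw signal and background components; applying the acceptance given by $\varepsilon(x, m)$ is left to the tasks below, and the random seed and function names are illustrative assumptions.

```python
# Sketch: generate the component distributions described in the exercise text
import numpy as np

rng = np.random.default_rng(0)

def sample_signal(n):
    m = rng.normal(0.5, 0.05, n)                 # s(m): Gaussian, mu = 0.5, sigma = 0.05
    x = -0.2 * np.log(rng.uniform(size=n))       # s(x): x = -0.2 ln z
    return x, m

def sample_background(n):
    m = np.sqrt(rng.uniform(size=n))             # b(m): m = sqrt(z)
    x = 1.0 - np.sqrt(rng.uniform(size=n))       # b(x): x = 1 - sqrt(z)
    return x, m

# under epsilon(x, m) = 1 every generated event is accepted
x_s, m_s = sample_signal(20000)
x_b, m_b = sample_background(100000)
```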
---
## Task 1
For both efficiency functions, determine the sWeights $w(m)$ from the observed $m$-distributions, and use $w(m)/\varepsilon(x, m)$ to project the distribution $N_{s}s(x)$ out of the data. For both efficiency functions, compare the result with the expectation.
---
```python
```
---
## Task 2
For $\varepsilon(x, m) = (x + m)/2$, determine the correct sWeights from the data weighted with $1/\varepsilon(x, m)$, taking the function $\varepsilon(x, m)$ into account in the determination of $w(m)$. Use the correct sWeights together with $w(m)/\varepsilon(x, m)$ to extract the distribution $N_{s}s(x)$.
```python
```
|
0b1f566d9513451035eac5e678cfdf7501532a43
| 3,686 |
ipynb
|
Jupyter Notebook
|
assignments/3.ipynb
|
kdungs/teaching-SMD2-2016
|
3bd58c56c952204d198ee676e0a1d23396b8f4be
|
[
"MIT"
] | 4 |
2016-02-22T13:50:27.000Z
|
2017-02-13T09:14:20.000Z
|
assignments/3.ipynb
|
kdungs/teaching-SMD2-2016
|
3bd58c56c952204d198ee676e0a1d23396b8f4be
|
[
"MIT"
] | null | null | null |
assignments/3.ipynb
|
kdungs/teaching-SMD2-2016
|
3bd58c56c952204d198ee676e0a1d23396b8f4be
|
[
"MIT"
] | null | null | null | 32.619469 | 563 | 0.568638 | true | 859 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.888759 | 0.798187 | 0.709396 |
__label__deu_Latn
| 0.98843 | 0.486495 |
# Calculus Computations No. 4: Definite Integrals, Part 2 (Integration by Substitution for Definite Integrals)
### Student ID [_________] Class [_____] Class number [_____] Name [_______________]
##### The method of integration by substitution
Substitution formula 1: in $ \int_{a}^{b} f(x)dx $, set $x = g(t)$
$$ \int_{a}^{b} f(x)dx = \int_{\alpha}^{\beta} f(g(t))g'(t)dt $$ where $ a = g(\alpha) , b = g(\beta) $
Alternatively, substitution formula 2: set $ t = g(x)$
$$ \int_{a}^{b} f(g(x))g'(x)dx = \int_{\alpha}^{\beta} f(t)dt $$ where $ \alpha = g(a) , \beta = g(b) $
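A hedged sketch (not part of the worksheet) checking formula 2 on the first problem below, $\int_{2}^{3}(x+1)^2\,dx$, with the substitution $t = x + 1$; the symbol names are local to this sketch.

```python
# Sketch: direct evaluation vs. evaluation after the substitution t = x + 1
import sympy as sm

x, u = sm.symbols('x u')
direct = sm.integrate((x + 1)**2, (x, 2, 3))
substituted = sm.integrate(u**2, (u, 3, 4))  # limits: t = g(2) = 3, t = g(3) = 4
direct, substituted                          # both give 37/3
```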
```python
from sympy import *
x, n , y, a = symbols('x n y a')
init_printing()
m ='4//2'
i =0
```
```python
expr = (x+1)**2
itg = Integral(expr,(x,2,3))
i=i+1
print( 'No.',m,'---',i)
itg
```
```python
simplify(itg.doit())
```
```python
expr = x*sqrt(2**2-x*x)
itg = Integral(expr,(x,0,2))
i=i+1
print( 'No.',m,'---',i)
itg
```
```python
simplify(itg.doit())
```
```python
expr = sqrt(3**2-x*x)
itg = Integral(expr,(x,0,3))
i=i+1
print( 'No.',m,'---',i)
itg
```
```python
simplify(itg.doit())
```
```python
expr = x / (x**2 +1)
itg = Integral(expr,(x,0,2))
i=i+1
print( 'No.',m,'---',i)
itg
```
```python
simplify(itg.doit())
```
```python
expr = 1/(1- x)**2
itg = Integral(expr,(x,2,4))
i=i+1
print( 'No.',m,'---',i)
itg
```
```python
simplify(itg.doit())
```
```python
expr = x/sqrt(4+x**2)
itg = Integral(expr,(x,0,1))
i=i+1
print( 'No.',m,'---',i)
itg
```
```python
simplify(itg.doit())
```
```python
expr = (sin(x)**2 )*cos(x)
itg = Integral(expr,(x,0,pi/2))
i=i+1
print( 'No.',m,'---',i)
itg
```
```python
simplify(itg.doit())
```
```python
expr = 1/(x*log(x))
itg = Integral(expr,(x,exp(1),exp(1)**2))
i=i+1
print( 'No.',m,'---',i)
itg
```
```python
simplify(itg.doit())
```
```python
```
|
6427932c0bab6ea497ca8e97954f6b8430a17ad0
| 36,364 |
ipynb
|
Jupyter Notebook
|
07_20181118-sekibun-6-2-Ex&ans.ipynb
|
kt-pro-git-1/Calculus_Differential_Equation-public
|
d5deaf117e6841c4f6ceb53bc80b020220fd4814
|
[
"MIT"
] | 1 |
2019-07-10T11:33:18.000Z
|
2019-07-10T11:33:18.000Z
|
07_20181118-sekibun-6-2-Ex&ans.ipynb
|
kt-pro-git-1/Calculus_Differential_Equation-public
|
d5deaf117e6841c4f6ceb53bc80b020220fd4814
|
[
"MIT"
] | null | null | null |
07_20181118-sekibun-6-2-Ex&ans.ipynb
|
kt-pro-git-1/Calculus_Differential_Equation-public
|
d5deaf117e6841c4f6ceb53bc80b020220fd4814
|
[
"MIT"
] | null | null | null | 59.418301 | 2,856 | 0.768122 | true | 864 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.926304 | 0.782662 | 0.724983 |
__label__roh_Latn
| 0.13183 | 0.522711 |
# Characterization of Systems in the Time Domain
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Impulse Response
The response $y(t)$ of a linear time-invariant (LTI) system $\mathcal{H}$ to an arbitrary input signal $x(t)$ is derived in the following. The input signal can be represented as an integral when applying the [sifting-property of the Dirac impulse](../continuous_signals/standard_signals.ipynb#Dirac-Impulse)
\begin{equation}
x(t) = \int_{-\infty}^{\infty} x(\tau) \cdot \delta(t-\tau) \; d \tau
\end{equation}
The output signal of the system is then given as
\begin{equation}
y(t) = \mathcal{H} \left\{ \int_{-\infty}^{\infty} x(\tau) \cdot \delta(t-\tau) \; d \tau \right\}
\end{equation}
The integration and system response operator can be exchanged under the assumption that the system is linear
\begin{equation}
y(t) = \int_{-\infty}^{\infty} x(\tau) \cdot \mathcal{H} \left\{ \delta(t-\tau) \right\} \; d \tau
\end{equation}
where $\mathcal{H} \{\cdot\}$ was only applied to the Dirac impulse, since $x(\tau)$ can be regarded as a constant factor with respect to the time $t$. It is obvious that the response of a system to a Dirac impulse plays an important role in the calculation of the output signal for arbitrary input signals.
The response of a system to a Dirac impulse as input signal is denoted as [*impulse response*](https://en.wikipedia.org/wiki/Impulse_response). It is defined as
\begin{equation}
h(t) = \mathcal{H} \left\{ \delta(t) \right\}
\end{equation}
If the system is time-invariant, the response to a shifted Dirac impulse is $\mathcal{H} \left\{ \delta(t-\tau) \right\} = h(t-\tau)$. Hence, for an LTI system we finally get
\begin{equation}
y(t) = \int_{-\infty}^{\infty} x(\tau) \cdot h(t-\tau) \; d \tau
\end{equation}
Due to its relevance in the theory of LTI systems, this operation is explicitly termed as [*convolution*](https://en.wikipedia.org/wiki/Convolution). It is commonly abbreviated by $*$, hence for above integral we get $y(t) = x(t) * h(t)$.
The properties of an LTI system are entirely characterized by its impulse response. The response $y(t)$ of a system to an arbitrary input signal $x(t)$ is given by the convolution of the input signal $x(t)$ with its impulse response $h(t)$.
**Example**
The following example considers an LTI system whose relation between input $x(t)$ and output $y(t)$ is given by an ordinary differential equation (ODE) with constant coefficients
\begin{equation}
y(t) + \frac{d}{dt} y(t) = x(t)
\end{equation}
The system response is computed for $x(t) = e^{- 2 t} \cdot \epsilon(t)$ by
1. explicitly solving the ODE and by
2. computing the impulse response $h(t)$ and convolution with the input signal.
The solution should fulfill the initial conditions $y(t)\big\vert_{t = 0-} = 0$ and $\frac{d}{dt}y(t)\big\vert_{t = 0-} = 0$ due to causality.
First the ODE is defined in `SymPy`
```python
%matplotlib inline
import sympy as sym
sym.init_printing()
t = sym.symbols('t', real=True)
x = sym.Function('x')(t)
y = sym.Function('y')(t)
ode = sym.Eq(y + y.diff(t) , x)
ode
```
The ODE is solved for the given input signal in order to calculate the output signal. Note that the integration constant is set to zero to fulfill the initial conditions
```python
solution = sym.dsolve(ode.subs(x, sym.exp(-2*t)*sym.Heaviside(t)))
y1 = solution.rhs.subs('C1', 0)
y1
```
Lets plot the output signal derived by explicit solution of the ODE
```python
sym.plot(y1, (t,-1,10), ylabel=r'$y(t)$');
```
The impulse response $h(t)$ is computed by solving the ODE for a Dirac impulse as input signal, $x(t) = \delta(t)$
```python
solution2 = sym.dsolve(ode.subs(x, sym.DiracDelta(t)))
h = solution2.rhs.subs('C1', 0)
h
```
Lets plot the impulse response $h(t)$ of the LTI system
```python
sym.plot(h, (t,-1,10), ylabel=r'$h(t)$');
```
As alternative to the explicit solution of the ODE, the system response is computed by evaluating the convolution integral. Since `SymPy` cannot handle the Heaviside function properly in integrands, the convolution integral is first simplified. Both the input signal $x(t)$ and the impulse response $h(t)$ are causal signals. Hence, the convolution integral degenerates to
\begin{equation}
y(t) = \int_{0}^{t} x(\tau) \cdot h(t - \tau) \; d\tau
\end{equation}
for $t \geq 0$. Note that $y(t) = 0$ for $t<0$.
```python
tau = sym.symbols('tau', real=True)
y2 = sym.integrate(sym.exp(-2*tau) * h.subs(sym.Heaviside(t), 1).subs(t, t-tau), (tau, 0, t))
y2
```
Lets plot the output signal derived by evaluation of the convolution
```python
sym.plot(y2, (t,0,10), ylabel=r'$y(t)$');
```
**Exercise**
* Compare the output signal derived by explicit solution of the ODE with the signal derived by convolution. Are both equal?
* Check if the impulse response $h(t)$ is a solution of the ODE by manual calculation. Hint $\frac{d}{dt} \epsilon(t) = \delta(t)$.
* Check the solution of the convolution integral by manual calculation including the Heaviside functions.
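A hedged sketch (not part of the original notebook) for the first exercise item: a numerical spot check comparing the explicit ODE solution `y1` with the convolution result `y2` at a few sample times; both variables are assumed from the cells above.

```python
# Sketch: evaluate both solutions at a few positive times and compare
for tv in (0.5, 1.0, 2.0, 5.0):
    print(tv, y1.subs(t, tv).evalf(), y2.subs(t, tv).evalf())
```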
**Copyright**
The notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by Sascha Spors.
|
38a5fa1b5b37c7b41e0a397a697eaafada0d6aaf
| 56,402 |
ipynb
|
Jupyter Notebook
|
systems_time_domain/impulse_response.ipynb
|
xushoucai/signals-and-systems-lecture
|
30dbbf9226d93b454639955f5462d57546a921c5
|
[
"MIT"
] | 1 |
2019-01-11T02:04:18.000Z
|
2019-01-11T02:04:18.000Z
|
systems_time_domain/impulse_response.ipynb
|
xushoucai/signals-and-systems-lecture
|
30dbbf9226d93b454639955f5462d57546a921c5
|
[
"MIT"
] | null | null | null |
systems_time_domain/impulse_response.ipynb
|
xushoucai/signals-and-systems-lecture
|
30dbbf9226d93b454639955f5462d57546a921c5
|
[
"MIT"
] | null | null | null | 161.610315 | 15,356 | 0.888993 | true | 1,628 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.843895 | 0.870597 | 0.734693 |
__label__eng_Latn
| 0.980126 | 0.545269 |
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Transformations,-Eigenvectors,-and-Eigenvalues" data-toc-modified-id="Transformations,-Eigenvectors,-and-Eigenvalues-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Transformations, Eigenvectors, and Eigenvalues</a></span><ul class="toc-item"><li><span><a href="#Linear-Transformations" data-toc-modified-id="Linear-Transformations-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Linear Transformations</a></span></li><li><span><a href="#Transformations-of-Magnitude-and-Amplitude" data-toc-modified-id="Transformations-of-Magnitude-and-Amplitude-1.2"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Transformations of Magnitude and Amplitude</a></span><ul class="toc-item"><li><span><a href="#Affine-Transformations" data-toc-modified-id="Affine-Transformations-1.2.1"><span class="toc-item-num">1.2.1&nbsp;&nbsp;</span>Affine Transformations</a></span></li></ul></li><li><span><a href="#Eigenvectors-and-Eigenvalues" data-toc-modified-id="Eigenvectors-and-Eigenvalues-1.3"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Eigenvectors and Eigenvalues</a></span></li><li><span><a href="#Eigendecomposition" data-toc-modified-id="Eigendecomposition-1.4"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Eigendecomposition</a></span></li><li><span><a href="#Rank-of-a-Matrix" data-toc-modified-id="Rank-of-a-Matrix-1.5"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Rank of a Matrix</a></span></li><li><span><a href="#Inverse-of-a-Square-Full-Rank-Matrix" data-toc-modified-id="Inverse-of-a-Square-Full-Rank-Matrix-1.6"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Inverse of a Square Full Rank Matrix</a></span></li></ul></li></ul></div>
# Transformations, Eigenvectors, and Eigenvalues
Matrices and vectors are used together to manipulate spatial dimensions. This has a lot of applications, including the mathematical generation of 3D computer graphics, geometric modeling, and the training and optimization of machine learning algorithms. We're not going to cover the subject exhaustively here; but we'll focus on a few key concepts that are useful to know when you plan to work with machine learning.
## Linear Transformations
You can manipulate a vector by multiplying it with a matrix. The matrix acts as a function that operates on an input vector to produce a vector output. Specifically, matrix multiplications of vectors are *linear transformations* that transform the input vector into the output vector.
For example, consider this matrix ***A*** and vector ***v***:
$$ A = \begin{bmatrix}2 & 3\\5 & 2\end{bmatrix} \;\;\;\; \vec{v} = \begin{bmatrix}1\\2\end{bmatrix}$$
We can define a transformation ***T*** like this:
$$ T(\vec{v}) = A\vec{v} $$
To perform this transformation, we simply calculate the dot product by applying the *RC* rule; multiplying each row of the matrix by the single column of the vector:
$$\begin{bmatrix}2 & 3\\5 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\2\end{bmatrix} = \begin{bmatrix}8\\9\end{bmatrix}$$
Here's the calculation in Python:
```python
import numpy as np
v = np.array([1,2])
A = np.array([[2,3],
[5,2]])
t = A@v
print (t)
```
[8 9]
In this case, both the input vector and the output vector have 2 components - in other words, the transformation takes a 2-dimensional vector and produces a new 2-dimensional vector; which we can indicate like this:
$$ T: \rm I\!R^{2} \to \rm I\!R^{2} $$
Note that the output vector may have a different number of dimensions from the input vector; so the matrix function might transform the vector from one space to another - or in notation, ${\rm I\!R}$<sup>n</sup> -> ${\rm I\!R}$<sup>m</sup>.
For example, let's redefine matrix ***A***, while retaining our original definition of vector ***v***:
$$ A = \begin{bmatrix}2 & 3\\5 & 2\\1 & 1\end{bmatrix} \;\;\;\; \vec{v} = \begin{bmatrix}1\\2\end{bmatrix}$$
Now if we once again define ***T*** like this:
$$ T(\vec{v}) = A\vec{v} $$
We apply the transformation like this:
$$\begin{bmatrix}2 & 3\\5 & 2\\1 & 1\end{bmatrix} \cdot \begin{bmatrix}1\\2\end{bmatrix} = \begin{bmatrix}8\\9\\3\end{bmatrix}$$
So now, our transformation transforms the vector from 2-dimensional space to 3-dimensional space:
$$ T: \rm I\!R^{2} \to \rm I\!R^{3} $$
Here it is in Python:
```python
import numpy as np
v = np.array([1,2])
A = np.array([[2,3],
[5,2],
[1,1]])
t = A@v
print (t)
```
[8 9 3]
```python
import numpy as np
v = np.array([1,2])
A = np.array([[1,2],
[2,1]])
t = A@v
print (t)
```
[5 4]
## Transformations of Magnitude and Amplitude
When you multiply a vector by a matrix, you transform it in at least one of the following two ways:
* Scale the length (*magnitude*) of the vector to make it longer or shorter
* Change the direction (*amplitude*) of the vector
For example consider the following matrix and vector:
$$ A = \begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \;\;\;\; \vec{v} = \begin{bmatrix}1\\0\end{bmatrix}$$
As before, we transform the vector ***v*** by multiplying it with the matrix ***A***:
\begin{equation}\begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}\end{equation}
In this case, the resulting vector has changed in length (*magnitude*), but has not changed its direction (*amplitude*).
Let's visualize that in Python:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[2,0],
[0,2]])
t = A@v
print (t)
# Plot v and t
vecs = np.array([t,v])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
```
The original vector ***v*** is shown in orange, and the transformed vector ***t*** is shown in blue - note that ***t*** has the same direction (*amplitude*) as ***v*** but a greater length (*magnitude*).
Now let's use a different matrix to transform the vector ***v***:
\begin{equation}\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}0\\1\end{bmatrix}\end{equation}
This time, the resulting vector has been changed to a different amplitude, but has the same magnitude.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[0,-1],
[1,0]])
t = A@v
print (t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'blue'], scale=10)
plt.show()
```
Now let's change the matrix one more time:
\begin{equation}\begin{bmatrix}2 & 1\\1 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\1\end{bmatrix}\end{equation}
Now our resulting vector has been transformed to a new amplitude *and* magnitude - the transformation has affected both direction and scale.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[2,1],
[1,2]])
t = A@v
print (t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'blue'], scale=10)
plt.show()
```
### Affine Transformations
An affine transformation multiplies a vector by a matrix and adds an offset vector, sometimes referred to as *bias*; like this:
$$T(\vec{v}) = A\vec{v} + \vec{b}$$
For example:
\begin{equation}\begin{bmatrix}5 & 2\\3 & 1\end{bmatrix} \cdot \begin{bmatrix}1\\1\end{bmatrix} + \begin{bmatrix}-2\\-6\end{bmatrix} = \begin{bmatrix}5\\-2\end{bmatrix}\end{equation}
This kind of transformation is actually the basis of linear regression, which is a core foundation for machine learning. The matrix defines the *features*, the first vector is the *coefficients*, and the bias vector is the *intercept*.
Here's an example of an affine transformation in Python:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,1])
A = np.array([[5,2],
[3,1]])
b = np.array([-2,-6])
t = A@v + b
print (t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'blue'], scale=15)
plt.show()
```
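As a hedged sketch (not in the original text), the same $A\vec{v} + \vec{b}$ pattern can be read as a linear-regression prediction: a feature matrix times a coefficient vector plus an intercept. All numbers below are illustrative assumptions only.

```python
# Sketch: the affine pattern written as a regression prediction X @ coef + intercept
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])        # three samples, two features
coef = np.array([0.5, -1.0])      # "coefficients"
intercept = 2.0                   # "bias" / intercept
predictions = X @ coef + intercept
print(predictions)
```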
## Eigenvectors and Eigenvalues
So we can see that when you transform a vector using a matrix, we change its direction, length, or both. When the transformation only affects scale (in other words, the output vector has a different magnitude but the same amplitude as the input vector), the matrix multiplication for the transformation is equivalent to some scalar multiplication of the vector.
For example, earlier we examined the following transformation that dot-mulitplies a vector by a matrix:
$$\begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
You can achieve the same result by multiplying the vector by the scalar value ***2***:
$$2 \times \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
The following Python performs both of these calculations and shows the results, which are identical.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[2,0],
[0,2]])
t1 = A@v
print (t1)
t2 = 2*v
print (t2)
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,v])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,v])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
```
In cases like these, where a matrix transformation is the equivalent of a scalar-vector multiplication, the scalar-vector pairs that correspond to the matrix are known respectively as eigenvalues and eigenvectors. We generally indicate eigenvalues using the Greek letter lambda (λ), and the formula that defines eigenvalues and eigenvectors with respect to a transformation is:
$$ T(\vec{v}) = \lambda\vec{v}$$
Where the vector ***v*** is an eigenvector and the value ***λ*** is an eigenvalue for transformation ***T***.
When the transformation ***T*** is represented as a matrix multiplication, as in this case where the transformation is represented by matrix ***A***:
$$ T(\vec{v}) = A\vec{v} = \lambda\vec{v}$$
Then ***v*** is an eigenvector and ***λ*** is an eigenvalue of ***A***.
A matrix can have multiple eigenvector-eigenvalue pairs, and you can calculate them manually. However, it's generally easier to use a tool or programming language. For example, in Python you can use the ***linalg.eig*** function, which returns an array of eigenvalues and a matrix of the corresponding eigenvectors for the specified matrix.
Here's an example that returns the eigenvalue and eigenvector pairs for the following matrix:
$$A=\begin{bmatrix}2 & 0\\0 & 3\end{bmatrix}$$
```python
import numpy as np
A = np.array([[2,0],
[0,3]])
eVals, eVecs = np.linalg.eig(A)
print(eVals)
print(eVecs)
```
[2. 3.]
[[1. 0.]
[0. 1.]]
So there are two eigenvalue-eigenvector pairs for this matrix, as shown here:
$$ \lambda_{1} = 2, \vec{v_{1}} = \begin{bmatrix}1 \\ 0\end{bmatrix} \;\;\;\;\;\; \lambda_{2} = 3, \vec{v_{2}} = \begin{bmatrix}0 \\ 1\end{bmatrix} $$
Let's verify that multiplying each eigenvalue-eigenvector pair corresponds to the dot-product of the eigenvector and the matrix. Here's the first pair:
$$ 2 \times \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 3\end{bmatrix} \cdot \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} $$
So far so good. Now let's check the second pair:
$$ 3 \times \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 3\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 3\end{bmatrix} \cdot \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 3\end{bmatrix} $$
So our eigenvalue-eigenvector scalar multiplications do indeed correspond to our matrix-eigenvector dot-product transformations.
Here's the equivalent code in Python, using the ***eVals*** and ***eVecs*** variables you generated in the previous code cell:
```python
vec1 = eVecs[:,0]
lam1 = eVals[0]
print('Matrix A:')
print(A)
print('-------')
print('lam1: ' + str(lam1))
print ('v1: ' + str(vec1))
print ('Av1: ' + str(A@vec1))
print ('lam1 x v1: ' + str(lam1*vec1))
print('-------')
vec2 = eVecs[:,1]
lam2 = eVals[1]
print('lam2: ' + str(lam2))
print ('v2: ' + str(vec2))
print ('Av2: ' + str(A@vec2))
print ('lam2 x v2: ' + str(lam2*vec2))
```
Matrix A:
[[2 0]
[0 3]]
-------
lam1: 2.0
v1: [1. 0.]
Av1: [2. 0.]
lam1 x v1: [2. 0.]
-------
lam2: 3.0
v2: [0. 1.]
Av2: [0. 3.]
lam2 x v2: [0. 3.]
You can use the following code to visualize these transformations:
```python
t1 = lam1*vec1
print (t1)
t2 = lam2*vec2
print (t2)
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,vec1])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,vec2])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
```
Similarly, earlier we examined the following matrix transformation:
$$\begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
And we saw that you can achieve the same result by multiplying the vector by the scalar value ***2***:
$$2 \times \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
This works because the scalar value 2 and the vector (1,0) are an eigenvalue-eigenvector pair for this matrix.
Let's use Python to determine the eigenvalue-eigenvector pairs for this matrix:
```python
import numpy as np
A = np.array([[2,0],
[0,2]])
eVals, eVecs = np.linalg.eig(A)
print(eVals)
print(eVecs)
```
[2. 2.]
[[1. 0.]
[0. 1.]]
So once again, there are two eigenvalue-eigenvector pairs for this matrix, as shown here:
$$ \lambda_{1} = 2, \vec{v_{1}} = \begin{bmatrix}1 \\ 0\end{bmatrix} \;\;\;\;\;\; \lambda_{2} = 2, \vec{v_{2}} = \begin{bmatrix}0 \\ 1\end{bmatrix} $$
Let's verify that multiplying each eigenvalue-eigenvector pair corresponds to the dot-product of the eigenvector and the matrix. Here's the first pair:
$$ 2 \times \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} $$
Well, we already knew that. Now let's check the second pair:
$$ 2 \times \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 2\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 2\end{bmatrix} $$
Now let's use Python to verify and plot these transformations:
```python
vec1 = eVecs[:,0]
lam1 = eVals[0]
print('Matrix A:')
print(A)
print('-------')
print('lam1: ' + str(lam1))
print ('v1: ' + str(vec1))
print ('Av1: ' + str(A@vec1))
print ('lam1 x v1: ' + str(lam1*vec1))
print('-------')
vec2 = eVecs[:,1]
lam2 = eVals[1]
print('lam2: ' + str(lam2))
print ('v2: ' + str(vec2))
print ('Av2: ' + str(A@vec2))
print ('lam2 x v2: ' + str(lam2*vec2))
# Plot the resulting vectors
t1 = lam1*vec1
t2 = lam2*vec2
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,vec1])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,vec2])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
```
Let's take a look at one more, slightly more complex example. Here's our matrix:
$$\begin{bmatrix}2 & 1\\1 & 2\end{bmatrix}$$
Let's get the eigenvalue and eigenvector pairs:
```python
import numpy as np
A = np.array([[2,1],
[1,2]])
eVals, eVecs = np.linalg.eig(A)
print(eVals)
print(eVecs)
```
[3. 1.]
[[ 0.70710678 -0.70710678]
[ 0.70710678 0.70710678]]
This time the eigenvalue-eigenvector pairs are:
$$ \lambda_{1} = 3, \vec{v_{1}} = \begin{bmatrix}0.70710678 \\ 0.70710678\end{bmatrix} \;\;\;\;\;\; \lambda_{2} = 1, \vec{v_{2}} = \begin{bmatrix}-0.70710678 \\ 0.70710678\end{bmatrix} $$
So let's check the first pair:
$$ 3 \times \begin{bmatrix}0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}2.12132034 \\ 2.12132034\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 1\\1 & 2\end{bmatrix} \cdot \begin{bmatrix}0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}2.12132034 \\ 2.12132034\end{bmatrix} $$
Now let's check the second pair:
$$ 1 \times \begin{bmatrix}-0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}-0.70710678\\0.70710678\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 1\\1 & 2\end{bmatrix} \cdot \begin{bmatrix}-0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}-0.70710678\\0.70710678\end{bmatrix} $$
With more complex examples like this, it's generally easier to do it with Python:
```python
vec1 = eVecs[:,0]
lam1 = eVals[0]
print('Matrix A:')
print(A)
print('-------')
print('lam1: ' + str(lam1))
print ('v1: ' + str(vec1))
print ('Av1: ' + str(A@vec1))
print ('lam1 x v1: ' + str(lam1*vec1))
print('-------')
vec2 = eVecs[:,1]
lam2 = eVals[1]
print('lam2: ' + str(lam2))
print ('v2: ' + str(vec2))
print ('Av2: ' + str(A@vec2))
print ('lam2 x v2: ' + str(lam2*vec2))
# Plot the results
t1 = lam1*vec1
t2 = lam2*vec2
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,vec1])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,vec2])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
```
## Eigendecomposition
So we've learned a little about eigenvalues and eigenvectors; but you may be wondering what use they are. Well, one use for them is to help decompose transformation matrices.
Recall that previously we found that a matrix transformation of a vector changes its magnitude, amplitude, or both. Without getting too technical about it, we need to remember that vectors can exist in any spatial orientation, or *basis*; and the same transformation can be applied in different *bases*.
We can decompose a matrix using the following formula:
$$A = Q \Lambda Q^{-1}$$
Where ***A*** is a transformation that can be applied to a vector in its current base, ***Q*** is a matrix of eigenvectors that defines a change of basis, and ***Λ*** is a matrix with eigenvalues on the diagonal that defines the same linear transformation as ***A*** in the base defined by ***Q***.
Let's look at these in some more detail. Consider this matrix:
$$A=\begin{bmatrix}3 & 2\\1 & 0\end{bmatrix}$$
***Q*** is a matrix in which each column is an eigenvector of ***A***; which as we've seen previously, we can calculate using Python:
```python
import numpy as np
A = np.array([[3,2],
[1,0]])
l, Q = np.linalg.eig(A)
print(Q)
```
[[ 0.96276969 -0.48963374]
[ 0.27032301 0.87192821]]
So for matrix ***A***, ***Q*** is the following matrix:
$$Q=\begin{bmatrix}0.96276969 & -0.48963374\\0.27032301 & 0.87192821\end{bmatrix}$$
***Λ*** is a matrix that contains the eigenvalues for ***A*** on the diagonal, with zeros in all other elements; so for a 2x2 matrix, Λ will look like this:
$$\Lambda=\begin{bmatrix}\lambda_{1} & 0\\0 & \lambda_{2}\end{bmatrix}$$
In our Python code, we've already used the ***linalg.eig*** function to return the array of eigenvalues for ***A*** into the variable ***l***, so now we just need to format that as a matrix:
```python
L = np.diag(l)
print (L)
```
So ***Λ*** is the following matrix:
$$\Lambda=\begin{bmatrix}3.56155281 & 0\\0 & -0.56155281\end{bmatrix}$$
Now we just need to find ***Q<sup>-1</sup>***, which is the inverse of ***Q***:
```python
Qinv = np.linalg.inv(Q)
print(Qinv)
```
The inverse of ***Q*** then, is:
$$Q^{-1}=\begin{bmatrix}0.89720673 & 0.50382896\\-0.27816009 & 0.99068183\end{bmatrix}$$
So what does that mean? Well, it means that we can decompose the transformation of *any* vector multiplied by matrix ***A*** into the separate operations ***QΛQ<sup>-1</sup>***:
$$A\vec{v} = Q \Lambda Q^{-1}\vec{v}$$
To prove this, let's take vector ***v***:
$$\vec{v} = \begin{bmatrix}1\\3\end{bmatrix} $$
Our matrix transformation using ***A*** is:
$$\begin{bmatrix}3 & 2\\1 & 0\end{bmatrix} \cdot \begin{bmatrix}1\\3\end{bmatrix} $$
So let's show the results of that using Python:
```python
v = np.array([1,3])
t = A@v
print(t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'b'], scale=20)
plt.show()
```
And now, let's do the same thing using the ***QΛQ<sup>-1</sup>*** sequence of operations:
```python
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
t = (Q@(L@(Qinv)))@v
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'b'], scale=20)
plt.show()
```
So ***A*** and ***QΛQ<sup>-1</sup>*** are equivalent.
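A quick numerical check (not in the original text, and assuming `A`, `Q`, `L` and `Qinv` from the cells above): reassembling the matrix from its eigendecomposition should reproduce ***A***.

```python
# Sketch: verify A == Q @ L @ Qinv up to floating-point error
print(Q @ L @ Qinv)
print(np.allclose(A, Q @ L @ Qinv))
```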
If we view the intermediary stages of the decomposed transformation, you can see the transformation using ***A*** in the original base for ***v*** (orange to blue) and the transformation using ***Λ*** in the change of basis described by ***Q*** (red to magenta):
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
t1 = Qinv@v
t2 = L@t1
t3 = Q@t2
# Plot the transformations
vecs = np.array([v,t1, t2, t3])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'red', 'magenta', 'blue'], scale=20)
plt.show()
```
So from this visualization, it should be apparent that the transformation ***Av*** can be performed by changing the basis for ***v*** using ***Q*** (from orange to red in the above plot), applying the equivalent linear transformation in that base using ***Λ*** (red to magenta), and switching back to the original base using ***Q<sup>-1</sup>*** (magenta to blue).
## Rank of a Matrix
The **rank** of a square matrix is the number of non-zero eigenvalues of the matrix. A **full rank** matrix has as many non-zero eigenvalues as the dimension of the matrix. A **rank-deficient** matrix has fewer non-zero eigenvalues than dimensions. A rank-deficient matrix is singular, so its inverse does not exist (this is why in a previous notebook we noted that some matrices have no inverse).
Consider the following matrix ***A***:
$$A=\begin{bmatrix}1 & 2\\4 & 3\end{bmatrix}$$
Let's find its eigenvalues (***Λ***):
```python
import numpy as np
A = np.array([[1,2],
[4,3]])
l, Q = np.linalg.eig(A)
L = np.diag(l)
print(L)
```
$$\Lambda=\begin{bmatrix}-1 & 0\\0 & 5\end{bmatrix}$$
This matrix has full rank. The dimension of the matrix is 2, and there are two non-zero eigenvalues.
Now consider this matrix:
$$B=\begin{bmatrix}3 & -3 & 6\\2 & -2 & 4\\1 & -1 & 2\end{bmatrix}$$
Note that the second and third columns are just scalar multiples of the first column.
Let's examine its eigenvalues:
```python
B = np.array([[3,-3,6],
[2,-2,4],
[1,-1,2]])
lb, Qb = np.linalg.eig(B)
Lb = np.diag(lb)
print(Lb)
```
$$\Lambda=\begin{bmatrix}3 & 0& 0\\0 & -6\times10^{-17} & 0\\0 & 0 & 3.6\times10^{-16}\end{bmatrix}$$
Note that matrix has only 1 non-zero eigenvalue. The other two eigenvalues are so extremely small as to be effectively zero. This is an example of a rank-deficient matrix; and as such, it has no inverse.
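As a hedged aside (not in the original text), NumPy can report the rank directly; `A` and `B` are assumed from the cells above.

```python
# Sketch: query the rank directly instead of inspecting eigenvalues
print(np.linalg.matrix_rank(A))  # 2 -> full rank
print(np.linalg.matrix_rank(B))  # 1 -> rank deficient
```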
## Inverse of a Square Full Rank Matrix
You can calculate the inverse of a square full rank matrix by using the following formula:
$$A^{-1} = Q \Lambda^{-1} Q^{-1}$$
Let's apply this to matrix ***A***:
$$A=\begin{bmatrix}1 & 2\\4 & 3\end{bmatrix}$$
Let's find the matrices for ***Q***, ***Λ<sup>-1</sup>***, and ***Q<sup>-1</sup>***:
```python
import numpy as np
A = np.array([[1,2],
[4,3]])
l, Q = np.linalg.eig(A)
L = np.diag(l)
print(Q)
Linv = np.linalg.inv(L)
Qinv = np.linalg.inv(Q)
print(Linv)
print(Qinv)
```
So:
$$A^{-1}=\begin{bmatrix}-0.70710678 & -0.4472136\\0.70710678 & -0.89442719\end{bmatrix}\cdot\begin{bmatrix}-1 & -0\\0 & 0.2\end{bmatrix}\cdot\begin{bmatrix}-0.94280904 & 0.47140452\\-0.74535599 & -0.74535599\end{bmatrix}$$
Let's calculate that in Python:
```python
Ainv = (Q@(Linv@(Qinv)))
print(Ainv)
```
That gives us the result:
$$A^{-1}=\begin{bmatrix}-0.6 & 0.4\\0.8 & -0.2\end{bmatrix}$$
We can apply the ***np.linalg.inv*** function directly to ***A*** to verify this:
```python
print(np.linalg.inv(A))
```
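One more hedged check (not in the original text): the product of ***A*** and its computed inverse should be the identity matrix, up to floating-point error.

```python
# Sketch: A times its inverse should give the 2x2 identity
print(A @ Ainv)
print(np.allclose(A @ Ainv, np.eye(2)))
```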
|
6f1f089939d782320dc638c11d2927875a5fa0f9
| 110,592 |
ipynb
|
Jupyter Notebook
|
Vectores y Matrices/03-05-Transformations Eigenvectors and Eigenvalues.ipynb
|
eblancoh/Curso-Nivelador-Machine-Learning
|
cae9693af0f24e7af666f4cd7b6719f67ea0b495
|
[
"Apache-2.0"
] | 3 |
2020-12-07T18:49:53.000Z
|
2021-08-20T15:03:46.000Z
|
Vectores y Matrices/03-05-Transformations Eigenvectors and Eigenvalues.ipynb
|
eblancoh/Curso-Nivelador-Machine-Learning
|
cae9693af0f24e7af666f4cd7b6719f67ea0b495
|
[
"Apache-2.0"
] | null | null | null |
Vectores y Matrices/03-05-Transformations Eigenvectors and Eigenvalues.ipynb
|
eblancoh/Curso-Nivelador-Machine-Learning
|
cae9693af0f24e7af666f4cd7b6719f67ea0b495
|
[
"Apache-2.0"
] | 1 |
2020-12-07T18:49:56.000Z
|
2020-12-07T18:49:56.000Z
| 78.937901 | 6,504 | 0.795085 | true | 8,675 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.931463 | 0.835484 | 0.778222 |
__label__eng_Latn
| 0.859838 | 0.646402 |
###### Material developed for the short course: Introduction to the numerical solution of PDEs, taught at ERMAC/2018 on April 5-6, 2018 at the Universidade Federal de Lavras, Lavras/MG, Brazil. Author: [Jonas Laerte Ansoni](http://jonasansoni.blogspot.com.br/).
# <center> Short course:<font color='blue'> Introduction to the numerical solution of PDEs
### 2.6 Stability and the CFL condition
In the previous lesson of this series, we studied the numerical solution of the linear convection equation using the finite-difference method. Did you test the method with different parameter choices? If you did, you probably ran into some unexpected behavior. Did your solution blow up (sometimes in a pretty way!)?
#### [See] Fortuna's book, page 204.
### What happened??
To answer that question, we have to think a little about what we are actually implementing in the code when we solve the linear convection equation with the forward-time/backward-space method.
In each iteration of the time loop, we use the existing data about the solution at time $n$ to compute the solution at the next time step, $n + 1$. In the first cases, increasing the number of grid points returned more accurate results: there were fewer discretization errors, and the traveling wave looked more like a square wave than in our first example.
Each iteration of the time loop advances the solution by a time step of length $\Delta t$, which had the value 0.025 in the examples above. During that iteration, we evaluate the solution $c$ at each of the $x_i$ grid points. But in the last plot, something clearly went wrong.
What happened is that, during the time interval $\Delta t$, the wave travels a distance greater than `dx`, and we say that the solution becomes *unstable* in this situation (this statement can be proven formally, see below). The grid spacing `dx` is inversely proportional to the total number of points `nx`: we used more grid points, so `dx` got smaller. Once `dx` became smaller than $u \Delta t$ (the distance traveled by the numerical solution in a single time step), it is no longer possible for the numerical scheme to solve the equation correctly!
#### Graphical interpretation of the CFL condition.
Consider the illustration above. The green triangle represents the domain of dependence of the numerical scheme. Indeed, at each time step, the variable $c_i^{n + 1}$ depends only on the values $c_i^{n}$ and $c_{i-1}^{n}$.
When the distance $u\Delta t$ is smaller than $\Delta x$, the characteristic line traced back from the grid coordinate $i, n + 1$ stays _between_ the points $i-1, n$ and $i, n$ on the grid. We then say that the _mathematical domain of dependence_ of the solution of the original PDE is contained in the _domain of dependence_ of the numerical scheme.
On the contrary, if $\Delta x$ is smaller than $u \Delta t$, then the information about the solution needed for $c_i^{n + 1}$ is not available in the _domain of dependence_ of the numerical scheme, because the characteristic line traced back from the grid coordinate $i, n + 1$ falls outside the interval between the points $i-1, n$ and $i, n$ on the grid.
The following condition guarantees that the domain of dependence of the differential equation is contained in the _numerical_ domain of dependence:
\begin{equation}\sigma = \frac{u \Delta t}{\Delta x} \leq 1
\end{equation}
As can be proven formally, the stability of the numerical solution requires that the step size `dt` be computed relative to the size of `dx` so as to satisfy the condition above.
The value of $u \Delta t/ \Delta x$ is called the **Courant-Friedrichs-Lewy number** (CFL number), often denoted by $\sigma$. The value $\sigma_{\text{max}}$ that guarantees stability depends on the discretization used; for the forward-time/backward-space scheme, the condition for stability is $\sigma < 1$.
In a new version of our code, we will use the CFL number to compute the appropriate time step `dt` from the size of `dx`.
#### There is a theorem stating that the method is stable if $0 < \sigma \leq 1$ and unstable if $\sigma > 1$ (Ames, 1992). Numerical diffusion.
<div class="alert alert-success" role="alert">
<h4> Note that this condition states that the numerical speed has to be greater than or equal to the advection speed $u$.</h4>
</div>
```python
import numpy
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
```
```python
def linearconv(nx):
"""Solve the linear convection equation.
Solves the equation d_t u + c d_x u = 0 where
* the wavespeed c is set to 1
* the domain is x \in [0, 2]
* 20 timesteps are taken, with \Delta t computed using the CFL 0.5
* the initial data is the hat function
Produces a plot of the results
Parameters
----------
nx : integer
number of internal grid points
Returns
-------
None : none
"""
dx = 2/(nx-1)
nt = 20
c = 1
sigma = .5
x = numpy.linspace(0,2,nx)
dt = sigma*dx
u = numpy.ones(nx)
lbound = numpy.where(x >= 0.5)
ubound = numpy.where(x <= 1)
u[numpy.intersect1d(lbound, ubound)]=2
un = numpy.ones(nx)
for n in range(nt):
un = u.copy()
u[1:] = un[1:] -c*dt/dx*(un[1:] -un[0:-1])
u[0] = 1.0
pyplot.plot(x, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0,2.5);
```
Now, no matter how many points we use for the spatial grid, the solution will always be stable!
```python
linearconv(41) #41, 101, 121
```
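A hedged sketch (not part of the original notebook) of what the function above does with the time step: with the CFL number held fixed at 0.5, `dt` shrinks automatically as the grid is refined, so the CFL number never exceeds 1.

```python
# Sketch: dt computed from dx for a fixed CFL number sigma = 0.5 and wavespeed c = 1
sigma = 0.5
c = 1
for nx in (41, 101, 121):
    dx = 2 / (nx - 1)
    dt = sigma * dx / c
    print(nx, dx, dt, c * dt / dx)  # the last column is the CFL number
```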
```python
from IPython.core.display import HTML
css_file = '../styles/custom.css'
HTML(open(css_file, "r").read())
```
<style>
@import url(http://fonts.googleapis.com/css?family=Lato|Source+Code+Pro|Montserrat:400,700);
body {
font-family: 'Lato', sans-serif !important;
font-size: 1.25em !important;
line-height: 30px !important;
font-weight: 400 !important;
color: #000000 !important;
}
#notebook-container {
-webkit-box-shadow: none;
box-shadow: none;
}
.rendered_html h1 { font-size: 4rem !important; }
.rendered_html h2 { font-size: 3rem !important; }
.rendered_html h3 { font-size: 2.5rem !important; }
.rendered_html h4 { font-size: 2rem !important; }
.rendered_html h5 { font-size: 1.5rem !important; }
.rendered_html h6 { font-size: 1.5rem !important; }
.rendered_html h1,
.rendered_html h2,
.rendered_html h3,
.rendered_html h4,
.rendered_html h5,
.rendered_html h6 {
font-family: 'Montserrat', sans-serif !important;
font-weight: 300 !important;
line-height: 1.5em !important;
color: rgb(0,51,102) !important;
}
h1 { font-size: 4.5rem !important; }
h2 { font-size: 4rem !important; }
h3 { font-size: 3.5rem !important; }
h4 { font-size: 3rem !important; }
h5 { font-size: 2.5rem !important; }
h6 { font-size: 1.5rem !important; }
h1, h2, h3, h4, h5, h6 {
font-family: 'Montserrat', sans-serif !important;
color: #e6ae48 !important;
line-height: 150px !important;
}
p {
font-family: 'Lato', sans-serif !important;
font-size: 1.25em !important;
line-height: 30px !important;
font-weight: 400 !important;
color: #000000 !important;
}
li {
font-family: 'Lato', sans-serif !important;
font-size: 1.25em !important;
line-height: 30px !important;
font-weight: 400 !important;
color: #000000 !important;
}
code {
font-family: 'Source Code Pro', sans-serif !important;
font-size: 1em !important;
}
pre {
font-family: 'Source Code Pro', sans-serif !important;
font-size: 1em !important;
}
div.input_area {
border: none !important;
background: whitesmoke !important;
}
.alert-box {
padding:10px 10px 10px 36px;
margin:5px;
}
<style>
|
9f27b88241c0af2ae2f4aa919e84431bac0e2812
| 23,592 |
ipynb
|
Jupyter Notebook
|
aula1/aula1.3.ipynb
|
jonasansoni/ermac2018
|
c737b5dfbc3eb7fbc69c557be4afc52614f372fe
|
[
"MIT"
] | null | null | null |
aula1/aula1.3.ipynb
|
jonasansoni/ermac2018
|
c737b5dfbc3eb7fbc69c557be4afc52614f372fe
|
[
"MIT"
] | null | null | null |
aula1/aula1.3.ipynb
|
jonasansoni/ermac2018
|
c737b5dfbc3eb7fbc69c557be4afc52614f372fe
|
[
"MIT"
] | null | null | null | 70.005935 | 11,028 | 0.743854 | true | 2,329 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.771843 | 0.904651 | 0.698249 |
__label__por_Latn
| 0.992511 | 0.460597 |
# 概率论和信息论笔记
## 常用概率分布
### Bernoulli分布
Bernoulli分布是单个二值随机变量的分布。它由单个参数$\phi\in[0,1]$控制,$\phi$给出了随机变量等于1的概率。它有如下一些性质:
\begin{align}
P(\text{x}=1)&=\phi \\
P(\text{x}=0)&=1-\phi \\
P(\text{x}=x)&=\phi^{x}(1-\phi)^{1-\phi} \\
\mathbb{E}_{\text{x}}[\text{x}]&=\phi \\
\text{Var}_{\text{x}}(\text{x})&=\phi(1-\phi)
\end{align}
### Multinoulli分布
Multinoulli分布或者**范畴分布**是指**在具有k个不同状态的单个离散型随机变量上的分布,其中k是一个有限值**。
Multinoulli分布通常用来表示对象分类的分布。我们通常不需要计算Multinoulli分布的期望和方差。
Bernoulli分布和Multinoulli分布走狗用来描述它们领域内的任何分布,因为它们领域很简单:**它们可以对那些能够将所有状态进行枚举的离散型随机变量进行建模**。
### 高斯分布或者正态分布
实数上最常用的分布就是**正太分布(normal distribution)**,也称为**高斯分布Gaussian distribution**。
公式如下:
$$\mathcal{N}(x;\mu,\sigma^2)=\sqrt\frac{1}{2\pi\sigma^2}\text{exp}(-\frac{1}{2\sigma^2}(x-\mu)^2)$$
高斯分布如下图所示:
我们用$\beta^{-1}$替代$\sigma^2$,表示**精度(precision)**,那么有:
$$\mathcal{N}(x;\mu,\beta^{-1})=\sqrt\frac{\beta}{2\pi}\text{exp}(-\frac{1}{2}\beta(x-\mu)^2)$$
当我们由于缺乏关于某个实数上分布的先验知识而不知道该选择怎样的形式时,正态分布是默认的比较好的选择,其中有两个原因。
* 我们想要建模的很多分布的真实情况是比较接近正态分布的,**中心极限定理(central limit theorem)**说明很多独立随机变量的和近似服从正态分布
* 在具有相同方差的所有可能的概率分布中, 正态分布在实数上具有最大的不确定性,因此,我们可以认为正态分布是对模型加入的先验知识量最少的分布
### 指数分布和Laplace分布
在深度学习中,我们经常会需要一个在 `x = 0` 点处取得**边界点 (sharp point)** 的分布。为了实现这一目的,我们可以使用**指数分布(exponential distribution)**:
$$p(x;\lambda)=\lambda\mathbb{1}_{x>=0} exp(-\lambda x)$$
其中,**指示函数(indicator function)**$\mathbb{1}_{x>=0}$使得$x$取负数的时候值为零。
一个联系紧密的概率分布是**Laplace 分布(Laplace distribution)**,它允许我们在任意一点 µ 处设置概率质量的峰值:
$$Laplace(x;\mu,\gamma)=\frac{1}{2\gamma}exp(-\frac{\vert{x-\mu}\vert}{\gamma})$$
## 常用函数的有用性质
### sigmoid函数
sigmoid函数常用来产生Bernoulli分布的参数$\phi$,因为它的范围是(0,1),处在$\phi$的有效值范围。它的公式如下:
$$\sigma(x)=\frac{1}{1+e^{-x}}$$
### softplus函数
softplus 函数可以用来产生正态分布的 β 和 σ 参数,因为它的范围是 (0, $\infty$)。它的公式如下:
$$\zeta(x)=\log(1+e^x)$$
它的名称来源是因为它是另一个函数的“软化”,这个函数是:
$$x^+=max(0,x)$$
以下是一些很有用的性质:
\begin{align}
\sigma(x)&=\frac{1}{1+e^{-x}}=\frac{e^x}{e^x+e^0} \\
\frac{d}{d_x}\sigma(x)&=\sigma(x)(1-\sigma(x)) \\
1-\sigma(x)&=\sigma(-x) \\
\log(\sigma(x))&=-\zeta(-x) \\
\frac{d}{d_x}\zeta(x)&=\sigma(x) \\
\forall{x} \in (0,1),\sigma(x)^{-1}&=\log(\frac{x}{1-x}) \\
\forall{x}>0,\zeta(x)^{-1}&=\log(exp(x)-1) \\
\zeta(x)&=\int_{-\infty}^{+\infty}\sigma(y)d_y \\
\zeta(x)-\zeta(-x)&=x
\end{align}
函数$\sigma(x)^{-1}$在统计学中称为**分对数(logits)**,但是这个函数在机器学习里面很少用到。
## 贝叶斯法则
我们经常会需要在已知$P(y\vert x)$时计算$P(x\vert y)$。幸运的是,如果还知道$P(x)$,我们可以用**贝叶斯规则(Bayes’ rule)**来实现这一目的:
$$P(x\vert y)=\frac{P(x)P(y\vert x)}{P(y)}$$
其中,
$$P(y)=\sum_xP(y\vert x)P(x)$$
## 信息论
定义一个事件$\text{x}=x$的**自信息量(self-information)**:
$$I(x)=-\log P(x)$$
其中$\log$以e为底,$I(x)$的单位是**奈特(nats)**,$\log$以2为底,$I(x)$的单位是**比特(bit)或者香农(shannons)**。默认使用e为底。
自信息只处理单个的输出。我们可以用**香农熵(Shannon entropy)**来对整个概率分布中的不确定性总量进行量化:
$$H(x)=\mathbb{E}_{x\sim P}[I(x)]=-\mathbb{E}_{x\sim P}[\log P(x)]$$
也记做$H(P)$。
换言之,**一个分布的香农熵是指遵循这个分布的事件所产生的期望信息总量**。
如果我们对于同一个随机变量$x$有两个单独的概率分布$P(x)$ 和$Q(x)$,我们可以使用**KL 散度(Kullback-Leibler (KL) divergence)**来衡量这两个分布的差异:
$$D_{KL}(P\parallel Q)=\mathbb{E}_{x\sim P}[\log\frac{P(x)}{Q(x)}]=\mathbb{E}_{x\sim P}[\log P(x)-\log Q(x)]$$
一个和 KL 散度密切联系的量是**交叉熵(cross-entropy)**$H(P;Q) = H(P) +D_{KL}(P\parallel Q)$,它和KL散度很像但是缺少左边一项:
$$H(P,Q)=-\mathbb{E}_{x\sim P}\log Q(x)$$
|
13c35850e22caa5ac73fcd6bbb45d3687da13fa5
| 5,877 |
ipynb
|
Jupyter Notebook
|
probability_and_information_theory.ipynb
|
luozhouyang/machine-learning-notes
|
332bea905398891fed4a98aa139eac02c88cb5ae
|
[
"Apache-2.0"
] | 73 |
2018-09-07T06:47:18.000Z
|
2022-01-25T06:14:41.000Z
|
probability_and_information_theory.ipynb
|
luozhouyang/machine-learning-notes
|
332bea905398891fed4a98aa139eac02c88cb5ae
|
[
"Apache-2.0"
] | 2 |
2018-10-18T06:40:19.000Z
|
2019-11-16T01:48:39.000Z
|
probability_and_information_theory.ipynb
|
luozhouyang/machine-learning-notes
|
332bea905398891fed4a98aa139eac02c88cb5ae
|
[
"Apache-2.0"
] | 47 |
2018-09-27T10:50:21.000Z
|
2022-01-25T06:20:23.000Z
| 27.208333 | 128 | 0.495831 | true | 2,014 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.746139 | 0.877477 | 0.65472 |
__label__yue_Hant
| 0.419863 | 0.359464 |
```python
%matplotlib inline
from IPython.display import display,Math
from sympy import *
init_session()
```
```python
%%time
# 平方根の第k位まで
p = 2
k = 2
q = 0
while ( q**2 <= p*(10**(2*k)) ):
q = q+1
# 第k桁まで r*(10**k)
if q**2 == p*(10**(2*k)):
r = q
else:
r = q-1
print("{0:d} の平方根の小数第{1:d}位までは {2:.{digits}f}".format(p,k,r*(10**(-k)),digits=k))
```
```python
%%time
# 平方根の第k位まで(変数追加による改良)
p = 2
k = 2
q = 0
pk = p * (10**(2*k)) # 繰り返し使うので変数として設定
while ( q**2 <= pk ):
q = q+1
# 第k桁まで r*(10**k)
if q**2 == pk:
r = q
else:
r = q-1
print("{0:d} の平方根の小数第{1:d}位までは {2:.{digits}f}".format(p,k,r*(10**(-k)),digits=k))
```
```python
%%time
# 平方根の第k位まで(和の公式を利用)
p = 2
k = 2
s = 0
q = 0
pk = p*(10**(2*k))
while ( s <= pk ):
s = s+(2*q+1)
q = q+1
# 第k桁まで r*(10**k)
if s == p:
r = q
else:
r = q-1
print("{0:d} の平方根の小数第{1:d}位までは {2:.{digits}f}".format(p,k,r*(10**(-k)),digits=k))
```
```python
%%time
# 平方根を用いる(ニュートン法)
p = 3
k = 2 # 誤差の精度
e = 10**(-k) # 誤差
q = p
while ( (q**2-p)/q >= e):
q = (p+q**2)/(2*q)
print("{0:d} の平方根の誤差{1:.{digits}f}の値は {2:.{digits}f}".format(p,e,q,digits=k))
```
```python
from ipywidgets import interact
from ipywidgets import BoundedIntText
import time
def mynewton(p,k,mstep=10**3):
e = 10**(-k)
q = p
step = 0
while ( (q**2-p)/q >= e) and (step < mstep):
q = (p+q**2)/(2*q)
step += 1
return q,step
@interact
def _(p=BoundedIntText(value=2,min=1,max=1000,step=1,description="p"),
k=BoundedIntText(value=1,min=1,max=100,step=1,description="精度")):
p,k = int(p),int(k)
q,step = mynewton(p,k)
return display(Math("$\sqrt{{ {0:d} }}\\fallingdotseq {1:.{digits}f}\\\\ \
\\text{{{2:d}step(s)}}".format(p,q,step,digits=k)))
```
```python
```
|
75f9dba66880c837478c4273cbe3c6843e1e1ad8
| 4,215 |
ipynb
|
Jupyter Notebook
|
21jk1-0623.ipynb
|
ritsumei-aoi/21jk1
|
2d49628ef8721a507193a58aa1af4b31a60dfd8b
|
[
"Apache-2.0"
] | null | null | null |
21jk1-0623.ipynb
|
ritsumei-aoi/21jk1
|
2d49628ef8721a507193a58aa1af4b31a60dfd8b
|
[
"Apache-2.0"
] | null | null | null |
21jk1-0623.ipynb
|
ritsumei-aoi/21jk1
|
2d49628ef8721a507193a58aa1af4b31a60dfd8b
|
[
"Apache-2.0"
] | null | null | null | 21.615385 | 96 | 0.450297 | true | 857 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.933431 | 0.835484 | 0.779866 |
__label__yue_Hant
| 0.239423 | 0.650223 |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D3_ModelFitting/student/W1D3_Tutorial2.ipynb" target="_parent"></a>
# Neuromatch Academy: Week 1, Day 3, Tutorial 2
# Model Fitting: Linear regression with MLE
**Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith
**Content reviewers**: Lina Teichmann, Madineh Sarvestani, Patrick Mineault, Ella Batty, Michael Waskom
---
#Tutorial Objectives
This is Tutorial 2 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).
In this tutorial, we will use a different approach to fit linear models that incorporates the random 'noise' in our data.
- Learn about probability distributions and probabilistic models
- Learn how to calculate the likelihood of our model parameters
- Learn how to implement the maximum likelihood estimator, to find the model parameter with the maximum likelihood
---
# Setup
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
%config Completer.use_jedi = False
```
```python
#@title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
```
```python
# @title Helper Functions
def plot_density_image(x, y, theta, sigma=1, ax=None):
""" Plots probability distribution of y given x, theta, and sigma
Args:
x (ndarray): An array of shape (samples,) that contains the input values.
y (ndarray): An array of shape (samples,) that contains the corresponding
measurement values to the inputs.
theta (float): Slope parameter
sigma (float): standard deviation of Gaussian noise
"""
# plot the probability density of p(y|x,theta)
if ax is None:
fig, ax = plt.subplots()
xmin, xmax = np.floor(np.min(x)), np.ceil(np.max(x))
ymin, ymax = np.floor(np.min(y)), np.ceil(np.max(y))
xx = np.linspace(xmin, xmax, 50)
yy = np.linspace(ymin, ymax, 50)
surface = np.zeros((len(yy), len(xx)))
for i, x_i in enumerate(xx):
surface[:, i] = stats.norm(theta * x_i, sigma).pdf(yy)
ax.set(xlabel='x', ylabel='y')
return ax.imshow(surface, origin='lower', aspect='auto', vmin=0, vmax=None,
cmap=plt.get_cmap('Wistia'),
extent=[xmin, xmax, ymin, ymax])
def solve_normal_eqn(x, y):
"""Solve the normal equations to produce the value of theta_hat that minimizes
MSE.
Args:
x (ndarray): An array of shape (samples,) that contains the input values.
y (ndarray): An array of shape (samples,) that contains the corresponding
measurement values to the inputs.
theta_hat (float): An estimate of the slope parameter.
Returns:
float: The mean squared error of the data with the estimated parameter.
"""
theta_hat = (x.T @ y) / (x.T @ x)
return theta_hat
```
---
# Section 1: Maximum Likelihood Estimation (MLE)
```python
#@title Video 1: Maximum Likelihood Estimation
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="8mpNmzLKNfU", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
In the previous tutorial we made the assumption that the data was drawn from a linear relationship with noise added, and found an effective approach for estimating model parameters based on minimizing the mean squared error.
In that case we treated the noise as simply a nuisance, but what if we factored it directly into our model?
Recall our linear model:
\begin{align}
y = \theta x + \epsilon.
\end{align}
The noise component $\epsilon$ is often modeled as a random variable drawn from a Gaussian distribution (also called the normal distribution).
The Gaussian distribution is described by its [probability density function](https://en.wikipedia.org/wiki/Probability_density_function) (pdf)
\begin{align}
\mathcal{N}(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{1}{2\sigma^2}(x-\mu)^2}
\end{align}
and is dependent on two parameters: the mean $\mu$ and the variance $\sigma^2$. We often consider the noise signal to be Gaussian "white noise", with zero mean and unit variance:
\begin{align}
\epsilon \sim \mathcal{N}(0, 1).
\end{align}
## Interactive Demo: Gaussian Distribution Explorer
Use the explorer widget below to see how varying the $\mu$ and $\sigma$ parameters change the location and shape of the samples.
```python
# @title
# @markdown Make sure you execute this cell to enable the widget!
@widgets.interact(mu=widgets.FloatSlider(0.0, min=-2.0, max=2.0),
sigma=widgets.FloatSlider(1.0, min=0.5, max=2.0))
def plot_normal_dist(mu=0, sigma=1):
# Generate pdf & samples from normal distribution with mu/sigma
rv = stats.norm(mu, sigma)
x = np.linspace(-5, 5, 100)
y = rv.pdf(x)
samples = rv.rvs(1000)
# Plot
fig, ax = plt.subplots()
ax.hist(samples, 20, density=True, color='g', histtype='stepfilled', alpha=0.8,
label='histogram')
ax.plot(x, y, color='orange', linewidth=3, label='pdf')
ax.vlines(mu, 0, rv.pdf(mu), color='y', linewidth=3, label='$\mu$')
ax.vlines([mu-sigma, mu+sigma], 0, rv.pdf([mu-sigma, mu+sigma]), colors='red',
color='b', linewidth=3, label='$\sigma$')
ax.set(xlabel='x', ylabel='probability density',
xlim=[-5, 5], ylim=[0, 1.0])
ax.legend()
```
interactive(children=(FloatSlider(value=0.0, description='mu', max=2.0, min=-2.0), FloatSlider(value=1.0, desc…
## Section 1.1: Probabilistic Models
Now that we have a model of our noise component $\epsilon$ as random variable, how do we incorporate this back into our original linear model from before? Consider again our simplified model $y = \theta x + \epsilon$ where the noise has zero mean and unit variance $\epsilon \sim \mathcal{N}(0, 1)$. We can now also treat $y$ as a random variable drawn from a Gaussian distribution where $\mu = \theta x$ and $\sigma^2 = 1$:
\begin{align}
y \sim \mathcal{N}(\theta x, 1)
\end{align}
which is to say that the probability of observing $y$ given $x$ and parameter $\theta$ is
\begin{align}
p(y|x,\theta) = \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(y-\theta x)^2}
\end{align}
---
Let's revisit our original sample dataset where the true underlying model has $\theta = 1.2$.
```python
# @title
# @markdown Execute this cell to generate some simulated data
# setting a fixed seed to our random number generator ensures we will always
# get the same psuedorandom number sequence
np.random.seed(121)
theta = 1.2
n_samples = 30
x = 10 * np.random.rand(n_samples) # sample from a uniform distribution over [0,10)
noise = np.random.randn(n_samples) # sample from a standard normal distribution
y = theta * x + noise
```
This time we can plot the density of $p(y|x,\theta=1.2)$ and see how $p(y)$ changes for different values of $x$.
```python
#@title
#@markdown Execute this cell to visualize p(y|x, theta=1.2)
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10, 4))
# Invokes helper function to generate density image plots from data and parameters
im = plot_density_image(x, y, 1.2, ax=ax1)
plt.colorbar(im, ax=ax1)
ax1.axvline(8, color='k')
ax1.set(title=r'p(y | x, $\theta$=1.2)')
# Plot pdf for given x
ylim = ax1.get_ylim()
yy = np.linspace(ylim[0], ylim[1], 50)
ax2.plot(yy, stats.norm(theta * 8, 1).pdf(yy), color='orange', linewidth=2)
ax2.set(
title=r'p(y|x=8, $\theta$=1.2)',
xlabel='y',
ylabel='probability density');
```
## Section 1.2: Likelihood Estimation
Now that we have our probabilistic model, we turn back to our original challenge of finding a good estimate for $\theta$ that fits our data. Given the inherent uncertainty when dealing in probabilities, we talk about the [likelihood](https://en.wikipedia.org/wiki/Likelihood_function) that some estimate $\hat \theta$ fits our data. The likelihood function $\mathcal{L(\theta)}$ is equal to the probability density function parameterized by that $\theta$:
\begin{align}
\mathcal{L}(\theta|x,y) = p(y|x,\theta) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{1}{2\sigma^2}(y-\theta x)^2}
\end{align}
### Exercise 1: Likelihood Function
In this exercise you will implement the likelihood function $\mathcal{L}(\theta|x,y)$ for our linear model where $\sigma = 1$.
After implementing this function, we can produce probabilities that our estimate $\hat{\theta}$ generated the provided observations. We will try with one of the samples from our dataset.
TIP: Use `np.exp` and `np.sqrt` for the exponential and square root functions, respectively.
```python
def likelihood(theta_hat, x, y):
"""The likelihood function for a linear model with noise sampled from a
Gaussian distribution with zero mean and unit variance.
Args:
theta_hat (float): An estimate of the slope parameter.
x (ndarray): An array of shape (samples,) that contains the input values.
y (ndarray): An array of shape (samples,) that contains the corresponding
measurement values to the inputs.
Returns:
ndarray: the likelihood values for the theta_hat estimate
"""
sigma = 1
# Compute Gaussian likelihood
pdf = 1 / np.sqrt(2 * np.pi * sigma**2) * np.exp(-(y - theta_hat * x)**2 / (2 * sigma**2))
return pdf
# Uncomment below to test your function
print(likelihood(1.0, x[1], y[1]))
```
0.11344443599846923
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D3_ModelFitting/solutions/W1D3_Tutorial2_Solution_328a0c30.py)
We should see that $\mathcal{L}(\theta=1.0|x=2.1,y=3.7) \approx 0.11$. So far so good, but how does this tell us how this estimate is better than any others?
When dealing with a set of data points, as we are with our dataset, we are concerned with their joint probability -- the likelihood that all data points are explained by our parameterization. Since we have assumed that the noise affects each output independently, we can factorize the likelihood, and write:
\begin{align}
\mathcal{L}(\theta|X,Y) = \prod_{i=1}^N \mathcal{L}(\theta|x_i,y_i),
\end{align}
where we have $N$ data points $X = \{x_1,...,x_N\}$ and $Y = \{y_1,...,y_N\}$.
In practice, such a product can be numerically unstable. Indeed multiplying small values together can lead to [underflow](https://en.wikipedia.org/wiki/Arithmetic_underflow), the situation in which the digital representation of floating point number reaches its limit. This problem can be circumvented by taking the logarithm of the likelihood because the logarithm transforms products into sums:
\begin{align}
\operatorname{log}\mathcal{L}(\theta|X,Y) = \sum_{i=1}^N \operatorname{log}\mathcal{L}(\theta|x_i,y_i)
\end{align}
We can take the sum of the log of the output of our `likelihood` method applied to the full dataset to get a better idea of how different $\hat\theta$ compare. We can also plot the different distribution densities over our dataset and see how they line up qualitatively.
```python
# @title
# @markdown Execute this cell to visualize different distribution densities
theta_hats = [0.5, 1.0, 2.2]
fig, axes = plt.subplots(ncols=3, figsize=(16, 4))
for theta_hat, ax in zip(theta_hats, axes):
ll = np.sum(np.log(likelihood(theta_hat, x, y))) # log likelihood
im = plot_density_image(x, y, theta_hat, ax=ax)
ax.scatter(x, y)
ax.set(title=fr'$\hat{{\theta}}$ = {theta_hat}, log likelihood: {ll:.2f}')
plt.colorbar(im, ax=ax);
```
Using the log likelihood calculation, we see that $\mathcal{L}(\theta=1.0) > \mathcal{L}(\theta=0.5) > \mathcal{L}(\theta=2.2)$.
This is great: now we have a way to compare estimators based on likelihood. But like with the MSE approach, we want an analytic solution to find the best estimator. In this case, we want to find the estimator that maximizes the likelihood.
## Section 1.3: Finding the Maximum Likelihood Estimator
We want to find the parameter value $\hat\theta$ that makes our data set most likely:
\begin{align}
\hat{\theta}_{\textrm{MLE}} = \underset{\theta}{\operatorname{argmax}} \mathcal{L}(\theta|X,Y)
\end{align}
We discussed how taking the logarithm of the likelihood helps with numerical stability, the good thing is that it does so without changing the parameter value that maximizes the likelihood. Indeed, the $\textrm{log}()$ function is *monotonically increasing*, which means that it preserves the order of its inputs. So we have:
\begin{align}
\hat{\theta}_{\textrm{MLE}} = \underset{\theta}{\operatorname{argmax}} \sum_{i=1}^m \textrm{log} \mathcal{L}(\theta|x_i,y_i)
\end{align}
Now substituting our specific likelihood function and taking its logarithm, we get:
\begin{align}
\hat{\theta}_{\textrm{MLE}} = \underset{\theta}{\operatorname{argmax}} [-\frac{N}{2} \operatorname{log} 2\pi\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^N (y_i-\theta x_i)^2].
\end{align}
Note that maximizing the log likelihood is the same as minimizing the negative log likelihood (in practice optimization routines are developed to solve minimization not maximization problems). Because of the convexity of this objective function, we can take the derivative of our negative log likelihhood, set it to 0, and solve - just like our solution to minimizing MSE.
\begin{align}
\frac{\partial\operatorname{log}\mathcal{L}(\theta|x,y)}{\partial\theta}=\frac{1}{\sigma^2}\sum_{i=1}^N(y_i-\theta x_i)x_i = 0
\end{align}
This looks remarkably like the equation we had to solve for the optimal MSE estimator, and, in fact, we arrive to the exact same solution!
\begin{align}
\hat{\theta}_{\textrm{MLE}} = \hat{\theta}_{\textrm{MSE}} = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}
\end{align}
```python
# Compute theta_hat_MLE
theta_hat_mle = (x @ y) / (x @ x)
```
```python
#@title
#@markdown Execute this cell to visualize density with theta_hat_mle
# Plot the resulting distribution density
fig, ax = plt.subplots()
ll = np.sum(np.log(likelihood(theta_hat_mle, x, y))) # log likelihood
im = plot_density_image(x, y, theta_hat_mle, ax=ax)
plt.colorbar(im, ax=ax);
ax.scatter(x, y)
ax.set(title=fr'$\hat{{\theta}}$ = {theta_hat_mle:.2f}, log likelihood: {ll:.2f}');
```
---
# Summary
- Likelihood vs probability
- $\mathcal{L}(\theta|x, y) = p(y|\theta, x)$
- $p(y|\theta, x)$ -> "probability of observing the response $y$ given parameter $\theta$ and input $x$"
- $\mathcal{L}(\theta|x, y)$ -> "likelihood model that parameters $\theta$ produced response $y$ from input $x$"
- Log-likelihood maximization
- We take the $\textrm{log}$ of the likelihood function for computational convenience
- The parameters $\theta$ that maximize $\textrm{log}\mathcal{L}(\theta|x, y)$ are the model parameters that maximize the probability of observing the data.
- **Key point**:
- the log-likelihood is a flexible cost function, and is often used to find model parameters that best fit the data.
---
# Appendix
We can also see $\mathrm{p}(\mathrm{y} | \mathrm{x}, \theta)$ as a function of $x$. This is the stimulus likelihood function, and it is useful in case we want to decode the input $x$ from observed responses $y$. This is what is relevant from the point of view of a neuron that does not have access to the outside world and tries to infer what's out there from the responses of other neurons!
|
a9c5f64b29663ee6e651b993c515429fd5a96d25
| 226,800 |
ipynb
|
Jupyter Notebook
|
tutorials/W1D3_ModelFitting/student/W1D3_Tutorial2.ipynb
|
Benorli/course-content
|
41fde960b801cc702f0c2eb06179ba36e4e16fc7
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null |
tutorials/W1D3_ModelFitting/student/W1D3_Tutorial2.ipynb
|
Benorli/course-content
|
41fde960b801cc702f0c2eb06179ba36e4e16fc7
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null |
tutorials/W1D3_ModelFitting/student/W1D3_Tutorial2.ipynb
|
Benorli/course-content
|
41fde960b801cc702f0c2eb06179ba36e4e16fc7
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | 272.268908 | 83,220 | 0.91705 | true | 4,340 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.689306 | 0.800692 | 0.551921 |
__label__eng_Latn
| 0.953909 | 0.120628 |
XRF quantification
========
The fluorescence/scattered intensity $I_f$ due to irradiation with flux $I_s$
$$
\begin{equation}
I_f=I_s(Hz)\Delta t(s)\Omega(sr)\sum_{i,j}\epsilon_{i,j} c_{i,j}(sr^{-1})
\end{equation}
$$
where $I$ the detected intensity (sum of selected fluorescence and or scattering lines), $\Delta t$ the exposure time, $\Omega$ the solid angle of the detector, $\epsilon_{i,j}$ a product of filter transmission and detector absorbance and $c_{i,j}$ the rate of line $j$ with energy $E_j$ due to source line $i$ with energy $E_i$ (depending on sample composition).
As an example, the fluorescence rate of a flat-multilayer sample can be written as (only primary interactions)
$$
\begin{equation}
\begin{split}
c_{i,j}(sr^{-1})=&\frac{d\mu_{i,j}}{d\Omega}\sum_k w_{j,k}\rho_k t_k^\prime(E_i,E_j)\\
\frac{d\mu_{i,j}^{fluo}}{d\Omega} =& \frac{\mu_j(E_i)}{4\pi}\\
\frac{d\mu_{i,j}^R}{d\Omega} =& r_e^2 K_R(\phi,\theta) \frac{N_A}{M_j}f_j^2(E_i,\theta)\\
\frac{d\mu_{i,j}^C}{d\Omega} =& r_e^2 K_C(\phi,\theta) \frac{N_A}{M_j}S_j(E_i,\theta)
\end{split}
\end{equation}
$$
where $k$ loops over the layers. Note that $j$ refers to a particular interaction type (fluorescence of element $Z$, elastic or inelastic scattering). See [polarization](polarization.ipynb) for the definition of the differential scattering cross-sections (SI units $cm^2/g/sr$ and $M_j$ the molar mass of the atom ($g/mol$), $N_A$ the Avogadro constant ($1/mol$), $f$ the atomic form factor and $S$ the incoherent scattering function of the atom).
The corrected layer thickness $t_k^\prime$ takes attenuation of primary X-rays and fluorescence/scattering into account. For a single layer in reflection geometry it can be written as
$$
\begin{equation}
\begin{split}
t^\prime(E_i,E_j) =& \frac{e^{\chi(E_i,E_j) t}-1}{\chi(E_i,E_j)\cos\alpha_{in}}\\
\chi(E_i,E_j) =& \rho\left(\frac{\mu(E_j)}{\cos\alpha_{out}}-\frac{\mu(E_i)}{\cos\alpha_{in}}\right)
\end{split}
\end{equation}
$$
where $\alpha$ the angle between the sample surface normal (pointing away from the source) and the incident(in) or fluorescence/scattering(out) direction ($\alpha_{out}>90^\circ$ in reflection geometry). Note that $\lim_{\chi\to\infty}t^\prime=\frac{t}{\cos\alpha_{in}}$.
## Geometry calibration
### Solid-angle parameterization (without standard)
See the notebook on [diodes](diodes.ipynb) on how $I_s$ is measured. We will assume the detector has a centric-cone geometry with solid angle
$$
\begin{equation}
\Omega=2\pi\left(1-\frac{x+d_0}{\sqrt{\frac{A}{\pi}+\left(x+d_0\right)^2}}\right)
\end{equation}
$$
where $A(mm^2)$ the active area of the detector, $x(mm)$ the position of the detector and $d_0(mm)$ the distance to the sample for $x=0$. To determine $A$ and $d_0$ we can measure the fluorescence of any sample as function of $x$:
$$
\begin{equation}
I_f(x,c,d_0,A)=c\Omega(x,d_0,A)
\end{equation}
$$
As an illustration we will define a detector geometry and multilayer sample. A thin-film standard is used here but any other material can be considered:
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
from spectrocrunch.materials import xrfstandards
from spectrocrunch.detectors import xrf as xrfdetectors
from spectrocrunch.geometries import xrf as xrfgeometries
from spectrocrunch.sources import xray as xraysources
source = xraysources.factory("synchrotron")
detector = xrfdetectors.factory("leia")
geometry = xrfgeometries.factory("sxm120",detectorposition=-10,positionunits="mm",\
detector=detector,source=source)
addnoise = True # add noise to simulations done below
method = "fisx" # XRF simulation method
realsample = xrfstandards.factory("RF7-200-S2371-03",geometry=geometry,\
filmthickness=10e-7) # 10 nm
print(realsample)
```
Multilayer (ordered top-bottom):
Layer 0. 0.01 um (RF7-200-S2371-03)
Layer 1. 0.2 um (silicon nitride)
Simulate a detector scan $I_f(x,c,d_0,A)$:
```python
from spectrocrunch.utils import units
from spectrocrunch.math import noisepropagation
from spectrocrunch.materials import pymca
# Geometry at which the data is collected
geometry.zerodistance = units.Quantity(5.,"cm")
detector.activearea = units.Quantity(70.,"mm^2")
print("\nTheoretical geometry:")
print(" Zero-distance: {:~}".format(geometry.zerodistance.to("cm")))
print(" Active area: {:~}".format(detector.activearea.to("mm^2")))
# Simulate measurement at current distance
energy = 7.3
flux = 1e9
time = 5
pymcahandle = pymca.PymcaHandle(sample=realsample,energy=energy,flux=flux,time=time,\
linear=True,escape=False,continuum=False,scatter=False)
mcaref = pymcahandle.mca(histogram=True,scattering=False,method=method)
# Simulate detector scan
n = 100
x = units.Quantity(np.linspace(-20,60,n),"mm")
I0 = np.full(n,flux*time)
solidangle = geometry.detector.solidangle_calc(activearea=detector.activearea,distance=x+geometry.zerodistance)
fluo = mcaref.sum()/geometry.solidangle*solidangle
if addnoise:
I0 = np.random.poisson(np.round(I0).astype(int))
fluo = np.random.poisson(np.round(fluo).astype(int))
fig,axs = plt.subplots(1,2,figsize=(12,5))
u = x.units
plt.sca(axs[0])
plt.plot(x,fluo/I0.astype(float))
xref = geometry.detectorposition.to(u).magnitude
iref = mcaref.sum()/(flux*time)
lines = plt.plot([xref,xref,x[0].magnitude],[0,iref,iref])
color = lines[0].get_color()
plt.ylabel("Normalized fluorescence")
plt.xlabel("Motor position ({:~})".format(u))
plt.sca(axs[1])
plt.plot(mcaref,color=color)
plt.gca().set_yscale('log', base=10)
plt.xlim([0,len(mcaref)-1])
plt.ylim([1,np.max(mcaref)*1.1])
plt.ylabel("ph/channel")
plt.xlabel("MCA channels")
plt.title("\nSpectrum at x ={:~}:".format(geometry.detectorposition.to("mm")))
plt.show()
```
Calibrate the geometry (starting from different values as the ones used to simulate the data):
```python
# Calibration resources
intensities = noisepropagation.poisson(fluo)/noisepropagation.poisson(I0)
calibrc = {"signal":noisepropagation.E(intensities),\
"var":noisepropagation.VAR(intensities),\
"detectorposition":x.magnitude,\
"positionunits":x.units}
# Calibrate the geometry (starting from wrong values)
geometry.zerodistance += units.Quantity(-5.,"cm")
detector.activearea += units.Quantity(10.,"mm^2")
print("\nInitial geometry:")
print("Zero-distance: {:~}".format(geometry.zerodistance.to("cm")))
print("Active area: {:f~}".format(detector.activearea.to("mm^2")))
geometry.calibrate(calibrc=calibrc,plot=True,fit=True,fixedactivearea=False)
plt.show()
print("Calibrated geometry:")
print("Zero-distance: {:~}".format(geometry.zerodistance_rv.to("cm")))
print("Active area: {:f~}".format(detector.activearea_rv.to("mm^2")))
```
The correlation between $c$, $A$ and $d_0$ is too high to provide a usable result (also when fixing the active area).
### Solid-angle parameterization (with standard)
To solve the correlation issue, we determine $\Omega_{ref}$ at a particular motor position $x=x_{ref}$ by fitting the fluorescence spectrum of a standard measured with the detector at this position:
$$
\begin{equation}
\Omega_{ref}=\frac{I_{f}}{I_s(Hz)\Delta t(s)\sum_{i,j}\epsilon_{i,j} c_{i,j}(sr^{-1})}
\end{equation}
$$
where $I_s$, $\Delta t$, $\epsilon_{i,j}$ and $c_{i,j}$ are assumed to be known (flux measured by calibrated diodes, known sample and filter composition).
This provides a fixed relationship between $A$ and $d_0$ which can be substituted in the expression used for calibrating the geometry
$$
\begin{equation}
\begin{split}
I_f(x,c,d_0)=&c\Omega(x,d_0,A(d_0))\\
A(d_0)=&\pi\left(\frac{\left(x_{ref}+d_0\right)^2}{\left(1-\frac{\Omega_{ref}}{2\pi}\right)^2}-\left(x_{ref}+d_0\right)^2\right)
\end{split}
\end{equation}
$$
When using a thin-film standard, the thickness and density of the film are unknown but the areal densities of the elements in the film are known. For elements only present in the film and assuming absorption and other secondary effects are negligible, we can write
$$
\begin{equation}
\begin{split}
c_{i,j}^{film}=&\frac{d\mu_{i,j}}{d\Omega} w_{j}^{film}\rho_{film} t_{film}^{\prime}(E_i,E_j)\\
\approx&\frac{1}{\cos\alpha_{in}}\frac{d\mu_{i,j}}{d\Omega} w_{Z}^{film}\rho_{film} t_{film}\\
=&\frac{1}{\cos\alpha_{in}}\frac{d\mu_{i,j}}{d\Omega}\rho_{Z,A}^{film}
\end{split}
\end{equation}
$$
Hence for elements in the thin-film, it is enough to know their areal densities. In practice however we use mass fractions calculated from the areal densities using the density and the thickness of the substrate. The mass fractions obtained are physically meaningless but valid for the purpose of calculating $\Omega_{ref}$.
For elements in the substrate, density and thickness need to be known if self-absorption is not non-negligible:
$$
\begin{equation}
t_{subs}^{\prime}(E_i,E_j)\neq \frac{t_{subs}}{\cos\alpha_{in}}
\end{equation}
$$
Simulate and fit an XRF spectrum of a thin-film standard (simulation and fit are done with a different $d_0$ and $A$; scattering, escape and sum peaks are omitted):
```python
# Thin film standards have an unknown film thickness and density,
# only the areal densities of the different elements and the
# composition and thickness of the substrate are known.
thinfilmapprox = True
# Geometry at which the data is collected
geometry.zerodistance = units.Quantity(5.,"cm")
detector.activearea = units.Quantity(70.,"mm^2")
print("\nTheoretical geometry:")
print(" Zero-distance: {:~}".format(geometry.zerodistance_rv.to("cm")))
print(" Active area: {:~}".format(detector.activearea_rv.to("mm^2")))
# Simulate measurement (use the sample with known film thickness)
energy = 7.3
flux = 1e9
time = 1000 # the sum spectrum of a 2D map
pymcahandle = pymca.PymcaHandle(sample=realsample,energy=energy,flux=flux,time=time,\
linear=True,escape=False,continuum=False,scatter=False)
mca = pymcahandle.mca(histogram=True,scattering=False,method=method)
if method=="fisx":
mca1 = mca
mca2 = pymcahandle.mca(histogram=True,scattering=False,method="analytical")
if addnoise:
mca = np.random.poisson(np.round(mca).astype(int))
# Calibrate with unknown film thickness
if thinfilmapprox:
thinfilmsample = xrfstandards.factory("RF7-200-S2371-03",geometry=geometry)
pymcahandle.sample = thinfilmsample
# Initialize fit with the wrong geometry
pymcahandle.setdata(mca)
geometry.zerodistance += units.Quantity(5.,"cm")
detector.activearea += units.Quantity(-10.,"mm^2")
pymcahandle.addtopymca(fresh=True)
# Adapt config manually if needed:
#config = pymcahandle.mcafit.getConfiguration()
#config["fit"]["stripflag"] = 0
#...
#pymcahandle.mcafit.configure(config)
# Perform fit
fitresult = pymcahandle.fit()
# Print errors
def strwerror(e,wfrac,exwfrac):
error = (wfrac-exwfrac)/exwfrac
return " {}: {:6.02f} wt% (expected: {:6.02f} wt%, error: {:.02f}%)".\
format(e,wfrac*100,exwfrac*100,error*100)
def straderror(e,ad,exad):
error = (ad-exad)/exad
return " {}: {:6.02f} ng/mm^2 (expected: {:6.02f} ng/mm^2, error: {:.02f}%)".\
format(e,ad*1e7,exad*1e7,error*100)
def printerrors(fitresult,sample):
out = {}
if thinfilmapprox:
exarealdensities = sample.arealdensity()
rho = sample[0].density
t = sample[0].thickness
for k,wfrac in fitresult["massfractions"].items():
element = k.element
ad = wfrac*rho*t
if element in exarealdensities:
exad = exarealdensities[element]
exwfrac = exad/(rho*t)
out[element] = {"ad":straderror(element,ad,exad),\
"wfrac":strwerror(element,wfrac,exwfrac)}
else:
exarealdensities = sample.arealdensity()
arealdensities = {}
massfractions = {}
exmassfractions = {}
exarealdensities = {}
for layer,wfracs in zip(sample,fitresult["lmassfractions"]):
rho = layer.density
t = layer.thickness
exwfracs = layer.elemental_massfractions()
exad = layer.arealdensity()
for k,wfrac in wfracs.items():
if wfrac!=0:
element = k.element
arealdensities[k] = wfrac*rho*t
massfractions[k] = wfrac
exmassfractions[k] = exwfracs[element]
exarealdensities[k] = exad[element]
for k,wfrac in massfractions.items():
if k in exmassfractions:
element = k.element
exwfrac = exmassfractions[k]
exad = exarealdensities[k]
ad = arealdensities[k]
out[element] = {"ad":straderror(element,ad,exad),\
"wfrac":strwerror(element,wfrac,exwfrac)}
print(" Mass fractions and areal densities (within one layer):")
for k in out:
print(out[k]["wfrac"])
print(out[k]["ad"])
print("\nFitted vs. theory (before geometry calibration):")
printerrors(fitresult,pymcahandle.sample)
# Plot fit
def plotfit(fitresult):
plt.plot(fitresult["energy"],fitresult["y"],label='data')
plt.plot(fitresult["energy"],fitresult["yfit"],label='pymca fit')
backfunc = fitresult["interpol_energy"](fitresult["yback"])
plt.plot(fitresult["energy"],backfunc(fitresult["energy"]),label='background')
plt.gca().set_yscale('log', base=10)
plt.ylim([1,np.max(fitresult["y"])*1.1])
plt.ylabel("ph/channel")
plt.xlabel("Energy (keV)")
plotfit(fitresult)
plt.show()
if method=="fisx":
plt.plot(mca1,label="fisx")
plt.plot(mca2,label="xraylib")
plt.gca().set_yscale('log', base=10)
plt.ylim([1,np.max(mca)*1.1])
plt.ylabel("ph/ch")
plt.xlabel("Channels")
plt.legend()
plt.show()
```
Determine $\Omega_{ref}$ by comparing the fitted and theoretical fluorescence intensities:
```python
caliblines = ["Ca"]
useline = lambda k: any(str(k).startswith(e) for e in caliblines)
# rate = Ifluo/I0 with I0 = flux * time
Rfit = {k:v for k,v in fitresult["fitrates"].items() if useline(k)}
Rinit = {k:v for k,v in fitresult["rates"].items() if useline(k)}
if thinfilmapprox:
# for an element within the film:
# - pymca mass fraction = 1
# - substrate density and thicknes
rho = pymcahandle.sample[0].density
t = pymcahandle.sample[0].thickness
arealdensities = pymcahandle.sample.arealdensity()
substrate = pymcahandle.sample[0].elements
for k in Rinit:
el = k.element
if el not in substrate:
Rinit[k] *= arealdensities[el]/(rho*t)
solidangleref = geometry.solidangle * sum(Rfit.values())/sum(Rinit.values())
```
Calibrate the geometry ($d_0$ and $A$) with a known $[\Omega_{ref},x_{ref}]$ pair as constraint:
```python
geometry.calibrate(calibrc=calibrc,solidanglecalib=solidangleref,\
plot=True,fit=True,fixedactivearea=False)
# Force to real values for testing:
#geometry.zerodistance = units.Quantity(5,"cm")
#detector.activearea = units.Quantity(70,"mm^2")
print("\nCalibrate geometry using {}:".format(caliblines))
print(" Zero-distance: {:~}".format(geometry.zerodistance_rv.to("cm")))
print(" Active area: {:~}".format(detector.activearea_rv.to("mm^2")))
print("\nCurrent distance:")
print(geometry.detectorposition)
print(" Motor position = {:~}".format(geometry.detectorposition))
print(" Distance: {:~}".format(geometry.distance_rv.to("cm")))
```
The correlation between the two unknowns $c$ and $d_0$ is low enough to provide estimates of $d_0$ and $A$ with acceptable uncertainty. Known and fitted areal densities should be the same:
```python
pymcahandle.addtopymca(fresh=False)
fitresult = pymcahandle.fit()
print("\nFitted vs. theory (after geometry calibration):")
printerrors(fitresult,pymcahandle.sample)
plt.figure(figsize=(12,5))
plotfit(fitresult)
spectrum = realsample.xrayspectrum(energy,emin=1,emax = energy+0.5,scattering=False,method=method)
matplotlib.rcParams.update({'font.size': 15})
spectrum.plot(histogram=True,decompose=True,fluxtime=pymcahandle.I0,\
legend=False,forcelines=True)
matplotlib.rcParams.update({'font.size': 14})
plt.show()
```
Errors originate from least-squares fitting, the thin-film approximation (if enabled) and discrepancies between the cross-sections used to similate fluorescence and the ones used for fitting (if method not "fisx").
|
26628a3bd2c54806a07032d2709fa995e8bf5588
| 309,578 |
ipynb
|
Jupyter Notebook
|
doc/source/tutorials/xrfquant.ipynb
|
woutdenolf/spectrocrunch
|
fde4b6e0f462f464ce7af6a942b355d3d8f39f77
|
[
"MIT"
] | 3 |
2018-04-16T15:51:36.000Z
|
2019-12-16T11:21:05.000Z
|
doc/source/tutorials/xrfquant.ipynb
|
woutdenolf/spectrocrunch
|
fde4b6e0f462f464ce7af6a942b355d3d8f39f77
|
[
"MIT"
] | null | null | null |
doc/source/tutorials/xrfquant.ipynb
|
woutdenolf/spectrocrunch
|
fde4b6e0f462f464ce7af6a942b355d3d8f39f77
|
[
"MIT"
] | null | null | null | 420.05156 | 105,596 | 0.928386 | true | 4,752 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.884039 | 0.665411 | 0.588249 |
__label__eng_Latn
| 0.817494 | 0.20503 |
```python
%matplotlib inline
```
```python
%config InlineBackend.figure_format = "retina"
import matplotlib.pyplot as plt
plt.style.use("default")
plt.rcParams["savefig.dpi"] = 100
plt.rcParams["figure.dpi"] = 100
plt.rcParams["font.size"] = 16
plt.rcParams["font.family"] = "sans-serif"
plt.rcParams["font.sans-serif"] = ["Liberation Sans"]
plt.rcParams["mathtext.fontset"] = "custom"
```
```python
import sympy as sm
fB, fC, fD, a1, a2, N0, N1_a1, N1_a2, N2 = sm.symbols("fA, fB, fC, a1, a2, N0, N1_a1, N1_a2, N2")
fA = 1 - fB - fC - fD
P0 = fA + fB * (1 - 1/a1) + fC * (1 - 1/a1) + fD * (1 - 1/a2)
P1_a1 = fC / a1 + fB * (1/a1 - 1/a2)
P1_a2 = fD / a2
P2 = fB / a2
L = N0 * sm.log(P0) + N1_a1 * sm.log(P1_a1) + N1_a2 * sm.log(P1_a2) + N2 * sm.log(P2)
sys = [
sm.Eq(sm.simplify(sm.diff(L, fB)), 0),
sm.Eq(sm.simplify(sm.diff(L, fC)), 0),
sm.Eq(sm.simplify(sm.diff(L, fD)), 0),
]
sm.simplify(sm.solve(sys, (fB, fC, fD)))
```
[(N2*a2/(N0 + N1_a1 + N1_a2 + N2), (N1_a1*a1 + N2*a1 - N2*a2)/(N0 + N1_a1 + N1_a2 + N2), N1_a2*a2/(N0 + N1_a1 + N1_a2 + N2))]
```python
import numpy as np
import matplotlib.pyplot as plt
```
```python
K = 10000
a1 = 5
a2 = 12
a = np.array([a1, a2])
def simulate(fB_true=0.3, fC_true=0.1, fD_true=0.1):
cosinc_sim = np.random.uniform(0, 1, K)
b = a[:, None] * cosinc_sim[None, :]
obs = (b < 1)
f = np.random.rand(K)
m = (fB_true + fC_true + fD_true) <= f
obs[:, m] = False
m = ((fB_true + fC_true) <= f) & (f < (fB_true + fC_true + fD_true))
obs[0, m] = False
m = (fB_true <= f) & (f < (fB_true + fC_true))
obs[1, m] = False
N_obs = np.sum(obs, axis=0)
# Treat planets independently
Na, Nb = np.sum(obs, axis=1)
Pa = 1. / a1
Pb = 1. / a2
occ_a = (Na / Pa + Nb / Pb) / K
# Deal with multiplicity
bins = -0.5 + np.arange(4)
N0, N1, N2 = np.histogram(N_obs, bins)[0]
P0 = 1 - 1. / a1
P1 = 1. / a1 - 1. / a2
P2 = 1. / a2
occ_b = (N1 + N2) / ((1 - P0) * K)
# Fit for multiplicity
N1_a1 = np.sum((N_obs == 1) & obs[0])
N1_a2 = np.sum((N_obs == 1) & obs[1])
fB = N2*a2/K
fC = (N1_a1*a1 + N2*a1 - N2*a2)/K
fD = N1_a2*a2/K
occ_c = fB + fC + fD
return N0, N1, N2, occ_a, occ_b, occ_c
```
```python
np.random.seed(1234)
sims = np.array([simulate() for k in range(5000)])
```
```python
plt.hist(sims[:, -3], density=True, histtype="step", label="A")
plt.hist(sims[:, -2], density=True, histtype="step", label="B")
plt.hist(sims[:, -1], density=True, histtype="step", label="C")
plt.axvline(0.5, color="k", linewidth=1, alpha=0.5)
plt.legend()
plt.xlabel("occurence rate")
plt.ylabel("fraction of simulations")
plt.yticks([])
plt.savefig("simulated.pdf", bbox_inches="tight");
```
```python
```
|
9edce1289b1bfe97e238bf1c27336cd4857df5fb
| 55,540 |
ipynb
|
Jupyter Notebook
|
simulated.ipynb
|
dfm/exostar19
|
b1d2446a7380d9d10f6e963c608c8f29fbb47063
|
[
"MIT"
] | null | null | null |
simulated.ipynb
|
dfm/exostar19
|
b1d2446a7380d9d10f6e963c608c8f29fbb47063
|
[
"MIT"
] | null | null | null |
simulated.ipynb
|
dfm/exostar19
|
b1d2446a7380d9d10f6e963c608c8f29fbb47063
|
[
"MIT"
] | null | null | null | 272.254902 | 50,276 | 0.914278 | true | 1,163 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.896251 | 0.79053 | 0.708514 |
__label__eng_Latn
| 0.132729 | 0.484447 |
```python
%pylab inline
```
Populating the interactive namespace from numpy and matplotlib
```python
from sympy import *
init_printing()
```
```python
x, y, z, nu = symbols('x t z nu')
diff(sin(x)*exp(x), x)
```
```python
expr=exp(x)*sin(x)+exp(x)*cos(x)
expr
```
```python
integrate(expr, x)
```
```python
pi
```
```python
r = symbols('r')
```
```python
2*integrate(pi * (r**2-y**2), (y, 0, r))
```
|
df9de71b68a548fca27d56213d7fdcca53b00b2d
| 7,388 |
ipynb
|
Jupyter Notebook
|
Programming/SymPy.ipynb
|
darkeclipz/jupyter-notebooks
|
5de784244ad9db12cfacbbec3053b11f10456d7e
|
[
"Unlicense"
] | 1 |
2018-08-28T12:16:12.000Z
|
2018-08-28T12:16:12.000Z
|
Programming/SymPy.ipynb
|
darkeclipz/jupyter-notebooks
|
5de784244ad9db12cfacbbec3053b11f10456d7e
|
[
"Unlicense"
] | null | null | null |
Programming/SymPy.ipynb
|
darkeclipz/jupyter-notebooks
|
5de784244ad9db12cfacbbec3053b11f10456d7e
|
[
"Unlicense"
] | null | null | null | 38.884211 | 1,152 | 0.710883 | true | 141 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.92523 | 0.819893 | 0.75859 |
__label__eng_Latn
| 0.417876 | 0.600791 |
# Introdução à Computação Simbólica com _sympy_
## Motivação
- O valor de $\pi$ que você usa é finito...
```python
from math import pi
print(pi)
3.141592653589793
```
- E se pudéssemos usá-lo com precisão infinita?
- 3.141592653589793 é um valor razoavelmente aceitável
- Exemplo: a equipe de engenharia da NASA explica que, usando este valor para calcular o perímetro de uma circunferência com diâmetro igual a 25 bilhões de milhas, o erro de cálculo é próximo de 1,5 polegada [[NASA]](https://www.jpl.nasa.gov/edu/news/2016/3/16/how-many-decimals-of-pi-do-we-really-need/).
## O que é computação simbólica
> *Computação Simbólica* (CS) é uma subárea de estudo da matemática e da ciência da computação que se preocupa em resolver problemas usando objetos simbólicos representáveis em um computador.
## Para que serve CS?
- Base de vários *sistemas de computação algébrica* (SCAs).
- Álgebra computacional, projetos assistidos por computação (CAD)
- Raciocínio automatizado, gestão do conhecimento, lógica computacional, sistemas formais de verificação etc.
### Como a CS está integrada?
<!-- Figura -->
<center>
</img>
</center>
Fonte: [[RISC/JKU]](https://risc.jku.at/studying-symbolic-computation/)
## Principais SCAs
- Maple
- Mathematica
- MuPad
- Sagemath
- ...
## Por que *sympy*?
> O objetivo principal do *sympy* é ser uma biblioteca de manipulação simbólica para Python.
- 2006 em diante (2019, v. 1.5.1)
Principais características:
- é gratuito;
- é baseado inteiramente em Python;
- é leve e independente.
## Objetos numéricos x objetos simbólicos
Importaremos os módulos `math` e `sympy` para ver diferenças
```python
import math as mt
import sympy as sy
sy.init_printing(pretty_print=True) # melhor impressão de símbolos
```
```python
mt.pi # numérico
```
```python
sy.pi # simbólico
```
Verifiquemos com `type`.
```python
type(mt.pi)
```
float
```python
type(sy.pi) # é um objeto simbólico
```
sympy.core.numbers.Pi
Vejamos mais um exemplo:
```python
mt.sqrt(2)
```
```python
sy.sqrt(2)
```
```python
type(mt.sqrt(2))
```
float
```python
type(sy.sqrt(2)) # é um objeto simbólico
```
sympy.core.power.Pow
### Função x método
A partir deste ponto, poderemos ver situações como as seguintes:
- `f(x)`: a função `f` é aplicada ao parâmetro `x`; ex. `print('a')`; `type('a')`
- `a.f()`: `f` é um método sem parâmetro do objeto `a`; ex. `z.conjugate()`
- `a.f(x)`: `f` é um método com parâmetro `x` do objeto `a`; ex. `mt.sqrt(2)`
A partir do último exemplo, podemos dizer que um método é, na verdade, uma função que pertence a um objeto.
### Atribuições com símbolos
Podemos atribuir símbolos a variáveis usando a função `symbols`.
```python
x = sy.symbols('x')
y = sy.symbols('y')
```
`x` e `y` são símbolos sem valor definido.
```python
x
```
```python
y
```
Podemos operar aritmeticamente com símbolos e obter uma expressão simbólica como resultado.
```python
z = sy.symbols('z')
x*y + z**2/3 + sy.sqrt(x*y - z)
```
**Exemplo**: escreva o produto notável $(x - y)^2$ como uma expressão simbólica.
```python
x**2 - 2*x*y + y**2
```
Note que o nome da variável não tem a ver com o nome do símbolo. Poderíamos fazer o seguinte:
```python
y = sy.symbols('x') # y é variável; x é símbolo
y
```
### Atribuição por desempacotamento
Também poderíamos realizar as atribuições anteriores da seguinte forma:
```python
x, y, z = sy.symbols('x y z')
```
### Alfabeto de símbolos
O *sympy* dispõe de um submódulo chamado `abc` do qual podemos importar símbolos para letras latinas (maiúsculas e minúsculas) e gregas (minúsculas).
```python
from sympy.abc import a,b,c,alpha,beta,gamma
(a + 2*b - 3*c)*(alpha/3 + beta/2 - gamma) # símbolico
```
```python
from sympy.abc import D,G,psi,theta
D**a * G**b * psi**c * theta**2 # símbolico
```
**Nota**: algumas letras já são usadas como símbolos especiais, tais como `O`, que indica "ordem" e `I`, que é o complexo $i$. Neste caso, cuidado deve ser tomado com nomes de variáveis
```python
sy.I # imaginário simbólico
```
```python
type(sy.I)
```
sympy.core.numbers.ImaginaryUnit
### Símbolos com nomes genéricos
Para criar símbolos genéricos, temos de usar `symbols` ou `Symbol`.
```python
sem_nocao = sy.symbols('nada')
sem_nocao
```
```python
muito_louco = sy.Symbol('massa')
muito_louco
```
### Variáveis e símbolos
```python
sem_medo = sem_nocao + 2
sem_medo
```
```python
soma = muito_louco + 2
muito_louco = 3 # 'muito_louco' aqui não é o simbólico
soma
```
## Substituição
A operação de *substituição* permite que:
1. substituamos variáveis por valores numéricos para avaliar uma expressão ou calcular valores de uma função em um dado ponto.
2. substituamos uma subexpressão por outra.
Para tanto, procedemos da seguinte forma:
```python
expressao.subs(variavel,valor)
```
**Exemplo**: considere o polinômio $P(x) = 2x^3 - 4x -6$. Calcule o valor de $P(-1)$, $P(e/3)$, $P(\sqrt{3.2})$.
```python
from sympy.abc import x
P = 2*x**3 - 4*x - 6
P1 = P.subs(x,-1)
Pe3 = P.subs(x,mt.e/3)
P32 = P.subs(x,mt.sqrt(3.2))
print(P1, Pe3, P32)
```
-4 -8.13655822141297 -1.70674948320040
**Exemplo:** sejam $f(x) = 4^x$ e $g(x) = 2x - 1$. Compute o valor da função composta $f(g(x))$ em $x = 3$.
```python
f = 4**x
fg = f.subs(x,2*x - 1)
```
```python
fg.subs(x,3)
```
Poderíamos também fazer isso com um estilo "Pythônico":
```python
fg = 4**x.subs(x,2*x - 1).subs(x,3)
fg
```
**Exemplo:** se $a(x) = 2^x$, $b(x) = 6^x$ e $c(x) = \cos(x)$, compute o valor de $a(x)b(c(x))$ em $x = 4$
```python
a = 2**x
b = 6**x
c = sy.cos(x)
(a * b.subs(x,c)).subs(x,4)
```
Ou, de modo direto:
```python
valor = ( 2**x * ( 6**x.subs(x,sy.cos(x))) ).subs(x,4)
valor
```
### Avaliação de expressão em ponto flutuante
Note que a expressão anterior não foi computada em valor numérico. Para obter seu valor numérico, podemos usar o método `evalf`.
```python
valor.evalf()
```
#### Precisão arbitrária
`evalf` permite que escolhamos a precisão do cálculo impondo o número de dígitos de precisão. Por exemplo, a última expressão com 20 dígitos de precisão seria:
```python
valor.evalf(20)
```
Com 55, seria:
```python
valor.evalf(55)
```
E com 90 seria:
```python
valor.evalf(90)
```
**Exemplo**: calcule o valor de $e$ com 200 dígitos de precisão.
```python
sy.exp(1).evalf(200)
```
## Funções predefinidas x funções regulares
Apresentaremos 3 grupos de funções que podem ser criadas em Python
- **funções predefinidas** (*built-in functions*): funções já prontas que podemos usar (ex. `print()`, `type()` , `int()`, `float()`
- ** funções regulares**, ou *normais*, *definidas pelo usuário* (do inglês *user-defined functions*, ou simplesmente *UDF*): aquelas que você cria!
Podemos fazer isto de uma maneira usando uma "palavra-chave" (*keyword*) chamada `def` da seguinte forma:
```python
def f(x):
(...)
return y
```
- uma UDF **pode ter zero ou mais argumentos**, tantos quantos se queira;
- uma UDF **pode ou não ter valor de retorno**;
Vamos entender as UDFs com exemplos.
**Exemplo:** Suponha que você é um(a) analista de dados do mercado imobiliário e está estudando o impacto do repasse de comissões pagas a corretores mediante vendas de imóveis. Você, então, começa a raciocinar e cria um modelo matemático bastante simples que, antes de tudo, precisa calcular o valor do repasse a partir do preço de venda.
Se $c$ for o percentual de comissão, $V$ o valor da venda do imóvel e $r$ o valor a ser repassado para o corretor, então, a função a ser definida é
$$r(V) = c\, V,$$
assumindo que $c$ seja um valor fixo.
Digamos que $c$ corresponda a 1.03% do valor da venda do imóvel. Neste caso podemos criar uma UDF para calcular $r$ para nós da seguinte forma:
```python
def repasse(V):
r = 0.0103*V
return r
```
Para $V = \, R\$ \, 332.130,00$:
```python
repasse(332130)
```
O que é necessário observar:
- `def` seguido pelo *nome* da função
- argumentos enclausurados por parênteses
- os dois-pontos (`:`) são obrigatórios
- *escopo* da função, que deve ser escrito em uma ou mais linhas indentadas (pressione `TAB` para isso, ou use 4 espaços)
- o valor de retorno, se houver, é posto na última linha do escopo.
Podemos atribuir os valores do argumento e resultado a variáveis:
```python
V = 332130
rep = repasse(V)
rep
```
Nomes iguais de variável e função são permissíveis.
```python
repasse = repasse(V) # 'repasse' à esquerda é uma variável; à direita, função
print(repasse)
```
3420.939
Todavia, isto pode ser confuso e é bom evitar.
O estilo "Pythônico" de escrever permite que o valor de retorno não seja explicitamente declarado. No escopo
```python
...
r = 0.0103*V
return r
```
a variável `r` não é necessária.
Python é inteligente para permitir o seguinte:
```python
def repasse(V):
return 0.0103*V
# note que aqui não indentamos a linha.
# Logo esta instrução NÃO pertence ao escopo da função.
repasse(V)
```
Podemos criar uma função para diferentes valores de `c` e `V` usando *dois* argumentos:
```python
def repasse_c(c,V): # esta função tem outro nome
return c*V
```
```python
c = 0.0234 # equivaleria a uma taxa de repasse de 2.34%
V = 197432 # o valor do imóvel agora é R$ 197.432,00
repasse_c(c,V)
```
A ordem dos argumentos importa:
```python
V = 0.0234 # este deveria ser o valor de c
c = 197432 # este deveria ser o valor de V
repasse_c(c,V)
```
Por que o valor resultante é o mesmo? Porque a operação no escopo da função é uma multiplicação, `c*V`, que é comutativa independentemente do valor das variáveis. Porém, digamos que um segundo modelo tenha uma forma de cálculo distinta para a comissão dada por
$$r_2(V) = c^{3/5} \, V$$
Neste caso:
```python
def repasse_2(c,V):
return c**(3/5)*V
V = 197432
c = 0.0234
repasse_2(c,V)
```
Porém, se trocarmos o valor das variáveis, a função `repasse_2` calculará um valor distinto. Embora exista um produto também comutativo, o expoente `3/4` modifica apenas o valor de `c`.
```python
# variáveis com valores trocados
c = 197432
V = 0.0234
repasse_2(c,V)
```
A ordem com que escrevemos os argumentos tem importância relativa aos valores que passamos e ao que definimos:
```python
# variáveis com valores corretos
V = 197432
c = 0.0234
def repasse_2_trocada(V,c): # V vem antes de c
return c**(3/5)*V
repasse_2_trocada(V,c)
```
Mas,
```python
# os valores das variáveis estão corretos,
# mas foram passados para a função na ordem errada
repasse_2_trocada(c,V)
```
e
```python
# a ordem dos argumentos está de acordo com a que foi definida
# mas os valores das variáveis foram trocados
V = 197432
c = 0.0234
repasse_2_trocada(c,V)
```
## Modelos matemáticos simbólicos
A partir do que aprendemos, podemos definir modelos matemáticos completamente simbólicos.
```python
from sympy.abc import c,V
def repasse_2_simbolica(c,V):
return c**(3/5)*V
```
Se chamarmos esta função, ela será um objeto simbólico.
```python
repasse_2_simbolica(c,V)
```
Atribuindo em variável:
```python
rep_simb = repasse_2_simbolica(c,V)
```
```python
type(rep_simb) # é um objeto simbólico
```
sympy.core.mul.Mul
**Exemplo:** Suponha, agora, que seu modelo matemático de repasse deva considerar não apenas um percentual $c$ pré-estabelecido, mas também um valor de "bônus" adicional concedido como prêmio pela venda do imóvel. Considere, então, que o valor deste bônus seja $b$. Diante disso, nosso novo modelo teria uma fórmula como a seguinte:
$$r_3(V) = c\,V + b$$
Simbolicamente:
```python
# importaremos apenas o símbolo b,
# uma vez que c e V já foram importados
# como símbolos anteriormente
from sympy.abc import b
def r3(V):
return c*V + b
rep_3 = r3(V)
rep_3
```
### Substituindo valores
Podemos usar a função `subs` para atribuir quaisquer valores para o modelo.
**Exemplo:** $c = 0.119$
```python
rep_3.subs(c,0.119) # substituindo para c
```
**Exemplo:** $c = 0.222$
```python
rep_3.subs(c,0.222) # substituindo para c
```
**Exemplo:** $c = 0.222$ e $b = 12.0$
```python
rep_3.subs(c,0.222).subs(b,12.0) # substituindo para c, depois para b
```
### Substituição múltipla
O modo anterior de substituição não é "Pythônico". Para substituirmos mais de uma variável de uma vez, devemos usar *pares ordenados* separados por vírgula sequenciados entre colchetes como uma *lista*. Mais tarde, aprenderemos sobre pares ordenados e listas.
**Exemplo:** Modifique o modelo $r_3$ para que $c = 0.043$ e $b = 54.0$
```python
# espaços foram adicionados para dar legibilidade
rep_3.subs( [ (c,0.043), (b,54.0) ] )
```
#### Ordered pairs
$$X \times Y = \{ (x,y) ; x \in X \text{ and } y \in Y \}$$
where $X$ and $Y$ are arbitrary sets and $x$ and $y$ are the *coordinates*.
- e.g. $X = Y = \mathbb{R}$; $(3,2)$, $(-1,3)$, $(\pi,2.18)$, etc.
- This is the case of $\mathbb{R} \times \mathbb{R} = \mathbb{R}^2$, which is exactly the *Cartesian plane*.
Multiple substitution with `subs` works as follows:
- the first coordinate is the *symbol*;
- the second coordinate is the *value* you want to assign to that symbol.
**Example:** Compute $r_3(V)$ with $c = 0.021$, $b = 34.0$, and $V = 432000$.
```python
# we store the result in the variable 'valor'
valor = r3(V)
# substitution
valor.subs( [ (c,0.021), (b,34.0), (V,432000) ] )
```
In the "Pythonic" style:
```python
valor = r3(V).subs( [ (c,0.021), (b,34.0), (V,432000) ] )
valor
```
We can follow this pairing rule to substitute all the values of a generic symbolic model, not necessarily one defined through a function. See the applied example below.
## Application example: the walkability index
The *walkability index* $W$ of a residential neighborhood is a mathematical measure that takes values in the interval $[0,1]$. The formula is defined by
$$W(d) = e^{-5 \left( \dfrac{d}{M} \right)^5},$$
where $d$ is the distance measured from the neighborhood (0 meters) to a given reference point, and $M$ is the maximum evaluation distance beyond which walkability is assumed to be zero.
### Interpretation
- when we are in the neighborhood, $d = 0$, $W = 1$, and walkability is considered excellent.
- as we move away from the neighborhood toward the amenity, $d$ increases and $W$ decays sharply until the limiting distance $M$ is reached, beyond which $W = 0$ and walkability is considered "terrible".
- $W$ is computed with respect to a defined destination point.
- The distance should take circulation routes (streets, highways, etc.) into account, not the shortest straight-line distance (perimeter radius).
- e.g. for $M = 500 \, m$, a bar 100 meters from the neighborhood would have a higher walkability index than a pharmacy located 300 m away, and a much higher one than a shopping mall located 800 m away, however famous it may be.
Source: *De Nadai, M. and Lepri, B. [[The economic value of neighborhoods: Predicting real estate prices from the urban environment]](https://arxiv.org/pdf/1808.02547.pdf)*.
### Symbolic model
We can model $W$ symbolically and compute its value for different values of $d$ and $M$ using multiple substitution.
```python
from sympy.abc import d,M,W
W = sy.exp(-5*(d/M)**5) # symbolic exponential function
W
```
**Example:** Our real estate agency would like to understand property prices for the Pedras de Marfim condominium. Taking $M = 1$ km, compute:
- the walkability index $W_1$ with respect to the Dose Certa pharmacy, located 222 m from the condominium.
- the walkability index $W_2$ with respect to the Sabor da Arte restaurant, located 628 m from the condominium.
- the walkability index $W_3$ with respect to the Physicalidade sports center, located 998 m from the condominium.
- the walkability index $W_4$ with respect to the Dolce Panini bakery, located 1.5 km from the condominium.
```python
# note that 1 km = 1000 m
W1 = W.subs([ (d,222), (M,1000) ])
W2 = W.subs([ (d,628), (M,1000) ])
W3 = W.subs([ (d,998), (M,1000) ])
W4 = W.subs([ (d,1500), (M,1000) ])
```
Notice, however, that the computed values are not yet numeric, as expected.
```python
W1
```
```python
W2
```
```python
W3
```
```python
W4
```
Remember that we can use `evalf` to evaluate these expressions numerically. We will do so with 3 significant digits.
```python
# reassigning all the values
W1n = W1.evalf(3)
W2n = W2.evalf(3)
W3n = W3.evalf(3)
W4n = W4.evalf(3)
print('W1 =', W1n, '; ' \
'W2 =', W2n, '; ' \
'W3 =', W3n, '; ' \
'W4 =', W4n)
```
W1 = 0.997 ; W2 = 0.614 ; W3 = 0.00708 ; W4 = 3.24e-17
As expected, the values decay from 0.997 down to 3.24e-17, which is effectively zero in terms of numerical approximation.
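If many distances need to be evaluated, a convenient alternative (a small sketch going beyond the original notebook) is to turn the symbolic expression into a numerical function with `lambdify`:
```python
import numpy as np
import sympy as sy

W_num = sy.lambdify((d, M), W, 'numpy')     # numerical version of the symbolic model
distances = np.array([222, 628, 998, 1500])
W_num(distances, 1000)                      # walkability for all four destinations at once
```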
#### Breaking statements with `\`
The backslash `\` can be used to break a statement and continue it on the following lines, but no character may appear after it, not even a space. Otherwise, an error is raised.
```python
print('Continuing' \
'on the line below')
```
Continuingon the line below
```python
# in this example, there is a space character after \
print('Continuing' \ 
'on the line below')
```
### The `bool` type
In Python we have yet another very useful data type, `bool`, short for "Boolean". `bool` objects, rooted in so-called Boolean algebra, are based on the concepts *true* and *false*, or *0* and *1*, and are studied in disciplines such as Logic Circuits, Discrete Mathematics, and Applied Logic, among others.
We will learn about logical operators later on. For now, it is worth mentioning the fundamental entities `True` and `False`.
```python
True
```
True
```python
False
```
False
```python
type(True)
```
bool
```python
type(False)
```
bool
We can perform logical tests to establish what is true or false when we are unsure about objects and the relations between them. For example, let us revisit the following values:
```python
W1
```
```python
W2
```
At first glance it is hard to tell which of the two is larger. However, we can ask the Python interpreter logical "questions" using comparison operators. We will show just two examples, with `>` and `<`.
```python
W1 > W2 # this means: "is W1 greater than W2?"
```
The value `True` confirms that `W1` is greater than `W2`.
```python
W4 < 0
```
> Note that, according to our walkability model, this value should be zero. Numerically, however, it is only an approximation of zero. Although very small, it is not exactly zero! Why does this happen? Because the computer works with inexact, approximate arithmetic, yet with satisfactory precision.
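In practice, comparisons with an exact zero are usually replaced by a comparison against a small tolerance. A brief sketch, not part of the original text:
```python
# treat any value smaller than the tolerance as "numerically zero"
tol = 1e-12
abs(float(W4)) < tol  # True: W4 is indistinguishable from zero at this tolerance
```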
|
0e76251fb5dfe22f1b21dfa7154701db4aa3d6c6
| 164,119 |
ipynb
|
Jupyter Notebook
|
rise/02a-computacao-simbolica-rise.ipynb
|
gcpeixoto/FMECD
|
9bca72574c6630d1594396fffef31cfb8d58dec2
|
[
"CC0-1.0"
] | null | null | null |
rise/02a-computacao-simbolica-rise.ipynb
|
gcpeixoto/FMECD
|
9bca72574c6630d1594396fffef31cfb8d58dec2
|
[
"CC0-1.0"
] | null | null | null |
rise/02a-computacao-simbolica-rise.ipynb
|
gcpeixoto/FMECD
|
9bca72574c6630d1594396fffef31cfb8d58dec2
|
[
"CC0-1.0"
] | null | null | null | 52.0187 | 14,536 | 0.778167 | true | 6,352 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.934395 | 0.947381 | 0.885228 |
__label__por_Latn
| 0.999355 | 0.895016 |
# SINDy
> Discovering governing equations using the SINDy algorithm
- toc: true
- hide: false
- branch: master
- search_exclude: false
- badges: true
- comments: true
- categories: [differential equations, machine learning]
```python
# hide
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.integrate import odeint
from sklearn.metrics import r2_score, mean_absolute_error
from sklearn.metrics import mean_squared_error, median_absolute_error
```
```python
# hide
def mean_absolute_percentage_error(y_true, y_pred):
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
def symmetric_mape(y_true, y_pred, eps = 1e-8):
summ = ((np.abs(y_true) + np.abs(y_pred)) + eps)
return np.mean(np.abs(y_pred - y_true) / summ) * 100
```
```python
# hide
def print_scores(y_test, y_pred):
print(f"R2 score: {r2_score(y_test, y_pred)}")
print(f"MSE score: {mean_squared_error(y_test, y_pred)}")
print(f"MAE score: {mean_absolute_error(y_test, y_pred)}")
print(f"Median AE score: {median_absolute_error(y_test, y_pred)}")
print(f"MAPE score: {mean_absolute_percentage_error(y_test, y_pred)}")
print(f"SMAPE score: {symmetric_mape(y_test, y_pred)}")
```
Sparse Identification of Nonlinear Dynamical Systems (SINDy) is an algorithm for discovering governing dynamical equations from time series data ${\bf x}(t)$. The key idea is to construct a differential equation
$$ \frac{d{\bf x}}{dt} = \Theta({\bf x}^T)\, \Xi, $$
where the time derivative is computed from the time series data, $\Theta({\bf x})$ is a library of non-linear basis functions, and $\Xi$ is a sparse matrix of coefficients.
Steps:
- Compute the time derivative: perhaps the trickiest part, especially for noisy time series, although one can use total-variation-regularized derivatives for such cases, as in [here](https://github.com/stur86/tvregdiff)
- Choose basis: Some non-linear basis constructed from the time series data, for example, polynomial basis.
- Apply regularized linear regression: apply Lasso/Ridge in one step, or use sequentially thresholded least squares.
Paper: https://www.pnas.org/content/113/15/3932
Once the underlying dynamical equations are discovered, **forecasting** becomes a lot easier.
**Extensions**:
Knowing the right coordinates and basis functions to use is often difficult. The most interesting extension of SINDy so far has been to use latent basis / coordinates instead of physical space basis. In latent space,
$$ {\bf x} \longrightarrow {\bf z} = Encoder({\bf x}).$$
A non-linear basis in ${\bf z}$ is used to perform SINDy. The full cost function also takes into account the physical space SINDy expression. For more details, see the paper:
Paper: https://www.pnas.org/content/116/45/22445
## Example
### Create synthetic dataset
```python
# hide
# **REMEMBER**: also change `dx1dt`, `dx2dt` if you change the function below.
def fun(y, t):
x1, x2 = y
dxdt = [-x1**3 - x2, x1 - x2**3]
return dxdt
t_full = np.linspace(0, 15, 1501)
# The following two functions work great, with alpha = 0.0001, they are being correctly classified.
#y0 = [0.1, 0.05] # LINEAR: small x1, x2 => Linear coupled oscillator
y0 = [0.5, 0.5] # NON-LINEAR: Getting close to triggering non-linear effects
```
**Solve** the ODE:
$$
\begin{align}
\frac{dx_1}{dt} &= -x_1^3 - x_2 \\
\frac{dx_2}{dt} &= x_1 - x_2^3
\end{align}
$$
```python
# hide
sol = odeint(fun, y0, t_full)
sol_new = sol[:1001]
```
Plot the solutions
```python
# hide_input
t_discover = t_full[:1001]
plt.plot(t_discover, sol_new[:, 0], 'b', label='x1(t)')
plt.plot(t_discover, sol_new[:, 1], 'g', label='x2(t)')
plt.legend(loc='best')
plt.xlabel('t')
plt.grid()
# plt.savefig('time_series.png', bbox_inches='tight')
```
## Compute the time derivative.
```python
x1 = sol_new[:,0]
x2 = sol_new[:,1]
```
**Actual** time derivatives
```python
dx1dt = -x1**3 - x2
dx2dt = x1 - x2**3
```
**Numerically** computed derivatives from data
```python
# hide
dt = t_discover[1] - t_discover[0]
dx1dt_data = np.gradient(x1, dt)
dx2dt_data = np.gradient(x2, dt)
# or using TVRegDiff
# dx1dt = TVRegDiff(x1, 10, 0.1, dx=dt, plotflag=0)
# See https://github.com/stur86/tvregdiff
# for information on different parameters
```
```python
# hide
print_scores(dx1dt, dx1dt_data)
```
R2 score: 0.9999999965191242
MSE score: 2.8269156140981004e-10
MAE score: 4.206507155221408e-06
Median AE score: 3.6181352647546294e-06
MAPE score: 0.004477342875542224
SMAPE score: 0.0022391799382288345
```python
# hide
print_scores(dx2dt, dx2dt_data)
```
R2 score: 0.9999996026514059
MSE score: 2.195307457531667e-08
MAE score: 9.892101759080555e-06
Median AE score: 4.016199533329878e-06
MAPE score: 0.009102822948966595
SMAPE score: 0.004535741601342113
```python
# hide
plt.plot(dx1dt[-50:], label='Actual', alpha=0.4)
plt.plot(dx1dt_data[-50:], ':', lw=2, label='from_data')
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$dx_1/dt$', fontsize=14)
plt.legend()
```
```python
# hide
plt.plot(dx2dt[-50:], label='Actual', alpha=0.4)
plt.plot(dx2dt_data[-50:], ':', lw=2, label='from_data')
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$dx_2/dt$', fontsize=14)
plt.legend()
```
```python
# hide
X = np.zeros((sol_new.shape[0], sol_new.shape[1]))
X[:, 0] = x1 # dx1/dt
X[:, 1] = x2 # dx2/dt
```
## Construct a basis: polynomial (or trig)
```python
# collapse
from sklearn.preprocessing import PolynomialFeatures
dum_data = pd.DataFrame({'x1': x1, 'x2': x2})
deg = 3 # Polynomial degree to use
p = PolynomialFeatures(degree=deg,include_bias=True).fit(dum_data)
xpoly = p.fit_transform(dum_data)
newdf = pd.DataFrame(xpoly, columns = p.get_feature_names(dum_data.columns))
print("Feature names:", list(newdf))#newdf.columns.values.tolist())
print("Feature array shape:", newdf.shape)
```
Feature names: ['1', 'x1', 'x2', 'x1^2', 'x1 x2', 'x2^2', 'x1^3', 'x1^2 x2', 'x1 x2^2', 'x2^3']
Feature array shape: (1001, 10)
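The heading above also mentions trigonometric features; the following sketch shows how such terms could be appended to the library (illustrative only, these features are not used in the fits below):
```python
# optionally extend the library with trigonometric features of the state variables
trig_feats = pd.DataFrame({
    'sin(x1)': np.sin(x1), 'cos(x1)': np.cos(x1),
    'sin(x2)': np.sin(x2), 'cos(x2)': np.cos(x2),
})
newdf_trig = pd.concat([newdf, trig_feats], axis=1)
print(newdf_trig.shape)  # (1001, 14)
```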
## Regression using Lasso
(or Ridge/OLS with sequential thresholding)
Lasso does regularized linear regression with L1-norm. **alpha** is a hyperparameter.
- low alpha -> OLS
- high alpha -> most features zero
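As mentioned above, an alternative to Lasso is sequentially thresholded least squares, as used in the original SINDy paper. A minimal sketch follows (the function name, threshold, and iteration count are illustrative assumptions, not from this notebook):
```python
# sequentially thresholded least squares (STLSQ)
def stlsq(Theta, dxdt, threshold=0.1, n_iter=10):
    # initial ordinary least-squares fit
    xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold   # coefficients to prune
        xi[small] = 0.0
        big = ~small
        if big.any():
            # refit only the surviving terms
            xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]
    return xi

# e.g. stlsq(newdf.values, dx1dt) returns a sparse coefficient vector
```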
```python
from sklearn.linear_model import Lasso
mod = Lasso(alpha=0.0001)
mod
```
Lasso(alpha=0.0001)
```python
# hide
# Prepare training data
newdf_train, newdf_test = newdf[:800], newdf[800:]
dx1dt_train, dx1dt_test = dx1dt[:800], dx1dt[800:]
dx2dt_train, dx2dt_test = dx2dt[:800], dx2dt[800:]
```
### $dx_1 / dt$
```python
# hide
mod.fit(newdf_train, dx1dt_train)
print(mod.coef_) # should give non-zero coefficients mainly for x2 (3rd) and x1^3 (7th)
print(mod.intercept_)
mod.score(newdf_test, dx1dt_test)
```
[ 0. -0.05576026 -0.99852418 0. -0. -0.01819165
-0.50202469 -0. 0. -0. ]
0.0033302444886464527
0.9943287924302203
```python
# hide
fit_dx1 = pd.DataFrame(columns=newdf.columns)
fit_dx1.loc[0] = mod.coef_
fit_dx1.abs().sort_values(by=0, axis=1, ascending=False)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>x2</th>
<th>x1^3</th>
<th>x1</th>
<th>x2^2</th>
<th>1</th>
<th>x1^2</th>
<th>x1 x2</th>
<th>x1^2 x2</th>
<th>x1 x2^2</th>
<th>x2^3</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.998524</td>
<td>0.502025</td>
<td>0.05576</td>
<td>0.018192</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
</tbody>
</table>
</div>
```python
# hide
ypred_x1 = mod.predict(newdf_test)
print_scores(dx1dt_test, ypred_x1)
```
R2 score: 0.9943287924302203
MSE score: 8.443005835967624e-05
MAE score: 0.009127412469564932
Median AE score: 0.00942351242559783
MAPE score: 59.78370313040213
SMAPE score: 9.034670809972091
Identified ODE:
$dx_1/dt \sim -x_2 - x_1^3$
with minor contributions from other terms that can be removed by applying a threshold
```python
# Drop features with absolute values less than 0.1
dx1_thr = fit_dx1[fit_dx1.columns[fit_dx1.abs().max() > 0.1]]
dx1_thr
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>x2</th>
<th>x1^3</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.998524</td>
<td>-0.502025</td>
</tr>
</tbody>
</table>
</div>
```python
# hide_input
# PLOT results
t_test = np.linspace(8, 10, 201)
#plt.plot(t, sol_new[:, 1], label='Actual', alpha = 0.4)
#plt.plot(ypred, label='Actual')
plt.plot(t_test, ypred_x1, 'r--', lw = 2, label='Prediction')
plt.plot(t_discover, dx1dt, 'g', label='Actual', alpha=0.4)
plt.axvline(x = 8, color='k', linestyle='--')
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$dx_1/dt$', fontsize=14)
plt.grid()
plt.legend()
# plt.savefig('dx1dt_fit.png', bbox_inches='tight')
```
### $dx_2 / dt$
```python
# hide
mod.fit(newdf_train, dx2dt_train)
print(mod.coef_) # should give the 2nd (x1) + last (x2^3) argument non-zero
print(mod.intercept_)
mod.score(newdf_test, dx2dt_test)
```
[ 0. 0.99491832 -0.02198579 -0. -0. -0.04791986
0. 0. -0. -0.82689872]
0.0022126622685510813
0.9965028656692735
```python
# hide
fit_dx2 = pd.DataFrame(columns=newdf.columns)
fit_dx2.loc[0] = mod.coef_
fit_dx2.abs().sort_values(by=0, axis=1, ascending=False)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>x1</th>
<th>x2^3</th>
<th>x2^2</th>
<th>x2</th>
<th>1</th>
<th>x1^2</th>
<th>x1 x2</th>
<th>x1^3</th>
<th>x1^2 x2</th>
<th>x1 x2^2</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.994918</td>
<td>0.826899</td>
<td>0.04792</td>
<td>0.021986</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
</tbody>
</table>
</div>
```python
# hide
ypred_x2 = mod.predict(newdf_test)
print_scores(dx2dt_test, ypred_x2)
```
R2 score: 0.9965028656692735
MSE score: 1.3128657613492703e-05
MAE score: 0.003284761866157678
Median AE score: 0.0036951175861815455
MAPE score: 1.962931213051045
SMAPE score: 0.9952953131151899
Identified ODE:
$dx_2/dt \sim x_1 - x_2^3$
with minor contributions from other terms that can be removed by applying a threshold
```python
# Drop features with absolute values less than 0.1
dx2_thr = fit_dx2[fit_dx2.columns[fit_dx2.abs().max() > 0.1]]
dx2_thr
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>x1</th>
<th>x2^3</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.994918</td>
<td>-0.826899</td>
</tr>
</tbody>
</table>
</div>
```python
# hide_input
# PLOT results
t_test = np.linspace(8, 10, 201)
#plt.plot(t, sol_new[:, 0], label='Actual', alpha = 0.4)
#plt.plot(ypred, label='Actual')
plt.plot(t_test, ypred_x2, 'r--', lw = 2, label='Prediction')
plt.plot(t_discover, dx2dt, 'g', label='Actual', alpha = 0.4)
plt.axvline(x = 8, color='k', linestyle='--')
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$dx_2/dt$', fontsize=14)
plt.grid()
plt.legend()
# plt.savefig('dx2dt_fit.png', bbox_inches='tight')
```
## Forecasting using SINDy
Now that we have discovered the differential equations, we can use the ODE solver to forecast the future.
```python
# Manually entering the coefficient values, but this can be automated
def fun_forecast(y, t):
x1, x2 = y
dxdt = [-0.5*(x1**3) - 0.9985*x2,
0.995*x1 - 0.8269*x2**3]
return dxdt
t_forecast = np.linspace(10, 15, 500)
y0 = [x1[-1], x2[-1]]
sol_forecast = odeint(fun_forecast, y0, t_forecast)
```
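One way to automate the step above, sketched here under the assumption that the fitted coefficient tables `fit_dx1` and `fit_dx2` from the previous sections are still available, is to build the right-hand side directly from the thresholded coefficients and the polynomial feature transformer:
```python
# build the forecast ODE from the fitted (thresholded) coefficients instead of typing them in
coef1 = fit_dx1.where(fit_dx1.abs() > 0.1, 0.0).iloc[0]  # thresholded coefficients for dx1/dt
coef2 = fit_dx2.where(fit_dx2.abs() > 0.1, 0.0).iloc[0]  # thresholded coefficients for dx2/dt

def fun_forecast_auto(y, t):
    x1_, x2_ = y
    # evaluate the polynomial feature library at the current state
    feats = p.transform(pd.DataFrame({'x1': [x1_], 'x2': [x2_]}))[0]
    return [np.dot(coef1.values, feats), np.dot(coef2.values, feats)]

sol_forecast_auto = odeint(fun_forecast_auto, [x1[-1], x2[-1]], t_forecast)
```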
```python
# hide
plt.plot(t_forecast, sol_forecast[:, 0], 'b', label='x1(t)')
plt.plot(t_forecast, sol_forecast[:, 1], 'g', label='x2(t)')
plt.legend(loc='best')
plt.xlabel('t')
plt.grid()
```
```python
# hide_input
plt.plot(t_forecast, sol_forecast[:, 0], 'b--', label='Forecast')
plt.plot(t_full, sol[:, 0], 'g', label='Actual', alpha=0.4)
plt.axvline(x = 10, color='k', linestyle='--')
plt.legend(loc='best')
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$x_1$', fontsize=14)
plt.grid()
```
```python
# hide_input
plt.plot(t_forecast, sol_forecast[:, 1], 'b--', label='Forecast')
plt.plot(t_full, sol[:, 1], 'g', label='Actual', alpha=0.4)
plt.axvline(x = 10, color='k', linestyle='--')
plt.legend(loc='best')
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$x_2$', fontsize=14)
plt.grid()
```
```python
# hide
y_pred1, y_pred2 = sol_forecast[:, 0], sol_forecast[:, 1]
y_test1, y_test2 = sol[1001:, 0], sol[1001:, 1]
```
```python
# hide
for (y_test, y_pred) in zip([y_test1, y_test2], [y_pred1, y_pred2]):
print(f"R2 score: {r2_score(y_test, y_pred)}")
print(f"MSE score: {mean_squared_error(y_test, y_pred)}")
print(f"MAE score: {mean_absolute_error(y_test, y_pred)}")
print(f"Median AE score: {median_absolute_error(y_test, y_pred)}")
print(f"MAPE score: {mean_absolute_percentage_error(y_test, y_pred)}")
print(f"SMAPE score: {symmetric_mape(y_test, y_pred)}")
```
R2 score: 0.9986014512322046
MSE score: 2.9194295629861532e-05
MAE score: 0.004498499556886438
Median AE score: 0.004095611629683124
MAPE score: 8.356195293001417
SMAPE score: 3.427959986309787
R2 score: 0.9988787480251635
MSE score: 2.8082044451838518e-05
MAE score: 0.004387715903581068
Median AE score: 0.0032235369101030545
MAPE score: 8.17055503299457
SMAPE score: 3.0643658219157226
```python
```
|
f569a834c02ee2dd9dfe7f64231dde771eb83c07
| 195,651 |
ipynb
|
Jupyter Notebook
|
_notebooks/2019-09-10-sindy-basic.ipynb
|
fnauman/ds-blog
|
2620141d4758b3d14e20478e30756905d0b3773e
|
[
"Apache-2.0"
] | null | null | null |
_notebooks/2019-09-10-sindy-basic.ipynb
|
fnauman/ds-blog
|
2620141d4758b3d14e20478e30756905d0b3773e
|
[
"Apache-2.0"
] | 2 |
2020-10-03T12:26:27.000Z
|
2021-01-16T06:41:44.000Z
|
_notebooks/2019-09-10-sindy-basic.ipynb
|
fnauman/ds-blog
|
2620141d4758b3d14e20478e30756905d0b3773e
|
[
"Apache-2.0"
] | null | null | null | 170.131304 | 25,372 | 0.893525 | true | 4,990 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.859664 | 0.855851 | 0.735744 |
__label__eng_Latn
| 0.435286 | 0.547712 |
# 07 Practise Problems
# Review Exercises - Solutions
### Lesson Goal
This lesson is a series of practise problems to test your understanding before we move onto the __Applications of Programming__ section of the course.
### Fundamental programming concepts
To solve the problems you must apply what you have learnt knowledge from the __Fundamentals of Programming__ section of the course.
<a id='Differentiation'></a>
# 1. Plotting Data
Import the data from the file `temperature_data.csv`.
The file contains the mean temperature (°C) of London, Philadelphia, and Hong Kong recorded during some months of the year.
Plot the temperature for each city against the number of the month (starting with 1 for January) as a single plot.
Label the axes.
Add a figure legend showing which line on the graph represents London, Philadelphia, and Hong Kong respectively.
Interpolate to estimate the temperature data for the missing months for each city.
Plot the interpolated data.
*Extension : Change the x-axis tick labels from the number of the month to the name of the month (e.g. change 1 to January)*
```python
# Exercise 1 : Plotting Data
# Example Solution
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.interpolate import interp1d
# import data
data = np.loadtxt('sample_data/temperature_data.csv', delimiter=",", dtype=str)
print(data)
# we can see from the result that we should skip row one and import the month columns that contain data (columns 1-6)
data = np.loadtxt('sample_data/temperature_data.csv',
delimiter=",",
skiprows=1,
usecols=(tuple(range(1,7))))
# months for which we have data
months = [1, 3, 7, 9, 11, 12]
# plot data
plt.plot(months, data[0], label="London");
plt.plot(months, data[1], label="Philadelphia");
plt.plot(months, data[2], label="Hong Kong");
#plt.xlim(months[0], months[-1])
# label axes and add legend
plt.xlabel('Month')
plt.ylabel('Temperature °C')
plt.xticks(range(1,13), range(1,13))
plt.legend()
all_months = np.arange(1, 13, 1)
# interpolate
for i in range(3):
interp = interp1d(months, data[i], 'cubic') # type = ‘linear’, ‘nearest’, ‘zero’, ‘cubic’...
plt.plot(all_months, interp(all_months), 'k--');
# relabel x ticks
plt.xticks(range(1,13), ('Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'));
```
<a id='SymbolicMathematics'></a>
# 2. Curve Fitting
Import the data from the file `sample_data/air_temperature.dat`.
The file contains the air temperature (°C) recorded every 2 hours during a 24 hour period.
Plot the data as a scatter plot and label the axes.
Look at the shape of the plot. Based on your observation, approximate a continuous function that describes the data by curve fitting.
Plot the function as a line on the same graph as the original data.
Calculate the root mean squared error between your fitted function and the original data.
```python
# Exercise 2 : Curve Fitting
# Example Solution
# import data
data = np.loadtxt('sample_data/air_temperature.dat', delimiter=",")
time = np.array(range(0, 24, 2))
# plot data as scatter plot
plt.plot(time, data, 'o')
plt.xlim(time[0], time[-1])
# label axes
plt.xlabel('time (hours)')
plt.ylabel('Temperature (°C)')
b = np.polyfit(time, data, 3) # third order polynolmial
data_fit = np.poly1d(b)(time)
plt.plot(time, data_fit)
```
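The exercise also asks for the root mean squared error between the fit and the data; a short sketch of that final step, assuming the arrays computed above:
```python
# root mean squared error between the fitted polynomial and the measured data
rmse = np.sqrt(np.mean((data - data_fit)**2))
print(f'RMSE = {rmse:.3f} °C')
```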
<a id='Symbolic_Differentiation'></a>
# 3. Functions and Libraries
Write a Python function for the following function:
$f(x)=e^{\frac{-K x}{50}}\sin(x)$
The function should:
- take `x` and `K` as input arguments
- return the value of the function
- have a documentation string
Store your function in a separate file called `my_functions` and import it into your main program.
In your main program, import the data from the file `sample_data/air_temperature.dat`.
For each data point, make a plot of `f` vs. `x` where:
`K` = the value of the imported data point
`x` = integers in the range [0 , 15]
Add a legend.
```python
# Exercise 3 : Functions and Libraries
# Example Solution
from sample_data.my_functions.my_functions import *
data = np.loadtxt('sample_data/air_temperature.dat', delimiter=",")
x = np.array(range(15))
def test(x, K):
return np.exp(-K * x /50) * np.sin(x)
for d in data:
plt.plot(x, test(x, d), label=f'K={d}')
plt.legend(bbox_to_anchor=(1,1), loc="upper left")
```
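The solution above defines `test` inline; for completeness, here is a sketch of what the separate `my_functions` module could contain (hypothetical file content written to satisfy the exercise requirements):
```python
# my_functions.py (hypothetical content)
import numpy as np

def f(x, K):
    """Return exp(-K*x/50) * sin(x) for the given x and K."""
    return np.exp(-K * x / 50) * np.sin(x)
```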
# 4. Systems of Equations
The following four measurements of the quantity $g$ were made at time $t_0, t_1, t_2, t_3$ :
$(t_0,g_0)=(0,3)$
$(t_1,g_1)=(0.25,1)$
$(t_2,g_2)=(0.5,-3)$
$(t_3,g_3)=(0.75,1)$.
The measurements lie on a wave function that may be expresssed as:
$g = a\cos(\pi t) + b\cos(2\pi t) + c\cos(3\pi t) + d\cos(4\pi t)$
where $a$, $b$, $c$, and $d$ are constants.
Solve for the four constants by arranging them as a system of four linear equations.
Plot of the wave for $t$ in the range [0, 1].
Indicate the four measurements by ploting them as dots.
```python
# Exercise 4 : Systems of Equations
# Example Solution
#data
t = np.array([0, 0.25, 0.5, 0.75])
g = np.array([ 3, 1, -3, 1])
# array to hold data
lhs = np.zeros((4, 4))
rhs = np.zeros(4)
# populate arrays
for i in range(4):
lhs[i] = np.cos(1 * np.pi * t[i]), \
np.cos(2 * np.pi * t[i]), \
np.cos(3 * np.pi * t[i]), \
np.cos(4 * np.pi * t[i]) # Store one row at a time
rhs[i] = g[i]
# solve system of equations
sol = np.linalg.solve(lhs, rhs)
# print value of coefficients
print('a,b,c,d: ',sol)
# range of values for t
t_all = np.linspace(0, 1, 100)
# g for all t
g_all = sol[0] * np.cos(1 * np.pi * t_all) + \
sol[1] * np.cos(2 * np.pi * t_all) + \
sol[2] * np.cos(3 * np.pi * t_all) + \
sol[3] * np.cos(4 * np.pi * t_all)
# plot wave function
plt.plot(t_all, g_all, 'b', label='wave')
# plot data points
plt.plot(t, g, 'ro', label='data')
plt.legend(loc='best');
```
<a id='Symbolic_Differentiation'></a>
# 5. Numerical Integration
__(a)__ *Analytically* show that the integral of the function <br> $f(x)=\text{e}^{-x}$ <br> for $x$ in the range [1, 5] is equal to $-\text{e}^{-5} + \text{e}^{-1}$.
<br>
<br>
__(b)__ Show that the following integral is equal to 0.218236 (6 s.f.):
$$\int_1^5 \frac{\text{e}^{-x}}{x}\text{d}x$$
Perform the integration numerically.
```python
# Exercise 5 : Numerical Integration
# Example Solution
# a) Analytical solution
import sympy as sp
from sympy import symbols, integrate
from scipy.integrate import quad
x = symbols('x')
f = sp.exp(-x)
print(integrate(f, (x, 1, 5))) # definite integral from x=0..1
# b) numerical solution
# first create a function to describe the integrand
def integrand(x):
return np.exp(-x) / x
# create two variables to store the integral and the error
ans, err = quad(integrand, 1, 5)
print(ans)
```
-exp(-5) + exp(-1)
0.21823563880424607
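As a quick cross-check of part (a), going slightly beyond the original solution, the same `quad` routine reproduces the analytical value $-e^{-5}+e^{-1}$:
```python
# numerical check of part (a): integral of exp(-x) over [1, 5]
ans_a, err_a = quad(lambda t: np.exp(-t), 1, 5)
print(ans_a, -np.exp(-5) + np.exp(-1))
```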
<a id='Symbolic_Differentiation'></a>
# 6. Predator Prey
Also known as Lotka-Volterra equations, the predator-prey equations are a pair of first-order non-linear ordinary differential equations.
They represent a simplified model of the change in populations of two or more species which interact via predation.
Let $x$ represent the population of prey and $y$ represent the population of predators.
The rate of change of each population can be represented by two first order differential equations.
$$\frac{dx}{dt} = x (a - by)$$
$$\frac{dy}{dt} = y (dx - c)$$
$a , b , c$ and $d$ are parameters, which are assumed to be positive.
$a$ : growth of prey population <br>
$b$ : decay of prey population due to predation<br>
$c$ : decay of predator population due to natural death <br>
$d$ : growth of predator population due to predation <br>
Let $a = b = c = d = 1$
__(a)__ Find the numerical solution to the system of equations for $x$ and $y$, for time, $t$, in the range [0, 12].
__(b)__ Plot $x$ and $y$, against $t$ = range [0, 12].
```python
# Exercise 6 : Predator Prey
# Example Solution
# Part A
# Import the required modules
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.integrate import odeint
# constants
a,b,c,d = 1,1,1,1
def dP_dt(P, t):
"""
P is a vector such that P[0] = x and P[1] = y.
Returns [x', y']
"""
return [ P[0]*(a - b*P[1]),
-P[1]*(c - d*P[0])]
ts = np.linspace(0, 12, 100) # the value(s) of t at which to evaluate P
P0 = [1.5, 1.0] # the initial value of each population
# odeint returns solution for x,y at each value of t
Ps = odeint(dP_dt, P0, ts)
prey = Ps[:,0] # column 0 = solution for x
predators = Ps[:,1] # column 1 = solution for y
```
```python
# Part B
plt.plot(ts, prey, label="prey")
plt.plot(ts, predators, label="predators")
plt.xlabel("Time")
plt.ylabel("Population")
plt.legend();
```
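A common extra view for this system, sketched here although the exercise does not ask for it, is the phase portrait of predators against prey:
```python
# phase portrait: predator population against prey population
plt.plot(prey, predators)
plt.xlabel("Prey")
plt.ylabel("Predators")
plt.title("Predator-prey phase portrait");
```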
|
64a311c0379b32d496c7acbbe1f258da016a072d
| 171,010 |
ipynb
|
Jupyter Notebook
|
ReviewQuestions_ExampleSolutions/07_PractiseEngineeringlProblems__ClassMaterial.ipynb
|
hphilamore/UoB_PythonForEngineers_2020
|
27eb7e07edecd2003d4672c83ebc6c355d92b46b
|
[
"MIT"
] | null | null | null |
ReviewQuestions_ExampleSolutions/07_PractiseEngineeringlProblems__ClassMaterial.ipynb
|
hphilamore/UoB_PythonForEngineers_2020
|
27eb7e07edecd2003d4672c83ebc6c355d92b46b
|
[
"MIT"
] | null | null | null |
ReviewQuestions_ExampleSolutions/07_PractiseEngineeringlProblems__ClassMaterial.ipynb
|
hphilamore/UoB_PythonForEngineers_2020
|
27eb7e07edecd2003d4672c83ebc6c355d92b46b
|
[
"MIT"
] | null | null | null | 297.926829 | 56,716 | 0.923747 | true | 2,601 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.887205 | 0.867036 | 0.769238 |
__label__eng_Latn
| 0.967965 | 0.62553 |
# The generalized model
The equation of the generalized model for 1 species can be written as follows:
$
\dot{X} = X\left[\left(r+b X\right)-\left(a + c b X\right)X\right].
$
```python
from sympy.abc import x, y
from sympy.solvers import solve
from sympy import *
```
```python
# We define the symbols for the equations and functions
a,b,r,c = symbols('a b r c')
f = Function('f')
```
```python
# We define the function to solve
eq = r+(b-a)*x - c*b*x**2
f= x*eq
```
We calculate the fixed points
```python
s1,s2,s3=solve(f,x)
print('fixed points =',(s1,s2,s3))
```
fixed points = (0, (-a + b - sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r))/(2*b*c), (-a + b + sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r))/(2*b*c))
We calculate the Jacobian of the function
```python
dfdx = f.diff(x)
print("f'(x) =", simplify(dfdx))
```
f'(x) = -2*a*x - 3*b*c*x**2 + 2*b*x + r
We evaluate the jacobian in the fixed points
```python
L1=dfdx.subs(x,s1)
L1
```
r
```python
L2=dfdx.subs(x,s2)
L2
```
r + (-a + b)*(-a + b - sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r))/(2*b*c) - (-a + b - sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r))**2/(4*b*c) + (-a + b - sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r))*sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r)/(2*b*c)
```python
L3=simplify(dfdx.subs(x,s3))
L3
```
-(a**2 - 2*a*b - a*sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r) + b**2 + 4*b*c*r + b*sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r))/(2*b*c)
```python
# We can evaluate the eigenvalues with the numerical values of $r,c,b,a$
L3.subs(r,0.1).subs(b,-0.01).subs(a,0.001).subs(c,0.1)
```
-0.1395 - 0.0918681119866954*I
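The value above is complex, so the relevant sign information sits in its real part; a quick sketch of extracting it:
```python
# extract the real part of the evaluated eigenvalue and check its sign
eig = L3.subs(r,0.1).subs(b,-0.01).subs(a,0.001).subs(c,0.1)
re(eig), re(eig) < 0  # (-0.1395, True)
```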
```python
# Another way to calculate the Jacobian Matrix and the eigenvalues.
equilibria = solve(f,x)
eqMat = Matrix([f])
Mat = Matrix([x])
jacMat =eqMat.jacobian(Mat)
print('Jacobian %s' % jacMat)
print('---------------------')
# iterate through list of equilibria
for item in equilibria:
eqmat = jacMat.subs([ (x, item)])
print('The eigenvalues for the fixed point (%s) is %s:'
%(item, eqmat.eigenvals()))
print('-------------------------------------------')
```
Jacobian Matrix([[-b*c*x**2 + r + x*(-a + b) + x*(-a - 2*b*c*x + b)]])
---------------------
The eigenvalues for the fixed point (0) is {r: 1}:
-------------------------------------------
The eigenvalues for the fixed point ((-a + b - sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r))/(2*b*c)) is {(-a**2 + 2*a*b - a*sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r) - b**2 - 4*b*c*r + b*sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r))/(2*b*c): 1}:
-------------------------------------------
The eigenvalues for the fixed point ((-a + b + sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r))/(2*b*c)) is {(-a**2 + 2*a*b + a*sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r) - b**2 - 4*b*c*r - b*sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r))/(2*b*c): 1}:
-------------------------------------------
```python
eqmat = jacMat.subs([ (x, s2)])
eqmat.refine(Q.positive(b)).refine(Q.negative(r))
```
Matrix([[r + (-a + b)*(-a + b - sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r))/(2*b*c) - (-a + b - sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r))**2/(4*b*c) + (-a + b - sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r))*sqrt(a**2 - 2*a*b + b**2 + 4*b*c*r)/(2*b*c)]])
```python
```
|
9f1da457b88f8ff87f17d02b722c7bed5b9af526
| 6,665 |
ipynb
|
Jupyter Notebook
|
Chapter2_1species/Generalized_eq_1_species_Sympy.ipynb
|
JJ-Lab/Lucianos_Thesis
|
7ed50b0d5d12903066f8caec7df8cdf38490a060
|
[
"MIT"
] | null | null | null |
Chapter2_1species/Generalized_eq_1_species_Sympy.ipynb
|
JJ-Lab/Lucianos_Thesis
|
7ed50b0d5d12903066f8caec7df8cdf38490a060
|
[
"MIT"
] | null | null | null |
Chapter2_1species/Generalized_eq_1_species_Sympy.ipynb
|
JJ-Lab/Lucianos_Thesis
|
7ed50b0d5d12903066f8caec7df8cdf38490a060
|
[
"MIT"
] | null | null | null | 23.718861 | 241 | 0.436759 | true | 1,229 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.92944 | 0.855851 | 0.795463 |
__label__eng_Latn
| 0.441169 | 0.686459 |
# Trying out Bayesian inference with PyMC3 on covid data
_Disclaimer: this is in no way intended to be relied on!_
_this was done purely for me to learn something_
It doesn't respect reactions of the countries, it doesn't respect the testing capabilities / numbers in the countries, it doesn't respect real biological models and past research in the field of virology and pandemics.
```python
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import StrMethodFormatter
import seaborn as sns
import pandas as pd
import theano
%matplotlib inline
import warnings
from scipy.stats import halfnorm
warnings.filterwarnings('ignore')
```
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
### Data based on a dump from a wiki page offering country specific infections.
Data is a snapshot form Kaggle taken from around mid April 2020 and wasn't updated since!
To make the data more representative, days before 2000 infections were reached were removed, since there might have been just single hotspots that were under control, also only those timeseries were looked at, that had in it's current state more than 20.000 infections counted.
Furthermore the data was restricted to series of at least 10 days.
These restrictions allow to look at a smaller set.
```python
infections = []
countries = {}
MIN_DATES = 10
with open('untitled1.txt', 'r') as csv:
intermediate = []
counter = 0
for line in csv:
line = line.strip().split(',')
country = line[2]+'-'+line[1]
infection = int(float(line[4]))
deaths = int(float(line[5]))
# print(line)
if infection < 2000:
continue
if not country in countries:
countries[country] = 0
counter = 0
if len(intermediate) > MIN_DATES and intermediate[-1][2] > 10000:
for i in intermediate:
infections.append(i)
intermediate = []
counter += 1
intermediate.append([country, counter, infection, deaths])
if len(intermediate) > MIN_DATES:
for i in intermediate:
infections.append(i)
full_df = None
full_df = pd.DataFrame(infections, columns=['country', 'day', 'infections', 'deaths'])
full_df = full_df.astype({'day': 'int32', 'infections': 'int32', 'deaths': 'int32'})
#filters = full_df.country.apply(lambda x: x in [
# 'China', 'Germany', 'Japan', 'South Korea', 'France', 'Netherlands'])
#full_df=full_df[filters]
countries = full_df.country.values
uniq_countries = full_df.country.unique()
n_countries = len(uniq_countries)
full_df['country_idx'] = [list(uniq_countries).index(x) for x in countries]
#print(full_df.country_idx)
#print(full_df)
print(list(enumerate(uniq_countries)))
```
[(0, 'Austria-'), (1, 'Belarus-'), (2, 'Belgium-'), (3, 'Brazil-'), (4, 'Canada-Ontario'), (5, 'Canada-Quebec'), (6, 'Chile-'), (7, 'China-Hubei'), (8, 'Ecuador-'), (9, 'France-'), (10, 'Germany-'), (11, 'India-'), (12, 'Iran-'), (13, 'Ireland-'), (14, 'Israel-'), (15, 'Italy-'), (16, 'Japan-'), (17, 'South Korea-'), (18, 'Mexico-'), (19, 'Netherlands-'), (20, 'Pakistan-'), (21, 'Peru-'), (22, 'Poland-'), (23, 'Portugal-'), (24, 'Qatar-'), (25, 'Romania-'), (26, 'Russia-'), (27, 'Saudi Arabia-'), (28, 'Singapore-'), (29, 'Spain-'), (30, 'Sweden-'), (31, 'Switzerland-'), (32, 'Turkey-'), (33, 'US-California'), (34, 'US-Colorado'), (35, 'US-Connecticut'), (36, 'US-Florida'), (37, 'US-Georgia'), (38, 'US-Illinois'), (39, 'US-Indiana'), (40, 'US-Louisiana'), (41, 'US-Maryland'), (42, 'US-Massachusetts'), (43, 'US-Michigan'), (44, 'US-New Jersey'), (45, 'US-New York'), (46, 'US-Ohio'), (47, 'US-Pennsylvania'), (48, 'US-Texas'), (49, 'US-Virginia'), (50, 'US-Washington'), (51, 'United Arab Emirates-'), (52, 'United Kingdom-')]
### here is the modeling part
the base idea is to fit a sigmoid like function to model the number of total infections. This assumption alone is probably already enough reason to not trust any output of this model. So _please don't trust_ the model.
Instead of using the regular sigmoid, I chose the _Gompertz Function_:
\begin{equation}
\large{
f(x) = a \cdot e^{b \cdot e^{c \cdot x} }
}
\end{equation}
The reason for using the Gompertz function is its asymmetry, which allows the model to adjust for the exponential-increase and slow-down phases separately.
With $b, c < 0$, the value of $a$ determines the upper limit, and therefore, in our setting, the upper limit of infections.
$b$ and $c$ determine the speed and acceleration.
To benefit from all the other countries, I modelled $b$ and $c$ hierarchically: there is a "mean value" across all time series, and each individual time series deviates from it according to a narrow normal distribution. The idea is to have estimates of how things will develop even when the data contains only weak hints.
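To get a feel for the roles of the parameters before fitting, here is a small illustrative sketch (the parameter values are chosen arbitrarily and are not taken from the fit):
```python
# plain NumPy Gompertz curve to illustrate the roles of a, b and c
def gompertz(t, a, b, c):
    return a * np.exp(b * np.exp(c * t))

t_demo = np.linspace(0, 60, 200)
plt.plot(t_demo, gompertz(t_demo, a=100000, b=-4.5, c=-0.075))
plt.xlabel('day')
plt.ylabel('total infections');
```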
```python
from theano import shared
predictors = full_df.day.values.copy()
predictors_shared = shared(predictors)
country_id = full_df.country_idx.values.copy()
country_idx = shared(country_id)
import scipy
with pm.Model() as model:
a = pm.Uniform('a', lower=1000, upper=2000000, shape=n_countries)
b_base = pm.Normal('b_base', mu=-4.5, sigma=0.5)
b = pm.Normal('b', mu=b_base, sigma=0.5, shape=n_countries)
c_base = pm.Normal('c_base', mu=-0.075, sigma=0.03)
c = pm.Normal('c', mu=c_base, sigma=0.03, shape=n_countries)
y = (a[country_idx] * pm.math.exp(b[country_idx] * pm.math.exp(c[country_idx] * (predictors_shared))))
obs = pm.Normal('obs', mu=y, sigma=15000, observed=full_df.infections.values)
trace = pm.sample(40000, cores=2)
```
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [c, c_base, b, b_base, a]
Sampling 2 chains, 0 divergences: 4%|▍ | 3538/81000 [02:59<1:04:25, 20.04draws/s]
### Now plotting the results of the fittings
The fittings did not work out very well, we will see why when we look at the traces.
We can see some pretty wide confidence intervals, so like the output suggested it didn't work out too well.
Interestingly this is especially then the case, when the counts haven't turned into the slow down phase where the infections are under control. This also makes sense, because the model has to guess which kind of behavior it will see when the infections get under control, without having any hints on it.
But here is the hierarchical model at least helping a bit, interpolating from overal behavior of the infections to the individual case.
```python
from pymc3 import forestplot
plt.figure(figsize=(20,20))
forestplot(trace, var_names=['a'])
forestplot(trace, var_names=['b'])
forestplot(trace, var_names=['c'])
pm.traceplot(trace)
print(list(enumerate(uniq_countries)))
```
### now predicting the future...
the traceplot above show what we already assumed, had some issues, especially the base values of c and b didn't fully converge to a single distribution, normally you would do a reparametrization and probably increase tuning steps to fix this.
But still let us try to now use the found model parameters to simulate how it's going to continue.
```python
#ppc = pm.sample_posterior_predictive(trace, samples=500, model=model)
x = np.tile(np.linspace(1, 100, 100).astype('int32'), n_countries)
print(len(x))
predictors_shared.set_value(x)
y = np.repeat(np.linspace(0,n_countries-1,n_countries).astype('int32'), 100)
print(len(y))
country_idx.set_value(y)
with model:
post_pred = pm.sample_posterior_predictive(trace, samples=10000)
```
### looking at fittings and predictions
What we can see is that the model fits the given points reasonably well, but the predictions carry a lot of uncertainty, especially in those cases where there is little hint as to how much the region was able to slow down.
So again don't rely on this model for anything.
This was done purely as an educational exercise.
```python
means = post_pred['obs'].mean(axis=0, keepdims=False).copy()
stds = post_pred['obs'].std(axis=0)
for i in range(n_countries):
choice = y==i
old_choice = full_df.country_idx==i
plt.figure(figsize=(10,10))
plt.errorbar(np.linspace(1,100,100),
means[choice],
stds[choice],
linestyle='None',
marker='.')
plt.plot(np.linspace(1,len(full_df[old_choice]), len(full_df[old_choice])),
full_df.infections[old_choice],
marker='o')
plt.title(uniq_countries[i])
plt.show()
```
```python
```
```python
```
|
b631b4623ad16a1f5fe0cf8ef2a44e398bafa397
| 12,445 |
ipynb
|
Jupyter Notebook
|
BayesianCovid.ipynb
|
kayr7/bayesInferenceSample
|
b0a4c9ea78ab61475d89086863a06dcaaea86f08
|
[
"MIT"
] | null | null | null |
BayesianCovid.ipynb
|
kayr7/bayesInferenceSample
|
b0a4c9ea78ab61475d89086863a06dcaaea86f08
|
[
"MIT"
] | null | null | null |
BayesianCovid.ipynb
|
kayr7/bayesInferenceSample
|
b0a4c9ea78ab61475d89086863a06dcaaea86f08
|
[
"MIT"
] | null | null | null | 37.712121 | 1,046 | 0.576537 | true | 2,305 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.828939 | 0.782662 | 0.648779 |
__label__eng_Latn
| 0.968794 | 0.345663 |